\section{Introduction}
Image registration is a fundamental task in image processing, whose importance grows with the increasing number and availability of different types of imaging devices. It also serves as a crucial step in a great variety of biomedical imaging applications. Registration techniques can be broadly divided into rigid registration and non-rigid registration \cite{oh2017deformable}. Rigid image registration translates or rotates image pixels uniformly, so that pixelwise relations are preserved by the transformation. Non-rigid image registration, also known as deformable registration, changes the pixelwise relations and produces a translation map for each pixel.
The registration task requires the user to input a pair of images acquired from different devices: one denoted \textit{moving}, the image that needs to be aligned, and the other denoted \textit{fixed}, defining the target coordinate system of the alignment. The output is the aligned image after transformation, commonly denoted \textit{moved}. Traditional rigid registration methods often solve an optimization problem for each pair of images. However, solving a pairwise optimization problem can be computationally intensive and slow in practice \cite{balakrishnan2019voxelmorph}. With the advent of deep learning in recent years, a number of Convolutional Neural Network (CNN) architectures have been proposed for registration. Deep learning-based pipelines come in two types. The first uses a CNN to directly model the transformation from the input image pair to the final aligned output. Its limitation is that it requires ground truth during the training phase, either the pixelwise translation map (registration field) or the aligned image corresponding to the \textit{moving} image. The output \textit{moved} image is not necessarily identical to the \textit{fixed} image in all registration scenarios, due to measurement noise and artifacts in the generation of the \textit{fixed} image; thus, the ground truth is usually hard to acquire. The second type uses a CNN to model the registration field and utilizes a Spatial Transformer Network \cite{jaderberg2015spatial} to perform the warping instead of modeling the transformation directly, making the pipeline unsupervised.
Although a number of approaches have been proposed for image registration, few provide a thorough study of the differences between state-of-the-art deep learning-based non-rigid registration, trained with rigid and non-rigid data, and rigid registration approaches, evaluated on both rigid and non-rigid testing data. In this work, we compare several state-of-the-art non-rigid and rigid registration frameworks. Our major contributions are: 1) we generated our data set using images from the Kaggle Dogs vs Cats competition; 2) we reproduced the state-of-the-art 3D unsupervised non-rigid registration approach Voxelmorph \cite{balakrishnan2019voxelmorph} in 2D and improved the registration results over the original architecture by adding a Gaussian layer to the registration field; 3) we reproduced several state-of-the-art rigid registration methods, including SimpleElastix \cite{marstal2016simpleelastix}, Oriented FAST and Rotated BRIEF (ORB) \cite{rublee2011orb} and intensity-based image registration by \href{https://www.mathworks.com/help/images/intensity-based-automatic-image-registration.html#d117e16633}{Matlab}.
\section{Related Work}
\subsection{Rigid image registration}
Rigid image registration generally utilizes a linear transformation, which includes translation, rotation, scaling, shearing and other affine transformations. Extensive studies have been conducted on the topic of rigid registration \cite{eggert1997estimating,letteboer2003rigid,leroy2004rigid,commowick2012block,debayle2016rigid,ourselin2000block,feldmar1996rigid}. Currently, there are three widely used tools for rigid registration. The first is the intensity-based approach \cite{rohde2003adaptive,myronenko2010intensity,klein2009elastix}. Matlab provides the built-in function imregister for this method, making it accessible and easy to use. The second is the ORB-based approach, which builds on the FAST keypoint detector \cite{rosten2006machine,rosten2008faster} and the BRIEF feature descriptor \cite{calonder2010brief}. The third is the well-known SimpleElastix package \cite{marstal2016simpleelastix}, an extension of Elastix \cite{klein2009elastix}; SimpleElastix also contains a spline-based non-rigid transformation function. These rigid registration methods require the user to specify the transformation model before registration, which limits their ability to generalize to unknown transformations.
\subsection{Non-rigid image registration}
Several studies propose pairwise optimization methods for non-rigid image registration based on displacement vector fields, including elastic-type models, free-form deformations with B-splines \cite{rueckert1999nonrigid}, discrete methods \cite{dalca2016patch,glocker2008dense} and Demons \cite{pennec1999understanding,thirion1998image}. Several other methods are based on diffeomorphic transformations, including Large Deformation Diffeomorphic Metric Mapping (LDDMM) \cite{beg2005computing,zhang2017frequency,cao2005large,ceritoglu2009multi,hernandez2009registration,joshi2000landmark,oishi2009atlas}, DARTEL \cite{ashburner2007fast} and diffeomorphic Demons \cite{vercauteren2009diffeomorphic}. These methods are not learning-based and must be run anew for each pair, which is time-consuming for large data sets.
There are also some recent papers proposing neural networks to learn the registration function, but most rely on ground truth translation maps \cite{cao2017deformable,krebs2017robust,rohe2017svf,sokooti2017nonrigid,yang2017quicksilver,liao2017artificial,cao2018deformable}. In common registration applications, it is hard to acquire the translation map between two natural images taken by two different camera systems. Hu \etal \cite{hu2018weakly} put forward a weakly supervised deep learning-based registration approach, but it still requires a proportion of ground truth. More recently, several unsupervised methods have been proposed \cite{de2017end,li2017non,li2018non}. They utilize neural networks to model the registration field and then apply the spatial transformer network \cite{jaderberg2015spatial} to warp the image. However, these methods have only been tested on small subsets of volumes, such as 3D regions, and have not been compared with other popular models like LDDMM or U-net. Balakrishnan \etal \cite{balakrishnan2018unsupervised} then proposed an unsupervised learning method for deformable image registration. The group extended the method to Voxelmorph \cite{balakrishnan2019voxelmorph} and demonstrated impressive performance on various data sets; it is considered the state of the art. In this work, we reproduce this method and improve the registration result by adding a Gaussian layer after obtaining the registration flow.
\section{Method}
\subsection{2D Voxelmorph}
Let $(I_m,I_f)$ be the input pair for Voxelmorph over the 2D spatial domain $\Omega=\mathbb{R}^2$, where $I_m$ denotes the \textit{moving} image and $I_f$ the \textit{fixed} image. Voxelmorph models the registration field function (pixelwise translation map) $g_\theta(I_m,I_f)=\mathbf{\phi}$ using a neural network, where $\theta$ denotes the network parameters and $\phi$ the estimated registration field. Voxelmorph utilizes a Spatial Transformer Network (STN) \cite{jaderberg2015spatial} to compute the \textit{moved} image $I_m\circ\phi$. Stochastic gradient descent is used to find the optimal $\hat{\theta}$.
The CNN architecture used in the 2D Voxelmorph is based on U-net. First proposed by Ronneberger \etal \cite{ronneberger2015u} in 2015, the U-net architecture has been widely used in registration and segmentation. The architecture implemented in our work is shown in Fig.~\ref{fig:Unet}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{Figures/Picture2.png}
\caption{U-net based architecture used to model the registration field.}
\label{fig:Unet}
\end{figure}
The input size of the U-net architecture is $256\times256\times6$, as we concatenate the RGB channels of the \textit{moving} and \textit{fixed} images. 2D convolutions with kernel size 3 and stride 2 are used in the encoder and decoder. Each convolution is followed by a LeakyReLU with parameter $0.2$. The original input size is denoted as $1$ for simplicity, and each successive layer applies a $2\times2$ max pooling operation, shrinking the size by a factor of $2$. The size of the output registration field $\phi$ is $256\times256\times2$. Unlike the original 3D Voxelmorph architecture, we add a Gaussian blur layer $\phi'=gauss(\phi)$ after the registration field to smooth the pixelwise displacement.
After obtaining the registration field, we construct a differentiable operation based on the STN \cite{jaderberg2015spatial} to compute $I_m\circ\phi'$ with bilinear interpolation. We utilize the unsupervised loss function of the original 3D Voxelmorph, which penalizes the appearance difference and local spatial variations in $\phi'$
\begin{equation}
L(I_m,I_f,\phi')=MSE(I_m\circ\phi',I_f)+\lambda||\nabla\phi'||^2.
\end{equation}
The pipeline of 2D Voxelmorph is depicted in Fig.~\ref{fig:Voxelmorph}.
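As a concrete illustration, the loss above can be sketched in a few lines of TensorFlow. This is a minimal sketch, assuming a batch-first $N\times256\times256\times2$ flow tensor and a hypothetical weight \texttt{lam}, with the $||\nabla\phi'||^2$ term realized through finite differences:
\begin{verbatim}
import tensorflow as tf

def smoothness_loss(flow):
    # finite differences of the 2-channel flow field along y and x
    dy = flow[:, 1:, :, :] - flow[:, :-1, :, :]
    dx = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return tf.reduce_mean(dy * dy) + tf.reduce_mean(dx * dx)

def registration_loss(moved, fixed, flow, lam=0.01):
    # moved is I_m o phi' produced by the STN; flow is the blurred field phi'
    mse = tf.reduce_mean(tf.square(moved - fixed))
    return mse + lam * smoothness_loss(flow)
\end{verbatim}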
\begin{figure*}
\centering
\includegraphics[scale=0.5]{Figures/Picture3.png}
\caption{2D Voxelmorph architecture. The input \textit{moving} and \textit{fixed} images are resized to $256\times256\times3$ and concatenated to $256\times256\times6$ for the CNN to estimate registration field. }
\label{fig:Voxelmorph}
\end{figure*}
\subsection{SimpleElastix}
Developed on top of Elastix \cite{klein2009elastix}, SimpleElastix is one of the most popular tools for rigid image registration. It also contains a non-rigid registration library based on B-spline polynomials. The main idea of SimpleElastix is to solve a pairwise optimization problem by minimizing a cost function $C$. The optimization can be formulated as
\begin{equation}
\hat{T} = \underset{T}{\text{argmin}}\; C(T,I_f,I_m),
\end{equation}
with cost function defined as
\begin{equation}
C(T,I_f,I_m) = -S(T,I_f,I_m)+\gamma P(T),
\end{equation}
where $T$ is the transformation matrix, $S$ is the similarity measure and $P$ is the penalty term with regularization parameter $\gamma$. SimpleElastix takes a parametric approach to solving the optimization problem, in which the number of possible transformations is limited by introducing a parametrization (model) of the transform. The optimization becomes
\begin{equation}
\hat{T}_\mu = \underset{T_\mu}{\text{argmin}}\; C(T_\mu,I_f,I_m),
\end{equation}
where $T_\mu$ denotes the parametrized model and the vector $\mu$ contains the values of the transformation parameters. In our case of 2D rigid transformation, the parameter vector $\mu$ contains one rotation angle and the translations in the $x$ and $y$ directions:
\begin{equation}
\hat{\mu} = \text{argmin}\;C(\mu,I_f,I_m).
\end{equation}
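In practice, this rigid parametrization can be invoked in a few lines. The following is a minimal sketch, assuming a SimpleElastix-enabled SimpleITK build and hypothetical file names; the \texttt{"rigid"} parameter map corresponds to one rotation angle plus $x$/$y$ translations, as in the equation above:
\begin{verbatim}
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.png", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.png", sitk.sitkFloat32)

elastix = sitk.ElastixImageFilter()
elastix.SetFixedImage(fixed)
elastix.SetMovingImage(moving)
elastix.SetParameterMap(sitk.GetDefaultParameterMap("rigid"))
elastix.Execute()
moved = elastix.GetResultImage()
\end{verbatim}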
\subsection{ORB}
\begin{figure}
\centering
\includegraphics[scale=0.35]{Figures/Picture4.png}
\caption{Implementation pipeline for ORB approach.}
\label{fig:ORB}
\end{figure}
ORB-based registration is often called feature-based, since sparse sets of features are detected and matched in the two images \cite{rublee2011orb}. The final output of the method is a \textit{moved} image obtained after calculating the $3\times3$ transformation matrix $T$. The ORB registration approach can be divided into 4 stages: image preprocessing, feature detection, feature matching and image warping. The pipeline is shown in Fig.~\ref{fig:ORB}. We first read both the \textit{moving} and \textit{fixed} images and convert them to grayscale using the empirical function
\begin{equation}
I_g = 0.299R+0.587G+0.114B
\end{equation}
where $(R,G,B)$ denotes the original pixel values of the three channels of $I_m$ or $I_f$, and $I_g$ denotes the output grayscale image. We then use a feature detector, which consists of a locator and a descriptor, to extract features from the input image. The locator identifies points in the image that are stable under image transformations, and the descriptor encodes the appearance of the identified points into arrays of numbers. In our implementation, we adopt the FAST locator and the BRIEF descriptor. In the next stage, we match the generated features using the Hamming distance and keep the top corresponding point pairs in the two images for the transformation matrix calculation. RANSAC is further utilized to improve robustness. In the last stage, we warp the image with $T$ to obtain the final output.
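A compact version of this pipeline can be written with OpenCV. The sketch below is illustrative rather than definitive: the detector size, the fraction of matches retained and the file names are assumptions, and \texttt{findHomography} estimates a general $3\times3$ matrix $T$ with RANSAC:
\begin{verbatim}
import cv2
import numpy as np

moving = cv2.imread("moving.jpg")
fixed = cv2.imread("fixed.jpg")
gm = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)  # applies the luma weights above
gf = cv2.cvtColor(fixed, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(5000)
kp_m, des_m = orb.detectAndCompute(gm, None)
kp_f, des_f = orb.detectAndCompute(gf, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)
good = matches[:int(0.15 * len(matches))]      # keep the top matches

pts_m = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts_f = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
T, _ = cv2.findHomography(pts_m, pts_f, cv2.RANSAC)

h, w = fixed.shape[:2]
moved = cv2.warpPerspective(moving, T, (w, h))
\end{verbatim}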
\subsection{Intensity-based Registration}
\begin{figure}
\centering
\includegraphics[scale=0.32]{Figures/Picture5.png}
\caption{The flow chart for intensity-based image registration using Matlab.}
\label{fig:Intensity}
\end{figure}
Intensity-based image registration is an iterative optimization process and is widely used via Matlab. The method requires as prior information an initial transformation matrix $T_0$, a metric and an optimizer. In our work, we choose the mean-square similarity metric and the regular step gradient descent optimizer. The initial transformation matrix defines the type of 2-D transformation that aligns $I_m$ with $I_f$. The metric describes the similarity between the two input images, returning a scalar that evaluates the accuracy of the registration. The optimizer defines the method for minimizing or maximizing the similarity metric. The pipeline is shown in Fig.~\ref{fig:Intensity}. The key pairwise optimization objective is an accurate estimate of the transformation matrix $T$; this is a rigid registration approach.
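Although our experiments use Matlab's imregister, the same intensity-based loop can be sketched with SimpleITK for illustration; the mean-squares metric and regular step gradient descent optimizer mirror our configuration, while the parameter values and file names are assumptions:
\begin{verbatim}
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.png", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.png", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()              # mean-square similarity metric
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler2DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

T = reg.Execute(fixed, moving)            # estimated rigid transform
moved = sitk.Resample(moving, fixed, T, sitk.sitkLinear, 0.0)
\end{verbatim}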
\section{Experiment}
\subsection{Data generator}
The data set used in this work is generated from the Kaggle Dogs vs Cats competition. We downloaded 1200 images and separated them into two groups: 1000 images for training and 200 for testing. The downloaded images serve as \textit{moving} images. The \textit{fixed} images in the training and testing sets are generated using a Spatial Transformer Network \cite{jaderberg2015spatial} with ground truth translation maps. The pipeline is shown in Fig.~\ref{fig:Datagen}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{Figures/Picture1.png}
\caption{Data generator for \textit{fixed} image from \textit{moving} image.}
\label{fig:Datagen}
\end{figure}
The transformation matrices and their random entries for each pair used in the generator are listed in Table~\ref{tab:transmax}.
\begin{table}
\caption{Transformation matrices and their random entries used in the data generator.}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
Type & Matrix & Random Entries \\ \hline
\xrowht{40pt}
Translation & $\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
t_x & t_y & 1
\end{bmatrix}$ & $t_x,t_y \in[-5,5]$ \\ \hline \xrowht{40pt}
Shearing & $\begin{bmatrix}
1 & sh_x & 0 \\
sh_y & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}$ & $sh_x,sh_y \in[0,0.15]$ \\ \hline \xrowht{40pt}
Scaling & $\begin{bmatrix}
s_x & 0 & 0 \\
0 & s_y & 0 \\
0 & 0 & 1
\end{bmatrix}$ & $s_x,s_y \in[0.9,1]$ \\ \hline \xrowht{40pt}
Rotation & $\begin{bmatrix}
\cos(q) & \sin(q) & 0 \\
-\sin(q) & \cos(q) & 0 \\
0 & 0 & 1
\end{bmatrix}$ & $q\in[-5,5]$ \\\hline \xrowht{40pt}
Pixelwise& $\begin{bmatrix}
p_{11} & ... & p_{18} \\
\vdotswithin{p_{81}} & \vdotswithin{...} & \vdotswithin{p_{88}} \\
p_{81} & ... & p_{88}
\end{bmatrix}$ & $p_{ij} \in[-5,5]$ \\ \hline
\end{tabular}} \label{tab:transmax}
\end{table}
The rigid transformation matrix $T$ is a $3\times3$ matrix. We compute the pixel shift in the Cartesian coordinate system to obtain the translation map from $T$. Let $[x,y,1]^T$ denote the homogeneous coordinate in the \textit{moving} image and $[x',y',1]^T$ the coordinate in the \textit{fixed} image; then
\begin{equation}
\begin{bmatrix}
x' \\ y' \\1
\end{bmatrix}=\begin{bmatrix}
t_{11} & t_{12} & t_{13}\\
t_{21} & t_{22} & t_{23}\\
t_{31} & t_{32} & t_{33}
\end{bmatrix}
\begin{bmatrix}
x \\ y \\1
\end{bmatrix},
\end{equation}
and pixel shift can be calculated as
\begin{equation}
\begin{bmatrix}
\Delta x\\
\Delta y
\end{bmatrix}
= \begin{bmatrix}
x'-x\\
y'-y
\end{bmatrix}.
\end{equation}
Thus we obtain the ground truth translation map with shape $256\times256\times2$, where the first channel represents the pixel shift in $x$ and the second in $y$ for each pixel. For the non-rigid transformation, we produce an $8\times8$ random matrix for each of $x$ and $y$, upsample each to $256\times256$, and concatenate the two channels to generate the ground truth translation map. The Random Entries column in Table~\ref{tab:transmax} lists the random entries generated in the transformation matrices for each image pair. For instance, in the translation transformation, the random entries in $T$ are $t_x$ and $t_y$, drawn from the range $[-5,5]$.
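For concreteness, the non-rigid map generation and warping can be sketched as follows. This is a minimal sketch assuming NumPy and OpenCV, with \texttt{moving} a previously loaded $256\times256\times3$ image; the sign convention of \texttt{cv2.remap}, which samples the source at the shifted locations, is an implementation detail:
\begin{verbatim}
import numpy as np
import cv2

# 8x8 random entries in [-5, 5], upsampled to 256x256 for each channel
coarse = np.random.uniform(-5, 5, size=(8, 8, 2)).astype(np.float32)
flow = cv2.resize(coarse, (256, 256), interpolation=cv2.INTER_LINEAR)

xs, ys = np.meshgrid(np.arange(256, dtype=np.float32),
                     np.arange(256, dtype=np.float32))
map_x = xs + flow[..., 0]   # channel 0: pixel shift in x
map_y = ys + flow[..., 1]   # channel 1: pixel shift in y
fixed = cv2.remap(moving, map_x, map_y, cv2.INTER_LINEAR)
\end{verbatim}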
Two different types of training data are produced using the technique described above. The first type, \textit{rigidset}, is generated by separating our 1000 downloaded images into 5 categories of 200 images each and conducting the spatial transformations listed in Table~\ref{tab:transmax} on each category separately. The second type, \textit{nonrigidset}, is generated from the entire 1000 images using translation maps upsampled from the pixelwise random matrix. The testing set is separated into 5 types, namely translation, rotation, scaling, shearing and pixelwise non-rigid, each containing 40 images. We test the performance of Voxelmorph trained on \textit{rigidset} and on \textit{nonrigidset} separately and compare the results. For notational simplicity, in the remainder of the paper we denote by Voxelmorph(NN) the model trained on \textit{nonrigidset} without the Gaussian layer, by Voxelmorph(RN) the model trained on \textit{rigidset} without the Gaussian layer, by Voxelmorph(NG) the model trained on \textit{nonrigidset} with the Gaussian layer, and by Voxelmorph(RG) the model trained on \textit{rigidset} with the Gaussian layer.
\subsection{Experiment setup}
Voxelmorph is implemented in Python with Keras on the TensorFlow backend and the CUDA Deep Neural Network (cuDNN) library. The model is trained and tested on an NVIDIA RTX 2080 Ti GPU with 11GB of memory. The total number of epochs is 1500 and each image is resized to $256\times256\times 3$. SimpleElastix and ORB are implemented in Python, and the intensity-based registration is implemented in Matlab.
\subsection{Evaluation metrics}
The quantitative evaluation is conducted by calculating the root-mean-square error (RMSE) and the mean absolute error (MAE) between the estimated translation map and the ground truth translation map, each of size $256\times256\times2$; the two channels represent the pixel shifts in $x$ and $y$ separately.
\subsubsection{Root mean square error}
Let $\hat{t}_{ij}$ denote the elements of the estimated translation map and $t_{ij}$ the elements of the ground truth translation map. The RMSE is calculated as
\begin{equation}
RMSE = \sqrt{\frac{1}{N}\sum_{j=1}^{N_{col}}\sum_{i=1}^{N_{row}}(\hat{t}_{ij}-t_{ij})^2},
\end{equation}
where $N$ denotes the total number of points, $N_{col}$ the number of column pixels and $N_{row}$ the number of row pixels. In our case, $N_{col}=N_{row}=256$.
\subsubsection{Mean absolute error}
The MAE is calculated as
\begin{equation}
MAE = \frac{1}{N}\sum_{j=1}^{N_{col}}\sum_{i=1}^{N_{row}}|\hat{t}_{ij}-t_{ij}|.
\end{equation}
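Both metrics can be computed per channel directly from the two maps; a small sketch, assuming NumPy arrays of shape $256\times256\times2$:
\begin{verbatim}
import numpy as np

def flow_errors(est, gt):
    # est, gt: (256, 256, 2) maps; channel 0 = x shift, channel 1 = y shift
    diff = est - gt
    rmse = np.sqrt(np.mean(diff ** 2, axis=(0, 1)))  # per-channel RMSE
    mae = np.mean(np.abs(diff), axis=(0, 1))         # per-channel MAE
    return rmse, mae
\end{verbatim}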
\section{Results}
\subsection{Quantitative assessment}
The quantitative assessment using the RMSE metric in Cartesian $x$ and $y$ is reported in Table~\ref{tab:RMSEx} and Table~\ref{tab:RMSEy} respectively. The MAE in $x$ and $y$ is reported in Table~\ref{tab:MAEx} and Table~\ref{tab:MAEy}. The best performance on the rigid transformation tests is achieved by SimpleElastix, with the lowest errors in both RMSE and MAE: an average of $0.11$ in $x$ and $0.11$ in $y$ for RMSE, and $0.09$ in $x$ and $0.09$ in $y$ for MAE. On the non-rigid transformation, Voxelmorph(RN) achieves the best score, with $2.63$ in $x$ and $2.63$ in $y$ under the RMSE metric and $2.16$ in $x$ and $2.14$ in $y$ under the MAE metric.
From Table~\ref{tab:RMSEx}, we notice that by introducing the Gaussian blur layer, Voxelmorph(RG) significantly improves the RMSE score on scaling and shearing compared with Voxelmorph(RN), the original architecture trained on \textit{rigidset}. Similar results are observed in Table~\ref{tab:RMSEy}, Table~\ref{tab:MAEx} and Table~\ref{tab:MAEy}. This demonstrates the effectiveness of the Gaussian blur, which smooths the translation map. On the other tasks, namely translation, rotation and the non-rigid pixelwise transformation, Voxelmorph with the Gaussian blur layer shows roughly the same results whether trained on \textit{nonrigidset} or on \textit{rigidset}.
\begin{table*}[htbp]
\caption{RMSE for the $x$ coordinate in pixels (px).}
\resizebox{1\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
RMSE(px) & SimpleElastix & ORB & Intensity-based & Voxelmorph(NN) & Voxelmorph(RN) & Voxelmorph(NG) & Voxelmorph(RG) \\\midrule
Translation & $\mathbf{0.11\pm 0.08}$ & $0.26\pm 0.21$ & $0.28\pm 0.18$ & $3.25\pm 1.26$ & $3.03\pm 1.29$ & $3.58\pm 1.12$ & $3.9\pm 2.00$ \\
Rotation & $\mathbf{0.13\pm 0.09}$ & $0.31\pm 0.23$ & $0.28\pm 0.13$ & $6.88\pm 3.24$ & $7.06\pm 3.07$ & $7.06\pm 3.14$ & $8.33\pm 3.73$ \\
Scaling & $\mathbf{0.09\pm 0.08}$ & $0.48\pm 0.66$ & $1.11\pm 1.21$ & $6.44\pm 4.15$ & $7.00\pm 4.16$ & $6.83\pm 3.98$ & $\mathit{6.26\pm 3.52}$ \\
Shearing & $\mathbf{0.11\pm 0.10}$ & $0.45\pm 0.31$ & $4.80\pm 3.20$ & $11.26\pm 6.4$ & $10.18\pm 5.95$ & $11.75\pm 6.57$ & $\mathit{6.63\pm 3.64}$ \\
Pixelwise & $3.91\pm 3.60$ & $4.60\pm 3.40$ & $2.83\pm 1.98$ & $3.00\pm 0.09$ & $\mathbf{2.63\pm 0.07}$ & $3.23\pm 0.14$ & $2.87\pm 0.18$\\
\bottomrule
\end{tabular}}\label{tab:RMSEx}
\end{table*}
\begin{table*}[htbp]
\caption{RMSE for the $y$ coordinate in pixels (px).}
\resizebox{1\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
RMSE(px) & SimpleElastix & ORB & Intensity-based & Voxelmorph(NN) & Voxelmorph(RN) & Voxelmorph(NG) & Voxelmorph(RG) \\\midrule
Translation & $\mathbf{0.10\pm 0.10}$ & $0.26\pm 0.21$ & $0.30\pm 0.20$ & $2.94\pm 1.47$ & $2.54\pm 1.55$ & $3.24\pm 1.36$ & $2.76\pm 1.81$ \\
Rotation & $\mathbf{0.11\pm 0.10}$ & $0.29\pm 0.26$ & $0.26\pm 0.13$ & $6.93\pm 3.25$ & $7.14\pm 3.18$ & $7.20\pm 2.94$ & $8.23\pm 3.93$ \\
Scaling & $\mathbf{0.11\pm 0.08}$ & $0.47\pm 0.56$ & $1.78\pm 1.58$ & $6.91\pm 3.87$ & $6.27\pm 3.56$ & $\mathit{6.22\pm 3.38}$ & $\mathit{5.60\pm 3.18}$ \\
Shearing & $\mathbf{0.13\pm 0.13}$ & $0.43\pm 0.34$ & $7.80\pm 4.43$ & $9.46\pm 5.85$ & $8.95\pm 5.36$ & $10.21\pm 5.93$ & $\mathit{6.31\pm 3.87}$ \\
Pixelwise & $3.87\pm 2.27$ & $4.24\pm 2.64$ & $3.23\pm 2.31$ & $2.89\pm 0.16$ & $\mathbf{2.63\pm 0.08}$ & $3.19\pm 0.19$ & $2.90\pm 0.17$\\
\bottomrule
\end{tabular}}\label{tab:RMSEy}
\end{table*}
\begin{table*}[htbp]
\caption{MAE for the $x$ coordinate in pixels (px).}
\resizebox{1\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
MAE(px) & SimpleElastix & ORB & Intensity-based & Voxelmorph(NN) & Voxelmorph(RN) & Voxelmorph(NG) & Voxelmorph(RG) \\ \midrule
Translation & $\mathbf{0.09\pm 0.07}$ & $0.21\pm 0.18$ & $0.23\pm 0.15$ & $2.98\pm 1.31$ & $2.90\pm 1.32$ & $3.15\pm 1.14$ & $3.75\pm 2.02$ \\
Rotation & $\mathbf{0.11\pm 0.07}$ & $0.26\pm 0.19$ & $0.23\pm 0.10$ & $5.95\pm 2.91$ & $5.94\pm 2.68$ & $6.05\pm 2.85$ & $6.94\pm 3.23$ \\
Scaling & $\mathbf{0.08\pm 0.07}$ & $0.41\pm 0.55$ & $0.93\pm 1.02$ & $5.35\pm 3.52$ & $5.90\pm 3.57$ & $5.62\pm 3.43$ & $\mathit{5.09\pm 3.06}$ \\
Shearing & $\mathbf{0.09\pm 0.09}$ & $0.37\pm 0.26$ & $3.99\pm 2.65$ & $9.92\pm 5.63$ & $8.53\pm 5.03$ & $10.28\pm 5.92$ & $\mathit{4.57\pm 2.50}$ \\
Pixelwise & $3.23\pm 2.95$ & $3.84\pm 2.83$ & $2.36\pm 1.62$ & $2.41\pm 0.07$ & $\mathbf{2.16\pm 0.05}$ & $2.56\pm 0.12$ & $2.29\pm 0.14$\\\bottomrule
\end{tabular}}\label{tab:MAEx}
\end{table*}
\begin{table*}[htbp]
\caption{MAE for the $y$ coordinate in pixels (px).}
\resizebox{1\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
MAE(px) & SimpleElastix & ORB & Intensity-based & Voxelmorph(NN) & Voxelmorph(RN) & Voxelmorph(NG) & Voxelmorph(RG) \\\midrule
Translation & $\mathbf{0.09\pm 0.08}$ & $0.22\pm 0.17$ & $0.25\pm 0.17$ & $2.64\pm 1.49$ & $2.39\pm 1.58$ & $2.83\pm 1.37$ & $2.58\pm 1.86$ \\
Rotation & $\mathbf{0.09\pm 0.08}$ & $0.24\pm 0.21$ & $0.23\pm 0.11$ & $5.98\pm 2.90$ & $6.02\pm 2.86$ & $6.22\pm 2.65$ & $6.89\pm 3.46$ \\
Scaling & $\mathbf{0.10\pm 0.07}$ & $0.40\pm 0.47$ & $1.51\pm 1.34$ & $5.75\pm 3.32$ & $5.20\pm 2.98$ & $\mathit{5.17\pm 2.96}$ & $\mathit{4.54\pm 2.73}$ \\
Shearing & $\mathbf{0.11\pm 0.11}$ & $0.36\pm 0.28$ & $6.58\pm 3.78$ & $7.96\pm 5.02$ & $7.40\pm 4.50$ & $8.65\pm 5.20$ & $\mathit{4.53\pm 2.73}$ \\
Pixelwise & $3.22\pm 1.91$ & $3.53\pm 2.17$ & $2.70\pm 1.90$ & $2.29\pm 0.12$ & $\mathbf{2.14\pm 0.06}$ & $2.58\pm 0.16$ & $2.34\pm 0.13$ \\\bottomrule
\end{tabular}}\label{tab:MAEy}
\end{table*}
\subsection{Visual assessment}
\begin{figure*}
\centering
\includegraphics[scale=0.4]{Figures/Picture6-1.pdf}
\caption{Visual assessment for testing on \textit{rigidset}.}
\label{fig:rigidtest}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.4]{Figures/Picture7-1.pdf}
\caption{Visual assessment for testing on \textit{nonrigidset}.}
\label{fig:nonrigidtest}
\end{figure*}
The visual assessment is shown in Fig.~\ref{fig:rigidtest} and Fig.~\ref{fig:nonrigidtest}. We compare the results generated by the algorithms discussed in this paper with the ground truth. In our case, the input is the \textit{moving} image and the ground truth image is the image warped by the ground truth translation map using the spatial transformer network. Pixelwise 1-4 in Fig.~\ref{fig:nonrigidtest} denote different types of translation maps generated by different random matrices.
From Fig.~\ref{fig:rigidtest}, we can see that the rigid methods, SimpleElastix, ORB and the intensity-based approach, produce a black boundary on the \textit{moved} images and lose some information. The reason is that the warping is performed on the entire image instead of on each pixel. In columns 5-8, Voxelmorph produces a more consistent result compared with columns 1-4. We also notice that the training data makes a difference for Voxelmorph. When trained on \textit{rigidset}, Voxelmorph preserves relative pixel relations better than the model trained on \textit{nonrigidset}. For instance, in the shearing row, Voxelmorph(RN) and Voxelmorph(RG) preserve a straight line on the cat's body while Voxelmorph(NN) and Voxelmorph(NG) warp the line into a curve.
From Fig.~\ref{fig:nonrigidtest}, we can see that the ORB and intensity-based approaches fail to produce pixelwise \textit{moved} images. For instance, in Pixelwise 4, the box lines remain straight under these two methods, as rigid registration applies a linear transformation instead of a pixelwise warping. SimpleElastix demonstrates an impressive result on the non-rigid transformation. Voxelmorph trained on \textit{nonrigidset} and on \textit{rigidset} shows comparable performance.
\section{Conclusion}
In this paper, we provide a comparative study of state-of-the-art non-rigid and rigid image registration methods and show that the deep learning-based method does not always perform better. We reproduce Voxelmorph and its variants trained on \textit{rigidset} and \textit{nonrigidset}. We also reproduce several rigid registration approaches, including SimpleElastix, ORB and intensity-based registration. We add a Gaussian blur layer and improve Voxelmorph's performance on rigid transformations. Our results are evaluated in terms of RMSE and MAE, and we observe that SimpleElastix demonstrates the best performance on rigid transformations while Voxelmorph(RN) achieves the best score on the pixelwise transformation. In the future, we intend to evaluate our idea on natural image pairs and to combine the advantages of SimpleElastix and Voxelmorph.
\section{Acknowledgement}
The authors would like to thank Prof. Kadambi and TA Guangyuan Zhao for the excellent teaching and service in the entire quarter.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Automatic speaker identification/recognition (ASI/ASR), that is, the automated process of inferring the identity of a person from an utterance made by him/her, on the basis of speaker-specific information embedded in the corresponding speech signal, has important practical applications. For example, it can be used to verify identity claims made by users seeking access to secure systems. It has great potential in application areas like voice dialing, secure banking over a telephone network, telephone shopping, database access services, information and reservation services, voice mail, security control for confidential information, and remote access to computers. Another important application of speaker recognition technology is in forensics.
Speaker recognition, being essentially a pattern recognition problem, can be specified broadly in terms of the features used and the classification technique adopted. From the experience gained over several years of ongoing research, it has been possible to identify certain groups of features, extracted from the complex speech signal, that carry a great deal of speaker-specific information. In conjunction with these features, researchers have also identified classifiers which perform admirably. Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are the popularly used features, while Gaussian Mixture Models (GMMs), Hidden Markov Models (HMMs), Vector Quantization (VQ) and Neural Networks are some of the more successful speaker models/classification tools. Any good review article on speaker recognition (for example,~\cite{campbell1997,furui1997,kinnunen2010}) contains details and citations for many of these features and models. It is quite apparent that much of the research involves combining various features and speaker models to obtain new ASR methodologies.
Reynolds~\cite{reynolds1995} proposed a speaker recognition system based on MFCCs as features and GMMs as speaker models and, by implementing it on the benchmark data sets TIMIT~\cite{fisher1986,garofolo1993} and NTIMIT~\cite{garofolo1993}, demonstrated that it works almost flawlessly on clean speech (TIMIT) and quite well on noisy telephone speech (NTIMIT). This successful application of GMMs for modeling speaker identity is motivated by the interpretation that the Gaussian components represent some general speaker-dependent spectral shapes, and also by the capability of mixtures to model arbitrary densities. This approach is one of the most effective available in the literature, as far as accuracy on large speaker databases is concerned.
In this paper, a novel approach is proposed for solving the speaker identification problem through the minimization, over all $K$ speaker classes, of statistical divergences~\cite{basu2011} between the (hypothetical) probability distribution ($g$) of feature vectors from the test utterance and the probability distribution $f_k$ of the feature vectors corresponding to the $k$-th speaker class, $k=1,2,\ldots,K$. The motivation for this approach is the observation that, for one such measure, namely the Likelihood Disparity, the proposed approach becomes equivalent to the highly successful maximum likelihood classification rule based on Gaussian Mixture Models for speaker classes~\cite{reynolds1995} with Mel Frequency Cepstral Coefficients (MFCCs) as features. The approach is made more robust to the possible presence of outlying observations through the use of robustified versions of the associated estimators. Three different divergence measures are considered in this work, and it is established empirically, with the help of two speech corpora, that the proposed method outperforms the baseline method of Reynolds when MFCCs are used as features, both in isolation and in combination with delta MFCC features (Section~\ref{sec:feature}). Moreover, its performance is enhanced significantly in conjunction with the following two-pronged approach, which was shown earlier~\cite{pal2014} to improve the classification accuracy of the basic MFCC-GMM speaker recognition system of Reynolds:
\begin{itemize}
\item \textit{Incorporation of the individual correlation structures of the feature sets into the model for each speaker}: This is a significant aspect of the speaker models that Reynolds ignored by assuming the MFCCs to be independent; in fact, this has given rise to the misconception that MFCCs are uncorrelated. Our objective is achieved by the simple device of the Principal Component Transformation (PCT)~\cite{rao2001}. This is a linear transformation derived from the covariance matrix of the feature vectors obtained from the training utterances of a given speaker, and it is applied to the feature vectors of the corresponding speaker to make the individual coefficients uncorrelated. Due to differences in the correlation structures, these transformations differ from speaker to speaker. The GMMs are fitted on the feature vectors transformed by the principal component transformations rather than on the original features. For testing, to determine the likelihood values with respect to a given target speaker model, the feature vectors computed from the test utterance are rotated by the principal component transformation corresponding to that speaker.
\item \textit{Combination of different classifiers based on the MFCC-GMM model: }
Different classifiers are built by varying some of the parameters of the model. The performance of these classifiers in terms of classification accuracy also varies to some extent. By combining the decisions of these classifiers in a suitable way, an aggregate classifier is built whose performance is better than any of the constituent classifiers.
\end{itemize}
The application of Principal Component Analysis (PCA) is certainly not new in the domain of speaker recognition, though the primary aim has been dimensionality reduction~\cite{chien2004,hanilci2009,seo2001,suri2013,vijendra2013,zhang2003} for improving performance. The novelty of the approach used here (proposed by Pal \textsl{et al}.~\cite{pal2014}) lies in the fact that the principle underlying PCA is used to make the features uncorrelated, without trying to reduce the size of the data set. To emphasize this, we refer to our implementation as the Principal Component Transformation (PCT) and not PCA. Moreover, another unique feature of our approach is as follows. We compute the PCT for each speaker on the training utterances and store them. The GMM for a speaker is estimated on the feature vectors transformed by its PCT. For testing, unlike what has been reported in other work, the MFCCs computed from the test utterance are rotated by the PCT of the target speaker in order to determine the likelihood values with respect to that speaker's model, and not by the PCT determined from the test signal itself. The motivation is that if the test signal comes from the target speaker then, when transformed by the corresponding PCT, it will match the model better.
The principle of combining or aggregating classifiers for improved accuracy has been used successfully in the past for speaker recognition, for example by Besacier and Bonastre~\cite{besacier2000}, Altin\c{c}ay and Demirekler~\cite{altincay2003}, Hanil\c{c}i and Erta\c{s}~\cite{hanilci2009}, and Trabelsi and Ben Ayed~\cite{trabelsi2013}. In the approach proposed in this work, different types of classifiers are not combined. Rather, a few GMM-based classifiers are generated and their decisions are combined. This is somewhat similar to the principle of \textit{Bagging}~\cite{breiman1996} or \textit{Random Forests}~\cite{breiman2001}.
The proposed approach has been implemented on the benchmark speech corpus, NTIMIT, as well as a relatively new bilingual speech corpus NISIS~\cite{pal2012}, and noticeable improvement in recognition performance is observed in both cases, when Mel Frequency Cepstral Coefficients (MFCCs) are used as features, both in isolation and in combination with delta MFCC features.
The paper is organized as follows. The minimum distance (or divergence) approach is introduced in the following section, together with a few divergence measures. The proposed approach is presented in Section 3, which also outlines the motivation for it. Section 4 gives a brief description of the speech corpora used, namely, NISIS and NTIMIT, and contains results obtained by applying the proposed approach on them, which clearly establish its effectiveness. Section 5 summarizes the contribution of this work and proposes future directions for research in this area.
\section{Divergence Measures}
Let $f$ and $g$ be two probability density functions. Let the Pearson's residual~\cite{lindsay1994} for $g$, relative to $f$, at the value $\bm{x}$ be defined as $$ \delta(\bm{x}) = \frac{g(\bm{x})}{f(\bm{x})} - 1. $$
The residual is equal to zero at such values where the densities $g$ and $f$ are identical. We will consider divergences between $g$ and $f$ defined by the general form
\begin{equation}
\label{eq:general}
\rho_C(g,f) = \int_{\bm{x}} C(\delta(\bm{x}))\,f(\bm{x})\,{d\bm{x}},
\end{equation}
where $C$ is a thrice differentiable, strictly convex function on $[-1, \infty)$, satisfying $C(0) = 0$.
Specific forms of the function $C$ generate different divergence measures. In particular, the likelihood disparity (LD) is generated when $C(\delta) \; = \; (\delta+1) \, \log(\delta+1) \; - \; \delta$. Thus, $$ LD(g,f) = \int_{\bm{x}} [(\delta(\bm{x})+1) \, \log(\delta(\bm{x})+1) \; - \; \delta(\bm{x})] \; f(\bm{x}) \, d\bm{x} $$ which ultimately reduces upon simplification to
\begin{equation}
LD(g,f) = \int_{\bm{x}} \log(\delta(\bm{x})+1) \, dG = \int_{\bm{x}} \log(g(\bm{x}))\, dG \; - \int_{\bm{x}} \log(f(\bm{x}))\,dG,
\end{equation}
where $G$ is the distribution function corresponding to $g$.
For the Hellinger distance (HD), since $ C(\delta) = 2(\sqrt{\delta+1}-1)^2 $, we have $$ HD(g,f) = 2\int_{\bm{x}} \big(\sqrt{\frac{g(\bm{x})}{f(\bm{x})}} - 1\big)^2 f(\bm{x}) \, d\bm{x}, $$ which can be expressed (upto an additive constant independent of $g$ and $f$) as
\begin{equation}
\label{eq:3}
HD(g,f) = -4\int_{\bm{x}} \frac{1}{\sqrt{\delta(\bm{x}) + 1}} \, dG.
\end{equation}
For Pearson's chi-square (PCS) divergence, $ C(\delta) = \delta^2/2 $, so $$ PCS(g,f) = \frac{1}{2}\int_{\bm{x}} \big(\frac{g(\bm{x})}{f(\bm{x})} - 1\big)^2 f(\bm{x}) \, d\bm{x}, $$ which simplifies (upto an additive constant independent of $g$ and $f$) to
\begin{equation}
\label{eq:4}
PCS(g, f) = \frac{1}{2}\int_{\bm{x}} \big(\delta(\bm{x})+1\big) \, dG.
\end{equation}
The divergences within the general class described in (\ref{eq:general}) have been called disparities~\cite{basu2011,lindsay1994}. The LD, HD and the PCS denote three prominent members of this class.
\subsection{Minimum Distance Estimation}
Let $X_1, X_2,\ldots,X_n$ represent a random sample from a distribution $G$ having a probability density function
$g$ with respect to the Lebesgue measure. Let $\hat{g}_n$
represent a density estimator of $g$ based on the random sample. Let the parametric model family $\cal F$, which models
the true data-generating distribution $G$, be defined as ${\cal F}=\{F_\theta:\theta \in \Theta \subseteq I\!\!R^p\}$, where $\Theta$ is the parameter space. Let $\cal G$ denote the class of all distributions having densities with respect to the
Lebesgue measure, this class being assumed to be convex. It is further assumed that both the data-generating distribution $G$ and the model family
$\cal F$ belong to $\cal G$. Let $g$ and $f_{\theta}$ denote the probability density functions corresponding to $G$ and $F_{\theta}$. Note that $\theta$ may represent a continuous parameter as in usual parametric inference problems of statistics, or it may be discrete-valued, if it denotes the class label in a classification problem like speaker recognition.
The minimum distance estimation approach for estimating the parameter $\theta$ involves determining the element of the model family which provides the closest match to the data in terms of the distance (more generally, divergence) under consideration.
That is, the minimum distance estimator $\hat{\theta}$ of $\theta$ based on the divergence $\rho_C$ is defined by the relation
$$\rho_C(\hat{g}_n,f_{\hat{\theta}})=\min_{\theta \in \Theta} \rho_C(\hat{g}_n,f_{\theta}).$$
When we use the likelihood disparity (LD) to assess the closeness between the data and the model densities, we determine the element $f_\theta$ which is closest to $g$ in terms of the likelihood disparity. In this case the procedure, as seen in Equation (2), becomes equivalent to choosing the element $f_\theta$ which maximizes $\int_{\bm{x}} \log(f_\theta(\bm{x}))\,dG(\bm{x})$. As $g$ (and the corresponding distribution function $G$) is unknown, we need to optimize a sample-based version of the objective function. While in general this requires the construction of a kernel density estimator $\hat{g}$ (or an alternative density estimator), in the case of the likelihood disparity it is obtained by simply replacing the differential $dG$ with $dG_n$, where $G_n$ is the empirical distribution function. The procedure based on the minimization of the objective function in Equation (2) then further simplifies to the maximization of
$$\frac{1}{n} \sum_{i=1}^n \log f_\theta(X_i)$$
which is equivalent to the maximization of the log likelihood.
The above demonstrates a simple fact, well-known in the density-based minimum distance literature and in information theory, but not well-perceived by most scientists, including many statisticians: the maximization of the log-likelihood is equivalently a minimum distance procedure. This provides our basic motivation in this paper. Although we base our numerical work on the three divergences considered in the previous section, our primary intent is to study the general class of minimum distance procedures in the speaker-recognition context, such that the maximum likelihood procedure is a special case of our approach. Many of the other divergences within the class generated by
Equation (\ref{eq:general}) also have equivalent objective functions that are to be maximized to obtain the solution and have simple interpretations.
However, in one respect the likelihood disparity is unique. It is the only divergence in this class for which the sample-based version of the objective function may be created by simple use of the empirical distribution, with no other nonparametric density estimation required. Observe that in both Equations (\ref{eq:3}) and (\ref{eq:4}), the integrand involves $\delta(\bm{x})$, and therefore a density estimate of $g$ is required even after replacing $dG$ by $dG_n$.
\subsection{Robustified Minimum Distance Estimators}
When the divergence $\rho_C(\hat{g}_n, f_\theta)$ is differentiable with respect to $\theta$, the minimum distance estimator $\hat{\theta}$ of $\theta$ based on the
divergence $\rho_C$ is obtained by solving the estimating equation
\begin{equation}
-\nabla \rho_C(\hat{g}_n,f_{\theta})=\int_x A(\delta(x))\nabla f_\theta (x) dx=0,
\end{equation}
where the function $A(\delta)$ is defined as $$A(\delta)=C'(\delta)(\delta+1)-C(\delta).$$
If the function $A(\delta)$ satisfies
$A(0) = 0$ and $A'(0) = 1$ then it is termed the Residual Adjustment Function (RAF) of the divergence. Here $\nabla$ denotes the gradient operator with respect to $\theta$, and $C'(\cdot)$ and $A'(\cdot)$ represent the respective derivatives of the functions $C$ and $A$ with respect to their arguments.
Since the estimating equations of the different minimum distance estimators differ only in the form of the residual adjustment function $A(\delta)$, it follows that the properties of these estimators must be determined by the form of the corresponding function $A(\delta)$. Since $A'(\delta) = (\delta + 1)C''(\delta)$ and $C(\cdot)$ is a strictly convex function on $[-1, \infty)$, we have $A'(\delta) > 0$ for $\delta > -1$; hence $A(\cdot)$ is a strictly increasing function on $[-1,\infty)$.
Geometrically, the RAF is the most important tool to demonstrate the general behaviour or the heuristic robustness properties of the minimum distance estimators corresponding to the class defined in (\ref{eq:general}).
A dampened response to increasing positive $\delta$ will ensure that the RAF shrinks
the effect of large outliers as $\delta$ increases, thus providing a strategy for making the corresponding minimum distance estimator robust to outliers.
For the likelihood disparity (LD), $C(\delta)$ is unbounded for large positive values of the residual $\delta$, and the corresponding estimating equation is given by $$ -\nabla\: LD(g,f_\theta) = \int_{\bm{x}} \delta\nabla f_\theta = 0. $$ So the residual adjustment function (RAF) for the LD, $A_{LD}(\delta) = \delta$, increases linearly in $\delta$. Thus, to dampen the effect of outliers, a modified $A(\delta)$ function may be used, defined as
\begin{equation}
A(\delta) =
\begin{cases}
0 &\quad\text{for } \; \delta \in [-1, \alpha] \cup [\alpha^*, \infty); \\
\delta &\quad\text{for } \; \delta \in (\alpha, \alpha^*).
\end{cases}
\end{equation}
This eliminates the effect of large $\delta$ residuals beyond the range $(\alpha, \alpha^*)$. This proposal is in the spirit of the trimmed mean.
The $C(\delta)$ function for the modified LD (MLD) reduces to
\begin{equation}
C_{MLD}(\delta) =
\begin{cases}
0 &\quad\text{for } \; \delta \in [-1, \alpha] \cup [\alpha^*, \infty); \\
(\delta+1)\log(\delta+1) - \delta &\quad\text{for } \; \delta \in (\alpha, \alpha^*).
\end{cases}
\end{equation}
Similarly, the RAF for the Hellinger distance is $ A_{HD}(\delta) = 2(\sqrt{\delta+1} - 1) $, which too is unbounded for large values of $\delta$, in spite of its local robustness properties. To obtain a robustified estimator, the RAF is modified to
\begin{equation}
A(\delta) =
\begin{cases}
0 &\quad\text{for } \; \delta \in [-1, \alpha] \cup [\alpha^*, \infty); \\
2(\sqrt{\delta+1} - 1) &\quad\text{for } \; \delta \in (\alpha, \alpha^*),
\end{cases}
\end{equation}
so that the $C(\delta)$ function for the modified HD (MHD) becomes
\begin{equation}
C_{MHD}(\delta) =
\begin{cases}
0 &\quad\text{for } \; \delta \in [-1, \alpha] \cup [\alpha^*, \infty); \\
2(\sqrt{\delta+1}-1)^2 &\quad\text{for } \; \delta \in (\alpha, \alpha^*).
\end{cases}
\end{equation}
For Pearson's chi-square (PCS) divergence, $A(\delta) = \delta + \frac{\delta^2}{2}$ is again unbounded for large $\delta$, so the RAF is modified to
\begin{equation}
A(\delta) =
\begin{cases}
0 &\quad\text{for } \; \delta \in [-1, \alpha] \cup [\alpha^*, \infty); \\
\delta + \frac{\delta^2}{2} &\quad\text{for } \; \delta \in (\alpha, \alpha^*),
\end{cases}
\end{equation}
so that the $C(\delta)$ function for the modified PCS (MPCS) becomes
\begin{equation}
C_{MPCS}(\delta) =
\begin{cases}
0 &\quad\text{for } \; \delta \in [-1, \alpha] \cup [\alpha^*, \infty); \\
\frac{\delta^2}{2} &\quad\text{for } \; \delta \in (\alpha, \alpha^*).
\end{cases}
\end{equation}
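The truncated $C(\cdot)$ functions in Equations (7), (9) and (11) are straightforward to implement. The following is a minimal sketch assuming NumPy, with $\alpha$ and $\alpha^*$ as tuning parameters:
\begin{verbatim}
import numpy as np

def _inside(delta, alpha, alpha_star):
    # indicator of the untrimmed residual range (alpha, alpha*)
    return (delta > alpha) & (delta < alpha_star)

def c_mld(delta, alpha, alpha_star):
    d = np.maximum(delta, -1 + 1e-12)  # Pearson residuals satisfy delta >= -1
    val = (d + 1) * np.log(d + 1) - d
    return np.where(_inside(delta, alpha, alpha_star), val, 0.0)

def c_mhd(delta, alpha, alpha_star):
    val = 2 * (np.sqrt(np.maximum(delta, -1) + 1) - 1) ** 2
    return np.where(_inside(delta, alpha, alpha_star), val, 0.0)

def c_mpcs(delta, alpha, alpha_star):
    return np.where(_inside(delta, alpha, alpha_star), delta ** 2 / 2, 0.0)
\end{verbatim}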
In Figure 1, we have presented the RAFs of our three candidate divergences, the LD, the HD and the PCS. Notice that they have three different forms. The RAF of the LD is linear, that of the HD is concave, while the PCS has a convex RAF. We have chosen our three candidates as representatives of these three types, so that we have a wide description of the divergences of the different types.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.64]{RAF_Plot}
\caption{The Residual Adjustment Functions (RAFs) of the LD, HD and PCS divergences}
\end{center}
\end{figure}
\textbf{Remark 1}: In the above proposals, the approach to robustness is not through the intrinsic behaviour of the divergences, but through the trimming of highly discordant residuals. For small-to-moderate residuals, the RAFs of these divergences are not widely different, as all of them relate to the treatment of residuals which do not exhibit extreme departures from the model. However, these small deviations often produce substantial differences in the behavior of the corresponding estimators. We hope to find out how the small departures exhibited by these divergences are reflected in their classification performance.
\textbf{Remark 2}: In this paper, our minimization of the divergence will be over a discrete set corresponding to the indices of the existing speakers in the database against which the new utterance is matched. Thus we will not directly use the estimating equation in (5) to find the minimizer. In fact, if we restricted ourselves to just the three divergences considered here, there would be no reason to use the residual adjustment function. However, these divergences are only representatives of a bigger class, and in general the properties of the minimum distance estimators are best understood through the residual adjustment function. Reconstructing the function $C(\cdot)$ from the residual adjustment function $A(\cdot)$ requires solving an appropriate differential equation. When this reconstruction does not lead to a closed form of $C(\cdot)$, one has to use the form of the residual adjustment function directly for the minimizations considered in this paper.
\textbf{Remark 3}: Any divergence of the form described in Equation (1) can be expressed in terms of several distinct $C(\delta)$ functions. While they lead to the same divergence when integrated over the entire space, when the range is truncated by eliminating very large and very small residuals, the role of the $C(\cdot)$ function becomes important. In this section we have modified the likelihood disparity, the Hellinger distance and the Pearson's chi-square by truncating the $C(\cdot)$ functions having the form
$$
C_{LD}(\delta) = (\delta +1) \log (\delta + 1) - \delta,\;\;
C_{HD}(\delta) = 2(\sqrt{\delta +1} - 1)^2,\;\;
C_{PCS}(\delta) = \frac{\delta^2}{2}.
$$
One could also modify the versions presented in Equations (2), (3) and (4) in a similar spirit and obtain truncated solutions of the minimization problem under study.
\section{The Proposed Approach}
\label{sec:proposed}
The probability distribution $g$ of the feature vectors for the (unknown) speaker of the test utterance is unknown. However, it can be estimated by $\hat{g}$, computed from the test utterance using the feature vectors $\bm{x}_i$ corresponding to a number of overlapping short-duration segments into which the utterance is divided. The proposed approach aims to identify $k^*$ for which $f_{k^*}$ is \textit{most similar} to $g$ in the \textit{minimum distance} sense, where $f_k,\:k=1,2,\ldots,K$, are the probability models for the $K$ speaker classes. In other words, the proposed approach infers that speaker $k^*$ has produced the test speech if $$ k^* = \argmin_k \rho_C(g,f_k), $$ where $\rho_C(\cdot,\cdot)$ is a statistical divergence measure between two probability density functions, for a given choice of the function $C$.
If the Pearson's residual for $g$ relative to $f_k$ at the value $\bm{x}$ be defined by $$ \delta_k(\bm{x}) = \frac{g(\bm{x})}{f_k(\bm{x})} - 1, $$ then the divergence between $g$ and $f_k$ is given by $$ \rho_C(g, f_k) = \int_{\bm{x}} C(\delta_k(\bm{x}))\,f_k(\bm{x})\,{d\bm{x}}. $$
Let $\bm{X}_1,\bm{X}_2, \ldots, \bm{X}_M$ be a random sample of size $M$ from $g$ and let us estimate the corresponding distribution function $G$ by the empirical distribution function $$ G_n(\bm{x}) = \frac{1}{M} \sum\limits_{i = 1}^{M} \mathds{1}_{(X_i \, \leq \; \bm{x})} $$ based on the data $\bm{x}_i, \; i = 1,\ldots,M$, where $\mathds{1}_{(A)}$ is the indicator of the set $A$.
\subsection{Modified Minimum Distance Estimation}
As noted earlier, specific forms of the function $C(\cdot)$ generate different divergence measures. In the following, we describe the identification of the speaker of the test utterance based on the three divergences considered in Section 2.
\subsubsection{Estimation based on the Likelihood Disparity}
The likelihood disparity (LD) between $g$ and $f_k$ is (up to an additive constant)
\begin{equation}
\label{eq:LD}
LD(g,f_k) = \int_{\bm{x}} \log(\delta_k(\bm{x})+1) \, dG = \int_{\bm{x}} \log(g(\bm{x}))\, dG \; - \int_{\bm{x}} \log(f_k(\bm{x}))\,dG.
\end{equation}
Under the proposed approach, the speaker of a test utterance is identified by minimizing the likelihood disparity between $g$ and the $f_k$'s, that is, as speaker number $k^*$ if $$ k^* = \argmin_k LD(g,f_k) = \argmax_k \int_{\bm{x}} \log(f_k(\bm{x}))\,dG, $$ where the second equality holds because the first term in the expression for $LD(g,f_k)$ given in Equation (\ref{eq:LD}) does not involve $f_k$. Since $\int_{\bm{x}} \log(f_k(\bm{x}))\,dG_n$ is an estimator of $\int_{\bm{x}} \log(f_k(\bm{x}))\,dG$, we have
\begin{equation}
\label{eq:loglik}
\int_{\bm{x}} \log(f_k(\bm{x}))\,dG \approx \int_{\bm{x}} \log(f_k(\bm{x}))\,dG_n = \frac{1}{M} \sum\limits_{i = 1}^M \log(f_k(\bm{x_i})).
\end{equation}
Therefore, we will choose the index by maximizing the log-likelihood, which gives
\begin{equation}
\label{mle}
\hat{k}^* = \argmax_k \sum\limits_{i = 1}^M \log(f_k(\bm{x_i})) = \argmax_k \prod\limits_{i = 1}^M f_k(\bm{x_i}).
\end{equation}
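Operationally, with each $f_k$ fitted as a Gaussian mixture model, Equation (\ref{mle}) amounts to scoring the test frames under every speaker model and picking the largest total log-likelihood. A minimal sketch, assuming scikit-learn and a hypothetical dictionary \texttt{train\_features} mapping speaker labels to MFCC matrices (one row per frame):
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(train_features, n_components=32):
    # one GMM per speaker, fitted on that speaker's MFCC frames
    return {spk: GaussianMixture(n_components, covariance_type="diag").fit(X)
            for spk, X in train_features.items()}

def identify(models, X_test):
    # maximize the summed log-likelihood over the speaker models
    scores = {spk: m.score_samples(X_test).sum() for spk, m in models.items()}
    return max(scores, key=scores.get)
\end{verbatim}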
\subsubsection{Estimation based on the Hellinger Distance}
Using the form described in Equation (\ref{eq:3}), the Hellinger distance (HD) between $g$ and $f_k$ is the same (up to an additive constant) as
\begin{equation}
\label{eq:HD}
HD(g, f_k) = -4\int_{\bm{x}} \frac{1}{\sqrt{\delta_k(\bm{x}) + 1}} \, dG.
\end{equation}
By the same reasoning as before, the speaker of the test utterance is determined to be speaker number $k^*$, by minimizing the empirical version of the Hellinger distance between $g$ and $f_k$'s, that is,
\begin{equation}
\hat{k}^* = \argmax_k \sum\limits_{i = 1}^M \frac{1}{\sqrt{\delta_k(\bm{x_i})+1}}.
\end{equation}
We have dropped the factor of $1/M$ as it has no role in the maximization. However, in this case we have to substitute a density estimate of $g$ into the expression for $\delta_k$. We do this using a Gaussian mixture model.
\subsubsection{Estimation based on the Pearson Chi-square Distance}
Using the form described in Equation~(\ref{eq:4}), the Pearson's chi-square between $g$ and $f_k$ is the same as (up to an additive constant)
\begin{equation}
\label{eq:PCS}
PCS(g, f_k) = \frac{1}{2}\int_{\bm{x}} \big(\delta_k(\bm{x})+1\big) \, dG.
\end{equation}
Thus, as before, speaker number $k^*$ is identified as having produced the test utterance if
\begin{equation}
\hat{k}^* = \argmin_k \sum\limits_{i = 1}^M \big(\delta_k(\bm{x_i})+1\big).
\end{equation}
For each of the three divergences considered in Sections 3.1.1-3.1.3, we trim the empirical versions in the spirit of Section 2.2. Our modified objective functions for the three divergences (LD, HD and PCS) are, respectively,
$$\sum_{i \in B} \log f_k(\bm{x}_i), ~~ \sum_{i \in B} \frac{1}{\sqrt{\delta_k(\bm{x}_i)+1}}, ~{\rm and}~ \sum_{i \in B} (\delta_k(\bm{x}_i)+1),$$
where the set $B$ may be defined as $B = \{i \,|\, \delta_k(\bm{x}_i) \in (\alpha, \alpha^*)\}$; the set $B$ also depends on $k$, but we keep this dependence implicit. In our experimentation, we varied both $\alpha$ and $\alpha^*$ in order to control the effect of both outliers and inliers, and chose the pair that led to the maximum speaker identification accuracy.
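In implementation terms, the residuals $\delta_k(\bm{x}_i)$ can be obtained from the fitted densities and the trimming set $B$ applied to whichever objective is in use. A small sketch, assuming $\hat{g}$ and $f_k$ are fitted \texttt{GaussianMixture} models as above:
\begin{verbatim}
import numpy as np

def residuals(g_hat, f_k, X):
    # delta_k(x_i) = g(x_i)/f_k(x_i) - 1, computed on the log scale
    return np.exp(g_hat.score_samples(X) - f_k.score_samples(X)) - 1.0

def trimmed_objectives(f_k, X, delta, alpha, alpha_star):
    B = (delta > alpha) & (delta < alpha_star)   # untrimmed frames
    ld = f_k.score_samples(X)[B].sum()           # maximize over k
    hd = (1.0 / np.sqrt(delta[B] + 1.0)).sum()   # maximize over k
    pcs = (delta[B] + 1.0).sum()                 # minimize over k
    return ld, hd, pcs
\end{verbatim}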
\subsection{Minimum Rescaled Modified Distance Estimation}
In our implementation of the above proposal, we chose $\alpha$ and $\alpha^*$ not as absolutely fixed values, but as values which will provide a fixed level of trimming (like 10\% or 20\%). However, on account of the very high dimensionality of the data and the availability of a relatively small number of data points for each test utterance, the estimated densities are often very spiky, leading to very high estimated densities at the observed data points. This, in turn, often leads to very high Pearson residuals at such observations. Since the choice of the tuning parameters is related to the trimming of a fixed proportion of observations, many of the untrimmed observations may still be associated with very high Pearson residuals, which makes the estimation unreliable. As a result, $\delta$ becomes very large at a majority of the sample points of the test utterances, which impacts heavily on the divergence measures.
From~(\ref{eq:LD}) we see that $\delta_k(\bm{x}_i)$, $i = 1,\ldots,M$, enter the expression of the LD only on a logarithmic scale. In fact, Equation (13) shows that the final objective function in the case of the empirical version of the likelihood disparity does not directly depend on the values of the Pearson residuals at all. Thus, although the $\delta_k(\bm{x}_i)$ values are large, the LD gives quite sensible divergence values. But in the case of the HD as given in Eq.~(\ref{eq:HD}) and the PCS as given in Eq.~(\ref{eq:PCS}), we find that the divergence values are greatly affected by the large $\delta_k(\bm{x}_i)$ values for the majority of the $i$'s. Thus, in order to reduce the impact of large $\delta$ values on the HD and PCS, we propose a scaled version of the residual $\delta$ as follows:
\begin{equation}
\delta^* = \mbox{sign}(\delta) \; |\delta|^\beta
\end{equation}
where
$$
\mbox{sign}(\delta) =
\begin{cases}
1 &\quad\text{for } \; \delta \geq 0, \\
-1 &\quad\text{for } \; \delta < 0 ,
\end{cases}
$$
and $\beta$ is a positive scaling parameter which can be used to control the impact of $\delta$.
For a value of $\beta$ significantly smaller than 1, $\delta^*$ is scaled down to a much smaller value in magnitude compared to $\delta$. With this modification, then, our relevant objective functions for the LD, HD and PCS are
$$\sum_{i \in B} \log f_k(\bm{x}_i)
, ~~ \sum_{i \in B} \frac{1}{\sqrt{\delta^*_k(\bm{x}_i)+1}}, ~{\rm and}~ \sum_{i \in B} (\delta^*_k(\bm{x}_i)+1).$$
Notice that the objective function for LD remains the same as described in Section 3.1, but the objective functions for the HD and PCS are the same only when $\beta = 1$.
We will refer to the estimators obtained by minimizing the rescaled, modified objective functions as the Minimum Rescaled Modified Distance Estimators (MRMDEs) of type I. Only in the case of the likelihood disparity is the rescaling absent.
\subsection{Minimum Rescaled Modified Distance Estimators (MRMDEs) of Type II}
In the previous subsection we have described the construction of the MRMDEs of type I. In Remark 3 we have mentioned that the same divergence may be constructed by several distinct $C(\cdot)$ functions. While they provide identical results when integrated over the entire space, the modified versions corresponding to the different $C(\cdot)$ functions are necessarily different, although the differences are often small.
Note that
$$\int C(\delta_k(\bm{x})) f_k(\bm{x}) \, d\bm{x}
= \int \frac{C(\delta_k(\bm{x}))}{\delta_k(\bm{x})+1}
\, dG(\bm{x}),$$
and using the same principles as in Sections 3.1 and 3.2,
we propose the minimization of the objective function
$$\sum_{i \in B} \frac{C(\delta_k^*(\bm{x}_i))}{\delta_k^*(\bm{x}_i)+1}$$
for the evaluation of the MRMDEs of Type II. Here the relevant $C(\cdot)$
functions corresponding to the LD, HD and PCS are as defined in Equations
(7), (9) and (11). Note that in this case the rescaling has to be applied to all
three divergences, and not just to the HD and PCS.
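The following sketch illustrates the Type II objective. Since Equations (7), (9) and (11) are not reproduced in this section, the $C(\cdot)$ forms below are the standard residual adjustment functions from the minimum distance literature and should be read as stand-ins for the exact definitions; the trimming is applied here to the rescaled residuals, which is one possible reading:
\begin{verbatim}
import numpy as np

# standard C(.) functions; stand-ins for Equations (7), (9) and (11)
C_FUNCS = {
    "LD":  lambda d: (d + 1.0) * np.log(d + 1.0) - d,
    "HD":  lambda d: 2.0 * (np.sqrt(d + 1.0) - 1.0) ** 2,
    "PCS": lambda d: d ** 2 / 2.0,
}

def type2_objective(delta, beta, lo, hi, which="HD"):
    # rescale the Pearson residuals, trim (assumes lo > -1), then
    # sum C(d*)/(d* + 1) over the retained points
    dstar = np.sign(delta) * np.abs(delta) ** beta
    d = dstar[(dstar > lo) & (dstar < hi)]
    return (C_FUNCS[which](d) / (d + 1.0)).sum()
\end{verbatim}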
\section{The Principal Component Transformation}
The idea of the principal component transformation (PCT) as proposed in an earlier work~\cite{pal2014} has also been used here. Let the PCT matrix of the $k^{\mathrm{th}}$ speaker be $P_k, \; k = 1,\ldots,K$, and let $X_k(d \times M_k)$ be the training feature matrix for the $k^{\mathrm{th}}$ speaker, where $d$ is the dimension of the feature vectors and $M_k$ is the number of feature vectors. In the training phase, we first obtain the transformed feature matrix $X_k^*$ as
\begin{equation}
\label{eq:PCT}
X^*_k = P_k X_k
\end{equation}
and then use it to train $f_k$. Now in the testing phase, we extract the feature matrix from a test utterance represented by $X$, compute the PCT matrix $P$ and obtain the transformed feature matrix $X^*$ as in~(\ref{eq:PCT}). Then we train the model $g$ using $X^*$.
Let us define $f_k^*$ by $$ f_k^*(\bm{x}) = f_k(P_k \bm{x}) $$ and $g^*$ by $$ g^*(\bm{x}) = g(P\bm{x}) . $$ It is easy to check that $f_k^*, \; k = 1,\ldots,K$, and $g^*$ are densities, as the $P_k$'s and $P$ are orthogonal matrices, so that the corresponding transformations have unit Jacobian determinant in absolute value. Now, we can use the $f^*_k$'s as our true speaker models and $g^*$ as the model obtained from the test utterance, and obtain the intended speaker following the minimum distance based approach described previously. In particular, for the LD, we get the new modified equation from~(\ref{eq:loglik}) as
\begin{equation}
\hat{k}^* = \argmax_k \sum\limits_{i = 1}^{M}\log(f^*_k(\bm{x}_i)) = \argmax_k \sum\limits_{i = 1}^{M}\log(f_k(P_k \bm{x}_i))
\end{equation}
which is the same as the PCT-based approach proposed in our previous work~\cite{pal2014}.
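A rough sketch of the training-side computation in~(\ref{eq:PCT}) follows; the construction of $P_k$ as the orthogonal matrix of eigenvectors of the sample covariance is our assumption here, following the usual PCA convention rather than a definition given in this section:
\begin{verbatim}
import numpy as np

def pct_matrix(X):
    # X: (d, M) training feature matrix for one speaker.  The rows of
    # the returned P are eigenvectors of the sample covariance
    # (assumption); P is orthogonal, so x -> P x maps densities to
    # densities, as used for f_k^* and g^* above.
    _, eigvecs = np.linalg.eigh(np.cov(X))
    return eigvecs.T

# hypothetical training step for speaker k:
#   P_k = pct_matrix(X_k); fit the GMM f_k on the columns of P_k @ X_k
\end{verbatim}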
\setlength{\unitlength}{1in}
\begin{figure}[htb!]
\begin{center}
\framebox(6.2,1.6){
\begin{tikzpicture}[node distance=2cm]
\node (step1) [item_elp, align=center] {Training \\ Utterance};
\node (step2) [item_elp, align=center, xshift=3.6cm] {MFCC \\ Vectors};
\node (step2_0) [item_elp, align=center, xshift=7.2cm] {Computation \\ of PCT};
\node (step2_1) [item_elp, align=center, xshift=11.5cm] {PC-Transformed \\ MFCC Vectors};
\node (step3) [item_elp, align=center, yshift=-2cm, xshift=7cm] {Estimation of \\ Speaker GMM($f_k$)};
\node (step4) [item_rec, align=center, xshift=12.1cm,yshift=-2cm] {PCT, GMM database \\ for speaker};
\draw [arrow_t] (step1) -- (step2);
\draw [arrow_t] (step2) -- (step3);
\draw [arrow_t] (step2) -- (step2_0);
\draw [arrow_t] (step2_0) -- (step2_1);
\draw [arrow_t] (step3) -- (step4);
\draw [arrow_t] (step2_0) -- (step4);
\draw [arrow_t] (step2_1) -- (step3);
\end{tikzpicture}
}
\small(a) Training Module
\vspace{3mm}
\framebox(6.2,1.8){
\begin{tikzpicture}[node distance=2cm]
\node (step2_1) [item_rec, align=center, xshift=12.1cm] {PCT, GMM database \\ for Speaker};
\node (step1) [item_elp, align=center, yshift=-1cm] {Test \\ Utterance};
\node (step2_0) [item_elp, align=center, xshift=3.6cm, yshift=-1cm] {MFCC \\ Vectors};
\node (step3) [item_elp, align=center, xshift=7.8cm, yshift=-1cm] {PCT-transformed \\ MFCC Vectors};
\node (step4) [item_elp, align=center, yshift=-3cm, xshift=7cm] {Estimation of \\ Test Utterance GMM};
\node (step5) [item_rec, align=center, xshift=12.4cm,yshift=-3cm] {Divergence \\ Measure};
\draw [arrow_t] (step1) -- (step2_0);
\draw [arrow_t] (step2_0) -- (step3);
\draw [arrow_t] (step2_1) -- node[anchor=south] {PCT} (step3);
\draw [arrow_t] (step3) -- (step4);
\draw [arrow_t] (step2_1) -- node[anchor=west] {GMM} (step5);
\draw [arrow_t] (step4) -- node[anchor=south] {GMM} (step5);
\end{tikzpicture}
}
\small(b) Test Module
\vspace{3mm}
\framebox(6.2,2.4){
\begin{tikzpicture}[node distance=2cm]
\node (step1_0) [item_rec, align=center] {Classifier no. 1};
\node (step1_1) [item_rec, align=center, yshift=-1.2cm] {Classifier no. 2};
\node (step1_2) [item_rec, align=center, yshift=-2.4cm] {Classifier no. 3};
\node (step1_3) [item_rec, align=center, yshift=-3.6cm] {Classifier no. 4};
\node (step2_0) [item_rec, align=center, xshift = 4cm, yshift = 0.5cm] {Divergence from \\ Speaker Model no. 1};
\node (step2_1) [item_rec, align=center, xshift = 4cm, yshift = -1cm] {Divergence from \\ Speaker Model no. 2};
\node (step2_2) [item_rec1, align=center, xshift = 4cm, yshift = -2.5cm] {:};
\node (step2_3) [item_rec, align=center, xshift = 4cm, yshift = -4cm] {Divergence from \\ Speaker Model no. N};
\node (step3) [item_rec, align=center, xshift=8cm, yshift = -1.75cm] {Minimizer};
\node (step4) [item_rec, align=center, xshift=12cm, yshift = -1.75cm] {Classification};
\draw [arrow_t] (step1_0.east) -- (step2_1.west);
\draw [arrow_t] (step1_0.east) -- (step2_2.west);
\draw [arrow_t] (step1_0.east) -- (step2_3.west);
\draw [arrow_t] (step1_1.east) -- (step2_0.west);
\draw [arrow_t] (step1_1.east) -- (step2_1.west);
\draw [arrow_t] (step1_1.east) -- (step2_2.west);
\draw [arrow_t] (step1_1.east) -- (step2_3.west);
\draw [arrow_t] (step1_2.east) -- (step2_0.west);
\draw [arrow_t] (step1_2.east) -- (step2_1.west);
\draw [arrow_t] (step1_2.east) -- (step2_2.west);
\draw [arrow_t] (step1_2.east) -- (step2_3.west);
\draw [arrow_t] (step1_3.east) -- (step2_0.west);
\draw [arrow_t] (step1_3.east) -- (step2_1.west);
\draw [arrow_t] (step1_3.east) -- (step2_2.west);
\draw [arrow_t] (step1_3.east) -- (step2_3.west);
\draw [arrow_t] (step2_0.east) -- (step3.west);
\draw [arrow_t] (step2_1.east) -- (step3.west);
\draw [arrow_t] (step2_2.east) -- (step3.west);
\draw [arrow_t] (step2_3.east) -- (step3.west);
\draw [arrow_t] (step3.east) -- (step4.west);
\end{tikzpicture}
}
\small(c) Classifier Combination (using 4 classifiers)
\vspace{5mm}
\caption{Flow charts for the three components of the proposed speaker identification method}
\label{fig:flowcharts}
\end{center}
\end{figure}
Flow charts of the different components (training, testing and classifier combination) of the proposed approach are given in Figure~\ref{fig:flowcharts}.
\section{Implementation and Results}
The proposed approach was validated on two speech corpora, whose details are given in the following subsections.
\subsection{ISIS and NISIS: New Speech Corpora}
ISIS (an acronym for Indian Statistical Institute Speech) and NISIS (Noisy ISIS)~\cite{pal2012} are speech corpora, which respectively contain simultaneously-recorded microphone and telephone speech of 105 speakers, over multiple sessions, spontaneous as well as read, in two languages (Bangla and English), recorded in a typical office environment with moderate background noise. They were created in the Indian Statistical Institute, Kolkata, as a part of a project funded by the Department of Information Technology, Ministry of Communications and Information Technology, Government of India, during 2004-07. The speakers had Bangla or another Indian language as their mother tongue, and so were non-native English speakers.
Particulars of both corpora are given below:
\begin{itemize}
\item Number of speakers : 105 (53 male + 52 female)
\item Recording environment: moderately quiet computer room
\item Sessions per speaker: 4 (numbered I, II, III and IV)
\item Interval between sessions: 1 week to about 2 months
\item Types of utterances in Bangla and English per session:
\begin{itemize}
\item 10 isolated words (randomly drawn from a specific text corpus, and generally different for all speakers and sessions)
\item answers to 8 questions (these answers included dates, phone numbers, alphabetic sequences, and a few words spoken spontaneously)
\item 12 sentences (first two sentences common to all speakers, the remaining randomly drawn from the text corpus, duration ranging from 3-10 seconds)
\end{itemize}
\end{itemize}
Thus, for each session, there are two sets of recordings per speaker, one each in Bangla and English, containing 21 files each.
\subsection{The Benchmark Telephone Speech Corpus NTIMIT}
NTIMIT~\cite{fisher1993,jankowski1990}, like TIMIT~\cite{fisher1986,garofolo1993} is an acoustic-phonetic speech corpus in English, belonging to the Linguistic Data Consortium (LDC) of the University of Pennsylvania. TIMIT consists of clean microphone recordings of 10 different read sentences (2 \textit{sa}, 3 \textit{si} and 5 \textit{sx} sentences, some of which have rich phonetic variability), uttered by 630 speakers (438 males and 192 females) from eight major dialect regions of the USA. It is characterized by 8-\textit{kHz} bandwidth and lack of intersession variability, acoustic noise, and microphone variability or distortion. These features make TIMIT a benchmark of choice for researchers in several areas of speech processing.
NTIMIT, on the other hand, is the speech from the TIMIT database played through a carbon-button telephone handset and recorded over local and long-distance telephone loops. This provides speech identical to TIMIT, except that it is degraded through carbon-button transduction and actual telephone line conditions. Performance differences between identical experiments on TIMIT and NTIMIT are therefore expected to arise primarily from the degrading effects of telephone transmission. Since the ordinary MFCC-GMM model achieves near-perfect accuracy on TIMIT, further improvement there seems unlikely. Therefore, we have experimented with the NTIMIT database exclusively.
\subsection{Features Used}
\label{sec:feature}
The features used in this work are the widely-used Mel-frequency cepstral coefficients (MFCCs)~\cite{davis1980}, which are coefficients that collectively make up a Mel Frequency Cepstrum (MFC). The latter is a representation of the short-time power spectrum of a sound signal, based on a linear cosine transform of a log-energy spectrum on a nonlinear mel scale of frequency. It exploits auditory principles, as well as the decorrelating property of the cepstrum, and is amenable to compensation for convolution distortion. As such, it has turned out to be one of the most effective feature representations in speech-related recognition tasks~\cite{quatieri2008}. A given speech signal is partitioned into overlapping segments or frames, and MFCCs are computed for each such frame. Based on a bank of $K$ filters, a set of $M$ MFCCs is computed from each frame~\cite{pal2014}.
In addition, the delta Mel-frequency cepstral coefficients~\cite{quatieri2008}, which are nothing but the first-order frame-to-frame differences of the MFCCs, have also been used.
\subsection{Results}
The evaluation of the proposed method has been performed using 10 recordings per speaker in both corpora, organized into two different data sets:
\begin{itemize}
\item Dataset 6:4: consisting of the first 6 utterances for training and remaining 4 for testing
\item Dataset 8:2: consisting of the first 8 utterances for training and remaining 2 for testing
\end{itemize}
In addition, evaluation has been done on two different sets of features:
\begin{itemize}
\item FS-I: 20 MFCCs and 20 delta MFCCs
\item FS-II: 39 MFCCs
\end{itemize}
To implement the ensemble classification principle, a number of competing MFCC-GMM classifiers were generated by varying certain tuning parameters of the generic MFCC-GMM classifier; the values of the parameters tuned (\textit{window size}, \textit{minimum frequency} and \textit{maximum frequency}) are mentioned in the tables.
The accuracy of the aggregated MFCC-GMM classifier is obtained by combining the likelihood scores of the individual classifier components.
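One natural reading of this combination step (an assumption on our part, as the aggregation rule is not spelled out here) is to add the per-classifier log-likelihood scores before taking the decision:
\begin{verbatim}
import numpy as np

def combined_decision(score_matrix):
    # score_matrix[c, k]: log-likelihood score of speaker k under
    # classifier c; combine by summing over classifiers, then decide
    return int(np.argmax(score_matrix.sum(axis=0)))
\end{verbatim}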
The best performance observed on NTIMIT in our earlier work~\cite{pal2014} has been summarized in Table~\ref{tab:results_prev} without the PCT (WOPCT) as well as with PCT (WPCT). These will be used as the baseline for assessing the efficacy of the proposed approach based on the Minimum Rescaled Modified Distance Estimators (MRMDEs), employing all three divergence measures described in Section~\ref{sec:proposed}.
\subsection{Results with NTIMIT}
\label{sec:resntimit}
Table~\ref{tab:ntimit} gives the identification accuracy on NTIMIT with the proposed approach, using all three divergence measures described in Section~\ref{sec:proposed}. From the table it is evident that significant improvement has been achieved with MRMDEs based on all three divergence measures. Moreover, in each case, FS-I, which contains 20 MFCCs and 20 delta MFCCs, gives uniformly better performance than FS-II, consisting of 39 MFCCs only. Overall, the best performance of 56.19\% with the 6:4 dataset and 67.86\% with the 8:2 dataset has been obtained with the LD divergence, using FS-I. These represent an improvement of over 10\% over the baseline performance.
\begin{table}[h]
\caption{Performance of the Baseline MFCC-GMM Speaker Identification system}
\label{tab:results_prev}
\begin{center}
\setlength{\extrarowheight}{1pt}
\begin{tabular}{|c|c|d{2cm}|d{2cm}|d{2cm}|d{2cm}|d{2cm}|}
\hline
\multirow{2}{*}{Corpus} &\multirow{2}{*}{Data set} & \multicolumn{2}{c|}{Individual} & \multicolumn{2}{c|}{Aggregate} \\
\cline{3-6}
&& WOPCT & WPCT & WOPCT & WPCT \\
\hline
\multirow{2}{*}{NTIMIT} &6:4 & 34.96 & 42.26 & 40.36 & 45.99 \\
\cline{2-6}
&8:2 & 42.41 & 52.30 & 49.05 & 55.63 \\ \hline
\multirow{2}{*}{\shortstack{NISIS\\(ES-I)}}&6:4 & 68.50 & 85.50 & 71.50 & 86.50 \\ \cline{2-6}
& 8:2 &76.00&89.00&77.00&91.50 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{sidewaystable}[htb!]
\scriptsize
\begin{center}
{
\caption{Identification accuracy on NTIMIT under the proposed approach}
\label{tab:ntimit}
\setlength{\extrarowheight}{1pt}
\begin{tabular}{|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|}
\hline
\multirow{3}{*}{Dataset} & \multirow{3}{*}{\shortstack{Experi-\\ment}} & \multirow{3}{*}{\shortstack{Window\\Size\\(ms)}} & \multicolumn{4}{c|}{Based on $C_{MLD}(\delta)$} & \multicolumn{4}{c|}{Based on $C_{MHD}(\delta)$} & \multicolumn{4}{c|}{Based on $C_{MPCS}(\delta)$} \\
\cline{4-15}
& & & \multicolumn{2}{c|}{WOPCT} & \multicolumn{2}{c|}{WPCT} & \multicolumn{2}{c|}{WOPCT} & \multicolumn{2}{c|}{WPCT} & \multicolumn{2}{c|}{WOPCT} & \multicolumn{2}{c|}{WPCT} \\
\cline{4-15}
& & & FS-I & FS-II & FS-I & FS-II & FS-I & FS-II & FS-I & FS-II & FS-I & FS-II & FS-I & FS-II \\
\hline
\multirow{3}{*}{6:4} & 1 & 0.020 & \textbf{43.293} & \textbf{40.952} & 46.547 & 45.595 & \textbf{41.507} & \textbf{38.095} & \textbf{45.357} & \textbf{43.373} & 39.246 & \textbf{36.269} & 43.452 & 41.031 \\
\cline{2-15}
& 2 & 0.030 & 42.936 & 39.127 & \textbf{47.142} & \textbf{45.714} & 41.269 & 36.389 & 45.158 & 43.214 & \textbf{39.603} & 35.317 & \textbf{43.650} & \textbf{42.222} \\
\cline{2-15}
& \multicolumn{2}{c|}{Combined} & \multicolumn{2}{c|}{52.540} & \multicolumn{2}{c|}{56.190} & \multicolumn{2}{c|}{49.563} & \multicolumn{2}{c|}{53.730} & \multicolumn{2}{c|}{51.667} & \multicolumn{2}{c|}{53.889} \\
\hline
\multirow{3}{*}{8:2} & 1 & 0.020 & \textbf{56.031} & \textbf{52.381} & 59.523 & \textbf{57.539} & 53.571 & 49.603 & 57.539 & 54.761 & 51.587 & 46.587 & 55.159 & 50.555 \\
\cline{2-15}
& 2 & 0.030 & 56.270 & 49.365 & \textbf{60.079} & \textbf{57.539} & 54.444 & 46.666 & 57.301 & 55.317 & 52.142 & 44.365 & 56.349 & 53.253 \\
\cline{2-15}
& \multicolumn{2}{c|}{Combined} & \multicolumn{2}{c|}{64.524} & \multicolumn{2}{c|}{67.857} & \multicolumn{2}{c|}{61.429} & \multicolumn{2}{c|}{64.206} & \multicolumn{2}{c|}{63.571} & \multicolumn{2}{c|}{66.111} \\
\hline
\end{tabular}
\bigskip \bigskip \bigskip \bigskip
\caption{Identification accuracy on NISIS (ES-I) under the proposed approach}
\label{tab:nisis}
\setlength{\extrarowheight}{1pt}
\begin{tabular}{|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|d{1cm}|}
\hline
\multirow{3}{*}{Dataset} & \multirow{3}{*}{\shortstack{Experi-\\ment}} & \multirow{3}{*}{\shortstack{Min\\Freq\\(Hz)}} & \multirow{3}{*}{\shortstack{Max\\Freq\\(Hz)}} & \multirow{3}{*}{\shortstack{Window\\Size\\(ms)}} & \multicolumn{4}{c|}{Based on $C_{MLD}(\delta)$} & \multicolumn{4}{c|}{Based on $C_{MHD}(\delta)$} & \multicolumn{4}{c|}{Based on $C_{MPCS}(\delta)$} \\
\cline{6-17}
& & & & & \multicolumn{2}{c|}{FS-I} & \multicolumn{2}{c|}{FS-II} & \multicolumn{2}{c|}{FS-I} & \multicolumn{2}{c|}{FS-II} & \multicolumn{2}{c|}{FS-I} & \multicolumn{2}{c|}{FS-II} \\
\cline{6-17}
& & & & & WOPCT & WPCT & WOPCT & WPCT & WOPCT & WPCT & WOPCT & WPCT & WOPCT & WPCT & WOPCT & WPCT \\
\hline
\multirow{4}{*}{6:4} & 1 & 200 & 4000 & 0.020 & 83.25 & 88 & 81.75 & 85.5 & 82.25 & 86.5 & 78.75 & 83.5 & 79 & 85 & 78.5 & 81.75 \\
\cline{2-17}
& 2 & 200 & 4000 & 0.030 & 83.75 & 86.75 & 79.5 & 85.5 & 82.25 & 84.25 & 76.25 & 83.25 & 80 & 84.25 & 75.5 & 82.25 \\
\cline{2-17}
& 3 & 0 & 5500 & 0.020 & 86.5 & \textbf{89.75} & 82.75 & 85.75 & 84.5 & \textbf{89.5} & 81.75 & 84 & \textbf{83} & \textbf{88.75} & \textbf{82.5} & 84.5 \\
\cline{2-17}
& 4 & 0 & 5500 & 0.030 & \textbf{87.75} & 89 & \textbf{83} & \textbf{87.75} & \textbf{86} & 87.75 & \textbf{82} & \textbf{86} & 82.75 & 87 & 81 & \textbf{84.75} \\
\cline{2-17}
& \multicolumn{4}{c|}{1-4 Combined} & 87.5 & 92 & 84.5 & 88.5 & 86 & 89.5 & 83 & 86.5 & 85.5 & 88.75 & 83.75 & 86.5 \\
\hline
\multirow{4}{*}{8:2} & 1 & 200 & 4000 & 0.020 & 88.5 & 93 & 85 & \textbf{91.5} & 86.5 & 90.5 & 85 & 88.85 & 86 & 91 & 83.5 & 86.5 \\
\cline{2-17}
& 2 & 200 & 4000 & 0.030 & 89.5 & 93 & 85.5 & \textbf{91.5} & 87 & 90.5 & 82.5 & 89 & \textbf{88} & 91 & 82 & 88 \\
\cline{2-17}
& 3 & 0 & 5500 & 0.020 & 90 & \textbf{94.5} & 89 & \textbf{91.5} & 86 & \textbf{92} & 87.5 & 90 & 87.5 & \textbf{91.5} & \textbf{86.5} & 89 \\
\cline{2-17}
& 4 & 0 & 5500 & 0.030 & \textbf{90.5} & 92.5 & \textbf{89.5} & 93 & \textbf{89} & 90.5 & \textbf{88} & \textbf{90.5} & \textbf{88} & 90.5 & 86 & \textbf{91} \\
\cline{2-17}
& \multicolumn{4}{c|}{1-4 Combined} & 90 & 94.5 & 92.5 & 93.5 & 88 & 92.5 & 88 & 92.5 & 89.5 & 91.5 & 90 & 91.5 \\
\hline
\end{tabular}
}
\end{center}
\end{sidewaystable}
\subsection{Results with NISIS}
The best performance observed on NISIS using English recordings from Session I only (referred to as ES-I) in our earlier work~\cite{pal2014} has been summarized in Table~\ref{tab:results_prev} without the PCT (WOPCT) as well as with PCT (WPCT), while Table~\ref{tab:nisis} gives the identification accuracy on it with the proposed approach, using all three divergence measures described in Section~\ref{sec:proposed}. As in the case of NTIMIT, it is seen that significant improvement has been achieved with MRMDEs under each divergence measure. Moreover, as observed earlier with NTIMIT, FS-I gives uniformly better performance than FS-II in each instance. Again, as in Section~\ref{sec:resntimit}, the best overall performance of 92\% with the 6:4 dataset and 94.5\% with the 8:2 dataset has been obtained with the LD divergence. These represent an improvement of about 6\% over the baseline performance.
It is worth noting that the improvement on NISIS is not as dramatic as that on NTIMIT. The explanation is that, the baseline performance on NISIS being quite high to begin with, there is not much scope for improving it further. This may possibly point to another positive feature of the proposed approach, namely, its ability to provide a relatively stronger boost to weaker baseline methods.
\section{Conclusions}
In the usual approach to speaker identification, the probability distribution of the MFCC features for each speaker is modeled using Gaussian Mixture Models. For a test utterance, its MFCC feature vectors are matched with the speaker models using the likelihood scores derived from each model. The test utterance is assigned to the model with the highest likelihood score.
In this work, a novel solution to the speaker identification problem is proposed through minimization of statistical divergences between the probability distribution ($g$) of feature vectors derived from the test utterance and the probability distributions of the feature vectors corresponding to the speaker classes. This approach is made more robust to the presence of outliers, through the use of suitably modified versions of the standard divergence measures. Three such measures were considered -- the likelihood disparity, the Hellinger distance and the Pearson chi-square distance.
It turns out that the proposed approach with the likelihood disparity, when the empirical distribution function is used to estimate $g$, becomes equivalent to maximum likelihood classification with Gaussian Mixture Models (GMMs) for the speaker classes, the usual approach discussed above. The usual approach was used, for example, by Reynolds (1995), yielding excellent results. Significant improvement in classification accuracy is observed under the current approach on the benchmark speech corpus NTIMIT and a new bilingual speech corpus NISIS, with MFCC features, both in isolation and in combination with delta MFCC features.
Further, the ubiquitous principal component transformation, by itself and in conjunction with the principle of classifier combination, improved the performance even further.
\section{Acknowledgement}
The authors gratefully acknowledge the contribution of Ms Disha Chakrabarti and Ms Enakshi Saha to this work.
\section{Introduction}
Inequalities for information measures are widely used in many applications.
In \cite{part1, part1_arxiv}, we investigated tight bounds between the Shannon entropy \cite{shannon} and the $\ell_{\alpha}$-norm, as shown in Theorems \ref{th:extremes} and \ref{th:extremes2} of Section \ref{subsect:prev}.
Using Theorems \ref{th:extremes} and \ref{th:extremes2}, we \cite{part1, part1_arxiv} showed tight bounds between the Shannon entropy and several information measures \cite{renyi, tsallis2, boekee}.
In this study, we extend the previous work \cite{part1, part1_arxiv} from information measures of $n$-ary probability vector to \emph{conditional} information measures of joint probability distributions.
More precisely, we provide the tight bounds on the expectation of the $\ell_{\alpha}$-norm with a fixed conditional Shannon entropy in Theorem \ref{th:cond_extremes}, and vice versa in Theorem \ref{th:cond_extremes2}.
Directly extending Theorem \ref{th:cond_extremes} to Corollary \ref{cor:cond_extremes}, we obtain the tight bounds of several conditional entropies, which are related to the expectation of $\ell_{\alpha}$-norm, with a fixed conditional Shannon entropy.
In Section \ref{subsect:DMC}, we consider applications of Corollary \ref{cor:cond_extremes} for discrete memoryless channels (DMCs) under a uniform input distribution.
On the other hand, Section \ref{subsect:alpha_half} provides the exact formula of the bounds of Theorems \ref{th:cond_extremes} and \ref{th:cond_extremes2} with $\alpha = \frac{1}{2}$.
\section{Preliminaries}
\subsection{Probability distributions and its information measures}
Let $\mathcal{P}_{n}$ denote the set of all $n$-ary probability vectors for an integer $n \ge 2$.
In particular, we define the $n$-ary equiprobable distribution
\begin{align}
\bvec{u}_{n}
\triangleq
(u_{1}, u_{2}, \dots, u_{n}) \in \mathcal{P}_{n}
\end{align}
as $u_{i} = \frac{1}{n}$ for $i \in \{ 1, 2, \dots, n \}$.
Moreover, we define the following two $n$-ary probability vectors:
(i) the $n$-ary probability vector
\begin{align}
\bvec{v}_{n}( p )
\triangleq
(v_{1}(p), v_{2}(p), \dots, v_{n}(p)) \in \mathcal{P}_{n}
\end{align}
for $p \in [0, \frac{1}{n}]$ is defined as
\begin{align}
v_{i}( p )
=
\begin{cases}
1 - (n-1) p
& \mathrm{if} \ i = 1 , \\
p
& \mathrm{otherwise} ,
\end{cases}
\end{align}
and (ii) the $n$-ary probability vector%
\footnote{The definition of $\bvec{w}_{n}( \cdot )$ is similar to the definition of \cite[Eq. (26)]{verdu}.}
\begin{align}
\bvec{w}_{n}( p )
\triangleq
(w_{1}( p ), w_{2}( p ), \dots, w_{n}( p )) \in \mathcal{P}_{n}
\end{align}
for $p \in [\frac{1}{n}, 1]$ is defined as
\begin{align}
w_{i}( p )
=
\begin{cases}
p
& \mathrm{if} \ 1 \le i \le \lfloor p^{-1} \rfloor , \\
1 - \lfloor p^{-1} \rfloor p
& \mathrm{if} \ i = \lfloor p^{-1} \rfloor + 1 , \\
0
& \mathrm{otherwise} ,
\end{cases}
\end{align}
where $\lfloor \cdot \rfloor$ denotes the floor function.
For an $n$-ary random variable $X \sim \bvec{p} \in \mathcal{P}_{n}$, we define the Shannon entropy \cite{shannon} of $X \sim \bvec{p} \in \mathcal{P}_{n}$ as
\begin{align}
H( X )
=
H( \bvec{p} )
\triangleq
- \sum_{i=1}^{n} p_{i} \ln p_{i} ,
\end{align}
where $\ln$ denotes the natural logarithm and we adopt the convention
$0 \ln 0 = 0$.
Moreover, we define the $\ell_{\alpha}$-norm of $\bvec{p} \in \mathcal{P}_{n}$ as
\begin{align}
\| \bvec{p} \|_{\alpha}
\triangleq
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha}}
\end{align}
for $\alpha \in (0, \infty)$.
Note that $\lim_{\alpha \to \infty} \| \bvec{p} \|_{\alpha} = \| \bvec{p} \|_{\infty} \triangleq \max \{ p_{1}, p_{2}, \dots, p_{n} \}$ for $\bvec{p} \in \mathcal{P}_{n}$.
We next introduce the conditional entropies.
For a pair of random variables%
\footnote{The random variable $Y$ can be considered both as discrete and continuous.}
$(X, Y) \sim P_{X|Y} P_{Y}$ which $X$ follows an $n$-ary distribution, i.e., $P_{X|Y}( \cdot \mid y ) \in \mathcal{P}_{n}$ for all realization $y$ of $Y$, let the conditional R\'{e}nyi entropy \cite{arimoto} of order $\alpha \in (0, 1) \cup (1, \infty)$ be denoted by
\begin{align}
H_{\alpha}( X \mid Y )
\triangleq
\frac{ \alpha }{ 1 - \alpha } \ln \mathbb{E}[ \| P_{X|Y}( \cdot \mid Y ) \|_{\alpha} ]
\label{eq:cond_Renyi}
\end{align}
where $\mathbb{E}[ \cdot ]$ denotes the expectation of the random variable.
Moreover, for $\alpha = 1$, it is defined that
\begin{align}
H_{1}( X \mid Y ) = H(X \mid Y) \triangleq \mathbb{E}[ H( P_{X|Y}( \cdot \mid Y ) ) ]
\end{align}
is the conditional Shannon entropy \cite{shannon}.
In this study, we examine relationships between $H(X \mid Y)$ and $\mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]$ to evaluate relations between the conditional Shannon entropy and several information measures which are related to the expectations of $\ell_{\alpha}$-norm.
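For a finite joint distribution, both quantities can be computed directly. The following sketch assumes a hypothetical array layout in which the columns of the joint probability matrix are indexed by $y$ and every $P_Y(y)$ is positive:
\begin{verbatim}
import numpy as np

def cond_entropy_and_norm(P_XY, alpha):
    # P_XY: (n, |Y|) joint pmf; column y sums to P_Y(y) > 0 (assumed)
    P_Y = P_XY.sum(axis=0)
    P_XgY = P_XY / P_Y                   # column y is P_{X|Y}(. | y)
    safe = np.where(P_XgY > 0, P_XgY, 1.0)   # avoids log(0); 0 ln 0 = 0
    H_cond = -(P_Y * (P_XgY * np.log(safe)).sum(axis=0)).sum()
    norms = (P_XgY ** alpha).sum(axis=0) ** (1.0 / alpha)
    return H_cond, (P_Y * norms).sum()   # H(X|Y), E[||P_{X|Y}||_alpha]
\end{verbatim}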
\subsection{Bounds on Shannon entropy and $\ell_{\alpha}$-norm}
\label{subsect:prev}
In this subsection, we introduce the results of our previous work \cite{part1, part1_arxiv}.
For simplicity, we define
$
H_{\sbvec{v}_{n}}( p )
\triangleq
H( \bvec{v}_{n}( p ) )
$
and
$
H_{\sbvec{w}_{n}}( p )
\triangleq
H( \bvec{w}_{n}( p ) )
$.
Moreover, we denote by $H_{\sbvec{v}_{n}}^{-1} : [0, \ln n] \to [0, \frac{1}{n}]$ the inverse function of $H_{\sbvec{v}_{n}}( p )$ for $p \in [0, \frac{1}{n}]$
and we also denote by $H_{\sbvec{w}_{n}}^{-1} : [0, \ln n] \to [\frac{1}{n}, 1]$ the inverse function of $H_{\sbvec{w}_{n}}( p )$ for $p \in [\frac{1}{n}, 1]$.
The following two theorems were derived in \cite{part1, part1_arxiv}.
\begin{theorem}
\label{th:extremes}
Let $\bar{\bvec{v}}_{n}( \bvec{p} ) \triangleq \bvec{v}_{n}( H_{\sbvec{v}_{n}}^{-1}( H( \bvec{p} ) ) )$ and $\bar{\bvec{w}}_{n}( \bvec{p} ) \triangleq \bvec{w}_{n}( H_{\sbvec{w}_{n}}^{-1}( H( \bvec{p} ) ) )$.
Then, we observe that
\begin{align}
\| \bar{\bvec{w}}_{n}( \bvec{p} ) \|_{\alpha} \le \| \bvec{p} \|_{\alpha} \le \| \bar{\bvec{v}}_{n}( \bvec{p} ) \|_{\alpha}
\label{ineq:extremes}
\end{align}
for any $n \ge 2$, any $\bvec{p} \in \mathcal{P}_{n}$, and any $\alpha \in (0, \infty)$.
\end{theorem}
\begin{theorem}
\label{th:extremes2}
Let $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$ be chosen to satisfy
\begin{align}
\| \bvec{v}_{n}( p ) \|_{\alpha}
=
\| \bvec{p} \|_{\alpha}
=
\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}
\label{ineq:norm_v_to_w}
\end{align}
for fixed $n \ge 2$, $\bvec{p} \in \mathcal{P}_{n}$, and $\alpha \in (0, 1) \cup (1, \infty)$.
Then, we observe that
\begin{align}
0 < \alpha < 1
\ & \Longrightarrow \
H_{\sbvec{v}_{n}}( p ) \le H( \bvec{p} ) \le H_{\sbvec{w}_{n}}( p^{\prime} ) ,
\\
\alpha > 1
\ & \Longrightarrow \
H_{\sbvec{w}_{n}}( p^{\prime} ) \le H( \bvec{p} ) \le H_{\sbvec{v}_{n}}( p ) .
\end{align}
\end{theorem}
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsN_8ary_half.pdf}
\put(-5, 30){\rotatebox{90}{$\| \bvec{p} \|_{\alpha}$}}
\put(75, -2){$H( \bvec{p} )$}
\put(97, 1.5){\scriptsize [nats]}
\put(29, 32){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(70, 29){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\put(67, 12){\small $H_{\sbvec{v}_{n}}( p ) = \chi_{n}( \alpha )$}
\put(66, 13){\vector(-2, -1){11}}
\put(40, 46){inflection}
\put(46, 41){point}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsN_8ary_2.pdf}
\put(-5, 30){\rotatebox{90}{$\| \bvec{p} \|_{\alpha}$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(97, 1.5){\scriptsize [nats]}
\put(60, 41){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(40, 23){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\put(45, 11){\small $H_{\sbvec{v}_{n}}( p ) = \chi_{n}( \alpha )$}
\put(75, 12){\vector(4, -1){17}}
\put(88, 20){inflection}
\put(95, 15){point}
\end{overpic}
}
\caption{
Plot of the boundary of $\mathcal{R}_{n}( \alpha )$ with $n = 8$.
If $0 < \alpha < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$, respectively.
If $\alpha > 1$, then these correspondences are reversed.
The inflection point of the curve $p \mapsto (H_{\sbvec{v}_{n}}( p ), \| \bvec{v}_{n}( p ) \|_{\alpha})$ is at $H_{\sbvec{v}_{n}}( p ) = \chi_{n}( \alpha )$ (see Lemma \ref{lem:convex_v}).}
\label{fig:region_P6_half}
\end{figure}
Theorems \ref{th:extremes} and \ref{th:extremes2} imply the exact feasible region of
\begin{align}
\mathcal{R}_{n}( \alpha )
\triangleq
\{ (H( \bvec{p} ), \| \bvec{p} \|_{\alpha}) \mid \bvec{p} \in \mathcal{P}_{n} \}
\label{def:region}
\end{align}
for $n \ge 2$ and $\alpha \in (0, 1) \cup (1, \infty)$.
We illustrate feasible regions of $\mathcal{R}_{n}( \alpha )$ in Fig. \ref{fig:region_P6_half}.
In this study, we extend $\mathcal{R}_{n}( \alpha )$ to the region between the \emph{conditional} Shannon entropy and the \emph{expectation} of $\ell_{\alpha}$-norm.
\section{Bounds on conditional Shannon entropy and expectation of $\ell_{\alpha}$-norm}
We define%
\footnote{Note that the alphabet $\mathcal{Y}$ of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ must have more than one element.}
\begin{align}
\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )
& \triangleq
\left\{ \left. \left( \vphantom{\sum} H(X \mid Y), \mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ] \right) \; \right| \, P_{XY} \in \mathcal{P}(\mathcal{X} \times \mathcal{Y} ) \ \mathrm{and} \ |\mathcal{X}| = n \right\} ,
\end{align}
where $\mathcal{P}( \cdot )$ denotes the set of all probability distributions on the alphabet and $| \cdot |$ denotes the cardinality of the finite set.
Using Theorems \ref{th:extremes} and \ref{th:extremes2}, Theorem \ref{th:convexhull} establishes the exact feasible region of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ as follows:
\begin{theorem}
\label{th:convexhull}
For any $n \ge 2$ and any $\alpha \in (0, \infty)$, we observe that
\begin{align}
\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )
=
\mathrm{Conv} ( \mathcal{R}_{n}( \alpha ) ) ,
\end{align}
where $\mathrm{Conv}( \mathcal{R} )$ denotes the convex hull of the set $\mathcal{R}$.
\end{theorem}
\begin{IEEEproof}[Proof of Theorem \ref{th:convexhull}]
We provide the proof of Theorem \ref{th:convexhull} in a similar manner to \cite[p. 517]{tebbe} or \cite[Theorem 1]{feder}.
First note that, for any $n \ge 2$ and any $\alpha \in (0, +\infty)$, the set $\mathcal{R}_{n}( \alpha )$ is a bounded subset of $\mathbb{R}^{2}$ since $0 \le H( \bvec{p} ) \le \ln n$ and $\min\{ 1, n^{\frac{1}{\alpha}-1} \} \le \| \bvec{p} \|_{\alpha} \le \max\{ 1, n^{\frac{1}{\alpha}-1} \}$ for $\bvec{p} \in \mathcal{P}_{n}$.
Moreover, we see that arbitrary point of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ is a convex combination of points of $\mathcal{R}_{n}( \alpha )$.
Therefore, it follows from \cite[Theorem 2.3]{rockafellar} that $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ is the convex hull of $\mathcal{R}_{n}( \alpha )$.
\end{IEEEproof}
Therefore, Theorem \ref{th:convexhull} yields the exact feasible region of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ from Theorems \ref{th:extremes} and \ref{th:extremes2}.
Moreover, we will investigate the exact boundary of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ in this paper.
More precisely, we examine the tight bounds between the conditional Shannon entropy and the expectation of the $\ell_{\alpha}$-norm, as with Theorems \ref{th:extremes} and \ref{th:extremes2}.
To accomplish this end, we now derive some lemmas.
The $\alpha$-logarithm function \cite{tsallis} is defined by
\begin{align}
\ln_{\alpha} x
\triangleq
\frac{ x^{1-\alpha} - 1 }{ 1 - \alpha }
\end{align}
for $\alpha \neq 1$ and $x > 0$;
besides, since $\lim_{\alpha \to 1} \ln_{\alpha} x = \ln x$ by L'H\^{o}pital's rule, it is defined that $\ln_{1} x \triangleq \ln x$.
For the $\alpha$-logarithm function, we can see the following useful lemma.
\begin{lemma}
\label{lem:IT_ineq}
For $\alpha < \beta$ and $x > 0$, we observe that
\begin{align}
\ln_{\alpha} x \ge \ln_{\beta} x
\end{align}
with equality if and only if $x = 1$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:IT_ineq}]
We consider the monotonicity of $\ln_{\alpha} x$ with respect to $\alpha$.
Direct calculation yields
\begin{align}
\frac{ \partial \ln_{\alpha} x }{ \partial \alpha }
& =
\frac{ \partial }{ \partial \alpha } \left( \frac{ x^{1-\alpha} - 1 }{ 1 - \alpha } \right)
\\
& =
\frac{ \partial }{ \partial \alpha } \left( \frac{ x^{1-\alpha} }{ 1 - \alpha } \right) - \frac{ \partial }{ \partial \alpha } \left( \frac{ 1 }{ 1 - \alpha } \right)
\\
& =
\left( \frac{ \left( \frac{ \partial (x^{1-\alpha}) }{ \partial \alpha } \right) (1 - \alpha) - x^{1-\alpha} \left( \frac{ \partial (1-\alpha) }{ \partial \alpha } \right) }{ (1 - \alpha)^{2} } \right) - \left( - \frac{ \left( \frac{ \partial (1 - \alpha) }{ \partial \alpha } \right) }{ (1 - \alpha)^{2} } \right)
\\
& =
\left( \frac{ (\ln x) (-1) x^{1-\alpha} (1-\alpha) - x^{1-\alpha} (-1) }{ (1-\alpha)^{2} } \right) - \left( - \frac{ -1 }{ (1-\alpha)^{2} } \right)
\\
& =
\left( \frac{ - (\ln x) x^{1-\alpha} (1-\alpha) + x^{1-\alpha} }{ (1-\alpha)^{2} } \right) - \left( \frac{ 1 }{ (1-\alpha)^{2} } \right)
\\
& =
\frac{ - (\ln x) x^{1-\alpha} (1-\alpha) + x^{1-\alpha} - 1 }{ (1-\alpha)^{2} }
\\
& =
\frac{ - (\ln x) x^{1-\alpha} (1-\alpha) + (1 - \alpha) \ln_{\alpha} x }{ (1-\alpha)^{2} }
\\
& =
\frac{ \ln_{\alpha} x - x^{1-\alpha} \ln x }{ 1-\alpha }
\\
& =
\frac{ x^{\alpha} \ln_{\alpha} x - x \ln x }{ x^{\alpha} (1-\alpha) } .
\label{eq:diff1_lnq}
\end{align}
Then, we can see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial \ln_{\alpha} x }{ \partial \alpha } \right)
& =
\operatorname{sgn} \! \left( \frac{ x^{\alpha} \ln_{\alpha} x - x \ln x }{ x^{\alpha} (1-\alpha) } \right)
\\
& =
\operatorname{sgn} \! \left( \frac{ 1 }{ x^{\alpha} (1-\alpha) } \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} x^{\alpha} \ln_{\alpha} x - x \ln x \right) ,
\label{eq:sign_diff1_qlog}
\end{align}
where $\operatorname{sgn} : \mathbb{R} \to \{ -1, 0, 1 \}$ denote the sign function, i.e.,
\begin{align}
\operatorname{sgn} ( x )
\triangleq
\begin{cases}
1
& \mathrm{if} \ x > 0 , \\
0
& \mathrm{if} \ x = 0 , \\
-1
& \mathrm{if} \ x < 0 .
\end{cases}
\end{align}
Thus, to check the sign of $\frac{ \partial \ln_{\alpha} x }{ \partial \alpha }$, we now examine the functions $\frac{ 1 }{ x^{\alpha} (1-\alpha) }$ and $x^{\alpha} \ln_{\alpha} x - x \ln x$.
We readily see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ 1 }{ x^{\alpha} (1-\alpha) } \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
-1
& \mathrm{if} \ \alpha > 1
\end{cases}
\label{eq:sign_1_over_x^a(1-a)}
\end{align}
for $x > 0$.
To see the sign of $x^{\alpha} \ln_{\alpha} x - x \ln x$, we calculate the following derivatives:
\begin{align}
\frac{ \partial }{ \partial \alpha } \left( \vphantom{\sum} x^{\alpha} \ln_{\alpha} x - x \ln x \right)
& =
\frac{ \partial }{ \partial \alpha } \left( \vphantom{\sum} x^{\alpha} \ln_{\alpha} x \right)
\\
& =
\frac{ \partial }{ \partial \alpha } \left( x^{\alpha} \left( \frac{ x^{1-\alpha} - 1 }{ 1 - \alpha } \right) \right)
\\
& =
\frac{ \partial }{ \partial \alpha } \left( \frac{ x - x^{\alpha} }{ 1 - \alpha } \right)
\\
& =
\frac{ \partial }{ \partial \alpha } \left( \frac{ x }{ 1 - \alpha } \right) - \frac{ \partial }{ \partial \alpha } \left( \frac{ x^{\alpha} }{ 1 - \alpha } \right)
\\
& =
\left( - \frac{ - x }{ (1 - \alpha)^{2} } \right) - \left( \frac{ \left( \frac{ \partial x^{\alpha} }{ \partial \alpha } \right) (1-\alpha) - x^{\alpha} \left( \frac{ \partial (1-\alpha) }{ \partial \alpha } \right) }{ (1 - \alpha)^{2} } \right)
\\
& =
\frac{ x }{ (1 - \alpha)^{2} } - \frac{ (\ln x) x^{\alpha} (1-\alpha) + x^{\alpha} }{ (1 - \alpha)^{2} }
\\
& =
\frac{ x - (\ln x) x^{\alpha} (1-\alpha) - x^{\alpha} }{ (1 - \alpha)^{2} }
\\
& =
\frac{ x^{\alpha} (x^{1-\alpha} - 1) - x^{\alpha} (1-\alpha) \ln x }{ (1 - \alpha)^{2} }
\\
& =
\frac{ x^{\alpha} (1 - \alpha) \ln_{\alpha} x - x^{\alpha} (1-\alpha) \ln x }{ (1 - \alpha)^{2} }
\\
& =
\frac{ x^{\alpha} \ln_{\alpha} x - x^{\alpha} \ln x }{ 1 - \alpha }
\\
& =
\frac{ x^{\alpha} (\ln_{\alpha} x - \ln x) }{ 1 - \alpha } ,
\label{eq:diff1_partial_tsallis}
\\
\frac{ \partial (\ln_{\alpha} x - \ln x) }{ \partial x }
& =
\frac{ \partial \ln_{\alpha} x }{ \partial x } - \frac{ \partial \ln x }{ \partial x }
\\
& =
\frac{ \partial }{ \partial x } \left( \frac{ x^{1-\alpha} - 1 }{ 1 - \alpha } \right) - \frac{ 1 }{ x }
\\
& =
\frac{ 1 }{ 1 - \alpha } \left( \frac{ \partial x^{1-\alpha} }{ \partial x } \right) - \frac{ 1 }{ x }
\\
& =
\frac{ (1 - \alpha) x^{-\alpha} }{ 1 - \alpha } - \frac{ 1 }{ x }
\\
& =
\frac{ 1 }{ x^{\alpha} } - \frac{ 1 }{ x }
\\
& =
x^{-1} \left( \vphantom{\sum} x^{1-\alpha} - 1 \right) .
\label{eq:diff_qlog-log}
\end{align}
By the monotonicity of the exponential function, it follows from \eqref{eq:diff_qlog-log} that, if $0 < x \le 1$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial (\ln_{\alpha} x - \ln x) }{ \partial x } \right)
=
\begin{cases}
1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha > 1 , \\
0
& \mathrm{if} \ x = 1 \ \mathrm{or} \ \alpha = 1 , \\
-1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha < 1 ,
\end{cases}
\label{eq:sign_gap_qlog1}
\end{align}
and, if $x \ge 1$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial (\ln_{\alpha} x - \ln x) }{ \partial x } \right)
=
\begin{cases}
1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha < 1 , \\
0
& \mathrm{if} \ x = 1 \ \mathrm{or} \ \alpha = 1 , \\
-1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha > 1 .
\end{cases}
\label{eq:sign_gap_qlog2}
\end{align}
It follows from \eqref{eq:sign_gap_qlog1} and \eqref{eq:sign_gap_qlog2} that the following monotonicity properties hold:
\begin{itemize}
\item
if $\alpha < 1$, then $\ln_{\alpha} x - \ln x$ is strictly decreasing for $x \in (0, 1]$ and strictly increasing for $x \ge 1$, and
\item
if $\alpha > 1$, then $\ln_{\alpha} x - \ln x$ is strictly increasing for $x \in (0, 1]$ and strictly decreasing for $x \ge 1$.
\end{itemize}
Hence, we have
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} \ln_{\alpha} x - \ln x \right)
=
\begin{cases}
1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha < 1 , \\
0
& \mathrm{if} \ x = 1 \ \mathrm{or} \ \alpha = 1 , \\
-1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha > 1
\end{cases}
\label{eq:gap_qlog_ln_sgn}
\end{align}
for $x > 0$ since $(\ln_{\alpha} x - \ln x) |_{x = 1} = 0$ for $\alpha \in (-\infty, +\infty)$.
Thus, we observe that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial }{ \partial \alpha } \left( \vphantom{\sum} x^{\alpha} \ln_{\alpha} x - x \ln x \right) \right)
& \overset{\eqref{eq:diff1_partial_tsallis}}{=}
\operatorname{sgn} \! \left( \frac{ x^{\alpha} (\ln_{\alpha} x - \ln x) }{ 1 - \alpha } \right)
\\
& =
\operatorname{sgn} \! \left( \frac{ x^{\alpha} }{ 1 - \alpha } \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} \ln_{\alpha} x - \ln x \right)
\\
& =
\begin{cases}
1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha \neq 1 , \\
0
& \mathrm{if} \ x = 1 ,
\end{cases}
\label{eq:sign_diff1_partial_tsallis}
\end{align}
where the last equality follows from \eqref{eq:gap_qlog_ln_sgn} and
\begin{align}
\operatorname{sgn} \! \left( \frac{ x^{\alpha} }{ 1 - \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
-1
& \mathrm{if} \ \alpha > 1 .
\end{cases}
\end{align}
Note that $\lim_{\alpha \to 1} \ln_{\alpha} x = \ln_{1} x = \ln x$.
It follows from \eqref{eq:sign_diff1_partial_tsallis} that $x^{\alpha} \ln_{\alpha} x - x \ln x$ with a fixed $x > 0$ is strictly increasing for $\alpha \in (-\infty, +\infty)$ unless $x = 1$.
Hence, we have
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} x^{\alpha} \ln_{\alpha} x - x \ln x \right)
=
\begin{cases}
1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha > 1 , \\
0
& \mathrm{if} \ x = 1 \ \mathrm{or} \ \alpha = 1 , \\
-1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha < 1
\end{cases}
\label{eq:sign_partial_tsallis}
\end{align}
for $x > 0$ since $(x^{\alpha} \ln_{\alpha} x - x \ln x) |_{\alpha = 1} = 0$ for $x > 0$.
Combining the above calculations, we obtain
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial \ln_{\alpha} x }{ \partial \alpha } \right)
& \overset{\eqref{eq:sign_diff1_qlog}}{=}
\operatorname{sgn} \! \left( \frac{ 1 }{ x^{\alpha} (1-\alpha) } \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} x^{\alpha} \ln_{\alpha} x - x \ln x \right)
\\
& =
\begin{cases}
0
& \mathrm{if} \ x = 1 , \\
-1
& \mathrm{if} \ x \neq 1 \ \mathrm{and} \ \alpha \neq 1 ,
\end{cases}
\end{align}
where the last equality follows from \eqref{eq:sign_1_over_x^a(1-a)} and \eqref{eq:sign_partial_tsallis}.
Therefore, we have that $\ln_{\alpha} x$ with a fixed $x > 0$ is strictly decreasing for $\alpha \in (-\infty, +\infty)$ unless $x = 1$, which implies Lemma \ref{lem:IT_ineq}.
\end{IEEEproof}
Note that it is easy to see that
\begin{align}
\ln_{0} x
& =
x - 1 ,
\\
\ln_{1} x
& =
\ln x ,
\\
\ln_{2} x
& =
1 - \frac{1}{x}
\end{align}
for $x > 0$;
that is, Lemma \ref{lem:IT_ineq} implies that
\begin{align}
1 - \frac{1}{x} \le \ln x \le x - 1
\label{eq:ITineq}
\end{align}
for $x > 0$, which are famous inequalities in information theory.
We illustrate Lemma \ref{lem:IT_ineq} in Fig. \ref{fig:qlog}.
In this study, we use Lemma \ref{lem:IT_ineq} to prove Lemmas \ref{lem:convex_v} and \ref{lem:Lmin}.
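The monotonicity asserted by Lemma \ref{lem:IT_ineq} is easy to check numerically; a minimal sketch, for illustration only:
\begin{verbatim}
import numpy as np

def ln_alpha(x, alpha):
    # alpha-logarithm; reduces to log x as alpha -> 1
    if np.isclose(alpha, 1.0):
        return np.log(x)
    return (x ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

x = 2.5  # any x > 0 with x != 1
vals = [ln_alpha(x, a) for a in (0.0, 0.5, 1.0, 1.5, 2.0)]
# the lemma: strictly decreasing in alpha, from x - 1 down to 1 - 1/x
assert all(u > v for u, v in zip(vals, vals[1:]))
\end{verbatim}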
\begin{figure}[!t]
\centering
\begin{overpic}[width = 1\hsize, clip]{g_log.pdf}
\put(94, 17){$x$}
\put(10, 57){$\ln_{\alpha} x$}
\put(48, 55){\color{navyblue} $\ln_{0} x = x - 1$}
\put(84, 54){\color{bluegreen} $\ln_{\frac{1}{2}} x$}
\put(92, 45){$\ln x$}
\put(94, 38.5){\color{orange} $\ln_{\frac{3}{2}} x$}
\put(79, 29){\color{red} $\ln_{2} x = 1 - \frac{1}{x}$}
\end{overpic}
\caption{Plots of $\alpha$-logarithm functions with $\alpha \in \{ 0, \frac{1}{2}, 1, \frac{3}{2}, 2 \}$.}
\label{fig:qlog}
\end{figure}
The following lemma establishes the convexity properties of $\| \bvec{v}_{n}( p ) \|_{\alpha}$ with respect to $H_{\sbvec{v}_{n}}( p )$ for $p \in [0, \frac{1}{n}]$.
\begin{lemma}
\label{lem:convex_v}
For any $\alpha \in (0, 1) \cup (1, \infty)$, $\| \bvec{v}_{2}( p ) \|_{\alpha}$ is strictly concave in $H( \bvec{v}_{2}( p ) ) \in [0, \ln 2]$.
Moreover, for any $n \ge 3$ and any $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, if $p \in [0, \frac{1}{n}]$, then there exists an inflection point $\chi_{n}( \alpha )$ such that
\begin{itemize}
\setlength{\itemindent}{-1em}
\item
$\| \bvec{v}_{n}( p ) \|_{\alpha}$ is strictly concave in $H_{\sbvec{v}_{n}}( p ) \in [0, \chi_{n}( \alpha )]$ and
\item
$\| \bvec{v}_{n}( p ) \|_{\alpha}$ is strictly convex in $H_{\sbvec{v}_{n}}( p ) \in [\chi_{n}( \alpha ), \ln n]$.
\end{itemize}
In addition, for $n \ge 3$ and $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, the value $\chi_{n}( \alpha )$ satisfies the following statements:
\begin{itemize}
\setlength{\itemindent}{-1em}
\item
$\chi_{n}( \alpha )$ is strictly increasing for $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$,
\item
$\chi_{n}( \frac{1}{2} ) > \ln n - (1 - \frac{2}{n}) \ln (n-1)$,
\item
$\lim_{\alpha \to 1} \chi_{n}( \alpha ) = \ln 2 + \ln \sqrt{n-1}$, and
\item
$\lim_{\alpha \to \infty} \chi_{n}( \alpha ) = \ln n$.
\end{itemize}
\end{lemma}
We provide the proof of Lemma \ref{lem:convex_v} in Appendix \ref{app:convex_v} by following \cite[Appendix I]{fabregas}.
In Fig. \ref{fig:region_P6_half}, we illustrate the convexity and concavity of $\| \bvec{v}_{n}( p ) \|_{\alpha}$ in $H_{\sbvec{v}_{n}}( p ) \in [0, \ln n]$ with its inflection points.
\begin{remark}
We now consider what happens when $\alpha \to \infty$ in Lemma \ref{lem:convex_v}.
Assume that $n \ge 3$ and $p \in [0, \frac{1}{n}]$.
Note that $\frac{1}{n} \le \| \bvec{v}_{n}( p ) \|_{\infty} \le 1$.
We verify that%
\footnote{The right-hand side of \eqref{eq:infty_binary} is equivalent to the lower bound of Fano's inequality \cite{fano} since $P_{\mathrm{e}}( \sbvec{p} ) = 1 - \| \sbvec{p} \|_{\infty}$.}
\begin{align}
H_{\sbvec{v}_{n}}( p )
=
h_{2}( \| \bvec{v}_{n}( p ) \|_{\infty} ) + (1 - \| \bvec{v}_{n}( p ) \|_{\infty}) \ln (n-1) ,
\label{eq:infty_binary}
\end{align}
where $h_{2}( x ) \triangleq - x \ln x - (1-x) \ln (1-x)$ denotes the binary entropy function.
Since the right-hand side of \eqref{eq:infty_binary} is strictly decreasing for $\| \bvec{v}_{n}( p ) \|_{\infty} \in [\frac{1}{n}, 1]$ and strictly concave in $\| \bvec{v}_{n}( p ) \|_{\infty} \in [\frac{1}{n}, 1]$, we observe that $\| \bvec{v}_{n}( p ) \|_{\infty}$ is always strictly concave in $H_{\sbvec{v}_{n}}( p ) \in [0, \ln n]$.
Therefore, this concavity of $\| \bvec{v}_{n}( p ) \|_{\infty}$ is consistent with the limiting value $\lim_{\alpha \to \infty} \chi_{n}( \alpha ) = \ln n$ shown in Lemma \ref{lem:convex_v}.
\end{remark}
In addition, the following lemma establishes the concavity properties of $\| \bvec{w}_{n}( p ) \|_{\alpha}$ with respect to $H_{\sbvec{w}_{n}}( p )$ for $p \in [\frac{1}{n}, 1]$.
\begin{lemma}
\label{lem:concave_w}
For fixed integers $n \ge m \ge 2$ and $\alpha \in (0, 1) \cup (1, \infty)$, if $p \in [\frac{1}{m}, \frac{1}{m-1}]$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly concave in $H_{\sbvec{w}_{n}}( p ) \in [\ln (m-1), \ln m]$.
\end{lemma}
Lemma \ref{lem:concave_w} is proved in Appendix \ref{app:concave_w}.
Lemma \ref{lem:concave_w} implies that $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is a piecewise concave function of $H_{\sbvec{w}_{n}}( p )$, composed of $n-1$ segments.
Note that Fig. \ref{fig:region_P6_half} is consistent with Lemma \ref{lem:concave_w}.
Employing Lemmas \ref{lem:convex_v} and \ref{lem:concave_w}, we can provide the tight bounds between the conditional Shannon entropy and the expectation of $\ell_{\alpha}$-norm, as with Theorems \ref{th:extremes} and \ref{th:extremes2}.
We now define the extremal functions $L_{\min}^{\alpha}(X \mid Y)$ and $L_{\max}^{\alpha}(X \mid Y)$, which attain the boundary of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$, as follows:
(i) For a pair of random variable $(X, Y) \sim P_{X|Y} P_{Y}$, we define
\begin{align}
L_{\min}^{\alpha}(X \mid Y)
\triangleq
\lambda \| \bvec{u}_{m} \|_{\alpha} + (1 - \lambda) \| \bvec{u}_{m+1} \|_{\alpha} ,
\end{align}
where
\begin{align}
m
=
\left\lfloor \mathrm{e}^{H(X \mid Y)} \right\rfloor
\quad \mathrm{and} \quad
\lambda
=
\cfrac{ \ln (m+1) - H(X \mid Y) }{ \ln (m+1) - \ln m } .
\notag
\end{align}
Note that the quantity $L_{\min}^{\alpha}(X \mid Y)$ is determined by $\alpha \in (0, 1) \cup (1, \infty)$ and $H(X \mid Y) \in [0, \ln n]$.
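Since $\| \bvec{u}_{m} \|_{\alpha} = m^{\frac{1}{\alpha} - 1}$, the quantity $L_{\min}^{\alpha}(X \mid Y)$ can be evaluated directly from $H(X \mid Y)$; a minimal sketch:
\begin{verbatim}
import numpy as np

def L_min(H, alpha):
    # piecewise-linear interpolation of ||u_m||_alpha = m**(1/alpha - 1)
    # between the knots ln(m) and ln(m+1), where m = floor(e^H)
    m = int(np.floor(np.exp(H)))
    lam = (np.log(m + 1) - H) / (np.log(m + 1) - np.log(m))
    norm = lambda k: k ** (1.0 / alpha - 1.0)
    return lam * norm(m) + (1.0 - lam) * norm(m + 1)
\end{verbatim}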
In addition, if $|\mathcal{X}| = n$, then we define
\begin{align}
L_{\max}^{\alpha}(X \mid Y)
\triangleq
\begin{cases}
\| \hat{\bvec{v}}_{n}(X \mid Y) \|_{\alpha}
& \mathrm{if} \ H(X \mid Y) \le H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) ) , \\
T_{n, \alpha} (X \mid Y)
& \mathrm{if} \ H(X \mid Y) > H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) ) ,
\end{cases}
\label{def:Lmax}
\end{align}
where $\hat{\bvec{v}}_{n}(X \mid Y) \triangleq \bvec{v}_{n}( H_{\sbvec{v}_{n}}^{-1}(H(X \mid Y)) )$, the value $p_{n}^{\ast}( \alpha )$ denotes the root of the equation
\begin{align}
(\ln n - H_{\sbvec{v}_{n}}( p )) \left( \! \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) } \! \right)
=
\| \bvec{u}_{n} \|_{\alpha} - \| \bvec{v}_{n}( p ) \|_{\alpha}
\label{eq:equation_p^ast}
\end{align}
with respect to $p \in (0, \frac{1}{n})$ for $n \ge 3$, and $T_{n, \alpha} (X \mid Y)$ is defined as
\begin{align}
T_{n, \alpha} (X \mid Y)
& \triangleq
\lambda \| \bvec{v}_{n}( p_{n}^{\ast}( \alpha ) ) \|_{\alpha} + (1-\lambda) \| \bvec{u}_{n} \|_{\alpha} ,
\\
\lambda
& =
\cfrac{ \ln n - H(X \mid Y) }{ \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) ) } .
\end{align}
Note that $p_{2}^{\ast}( \alpha ) = \frac{1}{2}$ for $\alpha \in (0, 1) \cup (1, \infty)$ if $n = 2$.
In \eqref{eq:equation_p^ast}, the derivative $\frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) }$ is calculated as
\begin{align}
\frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) }
& =
\left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} }{ \ln (1 - (n-1) p) - \ln p } \right) ,
\label{eq:first-order}
\end{align}
where the right-hand side of \eqref{eq:first-order} is derived in \cite[Eq. (59)]{part1_arxiv}.
We defer the special case of $p_{n}^{\ast}( \alpha )$ that admits a simple closed-form solution to Section \ref{subsect:alpha_half}.
Note that $L_{\max}^{\alpha}(X \mid Y) = \| \hat{\bvec{v}}_{n}(X \mid Y) \|_{\alpha}$ when $n = 2$.
For $n \ge 3$, the quantity $L_{\max}^{\alpha}(X \mid Y)$ is determined by $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$ and $H(X \mid Y) \in [0, \ln n]$;
the quantity $T_{n, \alpha} (X \mid Y)$ is linear in $H(X \mid Y) \in [H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) ), \ln n]$.
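In general, $p_{n}^{\ast}( \alpha )$ has no closed form (see Section \ref{subsect:alpha_half} for the case $\alpha = \frac{1}{2}$), but it can be computed numerically from \eqref{eq:equation_p^ast} and \eqref{eq:first-order}. A sketch follows; the bracketing interval is heuristic and assumes a sign change of the defining equation over $(0, \frac{1}{n})$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def p_star(n, alpha, eps=1e-9):
    # H, N: Shannon entropy and l_alpha-norm of v_n(p); dN_dH: the
    # derivative of N with respect to H; g: left minus right side of
    # the defining equation of p_n^*(alpha)
    H = lambda p: -(n-1)*p*np.log(p) - (1-(n-1)*p)*np.log(1-(n-1)*p)
    N = lambda p: ((n-1)*p**alpha + (1-(n-1)*p)**alpha)**(1/alpha)
    def dN_dH(p):
        s = (n-1)*p**alpha + (1-(n-1)*p)**alpha
        return (s**(1/alpha - 1)
                * (p**(alpha-1) - (1-(n-1)*p)**(alpha-1))
                / (np.log(1-(n-1)*p) - np.log(p)))
    g = lambda p: (np.log(n) - H(p))*dN_dH(p) - (n**(1/alpha-1) - N(p))
    return brentq(g, eps, 1.0/n - eps)
\end{verbatim}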
We give the properties of $L_{\min}^{\alpha}(X \mid Y)$ in the following lemma.
\begin{lemma}
\label{lem:Lmin}
$L_{\min}^{\alpha}(X \mid Y)$ is a piecewise linear function of $H(X \mid Y)$, composed of $n-1$ segments.
More precisely, we observe that
\begin{itemize}
\item[(i)]
if $\alpha \in (0, 1)$, then $L_{\min}^{\alpha}(X \mid Y)$ is strictly increasing for $H(X \mid Y) \in [0, \ln n]$,
\item[(ii)]
if $\alpha \in (1, \infty)$, then $L_{\min}^{\alpha}(X \mid Y)$ is strictly decreasing for $H(X \mid Y) \in [0, \ln n]$, and
\item[(iii)]
$L_{\min}^{\alpha}(X \mid Y)$ is convex in $H(X \mid Y) \in [0, \ln n]$.
\end{itemize}
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:Lmin}]
Let $m = \lfloor \mathrm{e}^{H(X \mid Y)} \rfloor$, i.e., we choose an integer $m$ as $m \le \mathrm{e}^{H(X \mid Y)} < m+1$.
Note that $1 \le m < n$ since $X$ follows an $n$-ary distribution, i.e., $H(X \mid Y) \in [0, \ln n]$.
We readily see that $L_{\min}^{\alpha}(X \mid Y)$ is linear in $H(X \mid Y) \in [\ln m, \ln (m+1))$.
Moreover, we see that
\begin{align}
\lim_{H(X \mid Y) \to (\ln m)^{-}} L_{\min}^{\alpha}(X \mid Y)
& =
\lim_{H(X \mid Y) \to (\ln m)^{+}} L_{\min}^{\alpha}(X \mid Y)
\\
& =
\| \bvec{u}_{m} \|_{\alpha} ;
\end{align}
and therefore, we get that $L_{\min}^{\alpha}(X \mid Y)$ is a piecewise linear continuous function of $H(X \mid Y)$, composed of $n-1$ segments.
Next, we calculate the derivative of $L_{\min}^{\alpha}(X \mid Y)$ with respect to $H(X \mid Y)$ as follows:
\begin{align}
\frac{ \partial L_{\min}^{\alpha}(X \mid Y) }{ \partial H(X \mid Y) }
& =
\frac{ \partial \left( \vphantom{\sum} \lambda \| \bvec{u}_{m} \|_{\alpha} + (1-\lambda) \| \bvec{u}_{m+1} \|_{\alpha} \right) }{ \partial H(X \mid Y) }
\\
& =
\frac{ \| \bvec{u}_{m+1} \|_{\alpha} - \| \bvec{u}_{m} \|_{\alpha} }{ \ln (m+1) - \ln m }
\\
& \overset{\text{(a)}}{=}
\frac{ (m+1)^{\beta-1} - m^{\beta-1} }{ \ln (m+1) - \ln m }
\\
& =
\frac{ \left( \frac{m+1}{m} \right)^{\beta-1} - 1 }{ m^{1-\beta} \left( \ln \frac{ m+1 }{ m } \right) }
\\
& =
- \frac{ \left( \frac{m}{m+1} \right)^{1 - \beta} - 1 }{ m^{1-\beta} \left( \ln \frac{ m }{ m+1 } \right) }
\\
& =
- \frac{ (1 - \beta) \ln_{\beta} \frac{m}{m+1} }{ m^{1-\beta} \left( \ln \frac{m}{m+1} \right) }
\\
& =
\left( \frac{ \beta - 1 }{ m^{1-\beta} } \right) \frac{ \ln_{\beta} \frac{m}{m+1} }{ \ln \frac{m}{m+1} } ,
\label{eq:diff1_Lmin_H}
\end{align}
where (a) follows by the change of variable: $\beta = \frac{1}{\alpha}$.
Then, we can see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial L_{\min}^{\alpha}(X \mid Y) }{ \partial H(X \mid Y) } \right)
& =
\operatorname{sgn} \! \left( \frac{ \beta - 1 }{ m^{1-\beta} } \right) \! \cdot \operatorname{sgn} \! \left( \frac{ \ln_{\beta} \frac{m}{m+1} }{ \ln \frac{m}{m+1} } \right)
\\
& =
\operatorname{sgn}( \beta - 1 )
\\
& =
\begin{cases}
1
& \mathrm{if} \ \alpha \in (0, 1) , \\
-1
& \mathrm{if} \ \alpha \in (1, \infty) ,
\end{cases}
\end{align}
which implies the monotonicity of $L_{\min}^{\alpha}(X \mid Y)$ with respect to $H(X \mid Y)$.
Finally, we consider the monotonicity of the right-hand side of \eqref{eq:diff1_Lmin_H} with respect to $m \ge 1$.
Noting that
\begin{align}
\frac{ m }{ m+1 } < \frac{ m+1 }{ m+2 } ,
\end{align}
we can check that
\begin{align}
\left( \frac{ \beta - 1 }{ (m+1)^{1-\beta} } \right) \frac{ \ln_{\beta} \frac{m+1}{m+2} }{ \ln \frac{m+1}{m+2} } - \left( \frac{ \beta - 1 }{ m^{1-\beta} } \right) \frac{ \ln_{\beta} \frac{m}{m+1} }{ \ln \frac{m}{m+1} }
& \overset{\text{(a)}}{\ge}
\left( \frac{ \beta - 1 }{ (m+1)^{1-\beta} } \right) \frac{ \ln_{1} \frac{m+1}{m+2} }{ \ln \frac{m+1}{m+2} } - \left( \frac{ \beta - 1 }{ m^{1-\beta} } \right) \frac{ \ln_{\beta} \frac{m}{m+1} }{ \ln \frac{m}{m+1} }
\\
& =
\left( \frac{ \beta - 1 }{ (m+1)^{1-\beta} } \right) - \left( \frac{ \beta - 1 }{ m^{1-\beta} } \right) \frac{ \ln_{\beta} \frac{m}{m+1} }{ \ln \frac{m}{m+1} }
\\
& =
(\beta - 1) \left( \frac{ m^{1-\beta} \left( \ln \frac{m}{m+1} \right) - (m+1)^{1-\beta} \left( \ln_{\beta} \frac{m}{m+1} \right) }{ ( m (m+1) )^{1-\beta} \left( \ln \frac{m}{m+1} \right) } \right)
\\
& =
(\beta - 1) \left( \frac{ \ln \frac{m}{m+1} - \left( \frac{ m+1 }{ m } \right)^{1-\beta} \left( \ln_{\beta} \frac{m}{m+1} \right) }{ (m+1)^{1-\beta} \left( \ln \frac{m}{m+1} \right) } \right)
\\
& =
(\beta - 1) \left( \frac{ \ln \frac{m}{m+1} - \left( \frac{ m }{ m+1 } \right)^{\beta-1} \left( \ln_{\beta} \frac{m}{m+1} \right) }{ (m+1)^{1-\beta} \left( \ln \frac{m}{m+1} \right) } \right)
\\
& \overset{\text{(b)}}{=}
(\beta - 1) \left( \frac{ \ln \frac{m}{m+1} + \ln_{\beta} \frac{m+1}{m} }{ (m+1)^{1-\beta} \left( \ln \frac{m}{m+1} \right) } \right)
\\
& =
(\beta - 1) \left( \frac{ \ln_{\beta} \frac{m+1}{m} - \ln \frac{m+1}{m} }{ (m+1)^{1-\beta} \left( \ln \frac{m}{m+1} \right) } \right)
\\
& \overset{\text{(c)}}{\ge}
(\beta - 1) \left( \frac{ \ln_{1} \frac{m+1}{m} - \ln \frac{m+1}{m} }{ (m+1)^{1-\beta} \left( \ln \frac{m}{m+1} \right) } \right)
\\
& =
0 ,
\label{eq:sign_diff1_Lmin_H}
\end{align}
where
\begin{itemize}
\item
(a) and (c) follow by Lemma \ref{lem:IT_ineq} and
\item
(b) follows from the fact that $- \ln_{\alpha} \frac{1}{x} = x^{\alpha-1} \ln_{\alpha} x$.
\end{itemize}
Hence, for any fixed $\alpha \in (0, 1) \cup (1, \infty)$, the right-hand side of \eqref{eq:diff1_Lmin_H} is strictly increasing for $m \ge 1$.
Since a piecewise linear function is convex if its successive slopes are nondecreasing, the bound \eqref{eq:sign_diff1_Lmin_H} implies that $L_{\min}^{\alpha}(X \mid Y)$ is convex in $H(X \mid Y) \in [0, \ln n]$.
\end{IEEEproof}
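As a numerical sanity check, the following short Python sketch (not part of the formal development; the helper names are ours) evaluates $L_{\min}^{\alpha}(X \mid Y)$ as the piecewise linear interpolation of the points $(\ln m, m^{\frac{1}{\alpha} - 1})$ and tests the monotonicity and convexity asserted in Lemma \ref{lem:Lmin}:
\begin{verbatim}
import numpy as np

def Lmin(H, alpha, n):
    # piecewise linear interpolation of (ln m, m**(1/alpha - 1))
    m = min(n - 1, max(1, int(np.floor(np.exp(H)))))
    lam = (np.log(m + 1) - H) / (np.log(m + 1) - np.log(m))
    lam = min(1.0, max(0.0, lam))
    return lam * m**(1/alpha - 1) + (1 - lam) * (m + 1)**(1/alpha - 1)

n = 8
for alpha in (0.5, 2.0):
    H = np.linspace(0.0, np.log(n), 400)
    vals = np.array([Lmin(h, alpha, n) for h in H])
    mono = np.all(np.diff(vals) > 0) if alpha < 1 \
           else np.all(np.diff(vals) < 0)        # items (i) and (ii)
    slopes = np.diff(vals) / np.diff(H)
    print(alpha, mono, np.all(np.diff(slopes) >= -1e-9))  # item (iii)
\end{verbatim}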
As with Lemma \ref{lem:Lmin}, we also give the properties of $L_{\max}^{\alpha}(X \mid Y)$ for $n \ge 3$ in the following lemma.
\begin{lemma}
\label{lem:Lmax}
For any $n \ge 3$ and any $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, the value $p_{n}^{\ast}( \alpha )$ is uniquely determined and $L_{\max}^{\alpha}(X \mid Y)$ is a continuous, piecewise smooth function of $H(X \mid Y)$.
More precisely, we observe that
\begin{itemize}
\item[(i)]
if $\alpha \in [\frac{1}{2}, 1)$, then $L_{\max}^{\alpha}(X \mid Y)$ is strictly increasing for $H(X \mid Y) \in [0, \ln n]$,
\item[(ii)]
if $\alpha \in (1, \infty)$, then $L_{\max}^{\alpha}(X \mid Y)$ is strictly decreasing for $H(X \mid Y) \in [0, \ln n]$, and
\item[(iii)]
$L_{\max}^{\alpha}(X \mid Y)$ is concave in $H(X \mid Y) \in [0, \ln n]$.
\end{itemize}
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:Lmax}]
If $\alpha > 1$, then we see that
\begin{align}
\lim_{p \to 0^{+}} \left( \! \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) } \! \right)
& \overset{\eqref{eq:first-order}}{=}
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} }{ \ln (1 - (n-1) p) - \ln p } \right)
\\
& =
0 .
\end{align}
On the other hand, if $\alpha < 1$, then we see that
\begin{align}
\lim_{p \to 0^{+}} \left( \! \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) } \! \right)
& =
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} }{ \ln (1 - (n-1) p) - \ln p } \right)
\\
& \overset{\text{(a)}}{=}
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ \frac{ \partial }{ \partial p } (p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1}) }{ \frac{ \partial }{ \partial p } (\ln (1 - (n-1) p) - \ln p) } \right)
\\
& =
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \times
\left( \frac{ (\alpha-1) p^{\alpha-2} + (\alpha-1) (n-1) (1 - (n-1)p)^{\alpha-2} }{ \left( \frac{p}{1 - (n-1) p} \right) \left( \frac{ \partial }{ \partial p } \left( \frac{ 1 - (n-1) p}{ p } \right) \right) } \right)
\\
& =
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} (\alpha-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \times
\left( \frac{ p^{\alpha-2} + (n-1) (1 - (n-1)p)^{\alpha-2} }{ \left( \frac{p}{1 - (n-1) p} \right) \left( - \frac{ 1 }{ p^{2} } \right) } \right)
\\
& =
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} (\alpha-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \times
\left( \frac{ p^{\alpha-2} + (n-1) (1 - (n-1)p)^{\alpha-2} }{ - \frac{1}{p (1 - (n-1) p)} } \right)
\\
& =
\lim_{p \to 0^{+}} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} (\alpha-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \times
\left( \vphantom{\sum} - p^{\alpha-1} (1 - (n-1) p) - (n-1) p (1 - (n-1)p)^{\alpha-1} \right)
\\
& =
+\infty ,
\end{align}
where (a) follows by L'H\^{o}pital's rule.
Note that the sign of the slope of the line through the two points $(H_{\sbvec{v}_{n}}( 0 ), \| \bvec{v}_{n}( 0 ) \|_{\alpha})$ and $(H_{\sbvec{v}_{n}}( \frac{1}{n} ), \| \bvec{v}_{n}( \frac{1}{n} ) \|_{\alpha})$ is
\begin{align}
\operatorname{sgn} \! \left( \frac{ \| \bvec{v}_{n}( \frac{1}{n} ) \|_{\alpha} - \| \bvec{v}_{n}( 0 ) \|_{\alpha} }{ H_{\sbvec{v}_{n}}( \frac{1}{n} ) - H_{\sbvec{v}_{n}}( 0 ) } \right)
& =
\operatorname{sgn} \! \left( \frac{ n^{\frac{1}{\alpha}-1} - 1 }{ \ln n } \right)
\\
& =
\begin{cases}
1
& \mathrm{if} \ \alpha \in (0, 1) , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in (1, \infty) ;
\end{cases}
\end{align}
namely, we get
\begin{align}
\lim_{p \to 0^{+}} \left( \! \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) } \! \right)
>
\frac{ \| \bvec{v}_{n}( \frac{1}{n} ) \|_{\alpha} - \| \bvec{v}_{n}( 0 ) \|_{\alpha} }{ H_{\sbvec{v}_{n}}( \frac{1}{n} ) - H_{\sbvec{v}_{n}}( 0 ) }
\end{align}
for $\alpha \in (0, 1) \cup (1, \infty)$.
Moreover, we see from Lemma \ref{lem:convex_v} that, for a fixed $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, $\| \bvec{v}_{n}( p ) \|_{\alpha}$ is a strictly concave/convex function of $H_{\sbvec{v}_{n}}( p ) \in [0, \ln n]$ whose inflection point is at $H_{\sbvec{v}_{n}}( p ) = \chi_{n}( \alpha )$.
Therefore, for any $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, there exists a unique $p_{n}^{\ast}( \alpha ) \in (0, \chi_{n}( \alpha ))$ such that the tangent line of the curve $p \mapsto (H_{\sbvec{v}_{n}}( p ), \| \bvec{v}_{n}( p ) \|_{\alpha})$ for $p \in [0, \frac{1}{n}]$ at $H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$ passes through the point $(H_{\sbvec{v}_{n}}( \frac{1}{n} ), \| \bvec{v}_{n}( \frac{1}{n} ) \|_{\alpha})$.
Note that this tangent line corresponds to $T_{n, \alpha} (X \mid Y)$.
It follows from \cite[Lemma 3]{part1, part1_arxiv} that
\begin{itemize}
\item
if $\alpha \in (0, 1)$, then $T_{n, \alpha} (X \mid Y)$ is strictly increasing for $H(X \mid Y) \in [H_{\sbvec{v}_{n}}(p_{n}^{\ast}( \alpha )), \ln n ]$ and
\item
if $\alpha \in (1, \infty)$, then $T_{n, \alpha} (X \mid Y)$ is strictly decreasing for $H(X \mid Y) \in [H_{\sbvec{v}_{n}}(p_{n}^{\ast}( \alpha )), \ln n ]$.
\end{itemize}
Hence, the monotonicity of $L_{\max}^{\alpha}(X \mid Y)$, i.e., (i) and (ii) of Lemma \ref{lem:Lmax}, holds.
Finally, since $p_{n}^{\ast}( \alpha ) \in (0, \chi_{n}(\alpha))$ and $L_{\max}^{\alpha}(X \mid Y)$ is linear in $H(X \mid Y) \in [H_{\sbvec{v}_{n}}(p_{n}^{\ast}( \alpha )), \ln n ]$, we obtain the concavity of $L_{\max}^{\alpha}(X \mid Y)$, i.e., (iii) of Lemma \ref{lem:Lmax}.
\end{IEEEproof}
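Although $p_{n}^{\ast}( \alpha )$ admits no simple closed form in general, it is straightforward to compute numerically. The following Python sketch (ours; recall from the computations in Section \ref{subsect:alpha_half} that \eqref{eq:equation_p^ast} equates $(\ln n - H_{\sbvec{v}_{n}}( p )) \, \partial \| \bvec{v}_{n}( p ) \|_{\alpha} / \partial H_{\sbvec{v}_{n}}( p )$ with $\| \bvec{u}_{n} \|_{\alpha} - \| \bvec{v}_{n}( p ) \|_{\alpha}$) locates $p_{n}^{\ast}( \alpha )$ by bisection:
\begin{verbatim}
import numpy as np

def Hv(p, n):
    q = 1 - (n - 1) * p
    return -(q * np.log(q) + (n - 1) * p * np.log(p))

def Nv(p, n, a):
    q = 1 - (n - 1) * p
    return ((n - 1) * p**a + q**a)**(1/a)

def dN_dH(p, n, a):   # the derivative quoted from [Eq. (59), Part I]
    q = 1 - (n - 1) * p
    return ((n - 1) * p**a + q**a)**(1/a - 1) \
           * (p**(a - 1) - q**(a - 1)) / np.log(q / p)

def p_star(n, a):
    # for alpha in [1/2, 1), the gap below is positive near p = 0
    # and negative near p = 1/n, so plain bisection applies
    lo, hi = 1e-9, 1/n - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        gap = (np.log(n) - Hv(mid, n)) * dN_dH(mid, n, a) \
              - (n**(1/a - 1) - Nv(mid, n, a))
        lo, hi = (mid, hi) if gap > 0 else (lo, mid)
    return lo

print(p_star(8, 0.5), 1 / (8 * 7))  # both ~0.0178571 = 1/56
\end{verbatim}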
Using $L_{\min}^{\alpha}(X \mid Y)$ and $L_{\max}^{\alpha}(X \mid Y)$, as with Theorem \ref{th:extremes}, we provide tight bounds of the expectation of $\ell_{\alpha}$-norm with a fixed conditional Shannon entropy as follows.
\begin{theorem}
\label{th:cond_extremes}
For $(X, Y) \sim P_{X|Y} P_{Y}$ and $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$,
\begin{align}
L_{\min}^{\alpha}(X \mid Y)
\le
\mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]
\le
L_{\max}^{\alpha}(X \mid Y) .
\label{eq:cond_extremes}
\end{align}
In particular, the left-hand inequality also holds for $\alpha \in (0, \frac{1}{2})$.
Furthermore, if $|\mathcal{X}| = 2$, then the right-hand inequality also holds for $\alpha \in (0, \frac{1}{2})$.
\end{theorem}
\begin{IEEEproof}[Proof of Theorem \ref{th:cond_extremes}]
We first prove the lower bound.
Since $L_{\min}^{\alpha}(X \mid Y)$ is a piecewise linear function through the points $(H_{\sbvec{w}_{n}}( m^{-1} ), \| \bvec{w}_{n}( m^{-1} ) \|_{\alpha})$ for integers $m \in [1, n]$, it follows from Lemmas \ref{lem:concave_w} and \ref{lem:Lmin} that
\begin{align}
\| \bvec{w}_{n}( H_{\sbvec{w}_{n}}^{-1}( H(X \mid Y) ) ) \|_{\alpha}
\ge
L_{\min}^{\alpha}(X \mid Y) .
\label{eq:cond_extremes_Lmin}
\end{align}
Moreover, since $L_{\min}^{\alpha}(X \mid Y)$ is convex in $H(X \mid Y) \in [0, \ln n]$, it follows from Theorem \ref{th:convexhull} and Lemma \ref{lem:Lmin} that the right-hand side of \eqref{eq:cond_extremes_Lmin} corresponds to the lower-boundary of the convex hull of $\mathcal{R}_{n}( \alpha )$.
Therefore, we have the lower bound.
Next, we prove the upper bound.
Since $T_{n, \alpha}(X \mid Y)$ is the tangent line of the curve $p \mapsto (H_{\sbvec{v}_{n}}( p ), \| \bvec{v}_{n}( p ) \|_{\alpha})$ for $p \in [0, \frac{1}{n}]$ that passes through the point $(H_{\sbvec{v}_{n}}( \frac{1}{n} ), \| \bvec{v}_{n}( \frac{1}{n} ) \|_{\alpha})$, it follows from Lemma \ref{lem:convex_v} that
\begin{align}
\| \bvec{v}_{n}( H_{\sbvec{v}_{n}}^{-1}( H(X \mid Y) ) ) \|_{\alpha}
\le
L_{\max}^{\alpha}(X \mid Y) .
\label{eq:cond_extremes_Lmax}
\end{align}
As with \eqref{eq:cond_extremes_Lmin}, since $L_{\max}^{\alpha}(X \mid Y)$ is concave in $H(X \mid Y) \in [0, \ln n]$, it follows from Theorem \ref{th:convexhull} and Lemma \ref{lem:Lmax} that the right-hand side of \eqref{eq:cond_extremes_Lmax} corresponds to the upper-boundary of the convex hull of $\mathcal{R}_{n}( \alpha )$.
Therefore, we have the upper bound.
That completes the proof of Theorem \ref{th:cond_extremes}.
\end{IEEEproof}
Theorem \ref{th:cond_extremes} shows the bounds of the expectation of $\ell_{\alpha}$-norm with a fixed conditional Shannon entropy.
Note that, if $|\mathcal{X}| = 2$, then Theorem \ref{th:cond_extremes} is reduced to \cite[Theorem 1]{fabregas}.
Thus, henceforth, we omit the case $|\mathcal{X}| = 2$ in the analyses of this study.
Moreover, since Lemma \ref{lem:convex_v} provides the inflection point $\chi_{n}( \alpha )$ for only $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, Theorem \ref{th:cond_extremes} does not establish the upper bound of the expectation of $\ell_{\alpha}$-norm for $\alpha \in (0, \frac{1}{2})$ and $n \ge 3$ with a fixed conditional Shannon entropy.
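To make Theorem \ref{th:cond_extremes} concrete before turning to its tightness, the following Monte-Carlo sketch in Python (with our own helper names, and with the closed form of $L_{\max}^{\frac{1}{2}}$ taken from Section \ref{subsect:alpha_half}) draws random joint distributions and checks that \eqref{eq:cond_extremes} holds at $\alpha = \frac{1}{2}$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
n, a = 6, 0.5
H_star = np.log(n) - (1 - 2/n) * np.log(n - 1)  # H_v(p*) at alpha = 1/2

def Lmax(H):
    if H > H_star:  # tangent-line branch T_{n,1/2}
        return n - (n - 2) * (np.log(n) - H) / np.log(n - 1)
    lo, hi = 1e-12, 1 / (n * (n - 1))  # invert H_v on the curve branch
    for _ in range(200):
        m = (lo + hi) / 2
        q = 1 - (n - 1) * m
        Hm = -(q * np.log(q) + (n - 1) * m * np.log(m))
        lo, hi = (m, hi) if Hm < H else (lo, m)
    q = 1 - (n - 1) * lo
    return ((n - 1) * lo**a + q**a)**(1/a)

def Lmin(H):
    m = min(n - 1, max(1, int(np.floor(np.exp(H)))))
    lam = np.clip((np.log(m+1) - H) / (np.log(m+1) - np.log(m)), 0, 1)
    return lam * m**(1/a - 1) + (1 - lam) * (m + 1)**(1/a - 1)

for _ in range(2000):
    PY = rng.dirichlet(np.ones(4))               # random P_Y
    PXgY = rng.dirichlet(np.ones(n), size=4)     # random P_{X|Y}(.|y)
    H = -(PY[:, None] * PXgY * np.log(PXgY)).sum()
    E = (PY * ((PXgY**a).sum(axis=1))**(1/a)).sum()
    assert Lmin(H) - 1e-9 <= E <= Lmax(H) + 1e-9
print("bounds hold on all samples")
\end{verbatim}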
We now prove that the bounds of Theorem \ref{th:cond_extremes} are tight;
namely, we consider the two pairs of random variables $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$ which attain each equality of \eqref{eq:cond_extremes}.
\begin{definition}
\label{def:RVs_prime1}
Let the pair of random variables $(X^{\prime}, Y^{\prime}) \in \mathcal{X} \times \mathcal{Y}$ be defined as follows:
Set $\mathcal{X} = \{ 1, 2, \dots, n \}$ and $\mathcal{Y} = \{ 0, 1 \}$.
We define the conditional distribution $P_{X^{\prime}|Y^{\prime}}$ as $P_{X^{\prime}|Y^{\prime}}(\cdot \mid y) = \bvec{w}_{n}( (m+y)^{-1} )$ for $y \in \mathcal{Y}$, where $m \in \{ 1, 2, \dots, n-1 \}$ is an integer chosen so that the target value of $H(X^{\prime} \mid Y^{\prime})$ lies in $[\ln m, \ln (m+1)]$.
\end{definition}
\begin{definition}
\label{def:RVs_prime2}
Let the pair of random variables $(X^{\prime\prime}, Y^{\prime\prime}) \in \mathcal{X} \times \mathcal{Y}$ be defined as follows:
Set $\mathcal{X} = \{ 1, 2, \dots, n \}$ and $\mathcal{Y} = \{ 0, 1 \}$.
If $H(X^{\prime\prime} \mid Y^{\prime\prime}) \le H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$ for a given $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, then we set $P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid y) = \bvec{v}_{n}( p )$ for all $y \in \mathcal{Y}$ and some $p \in [0, p_{n}^{\ast}( \alpha )]$.
On the other hand, if $H(X^{\prime\prime} \mid Y^{\prime\prime}) > H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$ for a given $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, then we set $P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid 0) = \bvec{v}_{n}( p_{n}^{\ast}( \alpha ) )$ and $P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid 1) = \bvec{u}_{n}$.
\end{definition}
In Definitions \ref{def:RVs_prime1} and \ref{def:RVs_prime2}, note that the marginal distributions $P_{Y^{\prime}}$ and $P_{Y^{\prime\prime}}$ can be chosen arbitrarily and appropriately.
For the pairs of random variables $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$ of Definitions \ref{def:RVs_prime1} and \ref{def:RVs_prime2}, respectively, the following fact holds.
\begin{fact}
\label{fact:RVs_prime}
For $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$, we observe that
$
\mathbb{E}[ \| P_{X^{\prime}|Y^{\prime}}(\cdot \mid Y^{\prime}) \|_{\alpha} ]
=
L_{\min}^{\alpha}(X^{\prime} \mid Y^{\prime})
$
and
$
\mathbb{E}[ \| P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid Y^{\prime\prime}) \|_{\alpha} ]
=
L_{\max}^{\alpha}(X^{\prime\prime} \mid Y^{\prime\prime})
$.
\end{fact}
\begin{IEEEproof}[Proof of Fact \ref{fact:RVs_prime}]
For $(X^{\prime}, Y^{\prime})$ of Definition \ref{def:RVs_prime1}, it can be seen that
\begin{align}
H(X^{\prime} \mid Y^{\prime})
& =
\sum_{y \in \{ 0, 1 \}} \! P_{Y^{\prime}}( y ) H(X^{\prime} \mid Y^{\prime} = y)
\\
& =
\sum_{y \in \{ 0, 1 \}} \! P_{Y^{\prime}}( y ) H_{\sbvec{w}_{n}}( (m+y)^{-1} )
\\
& =
\sum_{y \in \{ 0, 1 \}} P_{Y^{\prime}}( y ) \ln (m + y) ,
\label{eq:example_Lmin_H} \\
\mathbb{E}[ \| P_{X^{\prime}|Y^{\prime}}(\cdot \mid Y^{\prime}) \|_{\alpha} ]
& =
\sum_{y \in \{ 0, 1 \}} \! P_{Y^{\prime}}( y ) \| \bvec{w}_{n}( (m+y)^{-1} ) \|_{\alpha}
\\
& =
\sum_{y \in \{ 0, 1 \}} P_{Y^{\prime}}( y ) \| \bvec{u}_{(m+y)} \|_{\alpha} .
\label{eq:example_Lmin_N}
\end{align}
Since $P_{Y^{\prime}}( 0 ) + P_{Y^{\prime}}( 1 ) = 1$, it follows from \eqref{eq:example_Lmin_H} and \eqref{eq:example_Lmin_N} that
\begin{align}
\mathbb{E}[ \| P_{X^{\prime}|Y^{\prime}}(\cdot \mid Y^{\prime}) \|_{\alpha} ]
=
L_{\min}^{\alpha}(X^{\prime} \mid Y^{\prime})
\end{align}
for $\alpha \in (0, 1) \cup (1, \infty)$.
Therefore, the lower bound of \eqref{eq:cond_extremes} is tight.
Moreover, we consider $(X^{\prime\prime}, Y^{\prime\prime})$ of Definition \ref{def:RVs_prime2}.
If $H(X^{\prime\prime} \mid Y^{\prime\prime}) \le H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$, then
it immediately holds that
\begin{align}
\mathbb{E}[ \| P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid Y^{\prime\prime}) \|_{\alpha} ]
=
L_{\max}^{\alpha}(X^{\prime\prime} \mid Y^{\prime\prime}) .
\end{align}
On the other hand, if $H(X^{\prime\prime} \mid Y^{\prime\prime}) > H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$, then it can be seen that
\begin{align}
H(X^{\prime\prime} \mid Y^{\prime\prime})
& =
\sum_{y \in \{ 0, 1 \}} \! P_{Y^{\prime\prime}}( y ) H(X^{\prime\prime} \mid Y^{\prime\prime} = y)
\\
& =
P_{Y^{\prime\prime}}( 0 ) H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) ) + P_{Y^{\prime\prime}}( 1 ) H( \bvec{u}_{n} )
\\
& =
P_{Y^{\prime\prime}}( 0 ) H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) ) + P_{Y^{\prime\prime}}( 1 ) \ln n ,
\label{eq:example_Lmax_H} \\
\!\!\! \mathbb{E}[ \| P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid Y^{\prime\prime}) \|_{\alpha} ]
& =
P_{Y^{\prime\prime}}( 0 ) \| \bvec{v}_{n}( p_{n}^{\ast}( \alpha ) ) \|_{\alpha} + P_{Y^{\prime\prime}}( 1 ) \| \bvec{u}_{n} \|_{\alpha} .
\label{eq:example_Lmax_N}
\end{align}
Since $P_{Y^{\prime\prime}}( 0 ) + P_{Y^{\prime\prime}}( 1 ) = 1$, it follows from \eqref{eq:example_Lmax_H} and \eqref{eq:example_Lmax_N} that
\begin{align}
\mathbb{E}[ \| P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid Y^{\prime\prime}) \|_{\alpha} ]
=
L_{\max}^{\alpha}(X^{\prime\prime} \mid Y^{\prime\prime})
\end{align}
for $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$.
Therefore, the upper bound of \eqref{eq:cond_extremes} is also tight.
\end{IEEEproof}
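The construction of Definition \ref{def:RVs_prime1} is also easy to reproduce in code. The following sketch (helper names ours) instantiates $(X^{\prime}, Y^{\prime})$ for an arbitrary choice of $P_{Y^{\prime}}$ and confirms the first equality of Fact \ref{fact:RVs_prime}:
\begin{verbatim}
import numpy as np
n, a, m, t = 7, 2.0, 3, 0.35   # t = P_{Y'}(0), chosen arbitrarily
w = lambda k: np.r_[np.full(k, 1/k), np.zeros(n - k)]  # w_n(1/k)
PXgY = np.stack([w(m), w(m + 1)])  # P_{X'|Y'}(.|y) = w_n((m+y)^{-1})
PY = np.array([t, 1 - t])
H = (PY * np.log([m, m + 1])).sum()              # conditional entropy
E = (PY * ((PXgY**a).sum(axis=1))**(1/a)).sum()  # E[ l_a-norm ]
lam = (np.log(m + 1) - H) / (np.log(m + 1) - np.log(m))
Lmin = lam * m**(1/a - 1) + (1 - lam) * (m + 1)**(1/a - 1)
print(np.isclose(E, Lmin))                       # True
\end{verbatim}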
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\label{subfig:norm_half}
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsN_cond_8ary_half.pdf}
\put(75, -1.5){$H(X \mid Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-5, 17){\rotatebox{90}{$\mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]$}}
\put(60, 53){\color{red} $L_{\max}^{\alpha}(X \mid Y)$}
\put(65, 23){\color{bluegreen} $L_{\min}^{\alpha}(X \mid Y)$}
\put(49.5, 10){\small $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(48.5, 11.25){\vector(-4, -1){15.5}}
\put(11, 54){\small inflection}
\put(13, 49){\small point of {\color{burgundy} the curve}}
\put(10.5, 44){\small \color{burgundy} $p \mapsto (H_{\sbvec{v}_{n}}( p ), \| \bvec{v}_{n}( p ) \|_{\alpha})$}
\put(20, 40){\line(1, -1){3}}
\put(33, 50){\oval(50, 20)}
\put(23, 37){\vector(1, 0){30}}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsN_cond_8ary_2.pdf}
\put(75, -1.5){$H(X \mid Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-5, 17){\rotatebox{90}{$\mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]$}}
\put(26, 55){\color{red} $L_{\max}^{\alpha}(X \mid Y)$}
\put(27, 23){\color{bluegreen} $L_{\min}^{\alpha}(X \mid Y)$}
\put(31, 10){\small $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(79.5, 10){\vector(3, -1){8}}
\put(58, 54){\small inflection}
\put(60, 49){\small point of {\color{burgundy} the curve}}
\put(58, 44){\small \color{burgundy} $p \mapsto (H_{\sbvec{v}_{n}}( p ), \| \bvec{v}_{n}( p ) \|_{\alpha})$}
\put(80, 50){\oval(50, 20)}
\put(92.6, 40){\vector(0, -1){26}}
\end{overpic}
}
\caption{Plots of the boundaries of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ with $n = 8$.
The upper- and lower-boundaries correspond to $L_{\max}^{\alpha}(X \mid Y)$ and $L_{\min}^{\alpha}(X \mid Y)$, respectively.
The left- and right-hand sides of $L_{\max}^{\alpha}(X \mid Y)$ from the circle points \textbullet \ correspond to $\| \hat{\bvec{v}}_{n}(X \mid Y) \|_{\alpha}$ and $T_{n, \alpha}(X \mid Y)$, respectively.
Note that $T_{n, \alpha}(X \mid Y)$ is the tangent line of the curve $p \mapsto (H_{\sbvec{v}_{n}}( p ), \| \bvec{v}_{n}( p ) \|_{\alpha})$ at $H_{\sbvec{v}_{n}}( p ) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$.
The dotted lines correspond to the boundaries of $\mathcal{R}_{n}( \alpha )$;
in particular, the dotted lines of (a) are the same as those in Fig. \ref{fig:region_P6_half}.}
\label{fig:LminLmax}
\end{figure}
Fact \ref{fact:RVs_prime} implies that the bounds of Theorem \ref{th:cond_extremes} are tight.
Namely, Theorem \ref{th:cond_extremes} shows that the boundaries of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ correspond to $L_{\min}^{\alpha}(X \mid Y)$ and $L_{\max}^{\alpha}(X \mid Y)$ for $n \ge 3$ and $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$.
These boundaries of $\mathcal{R}_{n}^{\mathrm{cond}}( \alpha )$ are illustrated in Fig. \ref{fig:LminLmax}.
From Theorem \ref{th:convexhull}, note that Fig. \ref{fig:LminLmax}-\subref{subfig:norm_half} is convex hulls of Fig. \ref{fig:region_P6_half}.
Furthermore, as with Theorem \ref{th:cond_extremes}, we also provide tight bounds of the conditional Shannon entropy with a fixed expectation of $\ell_{\alpha}$-norm in the following theorem.
\begin{theorem}
\label{th:cond_extremes2}
For a given $(X, Y)$ and a fixed $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, if $X, X^{\prime}, X^{\prime\prime} \in \mathcal{X}$ and
\begin{align}
\mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]
=
\mathbb{E}[ \| P_{X^{\prime}|Y^{\prime}}(\cdot \mid Y^{\prime}) \|_{\alpha} ]
=
\mathbb{E}[ \| P_{X^{\prime\prime}|Y^{\prime\prime}}(\cdot \mid Y^{\prime\prime}) \|_{\alpha} ] ,
\end{align}
then we observe that
\begin{align}
\frac{1}{2} \le \alpha < 1
\ & \Longrightarrow \
H(X^{\prime\prime} \mid Y^{\prime\prime})
\le
H(X \mid Y)
\le
H(X^{\prime} \mid Y^{\prime}) ,
\label{ineq:cond_H_less1} \\
\alpha > 1
\ & \Longrightarrow \
H(X^{\prime} \mid Y^{\prime})
\le
H(X \mid Y)
\le
H(X^{\prime\prime} \mid Y^{\prime\prime}) ,
\label{ineq:cond_H_greater1}
\end{align}
where the upper bound of \eqref{ineq:cond_H_less1} also holds for $\alpha \in (0, \frac{1}{2})$.
\end{theorem}
\begin{IEEEproof}[Proof of Theorem \ref{th:cond_extremes2}]
Using the monotonicity of Lemmas \ref{lem:Lmin} and \ref{lem:Lmax}, we can prove Theorem \ref{th:cond_extremes2} from Theorem \ref{th:cond_extremes}, as with the proof of \cite[Theorem \ref{th:extremes2}]{part1_arxiv}.
\end{IEEEproof}
Since there exist $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$ such that the bounds \eqref{ineq:cond_H_less1} and \eqref{ineq:cond_H_greater1} hold with equality, Theorem \ref{th:cond_extremes2} also provides tight bounds of the conditional Shannon entropy with a fixed expectation of $\ell_{\alpha}$-norm.
We now extend Theorem \ref{th:cond_extremes} in a similar manner to \cite[Theorem 2]{fabregas} and \cite[Corollary 1]{part1, part1_arxiv} as follows:
\begin{corollary}
\label{cor:cond_extremes}
Let $f( \cdot )$ be a strictly monotonic function. Then, for $\alpha \in [\frac{1}{2}, 1) \cup (1, \infty)$, we observe that
\begin{itemize}
\item
if $f( \cdot )$ is strictly increasing, then
\begin{align}
\hspace{-2em}
f( L_{\min}^{\alpha}(X|Y) )
\le
f( \mathbb{E}[ \| P_{X|Y}(\cdot|Y) \|_{\alpha} ] )
\le
f( \vphantom{\sum} L_{\max}^{\alpha}(X|Y) ) ,
\notag
\end{align}
\item
if $f( \cdot )$ is strictly decreasing, then
\begin{align}
\hspace{-2em}
f( L_{\max}^{\alpha}(X|Y) )
\le
f( \mathbb{E}[ \| P_{X|Y}(\cdot|Y) \|_{\alpha} ] )
\le
f( L_{\min}^{\alpha}(X|Y) ) .
\notag
\end{align}
\end{itemize}
The bounds with $f(L_{\min}^{\alpha}(X \mid Y))$ also hold for $\alpha \in (0, \frac{1}{2})$.
\end{corollary}
\begin{IEEEproof}[Proof of Corollary \ref{cor:cond_extremes}]
We can prove Corollary \ref{cor:cond_extremes} in the same manner as the proof of \cite[Corollary 1]{part1_arxiv}.
\end{IEEEproof}
Therefore, we can obtain the tight bounds of several information measures, determined by the expectation of $\ell_{\alpha}$-norm, with a fixed conditional Shannon entropy.
As an instance, we introduce the application of Corollary \ref{cor:cond_extremes} to the conditional R\'{e}nyi entropy as follows:
Let $f_{\alpha}( x ) = \frac{\alpha}{1-\alpha} \ln x$.
Then, we readily see that
\begin{align}
H_{\alpha}(X \mid Y)
=
f_{\alpha}( \mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ] ) .
\label{eq:renyi_f}
\end{align}
It can easily be seen that $f_{\alpha}( x )$ is strictly increasing for $x > 0$ when $\alpha \in (0, 1)$ and strictly decreasing for $x > 0$ when $\alpha \in (1, \infty)$.
Hence, assuming that $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$ satisfy $X, X^{\prime}, X^{\prime\prime} \in \mathcal{X}$ and
\begin{align}
H(X \mid Y)
=
H(X^{\prime} \mid Y^{\prime})
=
H(X^{\prime\prime} \mid Y^{\prime\prime})
\end{align}
for a given $(X, Y)$, it follows from Corollary \ref{cor:cond_extremes} that
\begin{align}
\hspace{-0.5em}
\frac{1}{2} \le \alpha < 1 \
& \Longrightarrow \
H_{\alpha}(X^{\prime}|Y^{\prime})
\le
H_{\alpha}(X|Y)
\le
H_{\alpha}(X^{\prime\prime}|Y^{\prime\prime}) ,
\label{eq:cond_Renyi_bound1} \\
\hspace{-0.5em}
\alpha > 1 \
& \Longrightarrow \
H_{\alpha}(X^{\prime\prime}|Y^{\prime\prime})
\le
H_{\alpha}(X|Y)
\le
H_{\alpha}(X^{\prime}|Y^{\prime}) ,
\label{eq:cond_Renyi_bound2}
\end{align}
where note that the left-hand inequality of \eqref{eq:cond_Renyi_bound1} also holds for $\alpha \in (0, \frac{1}{2})$.
Moreover, we can provide the tight bounds of the conditional Shannon entropy with a fixed conditional R\'{e}nyi entropy by using Theorem \ref{th:cond_extremes2}.
We illustrate \eqref{eq:cond_Renyi_bound1} and \eqref{eq:cond_Renyi_bound2} in Fig. \ref{fig:LminLmax_Renyi}, as with Fig. \ref{fig:LminLmax}.
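In code, \eqref{eq:renyi_f} is a one-line wrapper around the expectation of the $\ell_{\alpha}$-norm; a minimal Python sketch (ours) is as follows, where the final comparison reflects the known monotonicity of $H_{\alpha}(X \mid Y)$ in $\alpha$:
\begin{verbatim}
import numpy as np

def cond_renyi(PY, PXgY, a):
    # H_a(X|Y) = (a/(1-a)) ln E[ || P_{X|Y}(.|Y) ||_a ]
    E = (PY * ((PXgY**a).sum(axis=1))**(1/a)).sum()
    return a / (1 - a) * np.log(E)

rng = np.random.default_rng(2)
PY = rng.dirichlet(np.ones(3))
PXgY = rng.dirichlet(np.ones(4), size=3)
H1 = -(PY[:, None] * PXgY * np.log(PXgY)).sum()  # Shannon (a -> 1)
print(cond_renyi(PY, PXgY, 0.5) >= H1 >= cond_renyi(PY, PXgY, 2.0))
\end{verbatim}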
As another application, letting $f_{R}( x ) = \frac{R}{R-1} (1 - x)$, we can also provide tight bounds of the conditional $R$-norm information \cite[Eq. (39)]{boekee}, defined by
\begin{align}
H_{R}(X \mid Y)
\triangleq
f_{R}( \mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{R} ] ) .
\label{eq:R_f}
\end{align}
We illustrate the exact feasible regions between the conditional Shannon entropy and the conditional $R$-norm information in Fig. \ref{fig:LminLmax_R}.
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\label{subfig:Renyi_half}
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsRenyi_cond_16ary_half.pdf}
\put(75, -1.5){$H(X \mid Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$H_{\alpha}(X \mid Y)$}}
\put(-1, 60){\scriptsize [nats]}
\put(60, 32){\color{bluegreen} $H_{\alpha}(X^{\prime} \mid Y^{\prime})$}
\put(24, 52){\color{red} $H_{\alpha}(X^{\prime\prime} \mid Y^{\prime\prime})$}
\put(45, 15){\small $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(44, 15){\vector(-3, -1){22}}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsRenyi_cond_16ary_2.pdf}
\put(75, -1.5){$H(X \mid Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$H_{\alpha}(X \mid Y)$}}
\put(-1, 60){\scriptsize [nats]}
\put(25, 34){\color{bluegreen} $H_{\alpha}(X^{\prime} \mid Y^{\prime})$}
\put(70, 24){\color{red} $H_{\alpha}(X^{\prime\prime} \mid Y^{\prime\prime})$}
\put(37, 11){\footnotesize $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(81, 11){\vector(3, -2){5.5}}
\end{overpic}
}
\caption{Plots of the boundaries of $\{ (H(X \mid Y), H_{\alpha}(X \mid Y)) \mid P_{XY} \in \mathcal{P}( \mathcal{X} \times \mathcal{Y} ), |\mathcal{X}| = n \}$ with $n = 16$.
In (a), the upper- and lower-boundaries correspond to $(X^{\prime\prime}, Y^{\prime\prime})$ of Definition \ref{def:RVs_prime2} and $(X^{\prime}, Y^{\prime})$ of Definition \ref{def:RVs_prime1}, respectively; in (b), the correspondence is reversed.
The dotted lines correspond to the boundary of $\{ (H( \bvec{p} ), H_{\alpha}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$.}
\label{fig:LminLmax_Renyi}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsR_cond_10ary_half.pdf}
\put(75, -1.5){$H(X \mid Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$H_{R}(X \mid Y)$}}
\put(71, 25){\color{bluegreen} $H_{R}(X^{\prime} \mid Y^{\prime})$}
\put(29, 40){\color{red} $H_{R}(X^{\prime\prime} \mid Y^{\prime\prime})$}
\put(50, 14.75){\small $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(49, 14.75){\vector(-3, -1){22}}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsR_cond_10ary_2.pdf}
\put(75, -1.5){$H(X \mid Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$H_{R}(X \mid Y)$}}
\put(25, 42){\color{bluegreen} $H_{R}(X^{\prime} \mid Y^{\prime})$}
\put(60, 26){\color{red} $H_{R}(X^{\prime\prime} \mid Y^{\prime\prime})$}
\put(37.5, 11){\footnotesize $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(81.5, 11){\vector(3, -2){5.5}}
\end{overpic}
}
\caption{Plots of the boundaries of $\{ (H(X \mid Y), H_{R}(X \mid Y)) \mid P_{XY} \in \mathcal{P}( \mathcal{X} \times \mathcal{Y} ), |\mathcal{X}| = n \}$ with $n = 10$.
In (a), the upper- and lower-boundaries correspond to $(X^{\prime\prime}, Y^{\prime\prime})$ of Definition \ref{def:RVs_prime2} and $(X^{\prime}, Y^{\prime})$ of Definition \ref{def:RVs_prime1}, respectively; in (b), the correspondence is reversed.
The dotted lines correspond to the boundary of $\{ (H( \bvec{p} ), H_{R}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$, where $H_{R}( \bvec{p} ) \triangleq \frac{ R }{ R - 1 } (1 - \| \bvec{p} \|_{R})$ denotes the $R$-norm information \cite{boekee} of $\bvec{p} \in \mathcal{P}_{n}$.}
\label{fig:LminLmax_R}
\end{figure}
\subsection{Applications for discrete memoryless channels}
\label{subsect:DMC}
In this subsection, we consider applications of Corollary \ref{cor:cond_extremes} for discrete memoryless channels (DMCs).
We define DMCs as follows:
Let the discrete random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ denote the input and output of a DMC, respectively, where $\mathcal{X}$ and $\mathcal{Y}$ denote the finite input and output alphabets, respectively.
Let $P_{Y|X}(y \mid x)$ denote the transition probability of a DMC $(X, Y)$ for $(x, y) \in \mathcal{X} \times \mathcal{Y}$.
For a DMC $(X, Y)$, the mutual information of order $\alpha \in (0, \infty)$ \cite{arimoto} is defined as
\begin{align}
I_{\alpha}(X; Y)
\triangleq
H_{\alpha}(X) - H_{\alpha}(X \mid Y) ,
\end{align}
where $I_{1}(X; Y) \triangleq I(X; Y)$ denotes the (ordinary) mutual information.
Since $H_{\alpha}( \bvec{u}_{n} ) = \ln n$ for $\alpha \in (0, \infty)$, if the input $X$ follows a uniform distribution, i.e., $X \sim \bvec{u}_{|\mathcal{X}|}$, then we observe that
\begin{align}
I_{\alpha}(X; Y)
=
\ln |\mathcal{X}| - H_{\alpha}(X \mid Y) .
\end{align}
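For a uniform-input DMC, $I_{\alpha}(X; Y)$ is immediate to evaluate from the transition matrix; a short Python sketch (with our own names) is:
\begin{verbatim}
import numpy as np

def I_alpha(W, a):   # W[x, y] = P_{Y|X}(y|x); X uniform over n inputs
    n = W.shape[0]
    PY = W.mean(axis=0)              # P_Y under the uniform input
    PXgY = (W / n) / PY              # Bayes' rule; column y is P_{X|Y}
    E = (PY * ((PXgY**a).sum(axis=0))**(1/a)).sum()
    return np.log(n) - a / (1 - a) * np.log(E)

eps = 0.1   # example: a ternary symmetric channel
W = (1 - eps) * np.eye(3) + (eps / 2) * (np.ones((3, 3)) - np.eye(3))
print(I_alpha(W, 0.5), I_alpha(W, 2.0))
\end{verbatim}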
Therefore, by using \eqref{eq:cond_Renyi_bound1} and \eqref{eq:cond_Renyi_bound2}, we can obtain tight bounds of $I_{\alpha}(X; Y)$ for a DMC $(X, Y)$ whose input $X$ follows a uniform distribution.
We summarize the bounds of $I_{\alpha}(X; Y)$ which follow from \eqref{eq:cond_Renyi_bound1} and \eqref{eq:cond_Renyi_bound2} in the following corollary.
\begin{corollary}
\label{cor:mutual}
For a given DMC $(X, Y)$, if the channels $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$ of Definitions \ref{def:RVs_prime1} and \ref{def:RVs_prime2}, respectively, satisfy $X, X^{\prime}, X^{\prime\prime} \sim \bvec{u}_{|\mathcal{X}|}$ and
\begin{align}
I(X; Y)
=
I(X^{\prime}; Y^{\prime})
=
I(X^{\prime\prime}; Y^{\prime\prime}) ,
\end{align}
then we observe that
\begin{align}
\hspace{-0.5em}
\frac{1}{2} \le \alpha < 1
& \Longrightarrow
I_{\alpha}(X^{\prime\prime} ; Y^{\prime\prime})
\le
I_{\alpha}(X; Y)
\le
I_{\alpha}(X^{\prime} ; Y^{\prime}) , \!
\label{eq:mutual1} \\
\hspace{-0.5em}
\alpha > 1
& \Longrightarrow
I_{\alpha}(X^{\prime} ; Y^{\prime})
\le
I_{\alpha}(X; Y)
\le
I_{\alpha}(X^{\prime\prime} ; Y^{\prime\prime}) , \!
\label{eq:mutual2}
\end{align}
where the upper bound of \eqref{eq:mutual1} also holds for $\alpha \in (0, \frac{1}{2})$.
\end{corollary}
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_MIvsMI-a_cond_9ary_half.pdf}
\put(75, -1.5){$I(X ; Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$I_{\alpha}(X; Y)$}}
\put(-1, 60){\scriptsize [nats]}
\put(30, 35){\color{bluegreen} $I_{\alpha}(X^{\prime} ; Y^{\prime})$}
\put(80, 27){\color{red} $I_{\alpha}(X^{\prime\prime} ; Y^{\prime\prime})$}
\put(50, 14.5){\small $I(X ; Y) = \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(53, 13){\vector(4, -1){21}}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_MIvsMI-a_cond_9ary_2.pdf}
\put(75, -1.5){$I(X ; Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$I_{\alpha}(X ; Y)$}}
\put(-1, 60){\scriptsize [nats]}
\put(60, 32){\color{bluegreen} $I_{\alpha}(X^{\prime} ; Y^{\prime})$}
\put(25, 47){\color{red} $I_{\alpha}(X^{\prime\prime} ; Y^{\prime\prime})$}
\put(33, 12){\small $I(X; Y) = \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(32, 12){\vector(-3, -1){13.5}}
\end{overpic}
}
\caption{Plots of the boundaries of $\{ (I(X ; Y), I_{\alpha}(X ; Y)) \mid P_{XY} \in \mathcal{P}( \mathcal{X} \times \mathcal{Y} ), |\mathcal{X}| = n, P_{X}( \cdot ) = \bvec{u}_{n} \}$ with $n = 9$.
In (a), the upper- and lower-boundaries correspond to $(X^{\prime}, Y^{\prime})$ of Definition \ref{def:RVs_prime1} and $(X^{\prime\prime}, Y^{\prime\prime})$ of Definition \ref{def:RVs_prime2}, respectively; in (b), the correspondence is reversed.
The dotted lines correspond to the boundary of $\{ (H( \bvec{p} ), H_{\alpha}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$.}
\label{fig:LminLmax_MI}
\end{figure}
Furthermore, we consider Gallager's $E_{0}$ function \cite{gallager} of a DMC $(X, Y)$, defined by
\begin{align}
E_{0}(\rho, X, Y)
& \triangleq
- \ln \sum_{y \in \mathcal{Y}} \left( \sum_{x \in \mathcal{X}} P_{X}( x ) P_{Y|X}(y \mid x)^{\frac{1}{1+\rho}} \right)^{1+\rho}
\notag
\end{align}
for $\rho \in (-1, \infty)$.
Using Definitions \ref{def:RVs_prime1} and \ref{def:RVs_prime2}, we can establish the following extremal property of the $E_{0}$ function.
\begin{theorem}
\label{th:E0_symmetric}
Assume that channels $(X^{\prime}, Y^{\prime})$ and $(X^{\prime\prime}, Y^{\prime\prime})$ satisfy $X, X^{\prime}, X^{\prime\prime} \sim \bvec{u}_{|\mathcal{X}|}$ and
\begin{align}
I(X; Y)
=
I(X^{\prime}; Y^{\prime})
=
I(X^{\prime\prime}; Y^{\prime\prime})
\end{align}
for a given DMC $(X, Y)$.
Then, we observe that
\begin{align}
E_{0}(\rho, X^{\prime\prime}, Y^{\prime\prime})
\le
E_{0}(\rho, X, Y)
\le
E_{0}(\rho, X^{\prime}, Y^{\prime})
\label{ineq:E0_symmetric}
\end{align}
for any $\rho \in (-1, 1]$.
In particular, the right-hand inequality of \eqref{ineq:E0_symmetric} also holds for $\rho \in (1, \infty)$.
\end{theorem}
\begin{figure}[!t]
\centering
\subfloat[The case $n = 5$ and $\rho = 1$ (cutoff rate).]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_MIvsE0_cond_5ary_1.pdf}
\put(75, -1.5){$I(X ; Y)$}
\put(95, -1){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$E_{0}(\rho, X, Y)$}}
\put(-1, 60.5){\scriptsize [nats]}
\put(29, 37){\color{bluegreen} $E_{0}(\rho, X^{\prime}, Y^{\prime})$}
\put(71, 25){\color{red} $E_{0}(\rho, X^{\prime\prime}, Y^{\prime\prime})$}
\put(44, 13.5){\small $I(X ; Y) = \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(46.5, 12){\vector(3, -2){7.25}}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $n = 5$ and $\rho = - \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_MIvsE0_cond_5ary_minus-half.pdf}
\put(75, 48){$I(X ; Y)$}
\put(95, 50){\scriptsize [nats]}
\put(-4, 20){\rotatebox{90}{$E_{0}(\rho, X, Y)$}}
\put(-2, -1){\scriptsize [nats]}
\put(70, 23){\color{bluegreen} $E_{0}(\rho, X^{\prime}, Y^{\prime})$}
\put(20, 15){\color{red} $E_{0}(\rho, X^{\prime\prime}, Y^{\prime\prime})$}
\put(50, 40){\small $I(X; Y) = \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(49, 42){\vector(-2, 1){31}}
\end{overpic}
}\\
\subfloat[The case $n = 256$ and $\rho = 1$ (cutoff rate).]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_MIvsE0_cond_256ary_1.pdf}
\put(75, -1.5){$I(X ; Y)$}
\put(98, 2){\scriptsize [nats]}
\put(-4, 23){\rotatebox{90}{$E_{0}(\rho, X, Y)$}}
\put(-3, 59){\scriptsize [nats]}
\put(29, 38){\color{bluegreen} $E_{0}(\rho, X^{\prime}, Y^{\prime})$}
\put(88, 20){\rotatebox{60}{\color{red} $E_{0}(\rho, X^{\prime\prime}, Y^{\prime\prime})$}}
\put(35, 13.5){\small $I(X ; Y) = \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(91, 11.75){\vector(3, -2){6.5}}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $n = 256$ and $\rho = - \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_MIvsE0_cond_256ary_minus-half.pdf}
\put(75, 48){$I(X ; Y)$}
\put(98, 54){\scriptsize [nats]}
\put(-4, 20){\rotatebox{90}{$E_{0}(\rho, X, Y)$}}
\put(1, 0){\scriptsize [nats]}
\put(70, 23){\color{bluegreen} $E_{0}(\rho, X^{\prime}, Y^{\prime})$}
\put(20, 8){\color{red} $E_{0}(\rho, X^{\prime\prime}, Y^{\prime\prime})$}
\put(51, 40){\small $I(X; Y) = \ln n - H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$}
\put(50, 42){\vector(-2, 1){31}}
\end{overpic}
}
\caption{Plots of the boundaries of $\{ (I(X ; Y), E_{0}(\rho, X, Y)) \mid P_{XY} \in \mathcal{P}( \mathcal{X} \times \mathcal{Y} ), |\mathcal{X}| = n, P_{X}( \cdot ) = \bvec{u}_{n} \}$.
The upper- and lower-boundaries correspond to $(X^{\prime}, Y^{\prime})$ of Definition \ref{def:RVs_prime1} and $(X^{\prime\prime}, Y^{\prime\prime})$ of Definition \ref{def:RVs_prime2}, respectively.
The dotted lines correspond to the boundary of $\{ (H( \bvec{p} ), H_{\alpha}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$.}
\label{fig:LminLmax_E0}
\end{figure}
\begin{IEEEproof}[Proof of Theorem \ref{th:E0_symmetric}]
It can be seen from \cite[Eq. (6)]{alsan} that, if the input $X$ follows a uniform distribution, then
\begin{align}
E_{0}(\rho, X, Y)
& =
\rho \, I_{\frac{1}{1+\rho}}(X; Y) .
\label{E0_mutual}
\end{align}
Therefore, for $\alpha = \frac{1}{1+\rho}$, noting the relations
\begin{align}
-1 < \rho \le 0
& \iff
1 \le \alpha < \infty ,
\\
0 \le \rho \le 1
& \iff
\frac{1}{2} \le \alpha \le 1 ,
\\
1 < \rho < \infty
& \iff
0 < \alpha < \frac{1}{2} ,
\end{align}
we can obtain Theorem \ref{th:E0_symmetric} from Corollary \ref{cor:mutual}.
\end{IEEEproof}
Theorem \ref{th:E0_symmetric} is a generalization of \cite[Theorem 2]{fabregas} and \cite[Corollary 2]{alsan} from binary-input DMCs to non-binary input DMCs under a uniform input distribution.
In addition, Theorem \ref{th:E0_symmetric} contains \cite[Corollary 1]{isit2015}.
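The identity \eqref{E0_mutual} used in the proof is easy to verify numerically; the following sketch (ours) checks it on a randomly drawn channel under the uniform input:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)
n, k = 4, 5
W = rng.dirichlet(np.ones(k), size=n)  # random P_{Y|X}; X uniform
for rho in (-0.5, 0.5, 1.0):
    E0 = -np.log((((W**(1/(1+rho))).mean(axis=0))**(1+rho)).sum())
    a = 1 / (1 + rho)
    PY = W.mean(axis=0)
    PXgY = (W / n) / PY
    E = (PY * ((PXgY**a).sum(axis=0))**(1/a)).sum()
    I_a = np.log(n) - a / (1 - a) * np.log(E)
    print(rho, np.isclose(E0, rho * I_a))   # True for each rho
\end{verbatim}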
\subsection{A special case of $L_{\max}^{\alpha}(X \mid Y)$: $\alpha = \frac{1}{2}$}
\label{subsect:alpha_half}
In Theorem \ref{th:cond_extremes}, we saw that $L_{\max}^{\alpha}(X \mid Y)$ is the maximum of expectations of $\ell_{\alpha}$-norm with a fixed conditional Shannon entropy.
Note that $L_{\max}^{\alpha}(X \mid Y)$ is a continuous function of $H(X \mid Y) \in [0, \ln n]$ composed of two segments separated by $H(X \mid Y) = H_{\sbvec{v}_{n}}( p_{n}^{\ast}( \alpha ) )$.
We now give an instance in which the solution $p_{n}^{\ast}( \alpha )$ of the equation \eqref{eq:equation_p^ast} with respect to $p \in [0, \frac{1}{n}]$ admits a simple closed form.
\begin{fact}
\label{fact:alpha_half}
If $\alpha = \frac{1}{2}$, then $p_{n}^{\ast}( \frac{1}{2} ) = \frac{1}{n (n-1)}$ for any $n \ge 2$.
\end{fact}
\begin{IEEEproof}[Proof of Fact \ref{fact:alpha_half}]
Simple calculations yield
\begin{align}
H_{\sbvec{v}_{n}} \! \left( \frac{ 1 }{ n (n-1) } \right)
& =
\left. \left( \vphantom{\sum} - (1 - (n-1) p ) \ln (1 - (n-1) p) - (n-1) p \ln p \right) \right|_{p = \frac{1}{n (n-1)}}
\\
& =
- \left( 1 - (n-1) \left( \frac{1}{n (n-1)} \right) \right) \ln \left( 1 - (n-1) \left( \frac{1}{n (n-1)} \right) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- (n-1) \left( \frac{1}{n (n-1)} \right) \ln \left( \frac{1}{n (n-1)} \right)
\\
& =
\ln n - \left( 1 - \frac{2}{n} \right) \ln (n-1) ,
\\
\| \bvec{u}_{n} \|_{( \frac{1}{2} )}
& =
\left. \left( n^{\frac{1}{\alpha} - 1} \right) \right|_{\alpha = \frac{1}{2}}
\\
& =
n ,
\\
\left\| \bvec{v}_{n} \! \left( \frac{ 1 }{ n (n-1) } \right) \right\|_{( \frac{1}{2} )}
& =
\left. \left( \vphantom{\sum} (1 - (n-1) p)^{\alpha} + (n-1) p^{\alpha} \right)^{\frac{1}{\alpha}} \right|_{(p, \alpha) = (\frac{1}{n (n-1)}, \frac{1}{2})}
\\
& =
\left( \left( 1 - (n-1) \left( \frac{1}{n (n-1)} \right) \right)^{\frac{1}{2}} + (n-1) \left( \frac{1}{n (n-1)} \right)^{\frac{1}{2}} \right)^{2}
\\
& =
\left( 2 \left( \frac{n-1}{n} \right)^{\frac{1}{2}} \right)^{2}
\\
& =
4 \left( \frac{n-1}{n} \right) .
\end{align}
Substituting $(p, \alpha) = (\frac{1}{n (n-1)}, \frac{1}{2})$ into the left-hand side of \eqref{eq:equation_p^ast}, we have
\begin{align}
&
\left. (\ln n - H_{\sbvec{v}_{n}}( p )) \left( \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) } \right) \right|_{(p, \alpha) = (\frac{1}{n (n-1)}, \frac{1}{2})}
\notag \\
& \quad =
\left. (\ln n - H_{\sbvec{v}_{n}}( p )) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} }{ \ln \frac{ 1 - (n-1) p}{ p } } \right) \right|_{(p, \alpha) = (\frac{1}{n (n-1)}, \frac{1}{2})}
\\
& \quad =
\left( \ln n - H_{\sbvec{v}_{n}} \!\! \left( \frac{1}{n (n-1)} \right) \right) \left( (n-1) \left( \frac{1}{n (n-1)} \right)^{\frac{1}{2}} + \left( 1 - (n-1) \left( \frac{1}{n (n-1)} \right) \right)^{\frac{1}{2}} \right)^{1}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times
\left( \frac{ \left( \frac{1}{n (n-1)} \right)^{-\frac{1}{2}} - \left( 1 - (n-1) \left( \frac{1}{n (n-1)} \right) \right)^{-\frac{1}{2}} }{ \ln ( (n-1)^{2} ) } \right)
\\
& \quad =
\left( \ln n - \left( \ln n - \left( 1 - \frac{2}{n} \right) \ln (n-1) \right) \right) \left( 2 \left( \frac{n-1}{n} \right)^{\frac{1}{2}} \right) \left( \frac{ ( n (n-1) )^{\frac{1}{2}} - \left( \frac{n}{n-1} \right)^{\frac{1}{2}} }{ 2 \ln (n-1) } \right)
\\
& \quad =
\left( \left( \frac{n - 2}{n} \right) \ln (n-1) \right) (n-1)^{\frac{1}{2}} \left( \frac{ (n-1)^{\frac{1}{2}} - \left( \frac{1}{n-1} \right)^{\frac{1}{2}} }{ \ln (n-1) } \right)
\\
& \quad =
\left( \frac{n-2}{n} \ln (n-1) \right) \left( \frac{ (n-1) - 1 }{ \ln (n-1) } \right)
\\
& \quad =
\frac{ (n-2)^{2} }{ n } .
\label{eq:LHS_equation_p^ast_1/2}
\end{align}
Similarly, substituting $(p, \alpha) = (\frac{1}{n (n-1)}, \frac{1}{2})$ into the right-hand side of \eqref{eq:equation_p^ast}, we have
\begin{align}
\left. \left( \vphantom{\sum} \| \bvec{u}_{n} \|_{\alpha} - \| \bvec{v}_{n}( p ) \|_{\alpha} \right) \right|_{(p, \alpha) = (\frac{1}{n (n-1)}, \frac{1}{2})}
& =
n - 4 \left( \frac{n-1}{n} \right)
\\
& =
\frac{n^{2} - 4 n + 4}{n}
\\
& =
\frac{(n-2)^{2}}{n} .
\label{eq:RHS_equation_p^ast_1/2}
\end{align}
Since \eqref{eq:LHS_equation_p^ast_1/2} and \eqref{eq:RHS_equation_p^ast_1/2} are the same, we have $p_{n}^{\ast}( \frac{1}{2} ) = \frac{1}{n (n-1)}$.
\end{IEEEproof}
Therefore, after some algebra, we can obtain
\begin{align}
& \hspace{-0.5em}
L_{\max}^{\frac{1}{2}}(X \mid Y)
=
\begin{cases}
\| \hat{\bvec{v}}_{n}(X \mid Y) \|_{\frac{1}{2}}
& \mathrm{if} \ H(X \mid Y) \le H_{\sbvec{v}_{n}}( \frac{1}{n (n-1)} ) , \\
T_{n, \frac{1}{2}} (X \mid Y)
& \mathrm{if} \ H(X \mid Y) > H_{\sbvec{v}_{n}}( \frac{1}{n (n-1)} ) ,
\end{cases}
\label{eq:Lmax_alpha_half}
\end{align}
where
\begin{align}
H_{\sbvec{v}_{n}}( {\textstyle \frac{1}{n (n-1)}} )
& =
\ln n - \left( 1 - \frac{2}{n} \right) \ln (n-1) ,
\\
T_{n, \frac{1}{2}} (X \mid Y)
& =
n - \frac{ (n-2) (\ln n - H(X \mid Y)) }{ \ln (n-1) } .
\end{align}
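A quick numerical spot-check of Fact \ref{fact:alpha_half} (in Python, with our own variable names) is the following; all three printed values coincide:
\begin{verbatim}
import numpy as np
n, a = 8, 0.5
p = 1 / (n * (n - 1))                 # claimed p*
q = 1 - (n - 1) * p
Hv = -(q * np.log(q) + (n - 1) * p * np.log(p))
Nv = ((n - 1) * p**a + q**a)**(1/a)
dNdH = ((n - 1) * p**a + q**a)**(1/a - 1) \
       * (p**(a - 1) - q**(a - 1)) / np.log(q / p)
lhs = (np.log(n) - Hv) * dNdH         # LHS of the p* equation
rhs = n**(1/a - 1) - Nv               # RHS of the p* equation
print(lhs, rhs, (n - 2)**2 / n)       # all equal 4.5 for n = 8
\end{verbatim}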
Note that Figs. \ref{fig:LminLmax}-\subref{subfig:norm_half} and \ref{fig:LminLmax_Renyi}-\subref{subfig:Renyi_half} are plotted by using \eqref{eq:Lmax_alpha_half}.
Since $\alpha = \frac{1}{1+\rho}$ in the $E_{0}$ function, the order $\alpha = \frac{1}{2}$ implies $\rho = 1$.
Thus, we can obtain the tight lower bound of the \emph{cutoff rate} $E_{0}(1, X, Y)$ by using \eqref{eq:Lmax_alpha_half}.
More precisely, if $\rho = 1$, then the lower bound of \eqref{ineq:E0_symmetric} can be calculated as
\begin{align}
E_{0}(1, X^{\prime\prime}, Y^{\prime\prime})
=
\ln n - \ln L_{\max}^{\frac{1}{2}}(X^{\prime\prime} \mid Y^{\prime\prime})
\label{eq:cutoffrate}
\end{align}
for a fixed $H(X^{\prime\prime} \mid Y^{\prime\prime}) \in [0, \ln n]$.
\section{Conclusion}
In this study, we investigated extremal relations between the conditional Shannon entropy and the expectation of $\ell_{\alpha}$-norm for joint probability distributions in Theorems \ref{th:cond_extremes} and \ref{th:cond_extremes2}.
Extending Theorem \ref{th:cond_extremes} to Corollary \ref{cor:cond_extremes}, we obtained tight bounds of some conditional entropies \eqref{eq:renyi_f} and \eqref{eq:R_f} with a fixed conditional Shannon entropy.
In Section \ref{subsect:DMC}, we applied Corollary \ref{cor:cond_extremes} to DMCs under a uniform input distribution.
Then, we showed tight bounds of the $E_{0}$ function with a fixed mutual information.
\appendices
\section{Proof of Lemma \ref{lem:convex_v}}
\label{app:convex_v}
\begin{IEEEproof}[Proof of Lemma \ref{lem:convex_v}]
By the chain rule of differentiation, we have
\begin{align}
\frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} }
& =
\left( \frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p^{2} } \right) \cdot \left( \frac{ \partial p }{ \partial H_{\sbvec{v}_{n}}( p ) } \right)^{2} + \left( \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p } \right) \cdot \left( \frac{ \partial^{2} p }{ \partial H_{\sbvec{v}_{n}}( p )^{2} } \right) .
\label{eq:diff2}
\end{align}
We can see from the proofs of \cite[Lemmas 1 and 3]{part1_arxiv} that
\begin{align}
\frac{ \mathrm{d} H_{\sbvec{v}_{n}}( p ) }{ \mathrm{d} p }
& =
(n-1) \ln \frac{ 1 - (n-1) p }{ p } ,
\label{eq:diff1_Hv} \\
\frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p }
& =
(n-1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) .
\label{eq:norm_diff1}
\end{align}
Direct calculation shows
\begin{align}
\frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p^{2} }
& =
\frac{ \partial }{ \partial p } \left( \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p } \right)
\\
& \overset{\eqref{eq:norm_diff1}}{=}
\frac{ \partial }{ \partial p } \left( (n-1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \right)
\\
& \overset{\text{(a)}}{=}
(n-1) \left[ \left( \frac{ \partial }{ \partial p } \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \right) \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \right.
\notag \\
& \left. \qquad \qquad \quad +
\left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ \partial }{ \partial p } \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \right) \right]
\\
& =
(n-1) \left[ (1 - \alpha) (n-1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right)^{2} \right.
\notag \\
& \left. \qquad + \,
(\alpha - 1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-2} + (n-1) (1 - (n-1)p)^{\alpha-2} \right) \right]
\\
& =
(n-1) (\alpha - 1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left[ - (n-1) \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right)^{2} \right.
\notag \\
& \left. \qquad +
\left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right) \left( \vphantom{\sum} p^{\alpha-2} + (n-1) (1 - (n-1)p)^{\alpha-2} \right) \vphantom{\left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right)^{2}} \right]
\label{eq:norm_diff2_halfway} \\
& \overset{\text{(b)}}{=}
(n-1) (\alpha - 1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\sum} p (1 - (n-1)p) \right)^{\alpha - 2} ,
\label{eq:norm_diff2}
\end{align}
where
\begin{itemize}
\item
(a) follows by the product rule and
\item
(b) follows from the fact that the bracket $[ \cdot ]$ of the right-hand side of \eqref{eq:norm_diff2_halfway} is
\begin{align}
&
\left[ \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right) \left( \vphantom{\sum} p^{\alpha-2} + (n-1) (1 - (n-1)p)^{\alpha-2} \right) \right.
\notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad -
(n-1) \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right)^{2} \right]
\notag \\
& \quad =
\left( \vphantom{\sum} (n-1) p^{2(\alpha-1)} + (n-1)^{2} p^{\alpha} (1 - (n-1)p)^{\alpha-2} + p^{\alpha-2} (1 - (n-1)p)^{\alpha} + (n-1) (1 - (n-1)p)^{2(\alpha-1)} \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- (n-1) \left( \vphantom{\sum} p^{2(\alpha-1)} - 2 p^{\alpha-1} (1 - (n-1)p)^{\alpha-1} + (1 - (n-1)p)^{2(\alpha-1)} \right)
\\
& \quad =
(n-1)^{2} p^{\alpha} (1 - (n-1)p)^{\alpha-2} + p^{\alpha-2} (1 - (n-1)p)^{\alpha} + 2 (n-1) p^{\alpha-1} (1 - (n-1)p)^{\alpha-1}
\\
& \quad =
\left( \vphantom{\sum} p (1 - (n-1)p) \right)^{\alpha-2} \left( \vphantom{\sum} 2 (n-1) p (1 - (n-1)p) + (n-1)^{2} p^{2} + (1 - (n-1)p)^{2} \right)
\\
& \quad =
\left( \vphantom{\sum} p (1 - (n-1)p) \right)^{\alpha-2} \underbrace{ \left( \vphantom{\sum} 2 (n-1) p - 2 (n-1)^{2} p^{2} + (n-1)^{2} p^{2} + 1 - 2 (n-1)p + (n-1)^{2} p^{2} \right) }_{ = 1 }
\\
& \quad =
\left( \vphantom{\sum} p (1 - (n-1)p) \right)^{\alpha-2} .
\end{align}
\end{itemize}
Moreover, we see that
\begin{align}
\frac{ \mathrm{d}^{2} p }{ \mathrm{d} H_{\sbvec{v}_{n}}( p )^{2} }
& =
\left[ \frac{ \mathrm{d} }{ \mathrm{d} p } \left( \frac{ \mathrm{d} p }{ \mathrm{d} H_{\sbvec{v}_{n}}( p ) } \right) \right] \left( \frac{ \mathrm{d} p }{ \mathrm{d} H_{\sbvec{v}_{n}}( p ) } \right)
\\
& \overset{\text{(a)}}{=}
\left[ \frac{ \mathrm{d} }{ \mathrm{d} p } \left( \frac{ 1 }{ \frac{ \mathrm{d} H_{\sbvec{v}_{n}}( p ) }{ \mathrm{d} p } } \right) \right] \left( \frac{ 1 }{ \frac{ \mathrm{d} H_{\sbvec{v}_{n}}( p ) }{ \mathrm{d} p } } \right)
\\
& \overset{\eqref{eq:diff1_Hv}}{=}
\left[ \frac{ \mathrm{d} }{ \mathrm{d} p } \left( \frac{ 1 }{ (n-1) \ln \frac{ 1 - (n-1) p}{ p } } \right) \right] \left( \frac{ 1 }{ (n-1) \ln \frac{ 1 - (n-1) p}{ p } } \right)
\\
& \overset{\text{(b)}}{=}
\left[ - \frac{ 1 }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} } \left( \frac{ \mathrm{d} \left( \ln \frac{ 1 - (n-1) p}{ p } \right) }{ \mathrm{d} p } \right) \right] \left( \frac{ 1 }{ (n-1) \ln \frac{ 1 - (n-1) p}{ p } } \right)
\\
& =
- \left( \frac{ 1 }{ (n-1)^{2} \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{3} } \right) \left( \frac{ \mathrm{d} \left( \ln \frac{ 1 - (n-1) p}{ p } \right) }{ \mathrm{d} p } \right)
\\
& \overset{\text{(c)}}{=}
- \left( \frac{ 1 }{ (n-1)^{2} \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{3} } \right) \left( \frac{ 1 }{ \frac{ 1 - (n-1) p }{ p } } \left( - \frac{1}{p^{2}} \right) \right)
\\
& =
\left( \frac{ 1 }{ (n-1)^{2} \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{3} } \right) \left( \frac{ 1 }{ p (1 - (n-1)p) } \right)
\\
& =
\frac{ 1 }{ p (1 - (n-1)p) (n-1)^{2} \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{3} } ,
\label{eq:hn_diff2}
\end{align}
where
\begin{itemize}
\item
(a) follows by the inverse function theorem,
\item
(b) follows from the fact that $\frac{ \mathrm{d} }{ \mathrm{d} x } \left( \frac{1}{f(x)} \right) = - \frac{ 1 }{ (f(x))^{2} } \left( \frac{ \mathrm{d} f(x) }{ \mathrm{d} x } \right)$, and
\item
(c) follows from the fact that $\frac{ \mathrm{d} \ln f(x) }{ \mathrm{d} x } = \frac{ 1 }{ f(x) } \left( \frac{ \mathrm{d} f(x) }{ \mathrm{d} x } \right)$.
\end{itemize}
Substituting \eqref{eq:norm_diff1}, \eqref{eq:diff1_Hv}, \eqref{eq:norm_diff2}, and \eqref{eq:hn_diff2} into \eqref{eq:diff2}, we obtain
\begin{align}
&
\frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} }
\notag \\
& =
\underbrace{ (n-1) (\alpha - 1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\sum} p (1 - (n-1)p) \right)^{\alpha - 2} }_{= \, \text{\eqref{eq:norm_diff2}}} \times \left( \vphantom{\frac{ 1 }{ (n-1) \ln \left( \frac{ 1 - (n-1) p }{ p } \right) }} \right. \underbrace{ \frac{ 1 }{ (n-1) \ln \frac{ 1 - (n-1) p }{ p } } }_{= \, \frac{ 1 }{ \text{\eqref{eq:diff1_Hv}} }} \left. \vphantom{\frac{ 1 }{ (n-1) \ln \left( \frac{ 1 - (n-1) p }{ p } \right) }} \right)^{2}
\notag \\
& \qquad \qquad \qquad +
\underbrace{ (n-1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) }_{ = \, \text{\eqref{eq:norm_diff1}} }
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times
\underbrace{ \left( \frac{ 1 }{ p (1 - (n-1)p) (n-1)^{2} \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{3} } \right) }_{ = \, \text{\eqref{eq:hn_diff2}} }
\\
& =
\frac{ (\alpha - 1) \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \left( \frac{ 1 - (n-1) p }{ p } \right) \right)^{2} }
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad +
\frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) }{ p (1 - (n-1)p) (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{3} }
\\
& =
\frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} }
\notag \\
& \qquad \qquad \qquad \qquad \qquad \times
\left[ (\alpha - 1) + \frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right) \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) }{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 1} \ln \frac{ 1 - (n-1) p}{ p } } \right]
\\
& =
\frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} }
\notag \\
& \quad \times
\left[ (\alpha - 1) + \frac{ (n-1) p^{2 \alpha - 1} - (n-1) p^{\alpha} (1 - (n-1) p)^{\alpha-1} + p^{\alpha-1} (1 - (n-1) p)^{\alpha} - (1 - (n-1) p)^{2\alpha-1} }{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 1} \ln \frac{ 1 - (n-1) p}{ p } } \right]
\\
& =
\frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} }
\notag \\
& \qquad \times
\left[ (\alpha - 1) + \frac{ (n-1) p^{\alpha} (1 - (n-1)p)^{1 - \alpha} - (n-1) p + (1 - (n-1) p) - p^{1 - \alpha} (1 - (n-1) p)^{\alpha} }{ \ln \frac{ 1 - (n-1) p}{ p } } \right]
\\
& =
\frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} }
\notag \\
& \qquad \qquad \qquad \quad \times
\left[ (\alpha - 1) + \frac{ 1 - 2 (n-1) p + (n-1) p^{\alpha} (1 - (n-1)p)^{1 - \alpha} - p^{1 - \alpha} (1 - (n-1) p)^{\alpha} }{ \ln \frac{ 1 - (n-1) p}{ p } } \right]
\\
& =
\left( \frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} } \right) \, g(n, p, \alpha) ,
\label{eq:diff2:N_H_v}
\end{align}
where the function $g(n, p, \alpha)$ is defined by
\begin{align}
g(n, p, \alpha)
\triangleq
(\alpha - 1) + \frac{ 1 - 2 (n-1) p + (n-1) p^{\alpha} (1 - (n-1)p)^{1 - \alpha} - p^{1 - \alpha} (1 - (n-1) p)^{\alpha} }{ \ln \frac{ 1 - (n-1) p}{ p } } .
\label{eq:g_p}
\end{align}
Since
\begin{align}
\operatorname{sgn} \! \left( \frac{ \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 2} \left( \vphantom{\frac{ 1 - (n-1) p}{ p }} p (1 - (n-1)p) \right)^{\alpha - 2} }{ (n-1) \left( \ln \frac{ 1 - (n-1) p}{ p } \right)^{2} } \right)
=
1
\end{align}
for $p \in (0, \frac{1}{n})$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$, we get from \eqref{eq:diff2:N_H_v} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} } \right)
=
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, p, \alpha) \right)
\label{eq:sgn_diff2_N_Hv}
\end{align}
for $p \in (0, \frac{1}{n})$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$.
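Before proceeding, we illustrate \eqref{eq:sgn_diff2_N_Hv} with an informal numerical sanity check (this is illustration only and not part of the proof).
The following Python sketch, which assumes only NumPy and whose helper names are ad hoc, compares the sign of a finite-difference approximation of $\frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} }$ with the sign of the right-hand side of \eqref{eq:g_p}:
\begin{verbatim}
import numpy as np

n, a = 4, 0.7                                  # sample order n and parameter alpha
p = np.linspace(1e-4, 1/n - 1e-4, 20001)
q = 1 - (n - 1) * p                            # largest component of v_n(p)

N = ((n - 1) * p**a + q**a)**(1 / a)           # l_alpha-norm of v_n(p)
H = -(n - 1) * p * np.log(p) - q * np.log(q)   # Shannon entropy in nats

num = 1 - 2*(n - 1)*p + (n - 1) * p**a * q**(1 - a) - p**(1 - a) * q**a
g = (a - 1) + num / np.log(q / p)              # right-hand side of (eq:g_p)

d2 = np.gradient(np.gradient(N, H), H)         # crude d^2 N / d H^2 along H
inner = slice(200, -200)                       # discard boundary artefacts
print(np.mean(np.sign(d2[inner]) == np.sign(g[inner])))  # expect ~1.0
\end{verbatim}
The agreement may fail at the few grid points where the second derivative changes sign; this is an artefact of the finite differences, not of \eqref{eq:sgn_diff2_N_Hv}.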
After some algebra, we can rewrite $g(n, p, \alpha)$ as follows:
\begin{align}
g(n, p, \alpha)
& =
(\alpha - 1) + \frac{ 1 - 2 (n-1) p + (n-1) p^{\alpha} (1 - (n-1)p)^{1 - \alpha} - p^{1 - \alpha} (1 - (n-1) p)^{\alpha} }{ \ln \frac{ 1 - (n-1) p}{ p } }
\\
& =
(\alpha - 1) + \frac{ 1 - p \left( (n-1) \left(2 - \left( \frac{1 - (n-1)p}{p} \right)^{1 - \alpha} \right) + \left( \frac{1 - (n-1)p}{p} \right)^{\alpha} \right) }{ \ln \frac{ 1 - (n-1) p}{ p } }
\\
& \overset{\text{(a)}}{=}
(\alpha - 1) + \frac{ 1 - p \left( (n-1) \left(2 - z^{1 - \alpha} \right) + z^{\alpha} \right) }{ \ln z }
\\
& \overset{\text{(b)}}{=}
(\alpha - 1) + \frac{ 1 - \frac{1}{(n-1) + z} \left( (n-1) \left(2 - z^{1 - \alpha} \right) + z^{\alpha}
\right) }{ \ln z }
\\
& =
(\alpha - 1) + \frac{ ((n-1) + z) - \left( (n-1) \left(2 - z^{1 - \alpha} \right) + z^{\alpha}
\right) }{ ((n-1) + z) \ln z }
\\
& =
(\alpha - 1) + \frac{ z - (n-1) + (n-1) z^{1 - \alpha} - z^{\alpha} }{ ((n-1) + z) \ln z }
\\
& =
(\alpha - 1) + \frac{ z (1 - z^{\alpha-1}) - (n-1) (1 - z^{1 - \alpha}) }{ ((n-1) + z) \ln z }
\\
& =
(\alpha - 1) + \frac{ z^{\alpha} (z^{1-\alpha} - 1) + (n-1) (z^{1 - \alpha}-1) }{ ((n-1) + z) \ln z }
\\
& =
(\alpha - 1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z }
\end{align}
for $p \in (0, \frac{1}{n})$ and $\alpha \in (- \infty, 0) \cup (0, + \infty)$, where
\begin{itemize}
\item
(a) follows from the change of variable: $z = z(n, p) \triangleq \frac{ 1 - (n-1) p }{ p }$, and
\item
(b) follows from the fact that $z = \frac{ 1 - (n-1) p }{ p } \iff p = \frac{ 1 }{ (n-1) + z }$.
\end{itemize}
We define
\begin{align}
g(n, z, \alpha)
\triangleq
(\alpha - 1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } ,
\label{eq:g_z}
\end{align}
where note that
\begin{itemize}
\item
$g(n, p, \alpha)$ denotes the right-hand side of \eqref{eq:g_p} and
\item
$g(n, z, \alpha)$ denotes the right-hand side of \eqref{eq:g_z}.
\end{itemize}
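As a hedged consistency check of this change of variable (again illustration only), the following Python sketch, assuming NumPy and using the ad hoc names \texttt{g\_p} and \texttt{g\_z}, evaluates \eqref{eq:g_p} and \eqref{eq:g_z} at random admissible points:
\begin{verbatim}
import numpy as np

def g_p(n, p, a):
    # right-hand side of (eq:g_p)
    q = 1 - (n - 1) * p
    num = 1 - 2*(n - 1)*p + (n - 1) * p**a * q**(1 - a) - p**(1 - a) * q**a
    return (a - 1) + num / np.log(q / p)

def g_z(n, z, a):
    # right-hand side of (eq:g_z)
    return (a - 1) + ((n - 1) + z**a) * (z**(1 - a) - 1) / (((n - 1) + z) * np.log(z))

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 10))
    p = rng.uniform(1e-3, 1/n - 1e-3)
    a = rng.uniform(-2.0, 3.0)
    z = (1 - (n - 1) * p) / p                  # the change of variable (a)
    assert np.isclose(g_p(n, p, a), g_z(n, z, a))
print("eq:g_p and eq:g_z agree on all random samples")
\end{verbatim}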
We now verify the relation between $p \in (0, \frac{1}{n-1}]$ and $z(n, p)$.
It is easy to see that
\begin{align}
\frac{ \partial z(n, p) }{ \partial p }
& =
\frac{ \partial }{ \partial p } \left( \frac{ 1 - (n-1) p }{ p } \right)
\\
& \overset{\text{(a)}}{=}
\left( \frac{ \partial }{ \partial p } (1 - (n-1) p) \right) \frac{ 1 }{ p } + (1 - (n-1) p) \left( \frac{ \partial }{ \partial p } \left( \frac{1}{p} \right) \right)
\\
& =
( - (n-1) ) \frac{ 1 }{ p } + (1 - (n-1) p) \left( - \frac{1}{p^{2}} \right)
\\
& =
\frac{ - (n-1) p - (1 - (n-1) p) }{ p^{2} }
\\
& =
- \frac{1}{p^{2}}
\\
& <
0
\label{eq:diff_z}
\end{align}
for $p \in (0, \frac{1}{n-1}]$, where (a) follows by the product rule;
namely, it follows from \eqref{eq:diff_z} that $z = z(n, p)$ is strictly decreasing for $p \in (0, \frac{1}{n-1}]$.
Moreover, we can see that
\begin{align}
\lim_{p \to 0^{+}} z(n, p)
& =
\lim_{p \to 0^{+}} \left( \frac{ 1 - (n-1) p }{ p } \right)
\\
& =
\lim_{p \to 0^{+}} \left( \frac{ 1 }{ p } - (n-1) \right)
\\
& =
+ \infty ,
\\
z(n, {\textstyle \frac{1}{n}})
& =
\left. \frac{ 1 - (n-1) p }{ p } \right|_{p = \frac{1}{n}}
\\
& =
\frac{ 1 - (n-1) \frac{1}{n} }{ \frac{1}{n} }
\\
& =
n - (n-1)
\\
& =
1 ,
\\
z(n, {\textstyle \frac{1}{n-1}})
& =
\left. \frac{ 1 - (n-1) p }{ p } \right|_{p = \frac{1}{n-1}}
\\
& =
\frac{ 1 - (n-1) \frac{1}{n-1} }{ \frac{1}{n-1} }
\\
& =
\frac{ 1 - 1 }{ \frac{1}{n-1} }
\\
& =
0 ,
\end{align}
which imply that
\begin{align}
0 < p \le \frac{1}{n}
& \iff
1 \le z < +\infty ,
\\
\frac{1}{n} \le p \le \frac{1}{n-1}
& \iff
0 \le z \le 1 .
\label{eq:range_z_w}
\end{align}
Therefore, it is enough to check the sign of $g(n, z, \alpha)$ for $z \in (1, +\infty)$ and $\alpha \in (0, 1) \cup (1, +\infty)$ rather than the sign of $g(n, p, \alpha)$ for $p \in (0, \frac{1}{n})$.
To analyze $g(n, z, \alpha)$, we calculate the partial derivatives of $g(n, z, \alpha)$ with respect to $\alpha$ as follows:
\begin{align}
\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }
& \overset{\eqref{eq:g_z}}{=}
\frac{ \partial }{ \partial \alpha } \left( (\alpha - 1) + \frac{ (z^{\alpha} + (n-1)) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right)
\\
& =
1 + \frac{ 1 }{ ((n-1) + z) \ln z } \left( \frac{ \partial }{ \partial \alpha } (z^{\alpha} + (n-1)) (z^{1-\alpha} - 1) \right)
\\
& =
1 + \frac{ 1 }{ ((n-1) + z) \ln z } \left( \frac{ \partial }{ \partial \alpha } ( z - z^{\alpha} + (n-1) z^{1 - \alpha} - (n-1) ) \right)
\\
& =
1 + \frac{ 1 }{ ((n-1) + z) \ln z } \left( (n-1) \left( \frac{ \partial z^{1 - \alpha} }{ \partial \alpha } \right) - \left( \frac{ \partial z^{\alpha} }{ \partial \alpha } \right) \right)
\\
& \overset{\text{(a)}}{=}
1 + \frac{ 1 }{ ((n-1) + z) \ln z } \left( (n-1) (\ln z) (-1) z^{1-\alpha} - (\ln z) (1) z^{\alpha} \right)
\\
& =
1 - \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z } ,
\label{eq:diff1_g} \\
\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} }
& \overset{\eqref{eq:diff1_g}}{=}
\frac{ \partial }{ \partial \alpha } \left( 1 - \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z } \right)
\\
& =
- \frac{ \frac{ \partial }{ \partial \alpha } ((n-1) z^{1-\alpha} + z^{\alpha}) }{ (n-1) + z }
\\
& \overset{\text{(b)}}{=}
- \frac{ (n-1) (\ln z) (-1) z^{1-\alpha} + (\ln z) (1) z^{\alpha} }{ (n-1) + z }
\\
& =
\left( (n-1) z^{1-\alpha} - z^{\alpha} \right) \frac{ \ln z }{ (n-1) + z } ,
\label{eq:diff2_g}
\\
\frac{ \partial^{3} g(n, z, \alpha) }{ \partial \alpha^{3} }
& \overset{\eqref{eq:diff2_g}}{=}
\frac{ \partial }{ \partial \alpha } \left( \left( (n-1) z^{1-\alpha} - z^{\alpha} \right) \frac{ \ln z }{ (n-1) + z } \right)
\\
& \overset{\text{(c)}}{=}
( (n-1) (\ln z) (-1) z^{1-\alpha} - (\ln z) (1) z^{\alpha} ) \frac{ \ln z }{ (n-1) + z }
\\
& =
\underbrace{ - \left( (n-1) z^{1-\alpha} + z^{\alpha} \right) }_{< 0} \underbrace{ \frac{ (\ln z)^{2} }{ (n-1) + z } }_{\ge 0}
\\
& =
\begin{cases}
< 0
& \mathrm{if} \ z \in (0, 1) \cup (1, +\infty) , \\
= 0
& \mathrm{if} \ z = 1 ,
\end{cases}
\label{eq:diff3_g}
\end{align}
where (a), (b), and (c) follow from the fact that $\frac{ \mathrm{d} (a^{f(x)}) }{ \mathrm{d} x } = (\ln a) \left( \frac{ \mathrm{d} f(x) }{ \mathrm{d} x } \right) a^{f(x)}$ for $a > 0$.
It follows from \eqref{eq:diff3_g} that, if $z > 1$, then $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} }$ is strictly decreasing for $\alpha \in (-\infty, +\infty)$.
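The closed forms \eqref{eq:diff1_g} and \eqref{eq:diff2_g} can be spot-checked numerically; the following sketch (NumPy assumed, illustration only, ad hoc names) compares them with central finite differences:
\begin{verbatim}
import numpy as np

def g_z(n, z, a):
    # right-hand side of (eq:g_z)
    return (a - 1) + ((n - 1) + z**a) * (z**(1 - a) - 1) / (((n - 1) + z) * np.log(z))

def dg_da(n, z, a):
    # closed form (eq:diff1_g)
    return 1 - ((n - 1) * z**(1 - a) + z**a) / ((n - 1) + z)

def d2g_da2(n, z, a):
    # closed form (eq:diff2_g)
    return ((n - 1) * z**(1 - a) - z**a) * np.log(z) / ((n - 1) + z)

n, z, a, h = 5, 3.0, 0.8, 1e-5
fd1 = (g_z(n, z, a + h) - g_z(n, z, a - h)) / (2 * h)    # central differences
fd2 = (dg_da(n, z, a + h) - dg_da(n, z, a - h)) / (2 * h)
print(fd1 - dg_da(n, z, a), fd2 - d2g_da2(n, z, a))      # both ~0
\end{verbatim}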
We derive the solution of the equation $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} } = 0$ with respect to $\alpha$ as follows:
\begin{align}
&&
\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} }
& =
0
\\
& \overset{\eqref{eq:diff2_g}}{\iff} &
\left( (n-1) z^{1-\alpha} - z^{\alpha} \right) \frac{ \ln z }{ (n-1) + z }
& =
0
\\
& \iff &
(n-1) z^{1-\alpha} - z^{\alpha}
& =
0
\\
& \iff &
(n-1) z^{1-\alpha}
& =
z^{\alpha}
\\
& \iff &
(n-1)
& =
z^{2\alpha-1}
\label{root:diff2_g}
\\
& \iff &
\ln (n-1)
& =
\ln z^{2\alpha-1}
\\
& \iff &
\ln (n-1)
& =
( 2 \alpha - 1 ) \ln z
\\
& \iff &
2 \alpha - 1
& =
\frac{ \ln (n-1) }{ \ln z }
\\
& \iff &
\alpha
& =
\frac{1}{2} \left( 1 + \frac{ \ln (n-1) }{ \ln z } \right) .
\end{align}
Thus, we can denote by
\begin{align}
\alpha_{2}(n, z)
& \triangleq
\frac{ 1 }{ 2 } \left( 1 + \frac{ \ln (n-1) }{ \ln z } \right)
\label{def:a2}
\end{align}
the solution of $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} } = 0$ with respect to $\alpha$ for $z \in (0, 1) \cup (1, +\infty)$.
Similarly, the solution of $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} } = 0$ with respect to $z$ can also be derived as follows:
\begin{align}
&&
\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} }
& =
0
\\
& \overset{\eqref{root:diff2_g}}{\iff} &
(n-1)
& =
z^{2\alpha-1}
\\
& \iff &
z
& =
(n-1)^{\frac{1}{2\alpha-1}} .
\end{align}
Thus, we can also denote by
\begin{align}
z_{2}(n, \alpha)
& \triangleq
(n-1)^{\frac{1}{2 \alpha-1}}
\label{def:z2}
\end{align}
the root of $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} } = 0$ with respect to $z$ for $\alpha \in (-\infty, \frac{1}{2}) \cup (\frac{1}{2}, +\infty)$.
Since $z_{2}(n, \cdot)$ is the inverse function of $\alpha_{2}(n, z)$ for $z \in (0, 1) \cup (1, +\infty)$, note that
\begin{align}
\alpha_{2}(n, z) = \alpha
\iff
z_{2}(n, \alpha) = z
\label{eq:inverse_a2z2}
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$ and $\alpha \in (-\infty, \frac{1}{2}) \cup (\frac{1}{2}, +\infty)$.
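Numerically, the root property and the inverse relation \eqref{eq:inverse_a2z2} can be illustrated as follows (NumPy assumed, illustration only):
\begin{verbatim}
import numpy as np

def d2g_da2(n, z, a):
    # closed form (eq:diff2_g)
    return ((n - 1) * z**(1 - a) - z**a) * np.log(z) / ((n - 1) + z)

n, z = 6, 2.5
a2 = 0.5 * (1 + np.log(n - 1) / np.log(z))   # alpha_2(n, z) of (def:a2)
z2 = (n - 1)**(1 / (2 * a2 - 1))             # z_2(n, alpha_2) of (def:z2)
print(d2g_da2(n, z, a2))   # ~0: alpha_2(n, z) is a root of (eq:diff2_g)
print(z2 - z)              # ~0: z_2(n, .) inverts alpha_2(n, .)
\end{verbatim}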
Further, note that
\begin{align}
\alpha_{2}(2, z)
& \overset{\eqref{def:a2}}{=}
\left. \frac{1}{2} \left( 1 + \frac{ \ln (n-1) }{ \ln z } \right) \right|_{n = 2}
\\
& =
\frac{1}{2} \left( 1 + \frac{ \ln 1 }{ \ln z } \right)
\\
& =
\frac{1}{2} ,
\label{eq:alpha2_n2} \\
z_{2}(2, \alpha)
& \overset{\eqref{def:z2}}{=}
\left. (n-1)^{\frac{1}{2 \alpha - 1}} \right|_{n = 2}
\\
& =
1^{\frac{1}{2\alpha-1}}
\\
& =
1
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$ and $\alpha \in (-\infty, \frac{1}{2}) \cup (\frac{1}{2}, +\infty)$.
If $n \ge 3$, we can see the following limiting values:
\begin{align}
\lim_{z \to 0^{+}} \alpha_{2}(n, z)
& =
\frac{1}{2} ,
\label{eq:alpha2_0} \\
\lim_{z \to 1^{-}} \alpha_{2}(n, z)
& =
- \infty,
\label{eq:alpha2_1_minus} \\
\lim_{z \to 1^{+}} \alpha_{2}(n, z)
& =
+ \infty,
\label{eq:alpha2_1_plus} \\
\lim_{z \to +\infty} \alpha_{2}(n, z)
& =
\frac{1}{2} ,
\label{eq:alpha2_infty} \\
\lim_{\alpha \to -\infty} z_{2}(n, \alpha)
& =
1 ,
\\
\lim_{\alpha \to (\frac{1}{2})^{-}} z_{2}(n, \alpha)
& =
0 ,
\\
\lim_{\alpha \to (\frac{1}{2})^{+}} z_{2}(n, \alpha)
& =
+\infty ,
\\
\lim_{\alpha \to +\infty} z_{2}(n, \alpha)
& =
1 .
\end{align}
Calculating the derivative of $\alpha_{2}(n, z)$ with respect to $z$ as
\begin{align}
\frac{ \partial \alpha_{2}(n, z) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \frac{ 1 }{ 2 } \left( 1 + \frac{ \ln (n-1) }{ \ln z } \right) \right)
\\
& =
\frac{ \ln (n-1) }{ 2 } \left( \frac{ \partial }{ \partial z } \left( \frac{ 1 }{ \ln z } \right) \right)
\\
& =
\frac{ \ln (n-1) }{ 2 } \left( - \frac{ 1 }{ (\ln z)^{2} } \left( \frac{ \partial \ln z }{ \partial z } \right) \right)
\\
& =
\frac{ \ln (n-1) }{ 2 } \left( - \frac{ 1 }{ (\ln z)^{2} } \frac{ 1 }{ z } \right)
\\
& =
- \frac{ \ln (n-1) }{ 2 z (\ln z)^{2} }
\\
& =
\begin{cases}
< 0
& \mathrm{if} \ z \in (0, 1) \cup (1, \infty) \ \mathrm{and} \ n \ge 3 , \\
= 0
& \mathrm{if} \ z \in (0, 1) \cup (1, \infty) \ \mathrm{and} \ n = 2 , \\
\end{cases}
\label{eq:diff_alpha1}
\end{align}
we see that, if $n \ge 3$, then
\begin{itemize}
\item
$\alpha_{2}(n, z)$ is strictly decreasing for $z \in (0, 1)$ and
\item
$\alpha_{2}(n, z)$ is strictly decreasing for $z \in (1, +\infty)$.
\end{itemize}
Moreover, the inverse function theorem shows that, if $n \ge 3$, then
\begin{itemize}
\item
$z_{2}(n, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, \frac{1}{2})$ and
\item
$z_{2}(n, \alpha)$ is strictly decreasing for $\alpha \in (\frac{1}{2}, +\infty)$.
\end{itemize}
Since
\begin{itemize}
\item
if $z \in (0, 1) \cup (1, +\infty)$, then $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} }$ is strictly decreasing for $\alpha \in (-\infty, +\infty)$ (see Eq. \eqref{eq:diff3_g}) and
\item
$\left. \frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} } \right|_{\alpha = \alpha_{2}(n, z)} = 0$ for $z \in (0, 1) \cup (1, +\infty)$,
\end{itemize}
we can see that
\begin{itemize}
\item
if $z \in (0, 1) \cup (1, +\infty)$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha^{2} } \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha < \alpha_{2}(n, z) , \\
0
& \mathrm{if} \ \alpha = \alpha_{2}(n, z) , \\
-1
& \mathrm{if} \ \alpha > \alpha_{2}(n, z) .
\end{cases}
\label{eq:sign_diff2_g}
\end{align}
\end{itemize}
Hence, we have the following monotonicity:
\begin{itemize}
\item
for a fixed $z \in (0, 1) \cup (1, +\infty)$, the stationary point (global maximum) of $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }$ is at $\alpha = \alpha_{2}(n, z)$ and
\item
if $z \in (0, 1) \cup (1, +\infty)$, then
\begin{itemize}
\item
$\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }$ is strictly increasing for $\alpha \in (- \infty, \alpha_{2}(n, z)]$ and
\item
$\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }$ is strictly decreasing for $\alpha \in [\alpha_{2}(n, z), + \infty)$.
\end{itemize}
\end{itemize}
We now verify the sign of $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }$; namely, the monotonicity of $g(n, z, \alpha)$ with respect to $\alpha$ is examined.
Substituting $\alpha = 1$ into $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }$, we see that
\begin{align}
\left. \frac{ \partial g(n, z, \alpha) }{ \partial \alpha } \right|_{\alpha = 1}
& \overset{\eqref{eq:diff1_g}}{=}
\left. \left( 1 - \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z } \right) \right|_{\alpha = 1}
\\
& =
1 - \frac{ (n-1) + z }{ (n-1) + z }
\\
& =
1 - 1
\\
& =
0 .
\label{eq:diff1_g_a1}
\end{align}
Similarly, substituting $z = 1$ into $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }$, we see that
\begin{align}
\left. \frac{ \partial g(n, z, \alpha) }{ \partial \alpha } \right|_{z = 1}
& \overset{\eqref{eq:diff1_g}}{=}
\left. \left( 1 - \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z } \right) \right|_{z = 1}
\\
& =
1 - \frac{ (n-1) + 1 }{ (n-1) + 1 }
\\
& =
1 - 1
\\
& =
0 .
\label{eq:diff1_g_z1}
\end{align}
Moreover, we derive another solution of $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha } = 0$ with respect to $\alpha$ as follows:
\begin{align}
&&
\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }
& =
0
\\
& \overset{\eqref{eq:diff1_g}}{\iff} &
1 - \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z }
& =
0
\\
& \iff &
\frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z }
& =
1
\\
& \iff &
(n-1) z^{1-\alpha} + z^{\alpha}
& =
(n-1) + z
\\
& \iff &
(n-1) z^{1-\alpha} - (n-1)
& =
z - z^{\alpha}
\\
& \iff &
(n-1) (z^{1-\alpha} - 1)
& =
z^{\alpha} (z^{1-\alpha} - 1)
\\
& \iff &
(n-1)
& =
z^{\alpha}
\label{eq:root_diff1_g} \\
& \iff &
\ln (n-1)
& =
\ln z^{\alpha}
\\
& \iff &
\ln (n-1)
& =
\alpha \ln z
\\
& \iff &
\alpha
& =
\frac{ \ln (n-1) }{ \ln z } .
\label{eq:diff1_g_root}
\end{align}
Thus, since the division by $z^{1-\alpha} - 1 \neq 0$ in the above chain is valid only for $\alpha \neq 1$ (the root $\alpha = 1$ was already found in \eqref{eq:diff1_g_a1}), another solution of $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha } = 0$ with respect to $\alpha$ can be denoted by
\begin{align}
\alpha_{1}(n, z)
\triangleq
\frac{ \ln (n-1) }{ \ln z }
\label{def:a1}
\end{align}
for $n \ge 2$ and $z \in (0, 1) \cup (1, +\infty)$.
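A quick numerical check (NumPy assumed, illustration only) confirms that both $\alpha = \alpha_{1}(n, z)$ and $\alpha = 1$ annihilate \eqref{eq:diff1_g}:
\begin{verbatim}
import numpy as np

def dg_da(n, z, a):
    # closed form (eq:diff1_g)
    return 1 - ((n - 1) * z**(1 - a) + z**a) / ((n - 1) + z)

n, z = 7, 4.0
a1 = np.log(n - 1) / np.log(z)             # alpha_1(n, z) of (def:a1)
print(dg_da(n, z, a1), dg_da(n, z, 1.0))   # both ~0: the two roots
\end{verbatim}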
We can also derive another solution of $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha } = 0$ with respect to $z$ as follows:
\begin{align}
&&
\frac{ \partial g(n, z, \alpha) }{ \partial \alpha }
& =
0
\\
& \overset{\eqref{eq:root_diff1_g}}{\iff} &
(n-1)
& =
z^{\alpha}
\\
& \iff &
z
& =
(n-1)^{\frac{1}{\alpha}} .
\label{eq:unless_alpha=0}
\end{align}
Thus, we can also denote by
\begin{align}
z_{1}(n, \alpha)
& \triangleq
(n-1)^{\frac{1}{\alpha}}
\label{def:z1}
\end{align}
a solution of $\frac{ \partial g(n, z, \alpha) }{ \partial \alpha } = 0$ with respect to $z$ for $\alpha \in (-\infty, 0) \cup (0, +\infty)$.
As with \eqref{eq:inverse_a2z2}, since $z_{1}(n, \cdot)$ is the inverse function of $\alpha_{1}(n, z)$ for $z \in (0, 1) \cup (1, +\infty)$, note that
\begin{align}
\alpha_{1}(n, z) = \alpha
\iff
z_{1}(n, \alpha) = z
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$.
Further, note that
\begin{align}
\alpha_{1}(2, z)
& \overset{\eqref{def:a1}}{=}
\left. \frac{ \ln (n-1) }{ \ln z } \right|_{n = 2}
\\
& =
\frac{ \ln 1 }{ \ln z }
\\
& =
0 ,
\label{eq:alpha1_n2} \\
z_{1}(2, \alpha)
& \overset{\eqref{def:z1}}{=}
\left. (n-1)^{\frac{1}{\alpha}} \right|_{n = 2}
\\
& =
1^{\frac{1}{\alpha}}
\\
& =
1
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$.
If $n \ge 3$, we can see the following limiting values:
\begin{align}
\lim_{z \to 0^{+}} \alpha_{1}(n, z)
& =
0 ,
\\
\lim_{z \to 1^{-}} \alpha_{1}(n, z)
& =
- \infty ,
\\
\lim_{z \to 1^{+}} \alpha_{1}(n, z)
& =
+ \infty ,
\label{eq:alpha1_lim_z=0+}
\\
\lim_{z \to +\infty} \alpha_{1}(n, z)
& =
0 ,
\\
\lim_{\alpha \to -\infty} z_{1}(n, \alpha)
& =
1 ,
\\
\lim_{\alpha \to 0^{-}} z_{1}(n, \alpha)
& =
0 ,
\\
\lim_{\alpha \to 0^{+}} z_{1}(n, \alpha)
& =
+\infty ,
\\
\lim_{\alpha \to +\infty} z_{1}(n, \alpha)
& =
1 .
\end{align}
Calculating the derivative of $\alpha_{1}(n, z)$ with respect to $z$ as
\begin{align}
\frac{ \partial \alpha_{1}(n, z) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \frac{ \ln (n-1) }{ \ln z } \right)
\\
& =
- \frac{ \ln (n-1) }{ (\ln z)^{2} } \left( \frac{ \mathrm{d} \ln z }{ \mathrm{d} z } \right)
\\
& =
- \frac{ \ln (n-1) }{ z (\ln z)^{2} }
\\
& =
\begin{cases}
< 0
& \mathrm{if} \ z \in (0, 1) \cup (1, +\infty) \ \mathrm{and} \ n \ge 3 , \\
= 0
& \mathrm{if} \ z \in (0, 1) \cup (1, +\infty) \ \mathrm{and} \ n = 2 , \\
\end{cases}
\label{eq:diff1_alpha1}
\end{align}
we can see that, if $n \ge 3$, then
\begin{itemize}
\item
$\alpha_{1}(n, z)$ is strictly decreasing for $z \in (0, 1)$ and
\item
$\alpha_{1}(n, z)$ is strictly decreasing for $z \in (1, +\infty)$.
\end{itemize}
Moreover, the inverse function theorem shows that, if $n \ge 3$, then
\begin{itemize}
\item
$z_{1}(n, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, 0)$ and
\item
$z_{1}(n, \alpha)$ is strictly decreasing for $\alpha \in (0, +\infty)$.
\end{itemize}
We now check the magnitude relation between $\alpha_{1}(n, z)$ and $\alpha_{2}(n, z)$.
When $n = 2$, it follows from \eqref{eq:alpha2_n2} and \eqref{eq:alpha1_n2} that
\begin{align}
\underbrace{ \alpha_{1}(2, z) }_{ = 0 } < \underbrace{ \alpha_{2}(2, z) }_{ = \frac{1}{2} } < 1
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$.
Hence, if $n = 2$, we readily see from \eqref{eq:diff1_g_a1} and \eqref{eq:alpha1_n2} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(2, z, \alpha) }{ \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (0, 1) , \\
0
& \mathrm{if} \ \alpha \in \{ 0, 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, 0) \cup (1, +\infty)
\end{cases}
\label{eq:diff1_g_n2}
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$;
and therefore, for any $z \in (0, 1) \cup (1, +\infty)$, we have that
\begin{itemize}
\item
the stationary point (local minimum) of $g(2, z, \alpha)$ is at $\alpha = \alpha_{1}(2, z) = 0$,
\item
the inflection point of $g(2, z, \alpha)$ is at $\alpha = \alpha_{2}(2, z) = \frac{1}{2}$,
\item
the stationary point (local maximum) of $g(2, z, \alpha)$ is at $\alpha = 1$,
\item
$g(2, z, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, 0]$,
\item
$g(2, z, \alpha)$ is strictly increasing for $\alpha \in [0, 1]$, and
\item
$g(2, z, \alpha)$ is strictly decreasing for $\alpha \in [1, +\infty)$.
\end{itemize}
We now consider the magnitude relation between $\alpha_{1}(n, z)$ and $\alpha_{2}(n, z)$ for $n \ge 3$.
Direct calculation yields
\begin{align}
\alpha_{2}(n, z) - \alpha_{1}(n, z)
& =
\frac{1}{2} \left( 1 + \frac{ \ln (n-1) }{ \ln z } \right) - \frac{ \ln (n-1) }{ \ln z }
\\
& =
\frac{1}{2} \left( 1 - \frac{ \ln (n-1) }{ \ln z } \right) ,
\\
\frac{ \partial \left( \vphantom{\frac{1}{x}} \alpha_{2}(n, z) - \alpha_{1}(n, z) \right) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \frac{1}{2} \left( 1 - \frac{ \ln (n-1) }{ \ln z } \right) \right)
\\
& =
- \frac{1}{2} \left( \frac{ \partial }{ \partial z } \left( \frac{ \ln (n-1) }{ \ln z } \right) \right)
\\
& =
- \frac{1}{2} \left( - \frac{ \ln (n-1) }{ (\ln z)^{2} } \right) \left( \frac{ \mathrm{d} \ln z }{ \mathrm{d} z } \right)
\\
& =
\frac{ \ln (n-1) }{ 2 z (\ln z)^{2} }
\\
& =
\begin{cases}
> 0
& \mathrm{if} \ z \in (0, 1) \cup (1, +\infty) \ \mathrm{and} \ n \ge 3 , \\
= 0
& \mathrm{if} \ z \in (0, 1) \cup (1, +\infty) \ \mathrm{and} \ n = 2 ,
\end{cases}
\\
\lim_{z \to 0^{+}} \left( \vphantom{\frac{1}{x}} \alpha_{2}(n, z) - \alpha_{1}(n, z) \right)
& =
\frac{1}{2} ,
\\
\lim_{z \to 1^{-}} \left( \vphantom{\frac{1}{x}} \alpha_{2}(n, z) - \alpha_{1}(n, z) \right)
& =
+ \infty ,
\\
\lim_{z \to 1^{+}} \left( \vphantom{\frac{1}{x}} \alpha_{2}(n, z) - \alpha_{1}(n, z) \right)
& =
- \infty ,
\\
\left. \left( \vphantom{\frac{1}{x}} \alpha_{2}(n, z) - \alpha_{1}(n, z) \right) \right|_{z = n-1}
& =
0 ,
\\
\lim_{z \to +\infty} \left( \vphantom{\frac{1}{x}} \alpha_{2}(n, z) - \alpha_{1}(n, z) \right)
& =
\frac{1}{2} .
\end{align}
From the above equations, the following magnitude relations hold:
\begin{align}
\left\{
\begin{array}{ll}
\alpha_{1}(n, z) < 0 < \alpha_{2}(n, z) < \frac{1}{2} & \mathrm{if} \ z \in (0, \frac{1}{n-1}) , \\
\alpha_{1}(n, z) = -1 \ \mathrm{and} \ \alpha_{2}(n, z) = 0 & \mathrm{if} \ z = \frac{1}{n-1} , \\
\alpha_{1}(n, z) < \alpha_{2}(n, z) < 0 < 1 & \mathrm{if} \ z \in (\frac{1}{n-1}, 1) , \\
0 < 1 < \alpha_{2}(n, z) < \alpha_{1}(n, z) & \mathrm{if} \ z \in (1, n-1) , \\
\alpha_{1}(n, z) = \alpha_{2}(n, z) = 1 & \mathrm{if} \ z = n-1 , \\
0 < \alpha_{1}(n, z) < \alpha_{2}(n, z) < 1 & \mathrm{if} \ z \in (n-1, +\infty)
\end{array}
\right.
\label{eq:range_a1a2}
\end{align}
for $n \ge 3$.
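The relations \eqref{eq:range_a1a2} can be sampled numerically; the following sketch (NumPy assumed, illustration only) prints $\alpha_{1}(n, z)$ and $\alpha_{2}(n, z)$ for one representative $z$ in each regime with $n = 5$:
\begin{verbatim}
import numpy as np

n = 5   # so that 1/(n-1) = 0.25 and n-1 = 4

def a1(z): return np.log(n - 1) / np.log(z)              # (def:a1)
def a2(z): return 0.5 * (1 + np.log(n - 1) / np.log(z))  # (def:a2)

samples = [(0.10, "a1 < 0 < a2 < 1/2"),   # z in (0, 1/(n-1))
           (0.50, "a1 < a2 < 0"),         # z in (1/(n-1), 1)
           (2.00, "1 < a2 < a1"),         # z in (1, n-1)
           (10.0, "0 < a1 < a2 < 1")]     # z in (n-1, +inf)
for z, expected in samples:
    print(z, round(a1(z), 3), round(a2(z), 3), "->", expected)
\end{verbatim}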
Moreover, since $z_{1}(n, \alpha)$ and $z_{2}(n, \alpha)$ are the inverse functions of $\alpha_{1}(n, z)$ and $\alpha_{2}(n, z)$, respectively, with respect to $z \in (0, 1) \cup (1, +\infty)$, we also have the following magnitude relations:
\begin{align}
\left\{
\begin{array}{ll}
0 < z_{1}(n, \alpha) < z_{2}(n, \alpha) < 1
& \mathrm{if} \ \alpha \in (-\infty, 0) ,
\\
z_{1}(n, \alpha) \ \mathrm{is} \ \mathrm{undefined} \ \mathrm{and} \ z_{2}(n, \alpha) = \frac{1}{n-1}
& \mathrm{if} \ \alpha = 0 ,
\\
0 < z_{2}(n, \alpha) < \frac{1}{n-1} \ \mathrm{and} \ (n-1)^{2} < z_{1}(n, \alpha)
& \mathrm{if} \ \alpha \in (0, \frac{1}{2}) ,
\\
z_{1}(n, \alpha) = (n-1)^{2} \ \mathrm{and} \ z_{2}(n, \alpha) \ \mathrm{is} \ \mathrm{undefined}
& \mathrm{if} \ \alpha = \frac{1}{2} ,
\\
n-1 < z_{1}(n, \alpha) < z_{2}(n, \alpha)
& \mathrm{if} \ \alpha \in (\frac{1}{2}, 1) ,
\\
z_{1}(n, \alpha) = z_{2}(n, \alpha) = n-1
& \mathrm{if} \ \alpha = 1 ,
\\
1 < z_{2}(n, \alpha) < z_{1}(n, \alpha) < n-1
& \mathrm{if} \ \alpha \in (1, +\infty) .
\end{array}
\right.
\label{eq:range_z1z2_orig}
\end{align}
Noting the above magnitude relations, we can see from \eqref{eq:sign_diff2_g} that
\begin{itemize}
\item
if $z \in (0, 1) \cup (n-1, +\infty)$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(n, z, \alpha) }{ \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (\alpha_{1}(n, z), 1) , \\
0
& \mathrm{if} \ \alpha \in \{ \alpha_{1}(n, z), 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, \alpha_{1}(n, z)) \cup (1, +\infty) ,
\end{cases}
\label{eq:diff1_g_alpha_part1}
\end{align}
\item
if $z \in (1, n-1)$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(n, z, \alpha) }{ \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (1, \alpha_{1}(n, z)) , \\
0
& \mathrm{if} \ \alpha \in \{ 1, \alpha_{1}(n, z) \} , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, 1) \cup (\alpha_{1}(n, z), +\infty) ,
\end{cases}
\label{eq:diff1_g_alpha_part2}
\end{align}
and
\item
if $z = n-1$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(n, z, \alpha) }{ \partial \alpha } \right)
=
\begin{cases}
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, 1) \cup (1, +\infty) .
\end{cases}
\label{eq:diff1_g_alpha_part3}
\end{align}
\end{itemize}
Therefore, if $n \ge 3$, we have the following monotonicity:
\begin{itemize}
\item
if $z \in (0, 1) \cup (n-1, +\infty)$, then
\begin{itemize}
\item
$\alpha_{1}(n, z) < \alpha_{2}(n, z) < 1$,
\item
the stationary point (local minimum) of $g(n, z, \alpha)$ is at $\alpha = \alpha_{1}(n, z)$,
\item
the inflection point of $g(n, z, \alpha)$ is at $\alpha = \alpha_{2}(n, z)$,
\item
the stationary point (local maximum) of $g(n, z, \alpha)$ is at $\alpha = 1$,
\item
$g(n, z, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, \alpha_{1}(n, z)]$,
\item
$g(n, z, \alpha)$ is strictly increasing for $\alpha \in [\alpha_{1}(n, z), 1]$, and
\item
$g(n, z, \alpha)$ is strictly decreasing for $\alpha \in [1, +\infty)$,
\end{itemize}
\item
if $z \in (1, n-1)$, then
\begin{itemize}
\item
$1 < \alpha_{2}(n, z) < \alpha_{1}(n, z)$,
\item
the stationary point (local minimum) of $g(n, z, \alpha)$ is at $\alpha = 1$,
\item
the inflection point of $g(n, z, \alpha)$ is at $\alpha = \alpha_{2}(n, z)$,
\item
the stationary point (local maximum) of $g(n, z, \alpha)$ is at $\alpha = \alpha_{1}(n, z)$,
\item
$g(n, z, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, 1]$,
\item
$g(n, z, \alpha)$ is strictly increasing for $\alpha \in [1, \alpha_{1}(n, z)]$, and
\item
$g(n, z, \alpha)$ is strictly decreasing for $\alpha \in [\alpha_{1}(n, z), +\infty)$,
\end{itemize}
\item
if $z = n-1$, then
\begin{itemize}
\item
$\alpha_{1}(n, z) = \alpha_{2}(n, z) = 1$,
\item
the stationary point (saddle point) of $g(n, z, \alpha)$ is at $\alpha = 1$,
\item
the inflection point of $g(n, z, \alpha)$ is at $\alpha = 1$, and
\item
$g(n, z, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, +\infty)$.
\end{itemize}
\end{itemize}
We now calculate the limiting values of $g(n, z, \alpha)$ with respect to $\alpha$.
It can be seen that
\begin{align}
\lim_{\alpha \to -\infty} g(n, z, \alpha)
& =
\lim_{\alpha \to -\infty} \left( (\alpha-1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right)
\\
& =
\lim_{\alpha \to -\infty} \left( (\alpha-1) + \frac{ (n-1) z^{1-\alpha} - (n-1) + z - z^{\alpha} }{ ((n-1) + z) \ln z } \right)
\\
& =
+\infty
\label{eq:g_lim_-}
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$, where note that
\begin{align}
\lim_{\alpha \to -\infty} z^{1-\alpha}
& =
\begin{cases}
+\infty
& \mathrm{if} \ z \in (1, +\infty) , \\
0
& \mathrm{if} \ z \in (0, 1) ,
\end{cases}
\\
\lim_{\alpha \to -\infty} z^{\alpha}
& =
\begin{cases}
+\infty
& \mathrm{if} \ z \in (0, 1) , \\
0
& \mathrm{if} \ z \in (1, +\infty) .
\end{cases}
\end{align}
Similarly, we have
\begin{align}
\lim_{\alpha \to +\infty} g(n, z, \alpha)
& =
\lim_{\alpha \to +\infty} \left( (\alpha-1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right)
\\
& =
-\infty
\label{eq:g_lim_+}
\end{align}
for $z \in (0, 1) \cup (1, +\infty)$.
On the other hand, it is easy to see that
\begin{align}
g(n, z, 1)
& =
\left. \left( (\alpha-1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right) \right|_{\alpha = 1}
\\
& =
(1-1) + \frac{ ((n-1) + z^{1}) (z^{1-1} - 1) }{ ((n-1) + z) \ln z }
\\
& =
\frac{ ((n-1) + z) (z^{0} - 1) }{ ((n-1) + z) \ln z }
\\
& =
\frac{ 1 - 1 }{ \ln z }
\\
& =
0
\label{eq:g_alpha_is_0}
\end{align}
for $n \ge 2$ and $z \in (0, 1) \cup (1, +\infty)$.
Using the above results, we now consider the sign of $g(n, z, \alpha)$ with $n = 2$.
Since
\begin{itemize}
\item
the following monotonicity of $g(2, z, \alpha)$ hold (see Eq. \eqref{eq:diff1_g_n2}):
\begin{itemize}
\item
$g(2, z, \alpha)$ is strictly decreasing for $\alpha \in (-\infty, 0]$,
\item
$g(2, z, \alpha)$ is strictly increasing for $\alpha \in [0, 1]$, and
\item
$g(2, z, \alpha)$ is strictly decreasing for $\alpha \in [1, +\infty)$,
\end{itemize}
\item
for $z \in (0, 1) \cup (1, +\infty)$, $g(2, z, 1) = 0$ (see Eq. \eqref{eq:g_alpha_is_0}), and
\item
for $z \in (0, 1) \cup (1, +\infty)$, $g(2, z, \alpha) \to -\infty$ as $\alpha \to +\infty$ (see Eq. \eqref{eq:g_lim_+}),
\end{itemize}
we observe that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(2, z, \alpha) \right)
=
\begin{cases}
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \neq 1
\end{cases}
\end{align}
for $\alpha \in (0, +\infty)$ and $z \in (0, 1) \cup (1, +\infty)$.
Note that this result for $n=2$ is the same as \cite[Appendix I]{fabregas}.
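Numerically, this sign pattern for $n = 2$ can be sampled as follows (NumPy assumed, illustration only); the root $\alpha = 1$ is excluded from the grid:
\begin{verbatim}
import numpy as np

def g_z(n, z, a):
    # right-hand side of (eq:g_z)
    return (a - 1) + ((n - 1) + z**a) * (z**(1 - a) - 1) / (((n - 1) + z) * np.log(z))

alphas = np.linspace(0.01, 5.0, 2000)
alphas = alphas[np.abs(alphas - 1) > 1e-2]   # exclude the root alpha = 1
for z in [0.3, 2.0, 7.5]:
    print(z, bool(np.all(g_z(2, z, alphas) < 0)))   # expected: True
\end{verbatim}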
We further consider the sign of $g(n, z, \alpha)$ for $n \ge 3$ by using the above analyses.
We first show the following lemma.
\begin{lemma}[The case of $\alpha = 0$]
\label{lem:g_a0}
For any $n \ge 3$, there exists $\kappa_{p}( n ) \in (\mathrm{e}^{-n}, \frac{1}{n(n-1)})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, p, 0) \right)
=
\begin{cases}
1
& \mathrm{if} \ p \in (\kappa_{p}( n ), \frac{1}{n}) , \\
0
& \mathrm{if} \ p = \kappa_{p}( n ) , \\
-1
& \mathrm{if} \ p \in (0, \kappa_{p}( n )) \cup (\frac{1}{n}, \frac{1}{n-1}) .
\end{cases}
\label{eq:g_alpha0_p}
\end{align}
On the other hand, if $n = 2$, then $g(2, p, 0) < 0$ for $p \in (0, \frac{1}{2}) \cup (\frac{1}{2}, 1)$.
\end{lemma}
The proof of Lemma \ref{lem:g_a0} is given in Appendix \ref{app:g_a0}.
Since $z = \frac{1 - (n-1) p}{ p }$, we see that
\begin{align}
p = \mathrm{e}^{- n}
& \iff
z = \mathrm{e}^{n} - (n-1) ,
\\
p = \frac{1}{n (n-1)}
& \iff
z = (n-1)^{2} .
\end{align}
Thus, since $z(n, p)$ is strictly decreasing in $p$, Eq. \eqref{eq:g_alpha0_p} of Lemma \ref{lem:g_a0} can be rewritten as follows:
for any $n \ge 3$, there exists $\kappa_{z}(n) \in ((n-1)^{2}, \mathrm{e}^{n} - (n-1))$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, 0) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (1, \kappa_{z}(n)) , \\
0
& \mathrm{if} \ z = \kappa_{z}(n) , \\
-1
& \mathrm{if} \ z \in (0, 1) \cup (\kappa_{z}(n), +\infty) .
\end{cases}
\label{eq:g_kappa_z}
\end{align}
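Numerically, $\kappa_{z}(n)$ can be located by bisection on $g(n, \cdot, 0)$ over the bracket $((n-1)^{2}, \mathrm{e}^{n} - (n-1))$; the sketch below (NumPy assumed, illustration only, ad hoc names) does so for $n = 3, \dots, 7$:
\begin{verbatim}
import numpy as np

def g0(n, z):
    # g(n, z, 0) = -1 + n (z - 1) / (((n-1) + z) ln z), from (eq:g_z)
    return -1 + n * (z - 1) / (((n - 1) + z) * np.log(z))

def bisect(f, lo, hi, iters=200):
    # plain bisection; f(lo) > 0 > f(hi) is assumed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for n in range(3, 8):
    lo, hi = (n - 1)**2, np.exp(n) - (n - 1)
    assert g0(n, lo) > 0 > g0(n, hi)   # the bracket of kappa_z(n)
    print(n, lo, round(bisect(lambda z: g0(n, z), lo, hi), 4), round(hi, 4))
\end{verbatim}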
By the intermediate value theorem and the monotonicity of $g(n, z, \alpha)$ with respect to $\alpha$ (see Eqs. \eqref{eq:diff1_g_alpha_part1}, \eqref{eq:diff1_g_alpha_part2}, and \eqref{eq:diff1_g_alpha_part3}), for a fixed $n \ge 3$, the sign of $g(n, z, \alpha)$ is evaluated as follows:
\begin{itemize}
\item
for any $z \in (1, n-1)$, there exists $\xi_{n}( z ) \in ( \alpha_{1}( n, z ), +\infty )$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, 1) \cup (1, \xi_{n}( z )) , \\
0
& \mathrm{if} \ \alpha \in \{ 1, \xi_{n}( z ) \} , \\
-1
& \mathrm{if} \ \alpha \in (\xi_{n}( z ), +\infty) ,
\end{cases}
\label{eq:sign_gz_part1}
\end{align}
\item
if $z = n - 1$, then
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, 1) , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in (1, +\infty) ,
\end{cases}
\label{eq:sign_gz_part2}
\end{align}
\item
for any $z \in (n-1, \kappa_{z}(n))$, there exists $\xi_{n}( z ) \in ( 0, \alpha_{1}( n, z ) )$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, \xi_{n}( z )) , \\
0
& \mathrm{if} \ \alpha \in \{ \xi_{n}( z ), 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (\xi_{n}( z ), 1) \cup (1, +\infty) ,
\end{cases}
\label{eq:sign_gz_part3}
\end{align}
\item
if $z = \kappa_{z}(n)$, then
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, 0) , \\
0
& \mathrm{if} \ \alpha \in \{ 0, 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (0, 1) \cup (1, +\infty) ,
\end{cases}
\label{eq:sign_gz_part4}
\end{align}
and
\item
for any $z \in (\kappa_{z}(n), +\infty)$, there exists $\xi_{n}( z ) \in ( -\infty, 0 )$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, \xi_{n}( z )) , \\
0
& \mathrm{if} \ \alpha \in \{ \xi_{n}( z ), 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (\xi_{n}( z ), 1) \cup (1, +\infty) .
\end{cases}
\label{eq:sign_gz_part5}
\end{align}
\end{itemize}
Note that, to prove the above statements, we use the following equalities:
\begin{align}
\lim_{\alpha \to -\infty} g(n, z, \alpha)
& \overset{\eqref{eq:g_lim_-}}{=}
+\infty ,
\\
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, 0) \right)
& \overset{\eqref{eq:g_kappa_z}}{=}
\begin{cases}
1
& \mathrm{if} \ z \in (1, \kappa_{z}(n)) , \\
0
& \mathrm{if} \ z = \kappa_{z}(n) , \\
-1
& \mathrm{if} \ z \in (0, 1) \cup (\kappa_{z}(n), +\infty) ,
\end{cases}
\\
g(n, z, 1)
& \overset{\eqref{eq:g_alpha_is_0}}{=}
0 ,
\\
\lim_{\alpha \to +\infty} g(n, z, \alpha)
& \overset{\eqref{eq:g_lim_+}}{=}
-\infty .
\end{align}
Henceforth, for a fixed $n \ge 3$, we will prove that the value $\xi_{n}( z )$, used in \eqref{eq:sign_gz_part1} and \eqref{eq:sign_gz_part3}, is strictly decreasing for $z \in (1, (n-1)^{2}]$.
Note that, for any $M > 0$, there exists $\delta( M ) > 0$ such that $\xi_{n}( z ) > M$ for all $1 < z < 1 + \delta( M )$ since $\alpha_{1}(n, z) \to +\infty$ as $z \to 1^{+}$ (see Eq. \eqref{eq:alpha1_lim_z=0+}) and $\xi_{n}( z ) \in (\alpha_{1}(n, z), +\infty)$ when $z \in (1, n-1)$ (see Eq. \eqref{eq:sign_gz_part1}).
To put it simply, we see that
\begin{align}
\lim_{z \to 1^{+}} \xi_{n}( z )
=
+\infty .
\label{eq:xi_infty}
\end{align}
To show the monotonicity of $\xi_{n}( z )$ with respect to $z \in (1, (n-1)^{2}]$, we now provide the following three lemmas:
\begin{lemma}
\label{lem:dzda}
For any fixed $n \ge 3$, the following statements hold:
\begin{itemize}
\item
for any $\alpha \in (-\infty, 0)$, there exists $\gamma(n, \alpha) \in (z_{1}(n, 2 \alpha), z_{2}(n, \alpha))$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) ,
\end{cases}
\label{eq:g_dzda_1}
\end{align}
where note that $0< z_{1}(n, \alpha) < z_{1}(n, 2\alpha) < z_{2}(n, \alpha) < 1$ for $\alpha \in (-\infty, 0)$,
\item
if $\alpha = 0$, then $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } < 0$ for any $z \in (0, +\infty)$,
\item
for any $\alpha \in (0, \frac{1}{2})$, there exists $\gamma(n, \alpha) \in (n-1, z_{1}(n, 2 \alpha))$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) ,
\end{cases}
\label{eq:g_dzda_2}
\end{align}
where note that $n-1 < z_{1}(n, 2 \alpha) < z_{1}(n, \alpha)$ for $\alpha \in (0, \frac{1}{2})$,
\item
if $\alpha = \frac{1}{2}$, then
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (n-1, +\infty) , \\
0
& \mathrm{if} \ z = n-1 , \\
-1
& \mathrm{if} \ z \in (0, n-1) ,
\end{cases}
\label{eq:g_dzda_3}
\end{align}
\item
for any $\alpha \in (\frac{1}{2}, 1)$, there exists $\gamma(n, \alpha) \in (z_{1}(n, 2 \alpha), n-1)$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) ,
\end{cases}
\label{eq:g_dzda_4}
\end{align}
where note that $\sqrt{n-1} < z_{1}(n, 2 \alpha) < n-1 < z_{1}(n, \alpha) < z_{2}(n, \alpha)$ for $\alpha \in (\frac{1}{2}, 1)$,
\item
if $\alpha = 1$, then $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } = 0$ for any $z \in (0, +\infty)$, and
\item
for any $\alpha \in (1, +\infty)$, there exists $\gamma(n, \alpha) \in (z_{1}(n, 2\alpha), z_{2}(n, \alpha))$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) ,
\end{cases}
\label{eq:g_dzda_5}
\end{align}
where note that $1 < z_{1}(n, 2 \alpha) < z_{2}(n, \alpha) < z_{1}(n, \alpha) < n-1$ for $\alpha \in (1, +\infty)$.
\end{itemize}
\end{lemma}
Lemma \ref{lem:dzda} is proved in Appendix \ref{app:dzda}.
In Lemma \ref{lem:dzda}, note that $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } = \frac{ \partial^{2} g(n, z, \alpha) }{ \partial \alpha \, \partial z }$ by Young's theorem.
This can also be verified directly by computing both mixed partial derivatives.
\begin{lemma}
\label{lem:diff_g_z}
For any $n \ge 3$ and any $z \in (1, +\infty)$, we observe that
\begin{align}
\left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = 1}
& =
0 ,
\\
\operatorname{sgn} \! \left( \left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = \alpha_{1}(n, z)} \right)
& =
\begin{cases}
0
& \mathrm{if} \ z = n-1 , \\
-1
& \mathrm{if} \ z \neq n-1 .
\end{cases}
\label{eq:diff_g_z}
\end{align}
\end{lemma}
Lemma \ref{lem:diff_g_z} is proved in Appendix \ref{app:diff_g_z}.
\begin{lemma}
\label{lem:ln(n-1)/2ln(z)}
For any $n \ge 3$ and any $z \in [n-1, (n-1)^{2}]$,
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, {\textstyle \frac{1}{2} \alpha_{1}(n, z)}) \right)
& =
1 .
\end{align}
\end{lemma}
Lemma \ref{lem:ln(n-1)/2ln(z)} is proved in Appendix \ref{app:ln(n-1)/2ln(z)}.
We divide the proof of the monotonicity of $\xi_{n}( z )$ for $z \in (1, (n-1)^{2}]$ into the case of $z \in (1, n-1]$ and the case of $z \in [n-1, (n-1)^{2}]$.
Firstly, we show that $\xi_{n}( z )$ is strictly decreasing for $z \in (1, n-1]$.
To prove this, we now provide that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
-1
\label{eq:dzda_part5_>=alpha1}
\end{align}
for $z \in (1, n-1)$ and $\alpha \ge \alpha_{1}(n, z)$, which implies that $\frac{ \partial g(n, z, \alpha) }{ \partial z }$ with a fixed $z \in (1, n-1)$ is strictly decreasing for $\alpha \ge \alpha_{1}(n, z)$.
We can verify \eqref{eq:dzda_part5_>=alpha1} as follows:
Since $1 < \gamma( n, \alpha ) < z_{2}( n, \alpha ) < z_{1}( n, \alpha )$ for $\alpha \in (1, +\infty)$, it follows from \eqref{eq:g_dzda_5} that
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right|_{z = z_{1}( n, \alpha)} \right)
=
-1
\label{eq:dzda_part5_>=alpha1_derive1}
\end{align}
for $\alpha \in (1, +\infty)$.
Then, since $z_{1}( n, \alpha )$ is strictly decreasing for $\alpha > 0$ (by the inverse function theorem and Eq. \eqref{eq:diff1_alpha1}), we see that $1 < \gamma( n, \alpha ) < z_{1}( n, \alpha ) \le z_{1}( n, \beta )$ for $1 < \beta \le \alpha < +\infty$;
and thus, we observe from \eqref{eq:dzda_part5_>=alpha1_derive1} that
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right|_{z = z_{1}( n, \beta)} \right)
=
-1
\label{eq:dzda_part5_>=alpha1_derive2}
\end{align}
for $1 < \beta \le \alpha < +\infty$.
Moreover, since $\alpha_{1}( n, \cdot )$ is the inverse function of $z_{1}( n, \beta )$ for $\beta \in (-\infty, 0) \cup (0, +\infty)$, i.e.,
\begin{align}
z = z_{1}(n, \beta)
& \iff
\beta = \alpha_{1}(n, z) ,
\\
1 < \beta < +\infty
& \overset{\eqref{eq:range_z1z2_orig}}{\iff}
1 < z_{1}( n, \beta ) < n-1 ,
\end{align}
we obtain \eqref{eq:dzda_part5_>=alpha1} from \eqref{eq:dzda_part5_>=alpha1_derive2}.
On the other hand, from Lemma \ref{lem:diff_g_z}, we observe that
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = \alpha_{1}(n, z)} \right)
=
-1
\end{align}
for a fixed $z \in (1, n-1)$;
and therefore, it follows from \eqref{eq:dzda_part5_>=alpha1} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(n, z, \alpha) }{ \partial z } \right)
=
-1
\label{eq:diff1_z_1ton-1}
\end{align}
for $z \in (1, n-1)$ and $\alpha \ge \alpha_{1}( n, z )$, which implies that $g(n, z, \alpha)$ with a fixed $\alpha \ge \alpha_{1}( n, z )$ is strictly decreasing for $z \in (1, n-1)$.
Then, since
\begin{itemize}
\item
$\xi_{n}( z ) > \alpha_{1}( n, z )$ and $g( n, z, \xi_{n}( z ) ) = 0$ for $z \in (1, n-1)$ (see Eq. \eqref{eq:sign_gz_part1}), and
\item
$g( n, z, \alpha )$ with a fixed $\alpha \ge \alpha_{1}( n, z )$ is strictly decreasing for $z \in (1, n-1)$ (see Eq. \eqref{eq:diff1_z_1ton-1}),
\end{itemize}
we observe that
\begin{align}
g( n, z^{\prime}, \xi_{n}( z ) ) < 0
\label{eq:g_zprime_1ton-1}
\end{align}
for $1 < z < z^{\prime} < n-1$.
Moreover, since
\begin{itemize}
\item
if $1 < z^{\prime} < n-1$, then $g( n, z^{\prime}, \alpha )$ is strictly decreasing for $\alpha \ge \alpha_{1}( n, z^{\prime} )$ (see Eq. \eqref{eq:diff1_g_alpha_part2}),
\item
$g( n, z^{\prime}, \alpha_{1}( n, z^{\prime} ) ) > 0$ for $1 < z^{\prime} < n-1$ (see Eq. \eqref{eq:sign_gz_part1}), and
\item
$g( n, z^{\prime}, \xi_{n}( z ) ) < 0$ for $1 < z < z^{\prime} < n-1$ (see Eq. \eqref{eq:g_zprime_1ton-1}),
\end{itemize}
it follows by the intermediate value theorem that, for any $1 < z < z^{\prime} < n-1$, there exists $\xi_{n}( z^{\prime} ) \in (\alpha_{1}(n, z^{\prime}), \xi_{n}( z ))$ such that
\begin{align}
g( n, z^{\prime}, \xi_{n}( z^{\prime} ) )
=
0 ,
\end{align}
which implies that, if $1 < z < z^{\prime} < n-1$, then $\xi_{n}( z^{\prime} ) < \xi_{n}( z )$.
Note that
\begin{align}
\lim_{z \to 1^{+}} \xi_{n}( z )
& \overset{\eqref{eq:xi_infty}}{=}
+\infty ,
\label{eq:xi_infty2} \\
\xi_{n}( n-1 )
& \overset{\eqref{eq:sign_gz_part2}}{=}
1 .
\label{eq:xi_n-1_1}
\end{align}
Therefore, we obtain that $\xi_{n}( z ) \in [1, +\infty)$ is strictly decreasing for $z \in (1, n-1]$.
Secondly, we show that $\xi_{n}( z )$ is strictly decreasing for $z \in [n-1, (n-1)^{2}]$.
From \eqref{eq:range_a1a2}, note that
\begin{align}
0 & < \frac{1}{2} \alpha_{1}( n, z ) < \frac{1}{2} ,
\label{eq:magnitudes_1/2alpha1_1} \\
\frac{1}{2} \alpha_{1}( n, z ) & < \alpha_{1}( n, z ) < 1
\label{eq:magnitudes_1/2alpha1_2}
\end{align}
for $z > n-1$.
Then, since
\begin{itemize}
\item
if $z > n-1$, then $g( n, z, \alpha )$ is strictly decreasing for $\alpha \in (-\infty, \alpha_{1}( n, z^{\prime} )]$ (see Eq. \eqref{eq:diff1_g_alpha_part1}),
\item
$g(n, z, \frac{1}{2} \alpha_{1}(n, z)) > 0$ for $z \in [n-1, (n-1)^{2}]$ (see Lemma \ref{lem:ln(n-1)/2ln(z)}), and
\item
$g(n, z, \alpha_{1}(n, z)) < 0$ for $z \in (n-1, (n-1)^{2}]$ (see Eq. \eqref{eq:sign_gz_part3}),
\end{itemize}
we can refine a part of the bounds of $\xi_{n}( z )$ used in \eqref{eq:sign_gz_part3} as follows:
for any $z \in (n-1, (n-1)^{2}]$, there exists $\xi_{n}( z ) \in (\frac{1}{2} \alpha_{1}(n, z), \alpha_{1}(n, z))$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, \xi_{n}( z )) , \\
0
& \mathrm{if} \ \alpha \in \{ \xi_{n}( z ), 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (\xi_{n}( z ), 1) \cup (1, +\infty) .
\end{cases}
\label{eq:sign_gz_part3_2}
\end{align}
To prove the monotonicity of $\xi_{n}( z )$ for $z \in [n-1, (n-1)^{2}]$, we now provide that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(n, z, \alpha) }{ \partial z } \right)
=
-1
\label{eq:diff1_gz_1/2alpha_to_1}
\end{align}
for $z > n-1$ and $\alpha \in [\frac{1}{2} \alpha_{1}( n, z ), 1)$.
It follows from \eqref{eq:g_dzda_3} and \eqref{eq:g_dzda_4} of Lemma \ref{lem:dzda} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
1
\label{eq:dzda_part5_>=1/2to1}
\end{align}
for $z > n-1$ and $\alpha \in [\frac{1}{2}, 1)$.
Moreover, we now prove that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
1
\label{eq:dzda_part5_>=2alpha1}
\end{align}
for $z > n-1$ and $\alpha \in [\frac{1}{2} \alpha_{1}(n, z), \frac{1}{2}]$, as with the proof of \eqref{eq:dzda_part5_>=alpha1}.
Since $n-1 < \gamma( n, \alpha ) < z_{1}(n, 2 \alpha) < z_{1}(n, \alpha)$ for $\alpha \in (0, \frac{1}{2})$, it follows from \eqref{eq:g_dzda_2} that
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right|_{z = z_{1}(n, 2\alpha)} \right)
=
1
\label{eq:dzda_part5_>=2alpha1_derive1}
\end{align}
for $\alpha \in (0, \frac{1}{2})$.
Then, since $z_{1}( n, \alpha )$ is strictly decreasing for $\alpha > 0$ (by the inverse function theorem and Eq. \eqref{eq:diff1_alpha1}), we see that $1 < \gamma( n, \alpha ) < z_{1}( n, 2\alpha ) \le z_{1}( n, 2\beta )$ for $0 < \beta \le \alpha < \frac{1}{2}$;
and thus, we observe from \eqref{eq:dzda_part5_>=2alpha1_derive1} that
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right|_{z = z_{1}(n, 2\beta)} \right)
=
1
\label{eq:dzda_part5_>=2alpha1_derive2}
\end{align}
for $0 < \beta \le \alpha < \frac{1}{2}$.
Moreover, since $\alpha_{1}( n, \cdot )$ is the inverse function of $z_{1}( n, \beta )$ for $\beta \in (-\infty, 0) \cup (0, +\infty)$, i.e.,
\begin{align}
z = z_{1}(n, 2 \beta)
& \iff
\beta = \frac{1}{2} \alpha_{1}(n, z) ,
\\
0 < \beta < \frac{1}{2}
& \overset{\eqref{eq:range_z1z2_orig}}{\iff}
n-1 < z_{1}( n, 2 \beta ) < +\infty ,
\end{align}
we obtain \eqref{eq:dzda_part5_>=2alpha1} from \eqref{eq:dzda_part5_>=2alpha1_derive2}.
Combining \eqref{eq:dzda_part5_>=1/2to1} and \eqref{eq:dzda_part5_>=2alpha1}, we have
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
=
1
\end{align}
for $z > n-1$ and $\alpha \in [\frac{1}{2} \alpha_{1}(n, z), 1)$, which implies that $\frac{ \partial g(n, z, \alpha) }{ \partial z }$ with a fixed $z > n-1$ is strictly increasing for $\alpha \in [\frac{1}{2} \alpha_{1}(n, z), 1]$.
Then, since $\left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = 1} = 0$ (see Lemma \ref{lem:diff_g_z}), we obtain \eqref{eq:diff1_gz_1/2alpha_to_1}.
It follows from \eqref{eq:diff1_gz_1/2alpha_to_1} that $g(n, z, \alpha)$ with a fixed $\alpha \in [\frac{1}{2} \alpha_{1}(n, z), 1)$ is strictly decreasing for $z > n-1$.
Using this monotonicity, we can prove the monotonicity of $\xi_{n}( z )$ for $z \in [n-1, (n-1)^{2}]$ as follows:
Since
\begin{itemize}
\item
$\frac{1}{2} \alpha_{1}( n, z ) < \xi_{n}( z ) < \alpha_{1}( n, z ) < 1$ for $z \in (n-1, (n-1)^{2}]$ (see Eqs. \eqref{eq:magnitudes_1/2alpha1_2} and \eqref{eq:sign_gz_part3_2}),
\item
$g( n, z, \xi_{n}( z ) ) = 0$ for $z \in (n-1, (n-1)^{2}]$ (see Eq. \eqref{eq:sign_gz_part3_2}), and
\item
$g(n, z, \alpha)$ with a fixed $\alpha \in [\frac{1}{2} \alpha_{1}(n, z), 1)$ is strictly decreasing for $z > n-1$ (see Eq. \eqref{eq:diff1_gz_1/2alpha_to_1}),
\end{itemize}
we observe that
\begin{align}
g( n, z^{\prime}, \xi_{n}( z ) ) < 0
\label{eq:g_zprime_>n-1}
\end{align}
for $n-1 < z < z^{\prime}$.
Moreover, since
\begin{itemize}
\item
if $z^{\prime} > n-1$, then $g( n, z^{\prime}, \alpha )$ is strictly decreasing for $\alpha \in (-\infty, \alpha_{1}( n, z^{\prime} ))$ (see Eq. \eqref{eq:diff1_g_alpha_part1}),
\item
$g( n, z^{\prime}, \frac{1}{2} \alpha_{1}( n, z^{\prime} ) ) > 0$ for $z^{\prime} \in [n-1, (n-1)^{2}]$ (see Lemma \ref{lem:ln(n-1)/2ln(z)}), and
\item
$g( n, z^{\prime}, \xi_{n}( z ) ) < 0$ for $n-1 < z < z^{\prime}$ (see Eq. \eqref{eq:g_zprime_>n-1}),
\end{itemize}
it follows by the intermediate value theorem that, for any $n-1 < z < z^{\prime} \le (n-1)^{2}$, there exists $\xi_{n}( z^{\prime} ) \in (\frac{1}{2} \alpha_{1}(n, z^{\prime}), \xi_{n}( z ))$ such that
\begin{align}
g( n, z^{\prime}, \xi_{n}( z^{\prime} ) )
=
0 ,
\end{align}
which implies that, if $n-1 < z < z^{\prime} \le (n-1)^{2}$, then $\xi_{n}( z^{\prime} ) < \xi_{n}( z )$.
Note that
\begin{align}
\xi_{n}( n-1 )
& \overset{\eqref{eq:sign_gz_part2}}{=}
1 ,
\label{eq:xi_n-1_2} \\
\frac{1}{4}
<
\xi_{n}( (n-1)^{2} )
& <
\frac{1}{2} ,
\label{ineq:magnitudes_xi(n-1)^2}
\end{align}
where \eqref{ineq:magnitudes_xi(n-1)^2} follows by $\frac{1}{2} \alpha_{1}(n, (n-1)^{2}) < \xi_{n}( (n-1)^{2} ) < \alpha_{1}(n, (n-1)^{2})$ (see Eq. \eqref{eq:sign_gz_part3_2}).
Therefore, we obtain that $\xi_{n}( z )$ is strictly decreasing for $z \in [n-1, (n-1)^{2}]$.
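The monotonicity just established can be observed numerically; the following sketch (NumPy assumed, illustration only) computes $\xi_{n}( z )$ by bisection over the brackets from \eqref{eq:sign_gz_part1} and \eqref{eq:sign_gz_part3_2} and checks that the values decrease along a grid of $z \in (1, (n-1)^{2}]$:
\begin{verbatim}
import numpy as np

def g_z(n, z, a):
    # right-hand side of (eq:g_z)
    return (a - 1) + ((n - 1) + z**a) * (z**(1 - a) - 1) / (((n - 1) + z) * np.log(z))

def xi(n, z, iters=100):
    # xi_n(z): the root of g(n, z, .) other than alpha = 1, by bisection
    a1 = np.log(n - 1) / np.log(z)
    lo, hi = (a1, 100.0) if z < n - 1 else (0.5 * a1, a1)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g_z(n, z, mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

n = 4
zs = np.linspace(1.05, (n - 1)**2, 60)   # grid avoiding z = 1 and z = n-1
vals = [xi(n, z) for z in zs]
print(all(u > v for u, v in zip(vals, vals[1:])))   # expected: True
\end{verbatim}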
Note that
\begin{align}
\xi_{n}( z ) < \frac{1}{2}
\label{ineq:magnitudes_xi>(n-1)^2}
\end{align}
for $z > (n-1)^{2}$ since $\xi_{n}( z ) < \alpha_{1}(n, z)$ and $\alpha_{1}(n, z)$ is strictly decreasing for $z > 1$.
Summarizing the above monotonicity of $\xi_{n}( z )$, we see that the inverse function $\xi_{n}^{-1}( \cdot )$ of $\xi_{n}( z )$ exists for $z \in (1, (n-1)^{2}]$.
By the inverse function theorem, note that $\xi_{n}^{-1}( \alpha )$ is strictly decreasing for $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$.
Since
\begin{itemize}
\item
$\xi_{n}( z )$ is strictly decreasing for $z \in (1, n-1]$,
\item
$\lim_{z \to 1^{+}} \xi_{n}( z ) = +\infty$ (see Eq. \eqref{eq:xi_infty2}), and
\item
$\xi_{n}( n-1 ) = 1$ (see Eq. \eqref{eq:xi_n-1_1}),
\end{itemize}
we can rewrite \eqref{eq:sign_gz_part1} by using the inverse function $\xi_{n}^{-1}( \cdot )$ as follows:
for any $\alpha \in (1, +\infty)$, there exists $\xi_{n}^{-1}( \alpha ) \in (1, n-1)$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (1, \xi_{n}^{-1}( \alpha )) , \\
0
& \mathrm{if} \ z = \xi_{n}^{-1}( \alpha ) , \\
-1
& \mathrm{if} \ z \in (\xi_{n}^{-1}( \alpha ), +\infty) .
\end{cases}
\label{eq:sign_gz_part1_rewrite}
\end{align}
Moreover, since
\begin{itemize}
\item
$\xi_{n}( z )$ is strictly decreasing for $z \in [n-1, (n-1)^{2}]$,
\item
$\xi_{n}( n-1 ) = 1$ (see Eq. \eqref{eq:xi_n-1_2}), and
\item
$\xi_{n}( z ) < \frac{1}{2}$ for $z \ge (n-1)^{2}$ (see Eq. \eqref{ineq:magnitudes_xi>(n-1)^2}),
\end{itemize}
we can also rewrite a part of \eqref{eq:sign_gz_part3} by using the inverse function $\xi_{n}^{-1}( \cdot )$ as follows:
for any $\alpha \in [\frac{1}{2}, 1)$ and any $n \ge 3$, there exists $\xi_{n}^{-1}( \alpha ) \in (n-1, (n-1)^{2})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (1, \xi_{n}^{-1}( \alpha )) , \\
0
& \mathrm{if} \ z = \xi_{n}^{-1}( \alpha ) , \\
-1
& \mathrm{if} \ z \in (\xi_{n}^{-1}( \alpha ), +\infty) .
\end{cases}
\label{eq:sign_gz_part3_rewrite}
\end{align}
Note that
\begin{align}
g(n, z, 1)
\overset{\eqref{eq:g_alpha_is_0}}{=}
0
\end{align}
for $z > 1$.
Combining \eqref{eq:sign_gz_part1_rewrite} and \eqref{eq:sign_gz_part3_rewrite}, we obtain that, for any $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$ and any $n \ge 3$, there exists $\xi_{n}^{-1}( \alpha ) \in (1, (n-1)^{2})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (1, \xi_{n}^{-1}( \alpha )) , \\
0
& \mathrm{if} \ z = \xi_{n}^{-1}( \alpha ) , \\
-1
& \mathrm{if} \ z \in (\xi_{n}^{-1}( \alpha ), +\infty) .
\end{cases}
\label{eq:sign_gz_rewrite}
\end{align}
Furthermore, since $p = \frac{1}{z + (n-1)}$ and $p$ is strictly decreasing for $z > 1$, we can rewrite \eqref{eq:sign_gz_rewrite} by using $g(n, p, \alpha)$ rather than $g(n, z, \alpha)$ as follows:
Let $\pi_{n}( \alpha ) = \frac{ 1 }{ \xi_{n}^{-1}( \alpha ) + (n-1) }$.
Then, for any $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$ and any $n \ge 3$, there exists $\pi_{n}( \alpha ) \in (\frac{1}{n (n-1)}, \frac{1}{n})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, p, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ p \in (\pi_{n}( \alpha ), \frac{1}{n}) , \\
0
& \mathrm{if} \ p = \pi_{n}( \alpha) , \\
-1
& \mathrm{if} \ p \in (0, \pi_{n}( \alpha )) ,
\end{cases}
\label{eq:sign_gp_rewrite}
\end{align}
where note that $g(n, p, \alpha)$ denotes the right-hand side of \eqref{eq:g_p}.
Since $\xi_{n}^{-1}( \alpha )$ is strictly decreasing for $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$, note that $\pi_{n}( \alpha )$ is strictly increasing for $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$ and
\begin{align}
\lim_{\alpha \to +\infty} \pi_{n}( \alpha )
& =
\lim_{\alpha \to +\infty} \frac{1}{ \xi_{n}^{-1}( \alpha ) + (n-1) }
\\
& =
\frac{1}{ 1 + (n-1) }
\\
& =
\frac{1}{n} , \\
\lim_{\alpha \to 1} \pi_{n}( \alpha )
& =
\lim_{\alpha \to 1} \frac{1}{ \xi_{n}^{-1}( \alpha ) + (n-1) }
\\
& =
\frac{1}{ (n-1) + (n-1) }
\\
& =
\frac{1}{2 (n-1)} ,
\\
\pi_{n}( {\textstyle \frac{1}{2}} )
& =
\frac{1}{ \xi_{n}^{-1}( \frac{1}{2} ) + (n-1) }
\\
& >
\frac{1}{ (n-1)^{2} + (n-1) }
\\
& =
\frac{1}{n(n-1)} .
\end{align}
Therefore, we obtain that, for any $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$ and any $n \ge 3$, there exists $\pi_{n}( \alpha ) \in (\frac{1}{n (n-1)}, \frac{1}{n})$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} } \right)
& \overset{\eqref{eq:sgn_diff2_N_Hv}}{=}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, p, \alpha) \right)
\\
& \overset{\eqref{eq:sign_gp_rewrite}}{=}
\begin{cases}
1
& \mathrm{if} \ p \in (\pi_{n}( \alpha ), \frac{1}{n}) , \\
0
& \mathrm{if} \ p = \pi_{n}( \alpha) , \\
-1
& \mathrm{if} \ p \in (0, \pi_{n}( \alpha )) .
\end{cases}
\label{eq:sgn_diff2_N_Hv_prime}
\end{align}
Since $H_{\sbvec{v}_{n}}( p )$ is strictly increasing for $p \in [0, \frac{1}{n}]$ (see \cite[Lemma 1]{part1, part1_arxiv}), we can further rewrite \eqref{eq:sgn_diff2_N_Hv_prime} by using $H_{\sbvec{v}_{n}}( p )$ rather than $p$ as follows:
Let $\chi_{n}( \alpha ) = H_{\sbvec{v}_{n}}( \pi_{n}( \alpha ) )$.
Then, for any $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$ and any $n \ge 3$, there exists $\chi_{n}( \alpha ) \in ( \ln n - (1 - \frac{2}{n}) \ln (n-1), \ln n )$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} } \right)
& =
\begin{cases}
1
& \mathrm{if} \ H_{\sbvec{v}_{n}}( p ) \in (\chi_{n}( \alpha ), \ln n) , \\
0
& \mathrm{if} \ H_{\sbvec{v}_{n}}( p ) = \chi_{n}( \alpha) , \\
-1
& \mathrm{if} \ H_{\sbvec{v}_{n}}( p ) \in (0, \chi_{n}( \alpha )) .
\end{cases}
\end{align}
Since $\pi_{n}( \alpha )$ is strictly increasing for $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$, note that $\chi_{n}( \alpha )$ is also strictly increasing for $\alpha \in [\frac{1}{2}, 1) \cup (1, +\infty)$ and
\begin{align}
\lim_{\alpha \to +\infty} \chi_{n}( \alpha )
& =
\lim_{\alpha \to +\infty} H_{\sbvec{v}_{n}}( \pi_{n}( \alpha ) )
\\
& =
H_{\sbvec{v}_{n}}( {\textstyle \frac{1}{n}} )
\\
& =
\ln n ,
\\
\lim_{\alpha \to 1} \chi_{n}( \alpha )
& =
\lim_{\alpha \to 1} H_{\sbvec{v}_{n}}( \pi_{n}( \alpha ) )
\\
& =
H_{\sbvec{v}_{n}}( {\textstyle \frac{1}{2 (n-1)}} )
\\
& =
\left. \left( \vphantom{\sum} - (n-1) p \ln p - (1 - (n-1) p) \ln (1 - (n-1) p) \right) \right|_{p = \frac{1}{2 (n-1)}}
\\
& =
- (n-1) \left( \frac{1}{2(n-1)} \right) \ln \left( \frac{1}{2(n-1)} \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad
- \left(1 - (n-1) \left( \frac{1}{2(n-1)} \right)\right) \ln \left(1 - (n-1) \left( \frac{1}{2(n-1)} \right)\right)
\\
& =
- \frac{1}{2} \ln \left( \frac{1}{2(n-1)} \right) - \left( 1 - \frac{1}{2} \right) \ln \left( 1 - \frac{1}{2} \right)
\\
& =
- \frac{1}{2} \ln \frac{1}{2} - \frac{1}{2} \ln \frac{1}{n-1} - \frac{1}{2} \ln \frac{1}{2}
\\
& =
\ln 2 + \ln \sqrt{n-1} ,
\\
\chi_{n}( {\textstyle \frac{1}{2}} )
& =
H_{\sbvec{v}_{n}}( \pi_{n}( {\textstyle \frac{1}{2}} ) )
\\
& >
H_{\sbvec{v}_{n}}( {\textstyle \frac{1}{n (n-1)}} )
\\
& =
\left. \left( \vphantom{\sum} - (n-1) p \ln p - (1 - (n-1) p) \ln (1 - (n-1) p) \right) \right|_{p = \frac{1}{n (n-1)}}
\\
& =
- (n-1) \left( \frac{1}{n(n-1)} \right) \ln \left( \frac{1}{n(n-1)} \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad
- \left(1 - (n-1) \left( \frac{1}{n(n-1)} \right)\right) \ln \left(1 - (n-1) \left( \frac{1}{n(n-1)} \right)\right)
\\
& =
- \frac{1}{n} \ln \left( \frac{1}{n(n-1)} \right) - \left( 1 - \frac{1}{n} \right) \ln \left( 1 - \frac{1}{n} \right)
\\
& =
- \frac{1}{n} \ln \frac{1}{n} - \frac{1}{n} \ln \frac{1}{n-1} - \ln \left( 1 - \frac{1}{n} \right) + \frac{1}{n} \ln \left( 1 - \frac{1}{n} \right)
\\
& =
\frac{1}{n} \ln n + \frac{1}{n} \ln (n-1) - \ln \frac{n-1}{n} + \frac{1}{n} \ln \frac{n-1}{n}
\\
& =
\frac{1}{n} \ln n + \frac{1}{n} \ln (n-1) - \ln (n-1) + \ln n + \frac{1}{n} \ln (n-1) - \frac{1}{n} \ln n
\\
& =
\frac{2}{n} \ln (n-1) + \ln \frac{ n }{ n-1 }
\\
& =
\ln n - \left( 1 - \frac{2}{n} \right) \ln (n-1) .
\end{align}
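As a numerical sanity check, consider $n = 3$: the lower endpoint is $\ln 3 - \frac{1}{3} \ln 2 \approx 0.868$, the upper endpoint is $\ln 3 \approx 1.099$, and the limit $\lim_{\alpha \to 1} \chi_{3}( \alpha ) = \ln 2 + \ln \sqrt{2} = \frac{3}{2} \ln 2 \approx 1.040$ indeed lies strictly between them.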
That completes the proof of Lemma \ref{lem:convex_v}.
\end{IEEEproof}
\section{Proof of Lemma \ref{lem:concave_w}}
\label{app:concave_w}
\begin{IEEEproof}[Proof of Lemma \ref{lem:concave_w}]
Since $\bvec{w}_{n}( p ) = \bvec{v}_{n}( p )_{\downarrow}$ for $p \in [\frac{1}{n}, \frac{1}{n-1}]$, we can obtain immediately from \eqref{eq:sgn_diff2_N_Hv} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} } \right)
=
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
\label{eq:sgn_diff2_N_Hw}
\end{align}
for $n \ge 2$, $z \in (0, 1)$, and $\alpha \in (0, 1) \cup (1, +\infty)$, where $z = z(n, p) \triangleq \frac{1 - (n-1) p}{p}$ and the function $g(n, z, \alpha)$ is defined in \eqref{eq:g_z}.
Therefore, to analyze this, we can use the results of Appendix \ref{app:convex_v}.
Note that
\begin{align}
\frac{1}{n} \le p \le \frac{1}{n-1}
\overset{\eqref{eq:range_z_w}}{\iff}
0 \le z \le 1 .
\end{align}
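Indeed, $z = \frac{1 - (n-1) p}{p}$ is strictly decreasing in $p$, with $z = 1$ at $p = \frac{1}{n}$ and $z = 0$ at $p = \frac{1}{n-1}$.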
Since
\begin{itemize}
\item
$g(n, z, 0) < 0$ for $n \ge 2$ and $z \in (0, 1)$ (see Lemma \ref{lem:g_a0} and Eq. \eqref{eq:g_kappa_z}),
\item
$g(n, z, 1) = 0$ for $n \ge 2$ and $z \in (0, 1)$ (see Eq. \eqref{eq:g_alpha_is_0}),
\item
$\lim_{\alpha \to +\infty} g(n, z, \alpha) = -\infty$ (see Eq. \eqref{eq:g_lim_+}), and
\item
the following monotonicity properties hold (see Eq. \eqref{eq:diff1_g_alpha_part1}):
\begin{itemize}
\item
$\alpha_{1}(n, z) < 0$ for $z \in (0, 1)$ (see Eq. \eqref{eq:range_a1a2}),
\item
$g(n, z, \alpha)$ is strictly increasing for $\alpha \in [0, 1]$, and
\item
$g(n, z, \alpha)$ is strictly decreasing for $\alpha \in [1, +\infty)$,
\end{itemize}
\end{itemize}
we can see that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
=
\begin{cases}
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in [0, 1) \cup (1, +\infty)
\end{cases}
\label{eq:sign_g_w}
\end{align}
for $n \ge 2$ and $z \in (0, 1)$;
and therefore, we observe that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p )^{2} } \right)
& \overset{\eqref{eq:sgn_diff2_N_Hw}}{=}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \alpha) \right)
\\
& \overset{\eqref{eq:sign_g_w}}{=}
-1
\end{align}
for $n \ge 2$, $z \in (0, 1)$, and $\alpha \in (0, 1) \cup (1, +\infty)$; since sorting the components changes neither the $\alpha$-norm nor the Shannon entropy, this implies that $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly concave in $H_{\sbvec{w}_{n}}( p ) \in [0, \ln n]$.
Finally, since
\begin{align}
H_{\sbvec{w}_{n}}( p )
& =
H_{\sbvec{w}_{m}}( p ) ,
\\
\| \bvec{w}_{n}( p ) \|_{\alpha}
& =
\| \bvec{w}_{m}( p ) \|_{\alpha}
\end{align}
for $n \ge m \ge 2$ and $p \in [\frac{1}{m}, \frac{1}{m-1}]$, we have Lemma \ref{lem:concave_w}.
\end{IEEEproof}
\section{Proof of Lemma \ref{lem:g_a0}}
\label{app:g_a0}
\begin{IEEEproof}[Proof of Lemma \ref{lem:g_a0}]
Direct calculation shows
\begin{align}
g(n, p, 0)
& \overset{\eqref{eq:g_p}}{=}
\left.
\left( (\alpha - 1) + \frac{ 1 - 2 (n-1) p + (n-1) p^{\alpha} (1 - (n-1)p)^{1 - \alpha} - p^{1 - \alpha} (1 - (n-1) p)^{\alpha} }{ \ln \left( \frac{ 1 - (n-1) p}{ p } \right) } \right) \right|_{\alpha = 0}
\\
& =
- 1 + \frac{ 1 - 2 (n-1) p + (n-1) (1 - (n-1) p) - p }{ \ln \left( \frac{ 1 - (n-1) p}{ p } \right) }
\\
& =
- 1 + \frac{ 1 - 2 n p + 2 p + n - 1 - n^{2} p + 2 n p - p - p }{ \ln \left( \frac{ 1 - (n-1) p}{ p } \right) }
\\
& =
- 1 + \frac{ n - n^{2} p }{ \ln \left( \frac{ 1 - (n-1) p }{ p } \right) }
\\
& =
- 1 - \frac{ n ( n p - 1 ) }{ \ln \left( \frac{ 1 - (n-1) p }{ p } \right) }
\\
& =
- \frac{ n ( n p - 1 ) + \ln \left( \frac{ 1 - (n-1) p }{ p } \right) }{ \ln \left( \frac{ 1 - (n-1) p }{ p } \right) }
\\
& =
- \frac{ s(n, p) }{ \ln \left( \frac{ 1 - (n-1) p }{ p } \right) } ,
\end{align}
where $s(n, p) \triangleq n ( n p - 1 ) + \ln \left( \frac{ 1 - (n-1) p }{ p } \right)$.
We now analyze $s(n, p)$ to prove Lemma \ref{lem:g_a0}.
We readily see that
\begin{align}
s(n, {\textstyle \frac{1}{n}})
& =
\left. \left( n ( n p - 1 ) + \ln \left( \frac{ 1 - (n-1) p }{ p } \right) \right) \right|_{p = \frac{1}{n}}
\\
& =
\left( n (1 - 1) + \ln 1 \right)
\\
& =
0 .
\label{eq:s_overn}
\end{align}
Substituting $p = \mathrm{e}^{-n}$ into $s(n, p)$, we have
\begin{align}
s(n, \mathrm{e}^{- n})
& =
\left. \left( n (n p - 1) + \ln \left( \frac{ 1 - (n-1) p }{ p } \right) \right) \right|_{p = \mathrm{e}^{-n}}
\\
& =
n (n \, \mathrm{e}^{-n} - 1) + \ln \left( \frac{ 1 - (n-1) \, \mathrm{e}^{-n} }{ \mathrm{e}^{-n} } \right)
\\
& =
n^{2} \, \mathrm{e}^{-n} - n + \ln (1 - (n-1) \, \mathrm{e}^{-n}) - \ln \mathrm{e}^{-n}
\\
& =
n^{2} \, \mathrm{e}^{-n} - n + \ln (1 - (n-1) \, \mathrm{e}^{-n}) - (-n)
\\
& =
n^{2} \, \mathrm{e}^{-n} + \ln (1 - (n-1) \, \mathrm{e}^{-n}) .
\end{align}
Then, we can see that
\begin{align}
\lim_{n \to +\infty} s(n, \mathrm{e}^{-n})
& =
\lim_{n \to +\infty} \left( \vphantom{\sum} n^{2} \, \mathrm{e}^{-n} + \ln (1 - (n-1) \, \mathrm{e}^{-n}) \right)
\\
& =
0 .
\end{align}
Moreover, the derivative of $s(n, \mathrm{e}^{- n})$ with respect to $n$ can be calculated as follows:
\begin{align}
\frac{ \mathrm{d} s(n, \mathrm{e}^{- n}) }{ \mathrm{d} n }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} n^{2} \, \mathrm{e}^{-n} + \ln (1 - (n-1) \, \mathrm{e}^{-n}) \right)
\\
& =
\left[ \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} n^{2} \, \mathrm{e}^{-n} \right) \right] + \left[ \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} \ln (1 - (n-1) \, \mathrm{e}^{-n}) \right) \right]
\\
& =
\left[ \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2}) \right) \mathrm{e}^{-n} + n^{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\mathrm{e}^{-n}) \right) \right] + \left[ \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} \ln (1 - (n-1) \, \mathrm{e}^{-n}) \right) \right]
\\
& =
\left[ 2 n \, \mathrm{e}^{-n} - n^{2} \, \mathrm{e}^{-n} \right] + \left[ \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} \ln (1 - (n-1) \, \mathrm{e}^{-n}) \right) \right]
\\
& =
n \, \mathrm{e}^{-n} \, (2 - n) + \left[ \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} \ln (1 - (n-1) \, \mathrm{e}^{-n}) \right) \right]
\\
& =
n \, \mathrm{e}^{-n} \, (2 - n) + \left[ \frac{ \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} 1 - (n-1) \, \mathrm{e}^{-n} \right) }{ (1 - (n-1) \, \mathrm{e}^{-n}) } \right]
\\
& =
n \, \mathrm{e}^{-n} \, (2 - n) + \left[ \frac{ - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n-1) \right) \mathrm{e}^{-n} - (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\mathrm{e}^{-n}) \right) }{ 1 - (n-1) \, \mathrm{e}^{-n} } \right]
\\
& =
n \, \mathrm{e}^{-n} \, (2 - n) + \left[ \frac{ - \, \mathrm{e}^{-n} + (n-1) \, \mathrm{e}^{-n} }{ 1 - (n-1) \, \mathrm{e}^{-n} } \right]
\\
& =
n \, \mathrm{e}^{-n} \, (2 - n) + \left[ \frac{ \mathrm{e}^{-n} \, (- 1 + (n-1)) }{ 1 - (n-1) \, \mathrm{e}^{-n} } \right]
\\
& =
n \, \mathrm{e}^{-n} \, (2 - n) + \left[ \frac{ \mathrm{e}^{-n} \, (n-2) }{ 1 - (n-1) \, \mathrm{e}^{-n} } \right]
\\
& =
\frac{ n \, \mathrm{e}^{-n} \, (2 - n) (1 - (n-1) \, \mathrm{e}^{-n}) + \mathrm{e}^{-n} \, (n-2) }{ 1 - (n-1) \, \mathrm{e}^{-n} }
\\
& =
\mathrm{e}^{-n} \, (n-2) \, \frac{ 1 - n (1 - (n-1) \, \mathrm{e}^{-n}) }{ 1 - (n-1) \, \mathrm{e}^{-n} }
\\
& =
\mathrm{e}^{-n} \, (n-2) \, \frac{ 1 - n + n^{2} \, \mathrm{e}^{-n} - n \, \mathrm{e}^{-n} }{ 1 - (n-1) \, \mathrm{e}^{-n} }
\\
& =
\mathrm{e}^{-n} \, (n-2) \, \frac{ (n-1) ( n \, \mathrm{e}^{-n} - 1 ) }{ 1 - (n-1) \, \mathrm{e}^{-n} }
\\
& =
\mathrm{e}^{-n} \, (n-2) (n-1) \left( \frac{ n \, \mathrm{e}^{-n} - 1 }{ 1 - (n-1) \, \mathrm{e}^{-n} } \right)
\\
& =
\underbrace{ \mathrm{e}^{-n} \, (n-2) (n-1) }_{ > 0 \ \mathrm{for} \ n > 2 } \underbrace{ \left( - \frac{ 1 - n \, \mathrm{e}^{-n} }{ 1 - (n-1) \, \mathrm{e}^{-n} } \right) }_{ < 0 }
\\
& <
0 .
\end{align}
Since $s(n, \mathrm{e}^{-n})$ is strictly decreasing for $n \ge 2$ and $\lim_{n \to +\infty} s(n, \mathrm{e}^{-n}) = 0$, we obtain
\begin{align}
s(n, \mathrm{e}^{-n})
>
0
\label{eq:s_exp}
\end{align}
for $n \ge 2$.
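For example, for $n = 3$ we have $s(3, \mathrm{e}^{-3}) = 9 \, \mathrm{e}^{-3} + \ln (1 - 2 \, \mathrm{e}^{-3}) \approx 0.448 - 0.105 = 0.343 > 0$, in agreement with \eqref{eq:s_exp}.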
We next calculate the derivative of $s(n, p)$ with respect to $p$ as follows:
\begin{align}
\frac{ \partial s(n, p) }{ \partial p }
& =
\frac{ \partial }{ \partial p } \left( n ( n p - 1 ) + \ln \left( \frac{ 1 - (n-1) p }{ p } \right) \right)
\\
& =
n^{2} + \frac{ \partial }{ \partial p } \ln \left( \frac{ 1 - (n-1) p }{ p } \right)
\\
& \overset{\text{(a)}}{=}
n^{2} + \frac{ 1 }{ \frac{ 1 - (n-1) p }{ p } } \left( - \frac{ 1 }{ p^{2} } \right)
\\
& =
n^{2} - \frac{ 1 }{ p (1 - (n-1) p) }
\\
& =
\frac{ n^{2} p (1 - (n-1) p) - 1 }{ p (1 - (n-1) p) }
\\
& =
\frac{ n^{2} p - n^{3} p^{2} + n^{2} p^{2} - 1 }{ p (1 - (n-1) p) }
\\
& =
\frac{ n^{2} \left( (1-n) p^{2} + p - \left( \frac{1}{n^{2}} \right) \right) }{ p (1 - (n-1) p) }
\\
& =
\frac{ n^{2} (1-n) \left( p^{2} - \left( \frac{1}{n-1} \right) p + \left( \frac{1}{n^{2} (n-1)} \right) \right) }{ p (1 - (n-1) p) }
\\
& =
\underbrace{ - \frac{ n^{2} (n-1) }{ p (1 - (n-1) p) } }_{ < 0 } \left( p - \frac{1}{n(n-1)} \right) \left( p - \frac{1}{n} \right)
\\
& \overset{\text{(b)}}{=}
\begin{cases}
< 0
& \mathrm{if} \ p \in (0, \frac{1}{n(n-1)}) \cup (\frac{1}{n}, \frac{1}{n-1}) , \\
= 0
& \mathrm{if} \ p \in \{ \frac{1}{n(n-1)}, \frac{1}{n} \} , \\
> 0
& \mathrm{if} \ p \in (\frac{1}{n(n-1)}, \frac{1}{n}) ,
\end{cases}
\label{eq:diff_s_sign}
\end{align}
where
\begin{itemize}
\item
(a) follows from the fact that $\frac{ \mathrm{d} \ln f(x) }{ \mathrm{d} x } = \frac{ 1 }{ f(x) } \left( \frac{ \mathrm{d} f(x) }{ \mathrm{d} x } \right)$ and
\item
(b) follows from the fact that
\begin{align}
\operatorname{sgn} \! \left( \left( p - \frac{1}{n(n-1)} \right) \left( p - \frac{1}{n} \right) \right)
=
\begin{cases}
1
& \mathrm{if} \ p \in (0, \frac{1}{n(n-1)}) \cup (\frac{1}{n}, \frac{1}{n-1}) , \\
0
& \mathrm{if} \ p \in \{ \frac{1}{n(n-1)}, \frac{1}{n} \} , \\
-1
& \mathrm{if} \ p \in (\frac{1}{n(n-1)}, \frac{1}{n}) .
\end{cases}
\end{align}
\end{itemize}
It follows from
\begin{itemize}
\item
the intermediate value theorem,
\item
the monotonicity of $s(n, p)$ with respect to $p$ (see Eq. \eqref{eq:diff_s_sign}):
\begin{itemize}
\item
$s(n, p)$ is strictly decreasing for $p \in (0, \frac{1}{n (n-1)}]$,
\item
$s(n, p)$ is strictly increasing for $p \in [\frac{1}{n (n-1)}, \frac{1}{n}]$,
\item
$s(n, p)$ is strictly decreasing for $p \in [\frac{1}{n}, \frac{1}{n-1})$,
\end{itemize}
\item
$s(n, \mathrm{e}^{-n}) > 0$ for $n \ge 2$ (see Eq. \eqref{eq:s_exp}),
\item
$s(n, \frac{1}{n}) = 0$ for $n \ge 2$ (see Eq. \eqref{eq:s_overn}), and
\item
$0 < \mathrm{e}^{-n} < \frac{1}{n (n-1)} < \frac{1}{n}$ for $n \ge 3$
\end{itemize}
that the following statements hold:
\begin{itemize}
\item
if $n = 2$, then
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} s(2, p) \right)
=
\begin{cases}
1
& \mathrm{if} \ p \in (0, \frac{1}{2}) , \\
0
& \mathrm{if} \ p = \frac{1}{2} , \\
-1
& \mathrm{if} \ p \in (\frac{1}{2}, 1) ,
\end{cases}
\end{align}
\item
if $n \ge 3$, there exists $\kappa_{p}( n ) \in (\mathrm{e}^{-n}, \frac{1}{n(n-1)})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} s(n, p) \right)
=
\begin{cases}
1
& \mathrm{if} \ p \in (0, \kappa_{p}( n )) , \\
0
& \mathrm{if} \ p \in \{ \kappa_{p}( n ), \frac{1}{n} \} , \\
-1
& \mathrm{if} \ p \in (\kappa_{p}( n ), \frac{1}{n}) \cup (\frac{1}{n}, \frac{1}{n-1}) .
\end{cases}
\end{align}
\end{itemize}
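As a concrete illustration of the above intermediate value argument, take $n = 3$: then $s(3, \mathrm{e}^{-3}) \approx 0.343 > 0$ and $s(3, \frac{1}{6}) = - \frac{3}{2} + \ln 4 \approx -0.114 < 0$, so the root $\kappa_{p}( 3 )$ indeed lies in $(\mathrm{e}^{-3}, \frac{1}{6}) \approx (0.050, 0.167)$.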
Therefore, since
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, p, 0) \right)
& =
\operatorname{sgn} \! \left( \vphantom{\sum} s(n, p) \right) \cdot \, \operatorname{sgn} \! \left( - \frac{ 1 }{ \ln \left( \frac{ 1 - (n-1) p }{ p } \right) } \right) ,
\\
\operatorname{sgn} \! \left( - \frac{ 1 }{ \ln \left( \frac{ 1 - (n-1) p }{ p } \right) } \right)
& =
\begin{cases}
1
& \mathrm{if} \ p \in (\frac{1}{n}, \frac{1}{n-1}) , \\
-1
& \mathrm{if} \ p \in (0, \frac{1}{n}) ,
\end{cases}
\end{align}
we have that
\begin{itemize}
\item
if $n = 2$, then
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(2, p, 0) \right)
=
-1
\end{align}
for $p \in (0, \frac{1}{2}) \cup (\frac{1}{2}, 1)$,
\item
for any $n \ge 3$, there exists $\kappa_{p}( n ) \in (\mathrm{e}^{-n}, \frac{1}{n(n-1)})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, p, 0) \right)
=
\begin{cases}
1
& \mathrm{if} \ p \in (\kappa_{p}( n ), \frac{1}{n}) , \\
0
& \mathrm{if} \ p = \kappa_{p}( n ) , \\
-1
& \mathrm{if} \ p \in (0, \kappa_{p}( n )) \cup (\frac{1}{n}, \frac{1}{n-1}) .
\end{cases}
\end{align}
\end{itemize}
Note that $\lim_{p \to \frac{1}{n}} g(n, p, 0) = 0$.
That concludes the proof of the lemma.
\end{IEEEproof}
\section{Proof of Lemma \ref{lem:dzda}}
\label{app:dzda}
\begin{IEEEproof}[Proof of Lemma \ref{lem:dzda}]
In the proof, assume that $n \ge 3$.
Direct calculation shows
\begin{align}
\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha }
& =
\frac{ \partial }{ \partial z } \left( \frac{ \partial g(n, z, \alpha) }{ \partial \alpha } \right)
\\
& =
\frac{ \partial }{ \partial z } \left( 1 - \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z } \right)
\\
& =
- \frac{ \partial }{ \partial z } \left( \frac{ (n-1) z^{1-\alpha} + z^{\alpha} }{ (n-1) + z } \right)
\\
& \overset{\text{(a)}}{=}
- \frac{ \left[ \frac{ \partial ( (n-1) z^{1-\alpha} + z^{\alpha} ) }{ \partial z } \right] ((n-1) + z) - ( (n-1) z^{1-\alpha} + z^{\alpha} ) \left[ \frac{ \partial ((n-1) + z) }{ \partial z } \right] }{ ((n-1) + z)^{2} }
\\
& =
- \frac{ [ (n-1) (1-\alpha) z^{-\alpha} + \alpha z^{\alpha-1} ] ((n-1) + z) - ( (n-1) z^{1-\alpha} + z^{\alpha} ) [ 1 ] }{ ((n-1) + z)^{2} }
\\
& =
- \frac{ (n-1)^{2} (1-\alpha) z^{-\alpha} + (n-1) (1-\alpha) z^{1-\alpha} + (n-1) \alpha z^{\alpha-1} + \alpha z^{\alpha} - (n-1) z^{1-\alpha} - z^{\alpha} }{ ((n-1) + z)^{2} }
\\
& =
- \frac{ (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} }{ ((n-1) + z)^{2} }
\\
& =
u(n, z, \alpha) \underbrace{ \left( - \frac{ 1 }{ ((n-1) + z)^{2} } \right) }_{ < 0 } ,
\label{eq:diff_g_za}
\end{align}
where $u(n, z, \alpha) \triangleq (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha}$ and (a) follows from the fact that $\frac{ \mathrm{d} }{ \mathrm{d} x } \left( \frac{ f(x) }{ g(x) } \right) = \frac{ \left[ \frac{ \mathrm{d} f(x) }{ \mathrm{d} x } \right] g(x) - f(x) \left[ \frac{ \mathrm{d} g(x) }{ \mathrm{d} x } \right] }{ (g(x))^{2} }$.
It follows from \eqref{eq:diff_g_za} that it is enough to check the sign of $u(n, z, \alpha)$.
Simple calculation yields
\begin{align}
u(n, z, 0)
& =
\left. \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right) \right|_{\alpha = 0}
\\
& =
(-1) z^{0} + 0 (n-1) z^{-1} - 0 (n-1) z^{1} - (-1) (n-1)^{2} z^{0}
\\
& =
-1 + (n-1)^{2}
\\
& >
0 ,
\\
u(n, z, 1)
& =
\left. \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right) \right|_{\alpha = 1}
\\
& =
0 z^{1} + 1 (n-1) z^{0} - 1 (n-1) z^{0} - 0 (n-1)^{2} z^{-1}
\\
& =
(n-1) - (n-1)
\\
& =
0
\end{align}
for $z \in (0, +\infty)$.
Therefore, we readily see that
\begin{itemize}
\item
if $\alpha = 0$, then $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } < 0$ for $z \in (0, +\infty)$ and
\item
if $\alpha = 1$, then $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } = 0$ for $z \in (0, +\infty)$.
\end{itemize}
Thus, we will verify the sign of $u(n, z, \alpha)$ for $\alpha \in (-\infty, 0) \cup (0, 1) \cup (1, +\infty)$.
We first check the sign of the derivative of $u(n, z, \alpha)$ with respect to $z$ as follows:
\begin{align}
\frac{ \partial u(n, z, \alpha) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right)
\\
& =
\alpha (\alpha-1) z^{\alpha-1} + \alpha (\alpha-1) (n-1) z^{\alpha-2}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ \alpha (\alpha-1) (n-1) z^{-\alpha} + \alpha (\alpha-1) (n-1)^{2} z^{-1-\alpha}
\\
& =
\alpha (\alpha-1) \left( \vphantom{\sum} z^{\alpha-1} + (n-1) z^{\alpha-2} + (n-1) z^{-\alpha} + (n-1)^{2} z^{-1-\alpha} \right)
\\
& =
\alpha (\alpha-1) \underbrace{ \left( \frac{ (n-1) + z }{ z^{2} } \right) \left( \vphantom{\sum} z^{\alpha} + (n-1) z^{1-\alpha} \right) }_{ > 0 }
\\
& =
\begin{cases}
< 0
& \mathrm{if} \ \alpha \in (0, 1) , \\
= 0
& \mathrm{if} \ \alpha \in \{ 0, 1 \} , \\
> 0
& \mathrm{if} \ \alpha \in (-\infty, 0) \cup (1, +\infty) .
\end{cases}
\label{eq:diff_u}
\end{align}
Hence, we have that
\begin{itemize}
\item
if $\alpha \in (0, 1)$, then $u(n, z, \alpha)$ is strictly decreasing for $z \in (0, +\infty)$,
\item
if $\alpha \in (-\infty, 0) \cup (1, +\infty)$, then $u(n, z, \alpha)$ is strictly increasing for $z \in (0, +\infty)$, and
\item
if $\alpha \in \{ 0, 1 \}$, then $u(n, z, \alpha)$ is invariant for $z \in (0, +\infty)$.
\end{itemize}
We next check the signs of $u(n, z, \alpha)$ at three specific points as follows:
we first get
\begin{align}
u(n, n-1, \alpha)
& =
\left. \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right) \right|_{z = n-1}
\\
& =
(\alpha-1) (n-1)^{\alpha} + \alpha (n-1) (n-1)^{\alpha-1} - \alpha (n-1) (n-1)^{1-\alpha}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- (\alpha-1) (n-1)^{2} (n-1)^{-\alpha}
\\
& =
(\alpha-1) (n-1)^{\alpha} + \alpha (n-1)^{\alpha} - \alpha (n-1)^{2-\alpha} - (\alpha-1) (n-1)^{2-\alpha}
\\
& =
(n-1)^{\alpha} \left( \vphantom{\sum} (\alpha-1) + \alpha \right) - (n-1)^{2-\alpha} \left( \vphantom{\sum} \alpha + (\alpha-1) \right)
\\
& =
(n-1)^{\alpha} (2 \alpha - 1) - (n-1)^{2-\alpha} (2 \alpha - 1)
\\
& =
(2 \alpha - 1) \left( \vphantom{\sum} (n-1)^{\alpha} - (n-1)^{2-\alpha} \right)
\\
& =
(2 \alpha - 1) \underbrace{ (n-1)^{2-\alpha} }_{ > 0 } \left( \vphantom{\sum} (n-1)^{2(\alpha-1)} - 1 \right)
\\
& \overset{\text{(a)}}{=}
\begin{cases}
< 0
& \mathrm{if} \ \alpha \in (\frac{1}{2}, 1) , \\
= 0
& \mathrm{if} \ \alpha \in \{ \frac{1}{2}, 1 \} , \\
> 0
& \mathrm{if} \ \alpha \in (-\infty, \frac{1}{2}) \cup (1, +\infty) ,
\end{cases}
\label{eq:u_n-1}
\end{align}
where (a) follows from the facts that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} 2 \alpha - 1 \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha \in (\frac{1}{2}, +\infty) , \\
0
& \mathrm{if} \ \alpha = \frac{1}{2} , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, \frac{1}{2}) ,
\end{cases}
\\
\operatorname{sgn} \! \left( \vphantom{\sum} (n-1)^{2(\alpha - 1)} - 1 \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha \in (1, +\infty) , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, 1) ,
\end{cases}
\end{align}
we second get
\begin{align}
u(n, z_{1}(n, \alpha) , \alpha)
& =
u(n, (n-1)^{\frac{1}{\alpha}}, \alpha)
\\
& =
\left. \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right) \right|_{z = (n-1)^{\frac{1}{\alpha}}}
\\
& =
(\alpha - 1) \left( (n-1)^{\frac{1}{\alpha}} \right)^{\alpha} + \alpha (n-1) \left( (n-1)^{\frac{1}{\alpha}} \right)^{\alpha-1}
\notag \\
& \qquad \qquad \qquad \qquad \qquad
- \alpha (n-1) \left( (n-1)^{\frac{1}{\alpha}} \right)^{1-\alpha} - (\alpha-1) (n-1)^{2} \left( (n-1)^{\frac{1}{\alpha}} \right)^{-\alpha}
\\
& =
(\alpha-1) (n-1) + \alpha (n-1) (n-1)^{1 - \frac{1}{\alpha}}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \alpha (n-1) (n-1)^{\frac{1}{\alpha} - 1} - (\alpha-1) (n-1)^{2} (n-1)^{-1}
\\
& =
(\alpha-1) (n-1) + \alpha (n-1)^{2 - \frac{1}{\alpha}} - \alpha (n-1)^{\frac{1}{\alpha}} - (\alpha-1) (n-1)
\\
& =
\alpha (n-1)^{2 - \frac{1}{\alpha}} - \alpha (n-1)^{\frac{1}{\alpha}}
\\
& =
\alpha (n-1)^{\frac{1}{\alpha}} \left( \vphantom{\sum} (n-1)^{2 - \frac{2}{\alpha}} - 1 \right)
\\
& =
\alpha \underbrace{ (n-1)^{\frac{1}{\alpha}} }_{ > 0 } \left( \vphantom{\sum} (n-1)^{\frac{2 (\alpha - 1)}{\alpha}} - 1 \right)
\\
& \overset{\text{(b)}}{=}
\begin{cases}
< 0
& \mathrm{if} \ \alpha \in (-\infty, 0) \cup (0, 1) , \\
= 0
& \mathrm{if} \ \alpha = 1 , \\
> 0
& \mathrm{if} \ \alpha \in (1, +\infty) ,
\end{cases}
\label{eq:u_z1}
\end{align}
where (b) follows from the facts that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} \alpha \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha \in (0, +\infty) , \\
0
& \mathrm{if} \ \alpha = 0 , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, 0) ,
\end{cases}
\\
\operatorname{sgn} \! \left( \vphantom{\sum} (n-1)^{\frac{ 2 (\alpha-1) }{ \alpha }} - 1 \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, 0) \cup (1, +\infty) , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in (0, 1) ,
\end{cases}
\end{align}
and we third get
\begin{align}
u(n, z_{2}(n, \alpha) , \alpha)
& =
u(n, (n-1)^{\frac{1}{2\alpha-1}}, \alpha)
\\
& =
\left. \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right) \right|_{z = (n-1)^{\frac{1}{2\alpha-1}}}
\\
& =
(\alpha - 1) \left( (n-1)^{\frac{1}{2\alpha-1}} \right)^{\alpha} + \alpha (n-1) \left( (n-1)^{\frac{1}{2\alpha-1}} \right)^{\alpha-1}
\notag \\
& \qquad \qquad \qquad \qquad
- \alpha (n-1) \left( (n-1)^{\frac{1}{2\alpha-1}} \right)^{1-\alpha} - (\alpha-1) (n-1)^{2} \left( (n-1)^{\frac{1}{2\alpha-1}} \right)^{-\alpha}
\\
& =
(\alpha-1) (n-1)^{\frac{\alpha}{2\alpha-1}} + \alpha (n-1) (n-1)^{\frac{\alpha-1}{2\alpha-1}}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad
- \alpha (n-1) (n-1)^{\frac{1-\alpha}{2\alpha-1}} - (\alpha-1) (n-1)^{2} (n-1)^{-\frac{\alpha}{2\alpha-1}}
\\
& =
(\alpha-1) (n-1)^{\frac{\alpha}{2\alpha-1}} + \alpha (n-1)^{\frac{(2\alpha-1) + (\alpha-1)}{2\alpha-1}}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \alpha (n-1)^{\frac{(2\alpha-1) + (1-\alpha)}{2\alpha-1}} - (\alpha-1) (n-1)^{\frac{2 (2\alpha-1) - \alpha}{2\alpha-1}}
\\
& =
(\alpha-1) (n-1)^{\frac{\alpha}{2\alpha-1}} + \alpha (n-1)^{\frac{3\alpha-2}{2\alpha-1}} - \alpha (n-1)^{\frac{\alpha}{2\alpha-1}} - (\alpha-1) (n-1)^{\frac{3\alpha-2}{2\alpha-1}}
\\
& =
(n-1)^{\frac{\alpha}{2\alpha-1}} \left( \vphantom{\sum} (\alpha-1) - \alpha \right) + (n-1)^{\frac{3\alpha-2}{2\alpha-1}} \left( \vphantom{\sum} \alpha - (\alpha - 1) \right)
\\
& =
- (n-1)^{\frac{\alpha}{2\alpha-1}} + (n-1)^{\frac{3\alpha-2}{2\alpha-1}}
\\
& =
(n-1)^{\frac{\alpha}{2\alpha-1}} \left( (n-1)^{\frac{3\alpha-2}{2\alpha-1} - \frac{\alpha}{2\alpha-1}} - 1 \right)
\\
& =
\underbrace{ (n-1)^{\frac{\alpha}{2\alpha-1}} }_{ > 0 } \left( (n-1)^{\frac{2(\alpha-1)}{2\alpha-1}} - 1 \right)
\\
& \overset{\text{(c)}}{=}
\begin{cases}
< 0
& \mathrm{if} \ \alpha \in (\frac{1}{2}, 1) , \\
= 0
& \mathrm{if} \ \alpha = 1 , \\
> 0
& \mathrm{if} \ \alpha \in (-\infty, \frac{1}{2}) \cup (1, +\infty) ,
\end{cases}
\label{eq:u_z2}
\end{align}
where (c) follows from the fact that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} (n-1)^{\frac{2 (\alpha-1)}{2 \alpha - 1}} - 1 \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, \frac{1}{2}) \cup (1, +\infty) , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha \in (\frac{1}{2}, 1) .
\end{cases}
\end{align}
Since $u(n, z, \frac{1}{2})$ is strictly decreasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff_u}) and $u(n, n-1, \frac{1}{2}) = 0$ (see Eq. \eqref{eq:u_n-1}), we readily see that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u(n, z, {\textstyle \frac{1}{2}}) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, n-1) , \\
0
& \mathrm{if} \ z = n-1 , \\
-1
& \mathrm{if} \ z \in (n-1, +\infty) ;
\end{cases}
\end{align}
and therefore, we have
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} \left. \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right|_{\alpha = \frac{1}{2}} \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (n-1, +\infty) , \\
0
& \mathrm{if} \ z = n-1 , \\
-1
& \mathrm{if} \ z \in (0, n-1)
\end{cases}
\end{align}
from \eqref{eq:diff_g_za}.
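As a sanity check, take $n = 3$ (so that $n - 1 = 2$) and $\alpha = \frac{1}{2}$: then $u(3, 2, \frac{1}{2}) = - \frac{\sqrt{2}}{2} + \frac{1}{\sqrt{2}} - \sqrt{2} + \sqrt{2} = 0$ and $u(3, 1, \frac{1}{2}) = - \frac{1}{2} + 1 - 1 + 2 = \frac{3}{2} > 0$, in agreement with the sign pattern displayed above.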
Moreover, we check the sign of $u(n, z_{1}(n, 2 \alpha), \alpha)$ as follows:
Simple calculation yields
\begin{align}
u(n, z_{1}(n, 2 \alpha) , \alpha)
& =
u(n, (n-1)^{\frac{1}{2 \alpha}}, \alpha)
\\
& =
\left. \left( \vphantom{\sum} (\alpha-1) z^{\alpha} + \alpha (n-1) z^{\alpha-1} - \alpha (n-1) z^{1-\alpha} - (\alpha-1) (n-1)^{2} z^{-\alpha} \right) \right|_{z = (n-1)^{\frac{1}{2 \alpha}}}
\\
& =
(\alpha - 1) \left( (n-1)^{\frac{1}{2 \alpha}} \right)^{\alpha} + \alpha (n-1) \left( (n-1)^{\frac{1}{2 \alpha}} \right)^{\alpha-1}
\notag \\
& \qquad \qquad \qquad \qquad \quad
- \alpha (n-1) \left( (n-1)^{\frac{1}{2 \alpha}} \right)^{1-\alpha} - (\alpha-1) (n-1)^{2} \left( (n-1)^{\frac{1}{2 \alpha}} \right)^{-\alpha}
\\
& =
(\alpha - 1) (n-1)^{\frac{1}{2}} + \alpha (n-1) (n-1)^{\frac{1}{2} (1 - \frac{1}{\alpha})}
\notag \\
& \qquad \qquad \qquad \qquad \quad
- \alpha (n-1) (n-1)^{\frac{1}{2}(\frac{1}{\alpha} - 1)} - (\alpha-1) (n-1)^{2} (n-1)^{-\frac{1}{2}}
\\
& =
(\alpha - 1) (n-1)^{\frac{1}{2}} + \alpha (n-1)^{\frac{3}{2} - \frac{1}{2 \alpha}} - \alpha (n-1)^{\frac{1}{2 \alpha} + \frac{1}{2}} - (\alpha-1) (n-1)^{\frac{3}{2}}
\\
& =
(\alpha - 1) \sqrt{n-1} \, (1 - (n-1)) + \alpha (n-1)^{\frac{3}{2} - \frac{1}{2 \alpha}} - \alpha (n-1)^{\frac{1}{2 \alpha} + \frac{1}{2}}
\\
& =
(\alpha - 1) \sqrt{n-1} \, (2 - n) + \alpha \sqrt{n-1} \, \left( \vphantom{\sum} (n-1)^{1 - \frac{1}{2 \alpha}} - (n-1)^{\frac{1}{2 \alpha}} \right)
\\
& =
\sqrt{n-1} \, \left( \vphantom{\sum} (\alpha - 1) (2 - n) + \alpha (n-1)^{1 - \frac{1}{2 \alpha}} - \alpha (n-1)^{\frac{1}{2 \alpha}} \right)
\\
& =
\sqrt{n-1} \, \left( (\alpha - 1) (2 - n) + \alpha (n-1)^{\frac{1}{2\alpha}} \left( \vphantom{\sum} (n-1)^{1 - \frac{1}{\alpha}} - 1 \right) \right)
\\
& =
\sqrt{n-1} \, \left( (\alpha - 1) (2 - n) + \alpha \left( 1 - \frac{1}{\alpha} \right) (n-1)^{\frac{1}{2\alpha}} \ln_{\frac{1}{\alpha}} (n-1) \right)
\\
& =
(\alpha-1) \sqrt{n-1} \, \left( (2 - n) + (n-1)^{\frac{1}{2\alpha}} \ln_{\frac{1}{\alpha}} (n-1) \right)
\\
& \overset{\text{(a)}}{=}
(\alpha-1) \sqrt{n-1} \, \left( (2 - n) + (n-1)^{\frac{\beta}{2}} \ln_{\beta} (n-1) \right)
\\
& =
(\alpha-1) \sqrt{n-1} \, \cdot u_{1}( n, \beta ) ,
\label{eq:u_z1(n,2alpha)}
\end{align}
where (a) follows by the change of variable $\beta = \beta( \alpha ) \triangleq \frac{1}{\alpha}$, and
\begin{align}
u_{1}( n, \beta )
\triangleq
(2 - n) + (n-1)^{\frac{\beta}{2}} \ln_{\beta} (n-1) .
\end{align}
We now calculate the following derivatives:
\begin{align}
\frac{ \partial u_{1}(n, \beta) }{ \partial \beta }
& =
\frac{ \partial }{ \partial \beta } \left( \vphantom{\sum} (2 - n) + (n-1)^{\frac{\beta}{2}} \ln_{\beta} (n-1) \right)
\\
& =
\frac{ \partial }{ \partial \beta } \left( \vphantom{\sum} (n-1)^{\frac{\beta}{2}} \ln_{\beta} (n-1) \right)
\\
& =
\left( \frac{ \partial }{ \partial \beta } ((n-1)^{\frac{\beta}{2}}) \right) \ln_{\beta} (n-1) + (n-1)^{\frac{\beta}{2}} \left( \frac{ \partial }{ \partial \beta } (\ln_{\beta} (n-1)) \right)
\\
& \overset{\eqref{eq:diff1_lnq}}{=}
\left( \frac{1}{2} (n-1)^{\frac{\beta}{2}} \ln (n-1) \right) \ln_{\beta} (n-1) + (n-1)^{\frac{\beta}{2}} \left( \frac{ (n-1)^{\beta} \ln_{\beta} (n-1) - (n-1) \ln (n-1) }{ (n-1)^{\beta} (1-\beta) } \right)
\\
& =
(n-1)^{\frac{\beta}{2}} \left( \frac{ (\ln (n-1)) (\ln_{\beta} (n-1)) }{2} + \frac{ \ln_{\beta} (n-1) - (n-1)^{1-\beta} \ln (n-1) }{ (1-\beta) } \right)
\\
& =
(n-1)^{\frac{\beta}{2}} \left( \frac{ ((n-1)^{1-\beta} - 1) \ln (n-1) }{2 (1-\beta)} + \frac{ \ln_{\beta} (n-1) - (n-1)^{1-\beta} \ln (n-1) }{ (1-\beta) } \right)
\\
& =
(n-1)^{\frac{\beta}{2}} \left( \frac{ ((n-1)^{1-\beta} - 1) \ln (n-1) }{2 (1-\beta)} + \frac{ ((n-1)^{1-\beta} - 1) - (1-\beta) (n-1)^{1-\beta} \ln (n-1) }{ (1-\beta)^{2} } \right)
\\
& =
(n-1)^{\frac{\beta}{2}} \left( \frac{ (1-\beta) ((n-1)^{1-\beta} - 1) \ln (n-1) }{2 (1-\beta)^{2}}
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ \frac{ 2 ((n-1)^{1-\beta} - 1) - 2 (1-\beta) (n-1)^{1-\beta} \ln (n-1) }{ 2 (1-\beta)^{2} } \right)
\\
& =
\frac{ (n-1)^{\frac{\beta}{2}} }{ 2 (1-\beta)^{2} } \left( \left[ \vphantom{\sum} (1-\beta) ((n-1)^{1-\beta} - 1) - 2 (1-\beta) (n-1)^{1-\beta} \right] \ln (n-1) + 2 ((n-1)^{1-\beta} - 1) \right)
\\
& =
\frac{ (n-1)^{\frac{\beta}{2}} }{ 2 (1-\beta)^{2} } \left( \vphantom{\sum} - (1-\beta) (n-1)^{1-\beta} \ln (n-1) - (1-\beta) \ln (n-1) + 2 ((n-1)^{1-\beta} - 1) \right)
\\
& =
\frac{ (n-1)^{\frac{\beta}{2}} }{ 2 (1-\beta)^{2} } \left( \vphantom{\sum} - (n-1)^{1-\beta} \ln (n-1)^{1-\beta} - \ln (n-1)^{1-\beta} + 2 ((n-1)^{1-\beta} - 1) \right)
\\
& \overset{\text{(a)}}{=}
\frac{ (n-1)^{\frac{\beta}{2}} }{ 2 (1-\beta)^{2} } \left( \vphantom{\sum} - k \ln k - \ln k + 2 (k - 1) \right) ,
\label{eq:diff1_u_beta}
\end{align}
where (a) follows by the change of variable: $k = k(n, \beta) \triangleq (n-1)^{1-\beta}$.
Note that $k = (n-1)^{1-\beta} > 0$ for $n \ge 2$ and $\beta \in (-\infty, +\infty)$.
Then, we readily see that
\begin{align}
\left. \left( \vphantom{\sum} - k \ln k - \ln k + 2 (k - 1) \right) \right|_{k = 1}
& =
0,
\\
\frac{ \mathrm{d} }{ \mathrm{d} k } \left( \vphantom{\sum} - k \ln k - \ln k + 2 (k - 1) \right)
& =
- \left( \frac{ \mathrm{d} }{ \mathrm{d} k } (k \ln k) \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} k } (\ln k) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} k } (k - 1) \right)
\\
& =
- \left( \vphantom{\sum} \ln k + 1 \right) - \left( \frac{1}{k} \right) + 2
\\
& =
\left( 1 - \frac{1}{k} \right) - \ln k
\\
& \overset{\text{(a)}}{\le}
\ln k - \ln k
\\
& =
0 ,
\end{align}
where note that (a) holds with equality if and only if $k = 1$.
Hence, we obtain
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} - k \ln k - \ln k + 2 (k - 1) \right)
=
\begin{cases}
1
& \mathrm{if} \ 0 < k < 1 , \\
0
& \mathrm{if} \ k = 1 , \\
-1
& \mathrm{if} \ k > 1 ;
\end{cases}
\label{eq:sgn_k}
\end{align}
and hence, we can rewrite \eqref{eq:sgn_k} as
\begin{align}
\operatorname{sgn} \left( \vphantom{\sum} - (n-1)^{1-\beta} \ln (n-1)^{1-\beta} - \ln (n-1)^{1-\beta} + 2 ((n-1)^{1-\beta} - 1) \right)
=
\begin{cases}
1
& \mathrm{if} \ \beta > 1 , \\
0
& \mathrm{if} \ \beta = 1 , \\
-1
& \mathrm{if} \ \beta < 1 ;
\end{cases}
\label{eq:sgn_k_prot}
\end{align}
and therefore, we obtain
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial u_{1}(n, \beta) }{ \partial \beta } \right)
& \overset{\eqref{eq:diff1_u_beta}}{=}
\underbrace{ \operatorname{sgn} \! \left( \frac{ (n-1)^{\frac{\beta}{2}} }{ 2 (1-\beta)^{2} } \right) }_{ = 1 } \cdot \, \operatorname{sgn} \left( \vphantom{\sum} - (n-1)^{1-\beta} \ln (n-1)^{1-\beta} - \ln (n-1)^{1-\beta} + 2 ((n-1)^{1-\beta} - 1) \right)
\\
& \overset{\eqref{eq:sgn_k_prot}}{=}
\begin{cases}
1
& \mathrm{if} \ \beta > 1 , \\
0
& \mathrm{if} \ \beta = 1 , \\
-1
& \mathrm{if} \ \beta < 1 ,
\end{cases}
\label{eq:sgn_diff1_u1_beta}
\end{align}
which implies that
\begin{itemize}
\item
$u_{1}(n, \beta)$ is strictly decreasing for $\beta \in (-\infty, 1]$ and
\item
$u_{1}(n, \beta)$ is strictly increasing for $\beta \in [1, +\infty)$.
\end{itemize}
Moreover, since
\begin{align}
u_{1}( n, 0 )
& =
\left. \left( \vphantom{\sum} (2-n) + (n-1)^{\frac{\beta}{2}} \ln_{\beta} (n-1) \right) \right|_{\beta=0}
\\
& =
-(n-2) + (n-1)^{0} \ln_{0} (n-1)
\\
& =
-(n-2) + (n-2)
\\
& =
0 ,
\\
u_{1}( n, 2 )
& =
\left. \left( \vphantom{\sum} (2-n) + (n-1)^{\frac{\beta}{2}} \ln_{\beta} (n-1) \right) \right|_{\beta=2}
\\
& =
- (n-2) + (n-1) \ln_{2} (n-1)
\\
& =
- (n-2) + (n-1) \left( 1 - \frac{1}{n-1} \right)
\\
& =
- (n-2) + (n-2)
\\
& =
0 ,
\end{align}
it follows from \eqref{eq:sgn_diff1_u1_beta} that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u_{1}( n, \beta ) \right)
=
\begin{cases}
1
& \mathrm{if} \ \beta \in (-\infty, 0) \cup (2, +\infty) , \\
0
& \mathrm{if} \ \beta \in \{ 0, 2 \} , \\
-1
& \mathrm{if} \ \beta \in (0, 2) .
\end{cases}
\label{eq:sgn_u1_beta}
\end{align}
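For instance, for $n = 3$ and $\beta = 1$, recalling that $\ln_{1} x = \ln x$, we get $u_{1}( 3, 1 ) = -1 + \sqrt{2} \ln 2 \approx -0.020 < 0$, consistent with \eqref{eq:sgn_u1_beta}.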
Furthermore, since $\beta = \frac{1}{\alpha}$, we can rewrite \eqref{eq:sgn_u1_beta} as
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u_{1}( n, {\textstyle \frac{1}{\alpha}} ) \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha \in (-\infty, 0) \cup (0, \frac{1}{2}) , \\
0
& \mathrm{if} \ \alpha = \frac{1}{2} , \\
-1
& \mathrm{if} \ \alpha \in (\frac{1}{2}, +\infty) ;
\end{cases}
\label{eq:sgn_u1_1_over_alpha}
\end{align}
and therefore, we obtain
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u(n, z_{1}(n, 2 \alpha) , \alpha) \right)
& \overset{\eqref{eq:u_z1(n,2alpha)}}{=}
\operatorname{sgn} (\alpha-1) \cdot \, \underbrace{ \operatorname{sgn} \! \left( \vphantom{\sum} \sqrt{n-1} \right) }_{=1} \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} u_{1}( n, \beta ) \right)
\\
& \overset{\eqref{eq:sgn_u1_1_over_alpha}}{=}
\begin{cases}
1
& \mathrm{if} \ \alpha \in (\frac{1}{2}, 1) , \\
0
& \mathrm{if} \ \alpha \in \{ \frac{1}{2}, 1 \} , \\
-1
& \mathrm{if} \ \alpha \in (-\infty, 0) \cup (0, \frac{1}{2}) \cup (1, +\infty) .
\end{cases}
\label{eq:sgn_u_z1(n,2alpha)}
\end{align}
So far, we have provided the signs of $\frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha }$ for $\alpha \in \{ 0, \frac{1}{2}, 1 \}$.
We next show the signs of $u(n, z, \alpha)$ for $\alpha \in (-\infty, +\infty) \setminus \{ 0, \frac{1}{2}, 1 \}$.
Note that it follows from \eqref{eq:range_z1z2_orig} that
\begin{align}
\left\{
\begin{array}{ll}
0 < z_{1}(n, \alpha) < z_{1}(n, 2 \alpha) < z_{2}(n, \alpha) < 1
& \mathrm{if} \ \alpha \in (-\infty, 0) ,
\\
0 < z_{2}(n, \alpha) < \frac{1}{2} \ \mathrm{and} \ n-1 < z_{1}(n, 2 \alpha) < z_{1}(n, \alpha)
& \mathrm{if} \ \alpha \in (0, \frac{1}{2}) ,
\\
\sqrt{n-1} < z_{1}(n, 2 \alpha) < n-1 < z_{1}(n, \alpha) < z_{2}(n, \alpha)
& \mathrm{if} \ \alpha \in (\frac{1}{2}, 1) ,
\\
1 < z_{1}(n, 2 \alpha) < z_{2}(n, \alpha) < z_{1}(n, \alpha) < n-1
& \mathrm{if} \ \alpha \in (1, +\infty) .
\end{array}
\right.
\label{eq:range_z1z2}
\end{align}
By the intermediate value theorem, we can prove the signs of $u(n, z, \alpha)$ for $\alpha \in (-\infty, +\infty) \setminus \{ 0, \frac{1}{2}, 1 \}$ as follows:
\subsubsection*{The case of $\alpha \in (-\infty, 0)$}
Since
\begin{itemize}
\item
for a fixed $\alpha \in (-\infty, 0)$, $u(n, z, \alpha)$ is strictly increasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff_u}),
\item
if $\alpha \in (-\infty, 0)$, then $u(n, z_{1}(n, 2 \alpha), \alpha) < 0$ (see Eq. \eqref{eq:sgn_u_z1(n,2alpha)}),
\item
if $\alpha \in (-\infty, 0)$, then $u(n, z_{2}(n, \alpha), \alpha) > 0$ (see Eq. \eqref{eq:u_z2}), and
\item
if $\alpha \in (-\infty, 0)$, then $0 < z_{1}(n, \alpha) < z_{1}(n, 2\alpha) < z_{2}(n, \alpha) < 1$ (see Eq. \eqref{eq:range_z1z2}),
\end{itemize}
for any $\alpha \in (-\infty, 0)$, there exists $\gamma(n, \alpha) \in (z_{1}(n, 2\alpha), z_{2}(n, \alpha))$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) .
\end{cases}
\label{eq:sign_u1}
\end{align}
\subsubsection*{The case of $\alpha \in (0, \frac{1}{2})$}
Since
\begin{itemize}
\item
for a fixed $\alpha \in (0, \frac{1}{2})$, $u(n, z, \alpha)$ is strictly decreasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff_u}),
\item
if $\alpha \in (0, \frac{1}{2})$, then $u(n, n-1, \alpha) > 0$ (see Eq. \eqref{eq:u_n-1}),
\item
if $\alpha \in (0, \frac{1}{2})$, then $u(n, z_{1}(n, 2 \alpha), \alpha) < 0$ (see Eq. \eqref{eq:sgn_u_z1(n,2alpha)}), and
\item
if $\alpha \in (0, \frac{1}{2})$, then $n-1 < z_{1}(n, 2 \alpha) < z_{1}(n, \alpha)$ (see Eq. \eqref{eq:range_z1z2}),
\end{itemize}
for any $\alpha \in (0, \frac{1}{2})$, there exists $\gamma(n, \alpha) \in (n-1, z_{1}(n, 2 \alpha))$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) .
\end{cases}
\label{eq:sign_u2}
\end{align}
\subsubsection*{The case of $\alpha \in (\frac{1}{2}, 1)$}
Since
\begin{itemize}
\item
for a fixed $\alpha \in (\frac{1}{2}, 1)$, $u(n, z, \alpha)$ is strictly decreasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff_u}) ,
\item
if $\alpha \in (\frac{1}{2}, 1)$, then $u(n, z_{1}(n, 2 \alpha), \alpha) > 0$ (see Eq. \eqref{eq:sgn_u_z1(n,2alpha)}),
\item
if $\alpha \in (\frac{1}{2}, 1)$, then $u(n, n-1, \alpha) < 0$ (see Eq. \eqref{eq:u_n-1}), and
\item
$\sqrt{n-1} < z_{1}(n, 2 \alpha) < n-1$ (see Eq. \eqref{eq:range_z1z2}),
\end{itemize}
for any $\alpha \in (\frac{1}{2}, 1)$, there exists $\gamma(n, \alpha) \in (z_{1}(n, 2 \alpha), n-1)$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) .
\end{cases}
\label{eq:sign_u3}
\end{align}
\subsubsection*{The case of $\alpha \in (1, +\infty)$}
Since
\begin{itemize}
\item
for a fixed $\alpha \in (1, +\infty)$, $u(n, z, \alpha)$ is strictly increasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff_u}) ,
\item
if $\alpha \in (1, +\infty)$, then $u(n, z_{1}(n, 2\alpha), \alpha) < 0$ (see Eq. \eqref{eq:sgn_u_z1(n,2alpha)}),
\item
if $\alpha \in (1, +\infty)$, then $u(n, z_{2}(n, \alpha), \alpha) > 0$ (see Eq. \eqref{eq:u_z2}), and
\item
if $\alpha \in (1, +\infty)$, then $1 < z_{1}(n, 2 \alpha) < z_{2}(n, \alpha) < z_{1}(n, \alpha)$ (see Eq. \eqref{eq:range_z1z2}),
\end{itemize}
for any $\alpha \in (1, +\infty)$, there exists $\gamma(n, \alpha) \in (z_{1}(n, 2\alpha), z_{2}(n, \alpha))$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} u(n, z, \alpha) \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (\gamma(n, \alpha), +\infty) , \\
0
& \mathrm{if} \ z = \gamma(n, \alpha) , \\
-1
& \mathrm{if} \ z \in (0, \gamma(n, \alpha)) .
\end{cases}
\label{eq:sign_u4}
\end{align}
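As a numerical illustration of this last case, take $n = 3$ and $\alpha = 2$, so that $u(3, z, 2) = z^{2} + 4 z - 4 z^{-1} - 4 z^{-2}$: then $u(3, 2^{\frac{1}{4}}, 2) \approx -0.021 < 0$ and $u(3, 2^{\frac{1}{3}}, 2) \approx 0.932 > 0$, locating the root $\gamma(3, 2)$ in $(2^{\frac{1}{4}}, 2^{\frac{1}{3}}) = (z_{1}(3, 4), z_{2}(3, 2))$.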
Since
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} \frac{ \partial^{2} g(n, z, \alpha) }{ \partial z \, \partial \alpha } \right)
& \overset{\eqref{eq:diff_g_za}}{=}
- \operatorname{sgn} \! \left( \vphantom{\sum} u(n, z, \alpha) \right) ,
\end{align}
\begin{itemize}
\item
Eq. \eqref{eq:g_dzda_1} follows from \eqref{eq:sign_u1},
\item
Eq. \eqref{eq:g_dzda_2} follows from \eqref{eq:sign_u2},
\item
Eq. \eqref{eq:g_dzda_4} follows from \eqref{eq:sign_u3}, and
\item
Eq. \eqref{eq:g_dzda_5} follows from \eqref{eq:sign_u4}.
\end{itemize}
Therefore, we obtain Lemma \ref{lem:dzda}.
\end{IEEEproof}
\section{Proof of Lemma \ref{lem:diff_g_z}}
\label{app:diff_g_z}
\begin{IEEEproof}[Proof of Lemma \ref{lem:diff_g_z}]
In the proof, assume that $n \ge 3$.
Direct calculation shows
\begin{align}
\frac{ \partial g(n, z, \alpha) }{ \partial z }
& \overset{\eqref{eq:g_z}}{=}
\frac{ \partial }{ \partial z } \left( (\alpha - 1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right)
\\
& \overset{\text{(a)}}{=}
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( \left[ \frac{ \partial ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ \partial z } \right] ((n-1) + z) \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \; ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) \left[ \frac{ \partial ((n-1) + z) \ln z }{ \partial z } \right] \right)
\\
& \overset{\text{(b)}}{=}
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( \left[ \vphantom{\sum} (n-1) (1-\alpha) z^{-\alpha} + 1 - \alpha z^{\alpha-1} \right] ((n-1) + z) \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
- \; ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) \left[ \vphantom{\sum} (n-1) z^{-1} + (\ln z + 1) \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( \left[ \vphantom{\sum} (n-1) (1-\alpha) z^{-\alpha} + 1 - \alpha z^{\alpha-1} \right] ((n-1) \ln z + z \ln z)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad
- \; ((n-1) z^{1-\alpha} - (n-1) + z - z^{\alpha}) \left[ \vphantom{\sum} (n-1) z^{-1} + \ln z + 1 \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} \ln z + (n-1) (1-\alpha) z^{1-\alpha} \ln z
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (n-1) \ln z + z \ln z - (n-1) \alpha z^{\alpha-1} \ln z - \alpha z^{\alpha} \ln z \right]
\notag \\
& \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + (n-1) z^{1-\alpha} \ln z + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} - (n-1) \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
- (n-1) + (n-1) + z \ln z + z - (n-1) z^{\alpha-1} - z^{\alpha} \ln z - z^{\alpha} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} \ln z + (n-1) (1-\alpha) z^{1-\alpha} \ln z
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (n-1) \ln z - (n-1) \alpha z^{\alpha-1} \ln z - \alpha z^{\alpha} \ln z \right]
\notag \\
& \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + (n-1) z^{1-\alpha} \ln z + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} - (n-1) \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ z - (n-1) z^{\alpha-1} - z^{\alpha} \ln z - z^{\alpha} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} + (n-1) (1-\alpha) z^{1-\alpha} + (n-1) - (n-1) \alpha z^{\alpha-1} - \alpha z^{\alpha} \right] \ln z
\right. \notag \\
& \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + (n-1) z^{1-\alpha} \ln z + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} - (n-1) \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ z - (n-1) z^{\alpha-1} - z^{\alpha} \ln z - z^{\alpha} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} + (n-1) (1-\alpha) z^{1-\alpha} + (n-1) - (n-1) \alpha z^{\alpha-1} - \alpha z^{\alpha} \right] \ln z
\right. \notag \\
& \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + z - (n-1) z^{\alpha-1} - z^{\alpha} + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1) z^{1-\alpha} - (n-1) - z^{\alpha} \right] \ln z \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} + (n-1) (1-\alpha) z^{1-\alpha} + (n-1) - (n-1) \alpha z^{\alpha-1} - \alpha z^{\alpha}
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
- (n-1) z^{1-\alpha} + (n-1) + z^{\alpha} \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + z - (n-1) z^{\alpha-1} - z^{\alpha} + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} + (n-1) z^{1-\alpha} ((1-\alpha) - 1)
\right. \right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 2 (n-1) - (n-1) \alpha z^{\alpha-1} + z^{\alpha} (-\alpha + 1) \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + z - (n-1) z^{\alpha-1} - z^{\alpha} + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} - (n-1) \alpha z^{1-\alpha} + 2 (n-1) - (n-1) \alpha z^{\alpha-1} + (1 - \alpha) z^{\alpha} \right] \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + z - (n-1) z^{\alpha-1} - z^{\alpha} + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} - (n-1) \alpha (z^{1-\alpha} + z^{\alpha-1}) + 2 (n-1) + (1 - \alpha) z^{\alpha} \right] \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} z^{-\alpha} + z - (n-1) z^{\alpha-1} - z^{\alpha} + (n-1) z^{1-\alpha} - (n-1)^{2} z^{-1} \right] \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} - (n-1) \alpha (z^{1-\alpha} + z^{\alpha-1}) + 2 (n-1) + (1 - \alpha) z^{\alpha} \right] \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-\alpha} - z^{-1}) - (n-1) (z^{\alpha-1} - z^{1-\alpha}) + z - z^{\alpha} \right] \right) ,
\label{eq:diff_g_z}
\end{align}
where
\begin{itemize}
\item
(a) follows from the fact that
$
\frac{ \mathrm{d} }{ \mathrm{d} x } \left( \frac{ f(x) }{ g(x) } \right) = \frac{ \left[ \frac{ \mathrm{d} f(x) }{ \mathrm{d} x } \right] g(x) - f(x) \left[ \frac{ \mathrm{d} g(x) }{ \mathrm{d} x } \right] }{ (g(x))^{2} }
$
and
\item
(b) follows from the facts that
\begin{align}
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) \right)
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} (n-1) z^{1-\alpha} - (n-1) + z - z^{\alpha} \right)
\\
& =
\left( \frac{ \partial (n-1) z^{1-\alpha} }{ \partial z } \right) - \left( \frac{ \partial (n-1) }{ \partial z } \right) + \left( \frac{ \partial z }{ \partial z } \right) - \left( \frac{ \partial z^{\alpha} }{ \partial z } \right)
\\
& =
(n-1) \left( \frac{ \partial z^{1-\alpha} }{ \partial z } \right) + \left( \frac{ \partial z }{ \partial z } \right) - \left( \frac{ \partial z^{\alpha} }{ \partial z } \right)
\\
& =
(n-1) (1-\alpha) z^{-\alpha} + 1 - \alpha z^{\alpha-1} ,
\\
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} ((n-1) + z) \ln z \right)
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} (n-1) \ln z + z \ln z \right)
\\
& =
\left( \frac{ \partial (n-1) \ln z }{ \partial z } \right) + \left( \frac{ \partial z \ln z }{ \partial z } \right)
\\
& =
(n-1) \left( \frac{ \partial \ln z }{ \partial z } \right) + \left( \frac{ \partial z \ln z }{ \partial z } \right)
\\
& =
(n-1) z^{-1} + (\ln z + 1) .
\end{align}
\end{itemize}
Substituting $\alpha = 1$ into $\frac{ \partial g(n, z, \alpha) }{ \partial z }$, we readily see that
\begin{align}
&
\left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = 1}
\\
& \quad \overset{\eqref{eq:diff_g_z}}{=}
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} - (n-1) \alpha (z^{1-\alpha} + z^{\alpha-1}) + 2 (n-1) + (1 - \alpha) z^{\alpha} \right] \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-\alpha} - z^{-1}) - (n-1) (z^{\alpha-1} - z^{1-\alpha}) + z - z^{\alpha} \right] \right) \right|_{\alpha = 1}
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( \left[ \vphantom{\sum} - (n-1) (z^{0} + z^{0}) + 2 (n-1) \right] \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-1} - z^{-1}) - (n-1) (z^{0} - z^{0}) + z - z \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( \left[ \vphantom{\sum} - 2 (n-1) + 2 (n-1) \right] \ln z \right)
\\
& \quad =
0 .
\end{align}
On the other hand, substituting $\alpha = \alpha_{1}(n, z) = \frac{ \ln (n-1) }{ \ln z }$ into $\frac{ \partial g(n, z, \alpha) }{ \partial z }$, we have
\begin{align}
&
\left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = \frac{\ln (n-1)}{\ln z}}
\\
& \quad \overset{\eqref{eq:diff_g_z}}{=}
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} - (n-1) \alpha (z^{1-\alpha} + z^{\alpha-1}) + 2 (n-1) + (1 - \alpha) z^{\alpha} \right] \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-\alpha} - z^{-1}) - (n-1) (z^{\alpha-1} - z^{1-\alpha}) + z - z^{\alpha} \right] \right) \right|_{\alpha = \frac{\ln (n-1)}{\ln z}}
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} \left( 1 - \frac{\ln (n-1)}{\ln z} \right) z^{-\frac{\ln (n-1)}{\ln z}} - (n-1) \left( \frac{\ln (n-1)}{\ln z} \right) (z^{1-\frac{\ln (n-1)}{\ln z}} + z^{\frac{\ln (n-1)}{\ln z}-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) + \left( 1 - \frac{\ln (n-1)}{\ln z} \right) z^{\frac{\ln (n-1)}{\ln z}} \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-\frac{\ln (n-1)}{\ln z}} - z^{-1}) - (n-1) (z^{\frac{\ln (n-1)}{\ln z}-1} - z^{1-\frac{\ln (n-1)}{\ln z}}) + z - z^{\frac{\ln (n-1)}{\ln z}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} \left( 1 - \frac{\ln (n-1)}{\ln z} \right) (n-1)^{-1} - (n-1) \left( \frac{\ln (n-1)}{\ln z} \right) ((n-1)^{-1} z + (n-1) z^{-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) + \left( 1 - \frac{\ln (n-1)}{\ln z} \right) (n-1) \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} ((n-1)^{-1} - z^{-1}) - (n-1) ((n-1) z^{-1} - (n-1)^{-1} z) + z - (n-1) \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1) \left( 1 - \frac{\ln (n-1)}{\ln z} \right) - \left( \frac{\ln (n-1)}{\ln z} \right) (z + (n-1)^{2} z^{-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) + \left( 1 - \frac{\ln (n-1)}{\ln z} \right) (n-1) \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1) - (n-1)^{2} z^{-1} - ((n-1)^{2} z^{-1} - z) + z - (n-1) \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1) (\ln z - \ln (n-1)) - (\ln (n-1)) (z + (n-1)^{2} z^{-1})
\right. \right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 2 (n-1) \ln z + (\ln z - \ln (n-1)) (n-1) \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} - (n-1)^{2} z^{-1} - ((n-1)^{2} z^{-1} - z) + z \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1) \ln z - (n-1) \ln (n-1) - z \ln (n-1)
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
- z^{-1} (n-1)^{2} \ln (n-1) + 2 (n-1) \ln z + (n-1) \ln z - (n-1) \ln (n-1) \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} - (n-1)^{2} z^{-1} - (n-1)^{2} z^{-1} + z + z \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \vphantom{\sum} (n-1) \ln z - (n-1) \ln (n-1) - z \ln (n-1)
\right. \notag \\
& \qquad \qquad \qquad \qquad
- z^{-1} (n-1)^{2} \ln (n-1) + 2 (n-1) \ln z + (n-1) \ln z - (n-1) \ln (n-1)
\notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1)^{2} z^{-1} - 2 z \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \vphantom{\sum} 4 (n-1) \ln z - 2 (n-1) \ln (n-1) - z \ln (n-1)
\right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- z^{-1} (n-1)^{2} \ln (n-1) + 2 (n-1)^{2} z^{-1} - 2 z \right)
\\
& \quad =
\frac{ 1 }{ z ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \vphantom{\sum} 4 (n-1) z \ln z - 2 z (n-1) \ln (n-1) - z^{2} \ln (n-1) - (n-1)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right)
\\
& \quad =
\frac{ 1 }{ z ((n-1) + z)^{2} (\ln z)^{2} } \left( \vphantom{\sum} 4 (n-1) z \ln z - ((n-1)^{2} + 2 z (n-1) + z^{2}) \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right) \!\!\!
\\
& \quad =
\frac{ 1 }{ z ((n-1) + z)^{2} (\ln z)^{2} } \left( \vphantom{\sum} 4 (n-1) z \ln z - ((n-1) + z)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right)
\\
& \quad =
\frac{ v(n, z) }{ z ((n-1) + z)^{2} (\ln z)^{2} } ,
\label{eq:v}
\end{align}
where
\begin{align}
v(n, z)
\triangleq
4 (n-1) z \ln z - ((n-1) + z)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} .
\end{align}
Thus, it is enough to check the sign of $v(n, z)$ for $z \in (1, +\infty)$.
Substituting $z = 1$ into $v(n, z)$, we see
\begin{align}
v(n, 1)
& =
\left. \left( \vphantom{\sum} 4 (n-1) z \ln z - ((n-1) + z)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right) \right|_{z = 1}
\\
& =
4 (n-1) \cdot \underbrace{ (1 \ln 1) }_{ = 0 } - ((n-1) + 1)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 \cdot 1^{2}
\\
& =
- n^{2} \ln (n-1) + 2 (n-1)^{2} - 2
\\
& =
- n^{2} \ln (n-1) + (2 n^{2} - 4 n + 2) - 2
\\
& =
- n^{2} \ln (n-1) + 2 n^{2} - 4 n
\\
& =
2 n (n - 2) - n^{2} \ln (n-1)
\\
& =
n \left( \vphantom{\sum} 2 (n - 2) - n \ln (n-1) \right)
\\
& =
n \, w( n ),
\label{eq:v_1}
\end{align}
where
\begin{align}
w( n )
\triangleq
2 (n - 2) - n \ln (n-1) .
\end{align}
Then, we can see that $w( n )$ is strictly decreasing for $n \ge 2$ as follows:
\begin{align}
\frac{ \mathrm{d} w( n ) }{ \mathrm{d} n }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} 2 (n - 2) - n \ln (n-1) \right)
\\
& =
\left( \frac{ \mathrm{d} }{ \mathrm{d} n } (2 (n - 2)) \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n \ln (n-1)) \right)
\\
& =
2 - \left( \left( \frac{ \mathrm{d} n }{ \mathrm{d} n } \right) \ln (n-1) + n \left( \frac{ \mathrm{d} \ln (n-1) }{ \mathrm{d} n } \right) \right)
\\
& =
2 - \ln (n-1) - \frac{ n }{ n-1 }
\\
& =
\frac{ 2 (n - 1) - n }{ n-1 } - \ln (n-1)
\\
& =
\frac{ n - 2 }{ n-1 } - \ln (n-1)
\\
& \overset{\text{(a)}}{<}
\frac{ n - 2 }{ n-1 } - \left( 1 - \frac{ 1 }{ n-1 } \right)
\\
& =
\frac{ n - 2 }{ n-1 } - \left( \frac{ (n - 1) - 1 }{ n-1 } \right)
\\
& =
\frac{ n - 2 }{ n-1 } - \frac{ n - 2 }{ n-1 }
\\
& =
0 ,
\end{align}
where (a) holds for $n > 2$ from \eqref{eq:ITineq}.
Since $w(2) = 0$ and $w(n)$ is strictly decreasing for $n \ge 2$, we have
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} w(n) \right)
& =
\begin{cases}
0
& \mathrm{if} \ n = 2, \\
-1
& \mathrm{if} \ n \ge 3 ;
\end{cases}
\end{align}
and therefore, we obtain
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} v(n, 1) \right)
& \overset{\eqref{eq:v_1}}{=}
\operatorname{sgn} \! \left( \vphantom{\sum} n \, w(n) \right)
\\
& =
-1
\label{ineq:v_1}
\end{align}
for $n \ge 3$.
Moreover, substituting $z = n-1$ into $v(n, z)$, we readily see that
\begin{align}
v(n, n-1)
& =
\left. \left( \vphantom{\sum} 4 (n-1) z \ln z - ((n-1) + z)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right) \right|_{z = n-1}
\\
& =
4 (n-1) (n-1) \ln (n-1) - ((n-1) + (n-1))^{2} \ln (n-1) + 2 (n-1)^{2} - 2 (n-1)^{2}
\\
& =
4 (n-1)^{2} \ln (n-1) - 4 (n-1)^{2} \ln (n-1)
\\
& =
0 .
\label{eq:v_n-1}
\end{align}
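As an illustrative numerical sanity check of \eqref{eq:v_1} and \eqref{eq:v_n-1} (not part of the proof; a minimal Python sketch, assuming a standard Python~3 environment), one can evaluate $v(n, z)$ at $z = 1$ and $z = n-1$ directly:
\begin{verbatim}
import math

def v(n, z):
    # v(n, z) = 4 (n-1) z ln z - ((n-1) + z)^2 ln(n-1)
    #           + 2 (n-1)^2 - 2 z^2
    return (4 * (n - 1) * z * math.log(z)
            - ((n - 1) + z) ** 2 * math.log(n - 1)
            + 2 * (n - 1) ** 2 - 2 * z ** 2)

for n in [3, 4, 5, 10, 100]:
    # expect: v(n, 1) < 0 and v(n, n-1) ~ 0 (up to rounding)
    print(n, v(n, 1.0), v(n, float(n - 1)))
\end{verbatim}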
Next, we calculate the derivatives of $v(n, z)$ with respect to $z$ as follows:
\begin{align}
\frac{ \partial v(n, z) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} 4 (n-1) z \ln z - ((n-1) + z)^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} 4 (n-1) z \ln z - (n-1)^{2} \ln (n-1) - 2 z (n-1) \ln (n-1) - z^{2} \ln (n-1) + 2 (n-1)^{2} - 2 z^{2} \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} 4 (n-1) z \ln z - 2 z (n-1) \ln (n-1) - z^{2} \ln (n-1) - 2 z^{2} \right)
\\
& =
\left( \frac{ \partial }{ \partial z } 4 (n-1) z \ln z \right) - \left( \frac{ \partial }{ \partial z } 2 z (n-1) \ln (n-1) \right) - \left( \frac{ \partial }{ \partial z } z^{2} \ln (n-1) \right) - \left( \frac{ \partial }{ \partial z } 2 z^{2} \right)
\\
& =
4 (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } z \ln z \right) - 2 (n-1) \ln (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } z \right) - \ln (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } z^{2} \right) - 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} z } z^{2} \right)
\\
& =
4 (n-1) (\ln z + 1) - 2 (n-1) \ln (n-1) - \ln (n-1) (2 z) - 2 (2 z)
\\
& =
4 (n-1) \ln z + 4 (n-1) - 2 (n-1) \ln (n-1) - 2 z \ln (n-1) - 4 z ,
\\
\frac{ \partial^{2} v(n, z) }{ \partial z^{2} }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} 4 (n-1) \ln z + 4 (n-1) - 2 (n-1) \ln (n-1) - 2 z \ln (n-1) - 4 z \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} 4 (n-1) \ln z - 2 z \ln (n-1) - 4 z \right)
\\
& =
\left( \frac{ \partial }{ \partial z } 4 (n-1) \ln z \right) - \left( \frac{ \partial }{ \partial z } 2 z \ln (n-1) \right) - \left( \frac{ \partial }{ \partial z } 4 z \right)
\\
& =
4 (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } \ln z \right) - 2 (\ln (n-1)) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } z \right) - 4 \left( \frac{ \mathrm{d} }{ \mathrm{d} z } z \right)
\\
& =
4 (n-1) \left( \frac{1}{z} \right) - 2 \ln (n-1) - 4
\\
& =
\frac{ 4 (n-1) }{ z } - 2 \ln (n-1) - 4 ,
\\
\frac{ \partial^{3} v(n, z) }{ \partial z^{3} }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} \frac{ 4 (n-1) }{ z } - 2 \ln (n-1) - 4 \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} \frac{ 4 (n-1) }{ z } \right)
\\
& =
4 (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } \frac{ 1 }{ z } \right)
\\
& =
4 (n-1) \left( - \frac{1}{z^{2}} \right)
\\
& =
- \frac{4 (n-1)}{z^{2}}
\\
& <
0
\qquad (\mathrm{for} \ z \in (0, +\infty)) .
\label{eq:diff3_v}
\end{align}
It follows from \eqref{eq:diff3_v} that $\frac{ \partial^{2} v(n, z) }{ \partial z^{2} }$ is strictly decreasing for $z \in (0, +\infty)$.
Then, we can solve the root of the equation $\frac{ \partial^{2} v(n, z) }{ \partial z^{2} } = 0$ with respect to $z$ as follows:
\begin{align}
&&
\frac{ \partial^{2} v(n, z) }{ \partial z^{2} }
& =
0
\\
& \iff &
\frac{ 4 (n-1) }{ z } - 2 \ln (n-1) - 4
& =
0
\\
& \iff &
\frac{ 4 (n-1) }{ z }
& =
2 \ln (n-1) + 4
\\
& \iff &
\frac{ 2 (n-1) }{ z }
& =
\ln (n-1) + 2
\\
& \iff &
z
& =
\frac{ 2 (n-1) }{ \ln (n-1) + 2 } .
\label{eq:root_v2}
\end{align}
Note that
\begin{align}
\frac{ 2 (n-1) }{ \ln (n-1) + 2 }
& \overset{\text{(a)}}{<}
\frac{ 2 (n-1) }{ \left(1 - \frac{1}{n-1}\right) + 2 }
\\
& =
\frac{ 2 (n-1) }{ \frac{(n-1) - 1}{n-1} + 2 }
\\
& =
\frac{ 2 (n-1) }{ \frac{(n-2) + 2(n-1)}{n-1} }
\\
& =
\frac{ 2 (n-1)^{2} }{ (n-2) + 2(n-1) }
\\
& =
\frac{ 2 (n-1)^{2} }{ 3(n-1) - 1 }
\\
& =
(n-1) \underbrace{ \frac{ 2 (n-1) }{ 3(n-1) - 1 } }_{ < 1 \ \mathrm{for} \ n \ge 3 }
\\
& <
n-1
\label{ineq:root_v2}
\end{align}
for $n \ge 3$, where (a) holds for $n \ge 3$ from \eqref{eq:ITineq}.
Since
\begin{itemize}
\item
$\frac{ \partial^{2} v(n, z) }{ \partial z^{2} }$ is strictly decreasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff3_v}) and
\item
$\left. \frac{ \partial^{2} v(n, z) }{ \partial z^{2} } \right|_{z = \frac{ 2(n-1) }{ \ln (n-1) + 2 }} = 0$ (see Eq. \eqref{eq:root_v2}),
\end{itemize}
we have
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} v(n, z) }{ \partial z^{2} } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, \frac{ 2(n-1) }{ \ln (n-1) + 2 }) , \\
0
& \mathrm{if} \ z = \frac{ 2(n-1) }{ \ln (n-1) + 2 } , \\
-1
& \mathrm{if} \ z \in (\frac{ 2(n-1) }{ \ln (n-1) + 2 }, +\infty) .
\end{cases}
\label{eq:sign_v2}
\end{align}
It follows from \eqref{eq:sign_v2} that
\begin{itemize}
\item
$\frac{ \partial v(n, z) }{ \partial z }$ is strictly increasing for $z \in (0, \frac{ 2(n-1) }{ \ln (n-1) + 2 }]$ and
\item
$\frac{ \partial v(n, z) }{ \partial z }$ is strictly decreasing for $z \in [\frac{ 2(n-1) }{ \ln (n-1) + 2 }, +\infty)$.
\end{itemize}
We readily see that
\begin{align}
\left. \frac{ \partial v(n, z) }{ \partial z } \right|_{z = n-1}
& =
\left. \left( \vphantom{\sum} 4 (n-1) \ln z + 4 (n-1) - 2 (n-1) \ln (n-1) - 2 z \ln (n-1) - 4 z \right) \right|_{z = n-1}
\\
& =
4 (n-1) \ln (n-1) + 4 (n-1) - 2 (n-1) \ln (n-1) - 2 (n-1) \ln (n-1) - 4 (n-1)
\\
& =
0 .
\label{eq:root_v1_n-1}
\end{align}
We further derive another solution of the equation $\frac{ \partial v(n, z) }{ \partial z } = 0$ with respect to $z$ as follows:
\begin{align}
&&
\frac{ \partial v(n, z) }{ \partial z }
& =
0
\\
& \iff &
4 (n-1) \ln z + 4 (n-1) - 2 (n-1) \ln (n-1) - 2 z \ln (n-1) - 4 z
& =
0
\\
& \iff &
2 (n-1) \ln z + 2 (n-1) - (n-1) \ln (n-1) - z \ln (n-1) - 2 z
& =
0
\\
& \iff &
2 (n-1) \ln z + 2 (n-1) - (n-1) \ln (n-1) - z (\ln (n-1) + 2)
& =
0
\\
& \iff &
2 (n-1) \ln z + 2 (n-1) - (n-1) \ln (n-1)
& =
z (\ln (n-1) + 2)
\\
& \iff &
2 \ln z + 2 - \ln (n-1)
& =
z \left( \frac{ \ln (n-1) + 2 }{ n-1 } \right)
\\
& \iff &
\ln z + 1 - \frac{1}{2} \ln (n-1)
& =
z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)
\\
& \iff &
\mathrm{e}^{\ln z + 1 - \frac{1}{2} \ln (n-1)}
& =
\mathrm{e}^{z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
\\
& \iff &
\mathrm{e}^{\ln z} \cdot \mathrm{e}^{1} \cdot \mathrm{e}^{-\frac{1}{2} \ln (n-1)}
& =
\mathrm{e}^{z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
\\
& \iff &
z \, \mathrm{e} \, (n-1)^{-\frac{1}{2}}
& =
\mathrm{e}^{z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
\\
& \iff &
z \, \mathrm{e}^{-z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
& =
\mathrm{e}^{-1} \, (n-1)^{\frac{1}{2}}
\\
& \iff &
- z \, \mathrm{e}^{-z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
& =
- \mathrm{e}^{-1} \, (n-1)^{\frac{1}{2}}
\\
& \iff &
- z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right) \mathrm{e}^{-z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
& =
- \mathrm{e}^{-1} \, (n-1)^{\frac{1}{2}} \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)
\\
& \iff &
- z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right) \mathrm{e}^{-z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
& =
- \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} }
\label{eq:both_side_>=-1} \\
& \overset{\text{(a)}}{\iff} &
W_{0} \! \left( - z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right) \mathrm{e}^{-z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)} \right)
& =
W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} } \right)
\\
& \overset{\text{(b)}}{\iff} &
- z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)
& =
W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} } \right)
\\
& \iff &
z
& =
- \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 } ,
\label{eq:root_v1}
\end{align}
where
\begin{itemize}
\item
$W_{0}( \cdot )$ denotes the Lambert $W_{0}$ function, i.e., the inverse function of $f(x) = x \, \mathrm{e}^{x}$ for $x \ge -1$ and
\item
(a) holds for $n \ge 2$ and $z \le \frac{ 2(n-1) }{ \ln (n-1) + 2 }$ since the domain of $W_{0}( \cdot )$ is the interval $[-\frac{1}{\mathrm{e}}, +\infty)$ and both sides of \eqref{eq:both_side_>=-1} are at least $- \frac{1}{\mathrm{e}}$ for $n \ge 2$ and $z \le \frac{ 2(n-1) }{ \ln (n-1) + 2 }$, i.e.,
\begin{align}
(\text{the left-hand side of \eqref{eq:both_side_>=-1}})
& =
- z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right) \mathrm{e}^{-z \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
\\
& \overset{\text{(c)}}{\ge}
- \left( \frac{ 2(n-1) }{ \ln (n-1) + 2 } \right) \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right) \mathrm{e}^{-\left( \frac{ 2(n-1) }{ \ln (n-1) + 2 } \right) \left( \frac{ \ln (n-1) + 2 }{ 2(n-1) } \right)}
\\
& =
- \, \mathrm{e}^{-1}
\\
& =
- \frac{1}{\mathrm{e}}
\\
(\text{the right-hand side of \eqref{eq:both_side_>=-1}})
& =
- \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} }
\\
& \overset{\text{(d)}}{\ge}
- \frac{ \ln_{(\frac{1}{2})} (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} }
\\
& =
- \frac{ ( 2 \sqrt{n-1} - 2 ) +2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} }
\\
& =
- \frac{ 2 \sqrt{n-1} }{ 2 \, \mathrm{e} \, \sqrt{n-1} }
\\
& =
- \frac{1}{\mathrm{e}} ,
\end{align}
where
\begin{itemize}
\item
(c) follows from the fact that $f(x) = x \, \mathrm{e}^{x}$ is strictly increasing for $x \ge -1$ and
\item
(d) follows by Lemma \ref{lem:IT_ineq},
\end{itemize}
\item
(b) holds for $z \le \frac{ 2(n-1) }{ \ln (n-1) + 2 }$ since $W_{0}( x \, \mathrm{e}^{x} ) = x$ holds for $x \ge -1$.
\end{itemize}
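The closed form \eqref{eq:root_v1} can be checked numerically; the following illustrative Python sketch (assuming SciPy is available for the principal branch \texttt{lambertw}; not part of the proof) evaluates $\frac{ \partial v(n, z) }{ \partial z }$ at this root:
\begin{verbatim}
import math
from scipy.special import lambertw

def dv_dz(n, z):
    # partial derivative of v(n, z) with respect to z
    return (4 * (n - 1) * math.log(z) + 4 * (n - 1)
            - 2 * (n - 1) * math.log(n - 1)
            - 2 * z * math.log(n - 1) - 4 * z)

for n in [3, 4, 10, 50]:
    arg = -(math.log(n - 1) + 2) / (2 * math.e * math.sqrt(n - 1))
    z_root = (-2 * (n - 1) * lambertw(arg, 0).real
              / (math.log(n - 1) + 2))
    print(n, z_root, dv_dz(n, z_root))  # last value should be ~0
\end{verbatim}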
Since
\begin{itemize}
\item
$W_{0}( x )$ is strictly increasing for $x \ge - \frac{1}{\mathrm{e}}$,
\item
$W_{0}( - \frac{1}{\mathrm{e}} ) = -1$,
\item
$W_{0}( 0 ) = 0$,
\item
$- \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} }$ is strictly increasing for $n \ge 2$,
\item
$\left. - \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} } \right|_{n = 2} = - \frac{1}{\mathrm{e}}$, and
\item
$\lim_{n \to +\infty} \left( - \frac{ \ln (n-1) + 2 }{ 2 \, \mathrm{e} \, \sqrt{n-1} } \right) = 0$,
\end{itemize}
we see that
\begin{align}
-1 < W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) < 0
\end{align}
for $n \ge 3$;
and therefore, we can see that
\begin{align}
0
<
- \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }
<
\frac{ 2 (n-1) }{ \ln (n-1) + 2 }
\label{ineq:root_v1}
\end{align}
for $n \ge 3$.
Since
\begin{itemize}
\item
$\frac{ \partial v(n, z) }{ \partial z }$ is strictly increasing for $z \in (0, \frac{ 2(n-1) }{ \ln (n-1) + 2 }]$ (see Eq. \eqref{eq:sign_v2}),
\item
$\frac{ \partial v(n, z) }{ \partial z }$ is strictly decreasing for $z \in [\frac{ 2(n-1) }{ \ln (n-1) + 2 }, +\infty)$ (see Eq. \eqref{eq:sign_v2}),
\item
$0 < - \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 } < \frac{ 2 (n-1) }{ \ln (n-1) + 2 } < n-1$ for $n \ge 3$ (see Eqs. \eqref{ineq:root_v2} and \eqref{ineq:root_v1}),
\item
$\left. \frac{ \partial v(n, z) }{ \partial z } \right|_{z = - \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }} = 0$ (see Eq. \eqref{eq:root_v1}), and
\item
$\left. \frac{ \partial v(n, z) }{ \partial z } \right|_{z = n-1} = 0$ (see Eq. \eqref{eq:root_v1_n-1}),
\end{itemize}
we obtain
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial v(n, z) }{ \partial z } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (- \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }, n-1) , \\
0
& \mathrm{if} \ z \in \{ - \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }, n-1 \} , \\
-1
& \mathrm{if} \ z \in (0, - \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }) \cup (n-1, +\infty) .
\end{cases}
\label{eq:diff1_v}
\end{align}
Since
\begin{itemize}
\item
$v(n, 1) < 0$ for $n \ge 3$ (see Eq. \eqref{eq:v_1}),
\item
$v(n, n-1) = 0$ for $n \ge 3$ (see Eq. \eqref{eq:v_n-1}),
\item
$0 < - \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 } < n-1$ for $n \ge 3$ (see Eqs. \eqref{ineq:root_v2} and \eqref{ineq:root_v1}),
\item
the following monotonicity properties hold (see Eq. \eqref{eq:diff1_v}):
\begin{itemize}
\item
$v(n, z)$ is strictly decreasing for $z \in (0, - \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }]$,
\item
$v(n, z)$ is strictly increasing for $z \in [- \frac{ 2 (n-1) W_{0} \! \left( - \frac{ \ln (n-1) + 2 }{ 2 \mathrm{e} \sqrt{n-1} } \right) }{ \ln (n-1) + 2 }, n-1]$, and
\item
$v(n, z)$ is strictly decreasing for $z \in [n-1, +\infty)$,
\end{itemize}
\end{itemize}
we have
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} v(n, z) \right)
& =
\begin{cases}
0
& \mathrm{if} \ z = n-1 , \\
-1
& \mathrm{if} \ z \in [1, n-1) \cup (n-1, +\infty) .
\end{cases}
\label{ineq:v}
\end{align}
Therefore, we obtain
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = \frac{ \ln (n-1) }{ \ln z }} \right)
& \overset{\eqref{eq:v}}{=}
\underbrace{ \operatorname{sgn} \! \left( \left( \frac{ 1 }{ z ((n-1) + z)^{2} (\ln z)^{2} } \right) \right) }_{ = 1 \ \mathrm{for} \ z \in (0, 1) \cup (1, +\infty) } \, \cdot \; \operatorname{sgn} \! \left( \vphantom{\sum} v(n, z) \right)
\\
& \overset{\eqref{ineq:v}}{=}
\begin{cases}
0
& \mathrm{if} \ z = n-1 , \\
-1
& \mathrm{if} \ z \in (1, n-1) \cup (n-1, +\infty) .
\end{cases}
\end{align}
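As a final illustrative check of \eqref{ineq:v} (again not part of the proof; plain Python), scanning a grid of $z$-values confirms that $v(n, z)$ is negative except at $z = n-1$:
\begin{verbatim}
import math

def v(n, z):
    return (4 * (n - 1) * z * math.log(z)
            - ((n - 1) + z) ** 2 * math.log(n - 1)
            + 2 * (n - 1) ** 2 - 2 * z ** 2)

for n in [3, 5, 20]:
    zs = [1.0 + 0.1 * k for k in range(1, 500)]
    violations = [z for z in zs
                  if abs(z - (n - 1)) > 1e-9 and v(n, z) >= 0.0]
    # expect: v(n, n-1) ~ 0 and an empty list of violations
    print(n, v(n, float(n - 1)), violations[:3])
\end{verbatim}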
That concludes the proof of Lemma \ref{lem:diff_g_z}.
\end{IEEEproof}
\if0
\section{Proof of Lemma \ref{lem:diff_g_z_a0}}
\label{app:diff_g_z_a0}
\begin{IEEEproof}[Proof of Lemma \ref{lem:diff_g_z_a0}]
We first verify $g(n, z, \alpha)$ with $\alpha = 0$ as follows:
\begin{align}
g(n, z, 0)
& =
\left. \left( \vphantom{\sum} (\alpha-1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right) \right|_{\alpha = 0}
\\
& =
(-1) + \frac{ ((n-1) + z^{0}) (z^{1} - 1) }{ ((n-1) + z) \ln z }
\\
& =
-1 + \frac{ ((n-1) + 1) (z - 1) }{ ((n-1) + z) \ln z }
\\
& =
-1 + \frac{ n (z - 1) }{ ((n-1) + z) \ln z } .
\end{align}
Then, the first-order derivative of $g(n, z, 0)$ with respect to $z$ is
\begin{align}
\frac{ \partial g(n, z, 0) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( -1 + \frac{ n (z - 1) }{ ((n-1) + z) \ln z } \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \frac{ n (z - 1) }{ ((n-1) + z) \ln z } \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( \left( \frac{ \partial (n (z - 1)) }{ \partial z } \right) ((n-1) + z) \ln z - n (z - 1) \left( \frac{ \partial (((n-1) + z) \ln z) }{ \partial z } \right) \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( n ((n-1) + z) \ln z - n (z - 1) \left( (n-1) \frac{ \mathrm{d} (\ln z) }{ \mathrm{d} z } + \frac{ \mathrm{d} (z \ln z) }{ \mathrm{d} z } \right) \right)
\\
& =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} } \left( n ((n-1) + z) \ln z - n (z - 1) \left( (n-1) \frac{ 1 }{ z } + (\ln z + 1) \right) \right)
\\
& =
\frac{ n }{ z ((n-1) + z)^{2} (\ln z)^{2} } \left( \vphantom{\sum} ((n-1) + z) z \ln z - (z - 1) (n-1) - z (z - 1) (\ln z + 1) \right)
\\
& =
\frac{ n }{ z ((n-1) + z)^{2} (\ln z)^{2} } \left( \vphantom{\sum} (n-1) z \ln z + z^{2} \ln z - (n z - n - z + 1) - (z^{2} \ln z + z^{2} - z \ln z - z) \right)
\\
& =
\frac{ n }{ z ((n-1) + z)^{2} (\ln z)^{2} } \left( \vphantom{\sum} n z \ln z - n z + n + z - 1 - z^{2} + z \right)
\\
& =
\frac{ n }{ z ((n-1) + z)^{2} (\ln z)^{2} } \left( \vphantom{\sum} n z \ln z - n z + n - z^{2} + 2 z - 1 \right)
\\
& =
\frac{ n \, t(n, z) }{ z ((n-1) + z)^{2} (\ln z)^{2} } ,
\label{eq:diff_gz_alpha0}
\end{align}
where $t(n, z) \triangleq n z \ln z - n z + n - z^{2} + 2 z - 1$.
Since
\begin{align}
\frac{ n }{ z ((n-1) + z)^{2} (\ln z)^{2} }
>
0
\end{align}
for $n \ge 2$ and $z \in (0, 1) \cup (1, +\infty)$, it is enough to check the sign of $t(n, z)$ for $n \ge 3$ and $z \in (0, 1) \cup (1, +\infty)$ rather than the right-hand side of \eqref{eq:diff_gz_alpha0}.
To analyze $t(n, z)$, we calculate the derivatives of $t(n, z)$ with respect to $z$ as follows:
\begin{align}
\frac{ \partial t(n, z) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} n z \ln z - n z + n - z^{2} + 2 z - 1 \right)
\\
& =
n \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z \ln z) \right) - n \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z) \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z^{2}) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z) \right)
\\
& =
n (\ln z + 1) - n - 2 z + 2 ,
\label{eq:diff1_t} \\
\frac{ \partial^{2} t(n, z) }{ \partial z^{2} }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} n (\ln z + 1) - n - 2 z + 2 \right)
\\
& =
n \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (\ln z) \right) - 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z) \right)
\\
& =
\frac{ n }{ z } - 2 ,
\label{eq:diff2_t} \\
\frac{ \partial^{3} t(n, z) }{ \partial z^{3} }
& =
\frac{ \partial }{ \partial z } \left( \frac{ n }{ z } - 2 \right)
\\
& =
n \left( \frac{ \mathrm{d} }{ \mathrm{d} z } \left( \frac{1}{z} \right) \right)
\\
& =
- \frac{ n }{ z^{2} }
\\
& <
0
\qquad (\mathrm{for} \ z \in (0, +\infty)) .
\label{eq:diff3_t}
\end{align}
It follows from \eqref{eq:diff3_t} that $\frac{ \partial^{2} t(n, z) }{ \partial z^{2} }$ is strictly decreasing for $z \in (0, +\infty)$.
Then, we can derive the solution of the equation $\frac{ \partial^{2} t(n, z) }{ \partial z^{2} } = 0$ with respect to $z \in (0, +\infty)$ as follows:
\begin{align}
&&
\frac{ \partial^{2} t(n, z) }{ \partial z^{2} }
& =
0
\\
& \overset{\eqref{eq:diff2_t}}{\iff} &
\frac{ n }{ z } - 2
& =
0
\\
& \iff &
\frac{ n }{ z }
& =
2
\\
& \iff &
n
& =
2 z
\\
& \iff &
z
& =
\frac{ n }{ 2 } .
\end{align}
Hence, we get
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} t(n, z) }{ \partial z^{2} } \right)
& =
\begin{cases}
1
& \mathrm{if} \ z \in (0, \frac{n}{2}) , \\
0
& \mathrm{if} \ z = \frac{n}{2} , \\
-1
& \mathrm{if} \ z \in (\frac{n}{2}, +\infty) ,
\end{cases}
\label{eq:diff2_t_sign}
\end{align}
which implies that
\begin{itemize}
\item
$\frac{ \partial t(n, z) }{ \partial z }$ is strictly increasing for $z \in (0, \frac{n}{2}]$ and
\item
$\frac{ \partial t(n, z) }{ \partial z }$ is strictly decreasing for $z \in [\frac{n}{2}, +\infty)$.
\end{itemize}
Substituting $z = 1$ into $\frac{ \partial t(n, z) }{ \partial z }$, we readily see that
\begin{align}
\left. \frac{ \partial t(n, z) }{ \partial z } \right|_{z = 1}
& \overset{\eqref{eq:diff1_t}}{=}
\left. \left( \vphantom{\sum} n (\ln z + 1) - n - 2 z + 2 \right) \right|_{z = 1}
\\
& =
n (\ln 1 + 1) - n - 2 + 2
\\
& =
n - n - 2 + 2
\\
& =
0 .
\label{eq:diff1_t_z=1}
\end{align}
Moreover, we also derive another solution of the equation $\frac{ \partial t(n, z) }{ \partial z } = 0$ with respect to $z$ as follows:
\begin{align}
&&
\frac{ \partial t(n, z) }{ \partial z }
& =
0
\\
& \overset{\eqref{eq:diff1_t}}{\iff} &
n (\ln z + 1) - n - 2 z + 2
& =
0
\\
& \iff &
n (\ln z + 1) - n
& =
2 z - 2
\\
& \iff &
n \ln z
& =
2 (z - 1)
\\
& \iff &
\ln z
& =
\frac{ 2 (z - 1) }{ n }
\\
& \iff &
\mathrm{e}^{\ln z}
& =
\mathrm{e}^{\frac{ 2 (z - 1) }{ n }}
\\
& \iff &
z
& =
\mathrm{e}^{\frac{ 2 z }{ n }} \, \mathrm{e}^{-\frac{ 2 }{ n }}
\\
& \iff &
z \, \mathrm{e}^{- \frac{ 2 z }{ n }}
& =
\mathrm{e}^{-\frac{ 2 }{ n }}
\\
& \iff &
- \frac{ 2 z }{ n } \, \mathrm{e}^{- \frac{ 2 z }{ n }}
& =
- \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }}
\label{eq:both_side_isin} \\
& \overset{\text{(a)}}{\iff} &
W_{-1} \! \left( - \frac{ 2 z }{ n } \, \mathrm{e}^{- \frac{ 2 z }{ n }} \right)
& =
W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)
\\
& \overset{\text{(b)}}{\iff} &
- \frac{ 2 z }{ n }
& =
W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)
\label{eq:lambert-1} \\
& \iff &
z
& =
- \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right) ,
\label{eq:diff1_t_root}
\end{align}
where
\begin{itemize}
\item
$W_{-1}( \cdot )$ denotes the Lambert $W_{-1}$ function, i.e., the inverse function of $f( x ) = x \, \mathrm{e}^{x}$ for $x \le -1$,
\item
(a) holds for $n \ge 2$ and $z > 0$ since the domain of $W_{-1}( \cdot )$ is the interval $[-\frac{1}{\mathrm{e}}, 0)$ and both sides of \eqref{eq:both_side_isin} are in the interval $[-\frac{1}{\mathrm{e}}, 0)$ for $n \ge 2$ and $z > 0$, which follows from the fact that $- \frac{1}{\mathrm{e}} \le x \, \mathrm{e}^{x} < 0$ for $x < 0$ with equality if and only if $x = -1$, and
\item
(b) holds for $n \ge 2$ and $z \ge \frac{n}{2}$ since $W_{-1}( x \, \mathrm{e}^{x} ) = x$ for $x \le -1$ and
\begin{align}
(\text{the left-hand side of \eqref{eq:lambert-1}})
& =
- \frac{ 2 z }{ n }
\\
& \le
- \frac{ 2 }{ n } \left( \frac{n}{2} \right)
\\
& =
-1
\end{align}
for $n \ge 2$ and $z \ge \frac{n}{2}$.
\end{itemize}
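The root \eqref{eq:diff1_t_root} can likewise be verified numerically; a minimal illustrative Python sketch (assuming SciPy for the branch $W_{-1}$; not part of the proof) is:
\begin{verbatim}
import math
from scipy.special import lambertw

def dt_dz(n, z):
    # partial t / partial z = n (ln z + 1) - n - 2 z + 2
    return n * (math.log(z) + 1) - n - 2 * z + 2

for n in [3, 4, 10, 50]:
    z_root = -(n / 2) * lambertw(-(2 / n) * math.exp(-2 / n), -1).real
    print(n, z_root, dt_dz(n, z_root))  # last value should be ~0
\end{verbatim}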
Thus, since
\begin{itemize}
\item
for $n \ge 2$, the following monotonicity properties hold (see Eq. \eqref{eq:diff2_t_sign}):
\begin{itemize}
\item
$\frac{ \partial t(n, z) }{ \partial z }$ is strictly increasing for $z \in (0, \frac{n}{2}]$ and
\item
$\frac{ \partial t(n, z) }{ \partial z }$ is strictly decreasing for $z \in [\frac{n}{2}, +\infty)$,
\end{itemize}
\item
$\left. \frac{ \partial t(n, z) }{ \partial z } \right|_{z = 1} = 0$ for $n \ge 2$ (see Eq. \eqref{eq:diff1_t_z=1}),
\item
$\left. \frac{ \partial t(n, z) }{ \partial z } \right|_{z = - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)} = 0$ for $n \ge 2$ (see Eq. \eqref{eq:diff1_t_root}), and
\item
$1 \le \frac{n}{2} \le - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)$ for $n \ge 2$,
\end{itemize}
we obtain
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial t(n, z) }{ \partial z } \right)
& =
\begin{cases}
1
& \mathrm{if} \ z \in (1, - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)) , \\
0
& \mathrm{if} \ z \in \{ 1, - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right) \} , \\
-1
& \mathrm{if} \ z \in (0, 1) \cup (- \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right), +\infty)
\end{cases}
\label{eq:diff1_t_sign}
\end{align}
for $n \ge 2$, which implies that
\begin{itemize}
\item
$t(n, z)$ is strictly decreasing for $z \in (0, 1]$,
\item
$t(n, z)$ is strictly increasing for $z \in [1, - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)]$, and
\item
$t(n, z)$ is strictly decreasing for $z \in [- \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right), +\infty)$.
\end{itemize}
Note that, substituting $n = 2$ into the right-hand side of \eqref{eq:diff1_t_root}, we see
\begin{align}
\left. \left( - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right) \right) \right|_{n = 2}
& =
- \frac{ 2 }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ 2 } \, \mathrm{e}^{-\frac{ 2 }{ 2 }} \right)
\\
& =
- W_{-1} \! \left( - \frac{1}{\mathrm{e}} \right)
\\
& =
1 .
\end{align}
Using the above monotonicity of $t(n, z)$, we consider the sign of $t(n, z)$ for $n \ge 3$ and $z \in (0, +\infty)$.
Substituting $z = 1$ into $t(n, z)$, we get
\begin{align}
t(n, 1)
& =
\left. \left( \vphantom{\sum} n (z \ln z - z + 1) - (z - 1)^{2} \right) \right|_{z = 1}
\\
& =
n (1 \ln 1 - 1 + 1) - (1 - 1)^{2}
\\
& =
0 .
\label{eq:t_z=1}
\end{align}
Since
\begin{itemize}
\item
$t(n, z)$ is strictly decreasing for $z \in (0, 1]$ (see Eq. \eqref{eq:diff1_t_sign}),
\item
$t(n, z)$ is strictly increasing for $z \in [1, - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)]$ (see also Eq. \eqref{eq:diff1_t_sign}), and
\item
$t(n, 1) = 0$ (see Eq \eqref{eq:t_z=1}),
\end{itemize}
we can see that
\begin{align}
t(n, z) \ge 0
\label{eq:t_sign_1}
\end{align}
for $z \in (0, - \frac{ n }{ 2 } \, W_{-1} \! \left( - \frac{ 2 }{ n } \, \mathrm{e}^{-\frac{ 2 }{ n }} \right)]$ with equality if and only if $z = 1$.
We next check the sign of $t(n, z)$ with $z = (n-1) \sqrt{n-1}$ for $n \ge 3$.
Substituting $z = (n-1) \sqrt{n-1}$ into $t(n, z)$, we see
\begin{align}
t(n, (n-1) \sqrt{n-1})
& =
\left. \left( \vphantom{\sum} n (z \ln z - z + 1) - (z - 1)^{2} \right) \right|_{z = (n-1)^{\frac{3}{2}}}
\\
& =
n (((n-1)^{\frac{3}{2}}) \ln ((n-1)^{\frac{3}{2}}) - ((n-1)^{\frac{3}{2}}) + 1) - ((n-1)^{\frac{3}{2}} - 1)^{2}
\\
& =
n \left( \frac{3}{2} (n-1)^{\frac{3}{2}} \ln (n-1) - (n-1)^{\frac{3}{2}} + 1 \right) - ((n-1)^{3} - 2 (n-1)^{\frac{3}{2}} + 1)
\\
& =
\frac{3}{2} n (n-1)^{\frac{3}{2}} \ln (n-1) - n (n-1)^{\frac{3}{2}} + n - (n-1)^{3} + 2 (n-1)^{\frac{3}{2}} - 1
\\
& =
\frac{3}{2} n (n-1)^{\frac{3}{2}} \ln (n-1) - n (n-1)^{\frac{3}{2}} - (n-1)^{3} + 2 (n-1)^{\frac{3}{2}} + (n-1)
\\
& =
(n-1) \left( \vphantom{\sum} n (n-1)^{\frac{1}{2}} \ln (n-1) - n (n-1)^{\frac{1}{2}} - (n-1)^{2} + 2 (n-1)^{\frac{1}{2}} + 1 \right)
\\
& =
(n-1) \left( \vphantom{\sum} n (n-1)^{\frac{1}{2}} \ln (n-1) - (n - 2) (n-1)^{\frac{1}{2}} - (n-1)^{2} + 1 \right)
\\
& =
(n-1) \left( \vphantom{\sum} n (n-1)^{\frac{1}{2}} \ln (n-1) - (n - 2) (n-1)^{\frac{1}{2}} - (n^{2} - 2n + 1) + 1 \right)
\\
& =
(n-1) \left( \vphantom{\sum} n (n-1)^{\frac{1}{2}} \ln (n-1) - (n - 2) (n-1)^{\frac{1}{2}} - (n^{2} - 2n) \right)
\\
& =
(n-1) \left( \vphantom{\sum} n (n-1)^{\frac{1}{2}} \ln (n-1) - (n - 2) (n-1)^{\frac{1}{2}} - n (n - 2) \right)
\\
& \overset{\text{(a)}}{\le}
(n-1) \left( \vphantom{\sum} n (n-1)^{\frac{1}{2}} \ln (n-1) - (\ln(n-1)) (n-1)^{\frac{1}{2}} - n (n - 2) \right)
\\
& =
(n-1) \left( \vphantom{\sum} (n-1) (n-1)^{\frac{1}{2}} \ln (n-1) - n (n - 2) \right)
\\
& =
(n-1) \left( \vphantom{\sum} (n-1)^{\frac{3}{2}} \ln (n-1) - n (n - 2) \right)
\\
& =
(n-1) \, t_{1}( n ) ,
\label{eq:t_sqrt}
\end{align}
where
\begin{itemize}
\item
$t_{1}(n) \triangleq (n-1)^{\frac{3}{2}} \ln (n-1) - n (n - 2)$ and
\item
the equality (a) holds if and only if $n = 2$ since $\ln x \le x-1$ for $x > 0$ with equality if and only if $x = 1$.
\end{itemize}
Since $n-1 > 0$ for $n \ge 2$, it follows from \eqref{eq:t_sqrt} that $t_{1}( n ) < 0$ implies $t(n, (n-1) \sqrt{n-1}) < 0$ for $n \ge 3$.
To show $t_{1}( n ) < 0$ for $n \ge 3$, we calculate the derivatives of $t_{1}( n )$ as follows:
\begin{align}
\frac{ \mathrm{d} t_{1}( n ) }{ \mathrm{d} n }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} (n-1)^{\frac{3}{2}} \ln (n-1) - n (n - 2) \right)
\\
& =
\left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((n-1)^{\frac{3}{2}} \ln (n-1)) \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} - 2n) \right)
\\
& =
\left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((n-1)^{\frac{3}{2}}) \right) \ln (n-1) + (n-1)^{\frac{3}{2}} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\ln (n-1)) \right) - (2n - 2)
\\
& =
\left( \frac{ 3 (n-1)^{\frac{1}{2}} }{ 2 } \right) \ln (n-1) + (n-1)^{\frac{3}{2}} \left( \frac{ 1 }{ n-1 } \right) - 2 (n - 1)
\\
& =
\frac{ 3 (n-1)^{\frac{1}{2}} \ln (n-1) }{ 2 } + (n-1)^{\frac{1}{2}} - 2 (n - 1)
\\
& =
\frac{3}{2} \sqrt{n-1} \, \ln (n-1) + \sqrt{n-1} - 2 (n-1) ,
\label{eq:diff1_t_sqrt} \\
\frac{ \mathrm{d}^{2} t_{1}( n ) }{ \mathrm{d} n^{2} }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{3}{2} \sqrt{n-1} \, \ln (n-1) + \sqrt{n-1} - 2 (n-1) \right)
\\
& =
\frac{3}{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1} \, \ln (n-1)) \right) + \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1}) \right) - 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n-1) \right)
\\
& =
\frac{3}{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1}) \right) \ln (n-1) + \sqrt{n-1} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\ln (n-1)) \right) + \left( \frac{ 1 }{ 2 \sqrt{n-1} } \right) - 2
\\
& =
\frac{3}{2} \left( \frac{ 1 }{ 2 \sqrt{n-1} } \right) \ln (n-1) + \frac{3}{2} \sqrt{n-1} \left( \frac{ 1 }{ n-1 } \right) + \frac{ 1 }{ 2 \sqrt{n-1} } - 2
\\
& =
\frac{ 3 \ln (n-1) }{ 4 \sqrt{n-1} } + \frac{ 3 }{ 2 \sqrt{n-1} } + \frac{ 1 }{ 2 \sqrt{n-1} } - 2
\\
& =
\frac{ 3 \ln (n-1) + 6 + 2 - 8 \sqrt{n-1} }{ 4 \sqrt{n-1} }
\\
& =
\frac{ 3 \ln (n-1) - 8 (\sqrt{n-1} - 1) }{ 4 \sqrt{n-1} }
\\
& =
\frac{ 3 \ln (n-1) - 4 \ln_{(\frac{1}{2})} (n-1) }{ 4 \sqrt{n-1} }
\\
& \overset{\text{(a)}}{\le}
\frac{ 3 \ln (n-1) - 4 \ln (n-1) }{ 4 \sqrt{n-1} }
\\
& =
- \frac{ \ln (n-1) }{ 4 \sqrt{n-1} }
\\
& \overset{\text{(b)}}{\le}
0
\qquad (\mathrm{for} \ n \ge 2) ,
\label{eq:diff2_t_sqrt}
\end{align}
where
\begin{itemize}
\item
(a) holds with equality if and only if $n = 2$ since $\ln_{\alpha} x \ge \ln_{\beta} x$ for $\alpha < \beta$ and $x \in (0, +\infty)$ with equality if and only if $x = 1$ (see Lemma \ref{lem:IT_ineq}) and
\item
(b) follows from the fact that $\ln (n-1) \ge 0$ for $n \ge 2$ with equality if and only if $n = 2$.
\end{itemize}
Note that, substituting $n = 2$ into $\frac{ \mathrm{d} t_{1}( n ) }{ \mathrm{d} n }$, we can get
\begin{align}
\left. \frac{ \mathrm{d} t_{1}( n ) }{ \mathrm{d} n } \right|_{n = 2}
& \overset{\eqref{eq:diff1_t_sqrt}}{=}
\left. \left( \frac{3}{2} \sqrt{n-1} \, \ln (n-1) + \sqrt{n-1} - 2 (n-1) \right) \right|_{n = 2}
\\
& =
\frac{3}{2} \sqrt{1} \, \ln (1) + \sqrt{1} - 2
\\
& =
-1 .
\label{eq:diff1_t_sqrt_n=2}
\end{align}
Since
\begin{itemize}
\item
$\frac{ \mathrm{d} t_{1}( n ) }{ \mathrm{d} n }$ is strictly decreasing for $n \ge 2$ (see Eq. \eqref{eq:diff2_t_sqrt}) and
\item
$\left. \frac{ \mathrm{d} t_{1}( n ) }{ \mathrm{d} n } \right|_{n = 2} = -1$ (see Eq. \eqref{eq:diff1_t_sqrt_n=2}),
\end{itemize}
we have that
$
\frac{ \mathrm{d} t_{1}( n ) }{ \mathrm{d} n } \le -1
$
for $n \ge 2$ with equality if and only if $n = 2$, which implies that $t_{1}( n )$ is strictly decreasing for $n \ge 2$.
Moreover, substituting $n = 2$ into $t_{1}( n )$, we also get
\begin{align}
t_{1}( 2 )
& =
\left. \left( \vphantom{\sum} (n-1)^{\frac{3}{2}} \ln (n-1) - n (n - 2) \right) \right|_{n = 2}
\\
& =
1^{\frac{3}{2}} \ln 1 - 2 \cdot 0
\\
& =
0 ;
\end{align}
and therefore, we obtain that
\begin{align}
t_{1}( n )
\le
0
\label{eq:t1_sign}
\end{align}
for $n \ge 2$ with equality if and only if $n = 2$.
Therefore, we have
\begin{align}
t(n, (n-1) \sqrt{n-1})
& \overset{\eqref{eq:t_sqrt}}{=}
(n-1) \, t_{1}( n )
\\
& \overset{\eqref{eq:t1_sign}}{<}
0
\label{eq:t_sqrt_sign}
\end{align}
for $n \ge 3$.
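Since the bound \eqref{eq:t_sqrt_sign} is rather tight for small $n$ (e.g., $t(3, 2\sqrt{2}) \approx -0.006$), an illustrative numerical check in plain Python (not part of the proof) may be reassuring:
\begin{verbatim}
import math

def t(n, z):
    # t(n, z) = n z ln z - n z + n - z^2 + 2 z - 1
    return n * z * math.log(z) - n * z + n - z ** 2 + 2 * z - 1

for n in [3, 4, 5, 10, 100]:
    z = (n - 1) ** 1.5  # z = (n-1) sqrt(n-1)
    print(n, t(n, z))   # expect strictly negative values
\end{verbatim}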
Further, since
\begin{itemize}
\item
$t(n, - \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right)) > 0$ (see Eq. \eqref{eq:t_sign_1}),
\item
$t(n, z)$ is strictly decreasing for $z \in [- \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right), +\infty)$ (see Eq. \eqref{eq:diff1_t_sign}), and
\item
$t(n, (n-1) \sqrt{n-1}) < 0$ for $n \ge 3$ (see Eq. \eqref{eq:t_sqrt_sign}),
\end{itemize}
it follows from the intermediate value theorem that, for any $n \ge 3$, there exists $\zeta( n ) \in (- \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right), (n-1) \sqrt{n-1})$ such that
\begin{align}
t(n, z)
& =
\begin{cases}
> 0
& \mathrm{if} \ z \in [- \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right), \zeta( n )) , \\
= 0
& \mathrm{if} \ z = \zeta( n ) , \\
< 0
& \mathrm{if} \ z \in (\zeta(n), +\infty) .
\end{cases}
\label{eq:t_sign_2}
\end{align}
Then, note that $- \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right) < (n-1) \sqrt{n-1}$ holds for $n \ge 3$ since, if $0 < (n-1) \sqrt{n-1} \le - \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right)$, then $t(n, (n-1) \sqrt{n-1})$ must be nonnegative from \eqref{eq:t_sign_1};
however, we already proved that $t(n, (n-1) \sqrt{n-1}) < 0$ for $n \ge 3$ in \eqref{eq:t_sqrt_sign}.
Combining \eqref{eq:t_sign_1} and \eqref{eq:t_sign_2}, we get that, for any $n \ge 3$, there exists $\zeta( n ) \in (- \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right), (n-1) \sqrt{n-1})$ such that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} t(n, z) \right)
& =
\begin{cases}
1
& \mathrm{if} \ z \in (0, 1) \cup (1, \zeta( n )) , \\
0
& \mathrm{if} \ z \in \{ 1, \zeta( n ) \} , \\
-1
& \mathrm{if} \ z \in (\zeta(n), +\infty) .
\end{cases}
\label{eq:t_sign}
\end{align}
Therefore, we have that, for any $n \ge 3$, there exists $\zeta( n ) \in (- \frac{n}{2} W_{-1} \! \left( - \frac{2}{n} \mathrm{e}^{- \frac{2}{n}} \right), (n-1) \sqrt{n-1})$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial g(n, z, 0) }{ \partial z } \right)
& \overset{\eqref{eq:diff_gz_alpha0}}{=}
\underbrace{ \operatorname{sgn} \! \left( \frac{ n }{ z ((n-1) + z)^{2} (\ln z)^{2} } \right) }_{ = 1 } \, \cdot \; \operatorname{sgn} \! \left( \vphantom{\sum} t(n, z) \right)
\\
& \overset{\eqref{eq:t_sign}}{=}
\begin{cases}
1
& \mathrm{if} \ z \in (0, 1) \cup (1, \zeta( n )) , \\
0
& \mathrm{if} \ z \in \{ 1, \zeta( n ) \} , \\
-1
& \mathrm{if} \ z \in (\zeta(n), +\infty) .
\end{cases}
\end{align}
That concludes the proof of Lemma \ref{lem:diff_g_z_a0}.
\end{IEEEproof}
\fi
\section{Proof of Lemma \ref{lem:ln(n-1)/2ln(z)}}
\label{app:ln(n-1)/2ln(z)}
\begin{IEEEproof}[Proof of Lemma \ref{lem:ln(n-1)/2ln(z)}]
First note that $\frac{1}{2} \alpha_{1}( n, z ) = \frac{ \ln (n-1) }{ 2 \ln z }$.
In this proof, we show the positivity of $g(n, z, {\textstyle \frac{ \ln (n-1) }{ 2 \ln z }})$ for $n \ge 3$ and $z \in [n-1, (n-1)^{2}]$.
Substituting $z = (n - 1)^{r}$ into $g(n, z, {\textstyle \frac{ \ln (n-1) }{ 2 \ln z }})$, we have
\begin{align}
g(n, z, {\textstyle \frac{ \ln (n-1) }{ 2 \ln z }}) |_{z = (n-1)^{r}}
& =
g(n, (n-1)^{r}, {\textstyle \frac{1}{2 r}})
\\
& \overset{\eqref{eq:g_z}}{=}
\left. \left( (\alpha-1) + \frac{ ((n-1) + z^{\alpha}) (z^{1-\alpha} - 1) }{ ((n-1) + z) \ln z } \right) \right|_{(z, \alpha) = ((n-1)^{r}, \frac{1}{2 r})}
\\
& =
\left( \frac{1}{2 r} - 1 \right) + \frac{ ((n-1) + \left((n-1)^{r}\right)^{\frac{1}{2 r}}) (\left((n-1)^{r}\right)^{1-\frac{1}{2 r}} - 1) }{ ((n-1) + (n-1)^{r}) \ln (n-1)^{r} }
\\
& =
\frac{1 - 2 r}{2 r} + \frac{ ((n-1) + (n-1)^{\frac{1}{2}}) ((n-1)^{\frac{2 r - 1}{2}} - 1) }{ ((n-1) + (n-1)^{r}) \left( r \ln (n-1) \right) }
\\
& =
\frac{1 - 2 r}{2 r} + \frac{ (n-1)^{\frac{2 r - 1}{2} + 1} - (n-1) + (n-1)^{\frac{2 r - 1}{2} + \frac{1}{2}} - (n-1)^{\frac{1}{2}} }{ r ((n-1) + (n-1)^{r}) \ln (n-1) }
\\
& =
\frac{1 - 2 r}{2 r} + \frac{ (n-1)^{r + \frac{1}{2}} - (n-1) + (n-1)^{r} - (n-1)^{\frac{1}{2}} }{ r ((n-1) + (n-1)^{r}) \ln (n-1) }
\\
& =
\frac{1 - 2 r}{2 r} + \frac{ \left[ (n-1)^{r + \frac{1}{2}} + (n-1)^{r} \right] - \left[ (n-1) + (n-1)^{\frac{1}{2}} \right] }{ r ((n-1) + (n-1)^{r}) \ln (n-1) }
\\
& =
\frac{1 - 2 r}{2 r} + \frac{ (n-1)^{r} \left[ (n-1)^{\frac{1}{2}} + 1 \right] - (n-1)^{\frac{1}{2}} \left[ (n-1)^{\frac{1}{2}} + 1 \right] }{ r ((n-1) + (n-1)^{r}) \ln (n-1) }
\\
& =
\frac{1 - 2 r}{2 r} + \frac{ \left( (n-1)^{r} - (n-1)^{\frac{1}{2}} \right) \left( (n-1)^{\frac{1}{2}} + 1 \right) }{ r ((n-1) + (n-1)^{r}) \ln (n-1) }
\\
& =
\frac{1}{2 r} \left( (1 - 2 r) + \frac{ 2 \left( (n-1)^{r} - (n-1)^{\frac{1}{2}} \right) \left( (n-1)^{\frac{1}{2}} + 1 \right) }{ ((n-1) + (n-1)^{r}) \ln (n-1) } \right)
\\
& =
\frac{ (1 - 2 r) ((n-1) + (n-1)^{r}) \ln (n-1) + 2 \left( (n-1)^{r} - (n-1)^{\frac{1}{2}} \right) \left( (n-1)^{\frac{1}{2}} + 1 \right) }{ 2 r ((n-1) + (n-1)^{r}) \ln (n-1) }
\\
& =
\frac{ d(n, r) }{ 2 r ((n-1) + (n-1)^{r}) \ln (n-1) } ,
\label{eq:g_ln(n-1)/2ln(z)_(n-1)^r}
\end{align}
where
\begin{align}
d(n, r)
\triangleq
(1 - 2 r) ((n-1) + (n-1)^{r}) \ln (n-1) + 2 ( (n-1)^{r} - (n-1)^{\frac{1}{2}} ) ( (n-1)^{\frac{1}{2}} + 1 ) .
\label{def:d_n_r}
\end{align}
Since
\begin{align}
\frac{ 1 }{ 2 r ((n-1) + (n-1)^{r}) \ln (n-1) }
>
0
\end{align}
for $n \ge 3$ and $r > 0$, it is enough to check the positivity of $d(n, r)$ for $n \ge 3$ and $1 \le r \le 2$ rather than the right-hand side of \eqref{eq:g_ln(n-1)/2ln(z)_(n-1)^r}.
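Before turning to the derivative analysis, an illustrative numerical evaluation of $d(n, r)$ from \eqref{def:d_n_r} on a grid (plain Python; not part of the proof) suggests that its minimum over $r \in [1, 2]$ is positive but can be very small:
\begin{verbatim}
import math

def d(n, r):
    # d(n, r) as defined in (def:d_n_r)
    return ((1 - 2 * r) * ((n - 1) + (n - 1) ** r) * math.log(n - 1)
            + 2 * ((n - 1) ** r - math.sqrt(n - 1))
            * (math.sqrt(n - 1) + 1))

for n in [3, 4, 10, 100]:
    vals = [d(n, 1 + k / 20) for k in range(21)]  # r in [1, 2]
    print(n, min(vals))  # expect small but positive minima
\end{verbatim}
For $n = 3$ the minimum on this grid is below $0.01$, which is why the careful monotonicity analysis below is needed.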
The derivatives of $d(n, r)$ with respect to $r$ are as follows:
\begin{align}
\frac{ \partial d(n, r) }{ \partial r }
& =
\frac{ \partial }{ \partial r } \left( \vphantom{\sum} (1 - 2 r) ((n-1) + (n-1)^{r}) \ln (n-1) + 2 ( (n-1)^{r} - (n-1)^{\frac{1}{2}} ) ( (n-1)^{\frac{1}{2}} + 1 ) \right)
\\
& =
\frac{ \partial }{ \partial r } \left( \vphantom{\sum} (1 - 2 r) (n-1) \ln (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ \; 2 (n-1)^{r} ((n-1)^{\frac{1}{2}} + 1) - (n-1)^{\frac{1}{2}} ((n-1)^{\frac{1}{2}} + 1) \vphantom{\sum} \right)
\\
& =
\frac{ \partial }{ \partial r } \left( \vphantom{\sum} (1 - 2 r) (n-1) \ln (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r} ((n-1)^{\frac{1}{2}} + 1) \right)
\\
& =
\left( \frac{ \mathrm{d} }{ \mathrm{d} r } (1 - 2 r) \right) (n-1) \ln (n-1) + \left( \frac{ \partial }{ \partial r } ((1 - 2 r) (n-1)^{r}) \right) \ln (n-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
+ 2 \left( \frac{ \partial }{ \partial r } ((n-1)^{r}) \right) ((n-1)^{\frac{1}{2}} + 1)
\\
& =
( -2 ) (n-1) \ln (n-1) + \left( \left[ \frac{ \mathrm{d} }{ \mathrm{d} r } (1 - 2 r) \right] (n-1)^{r} + (1 - 2 r) \left[ \frac{ \partial }{ \partial r } ((n-1)^{r}) \right] \right) \ln (n-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
+ 2 ( (n-1)^{r} \ln (n-1) ) ((n-1)^{\frac{1}{2}} + 1)
\\
& =
- 2 (n-1) \ln (n-1) + ( [ -2 ] (n-1)^{r} + (1 - 2 r) [ (n-1)^{r} \ln (n-1) ] ) \ln (n-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
+ 2 (n-1)^{r + \frac{1}{2}} \ln (n-1) + 2 (n-1)^{r} \ln (n-1)
\\
& =
- 2 (n-1) \ln (n-1) - 2 (n-1)^{r} \ln (n-1) + (1 - 2 r) (n-1)^{r} (\ln (n-1))^{2}
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
+ 2 (n-1)^{r + \frac{1}{2}} \ln (n-1) + 2 (n-1)^{r} \ln (n-1)
\\
& =
(\ln (n-1)) \left( \vphantom{\sum} - 2 (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}} \right) ,
\label{eq:diff1_d_r} \\
\frac{ \partial^{2} d(n, r) }{ \partial r^{2} }
& =
\frac{ \partial }{ \partial r } \left( (\ln (n-1)) \left( \vphantom{\sum} - 2 (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}} \right) \right)
\\
& =
(\ln (n-1)) \left( \frac{ \partial }{ \partial r } \left( \vphantom{\sum} - 2 (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}} \right) \right)
\\
& =
(\ln (n-1)) \left( \frac{ \partial }{ \partial r } \left( \vphantom{\sum} (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}} \right) \right)
\\
& =
(\ln (n-1)) \left( \left( \frac{ \partial }{ \partial r } ((1 - 2 r) (n-1)^{r}) \right) \ln (n-1) + 2 \left( \frac{ \partial }{ \partial r } ((n-1)^{r + \frac{1}{2}}) \right) \right)
\\
& =
(\ln (n-1)) \left( \left( \left( \frac{ \mathrm{d} }{ \mathrm{d} r } (1 - 2 r) \right) (n-1)^{r} + (1 - 2 r) \left( \frac{ \mathrm{d} }{ \mathrm{d} r } (n-1)^{r} \right) \right) \ln (n-1)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 2 \left( \frac{ \partial }{ \partial r } ((n-1)^{r + \frac{1}{2}}) \right) \right)
\\
& =
(\ln (n-1)) \left( \left( \vphantom{\sum} ( -2 ) (n-1)^{r} + (1 - 2 r) (n-1)^{r} \ln (n-1) \right) \ln (n-1)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 2 (n-1)^{r + \frac{1}{2}} \ln(n-1) \right)
\\
& =
(\ln (n-1))^{2} \left( \vphantom{\sum} - 2 (n-1)^{r} + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}} \right)
\\
& =
\underbrace{ (n-1)^{r} (\ln (n-1))^{2} }_{ > 0 } \left( \vphantom{\sum} - 2 + (1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}} \right)
\\
& \overset{\text{(a)}}{=}
\begin{cases}
< 0
& \mathrm{if} \ r > \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } \ \mathrm{and} \ n \ge 3 , \\
= 0
& \mathrm{if} \ r = \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } \ \mathrm{or} \ n = 2 , \\
> 0
& \mathrm{if} \ r < \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } \ \mathrm{and} \ n \ge 3 ,
\end{cases}
\end{align}
where (a) follows from the fact that
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} - 2 + (1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}} \right)
=
\begin{cases}
1
& \mathrm{if} \ r < \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } , \\
0
& \mathrm{if} \ r = \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } , \\
-1
& \mathrm{if} \ r > \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } .
\end{cases}
\label{eq:diff2_d_partial}
\end{align}
We can verify \eqref{eq:diff2_d_partial} as follows:
The derivative of the argument of the sign function in \eqref{eq:diff2_d_partial} with respect to $r$ is
\begin{align}
\frac{ \partial }{ \partial r } \left( \vphantom{\sum} - 2 + (1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}} \right)
& =
\frac{ \partial }{ \partial r } \left( \vphantom{\sum} - 2 r \ln (n-1) \right)
\\
& =
- 2 \ln (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} r } (r) \right)
\\
& =
- 2 \ln (n-1)
\\
& \le
0
\label{eq:diff2_d_partial_diff}
\end{align}
for $n \ge 2$ with equality if and only if $n = 2$.
Further, we can solve the root of the argument of the sign function in \eqref{eq:diff2_d_partial} with respect to $r$ as follows:
\begin{align}
&&
- 2 + (1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}}
& =
0
\\
& \iff &
(1 - 2 r) \ln (n-1)
& =
2 (1 - (n-1)^{\frac{1}{2}})
\\
& \iff &
1 - 2 r
& =
\frac{ 2 (1 - (n-1)^{\frac{1}{2}}) }{ \ln (n-1) }
\\
& \iff &
r
& =
\frac{1}{2} - \frac{ 1 - \sqrt{n-1} }{ \ln (n-1) }
\\
& \iff &
r
& =
\frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } .
\label{eq:diff2_d_partial_root}
\end{align}
Thus, it follows from \eqref{eq:diff2_d_partial_diff} and \eqref{eq:diff2_d_partial_root} that \eqref{eq:diff2_d_partial} holds for $n \ge 3$.
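An illustrative numerical check of \eqref{eq:diff2_d_partial} (plain Python; not part of the proof) confirms the sign change of the bracketed factor at $r = \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }$:
\begin{verbatim}
import math

def factor(n, r):
    # the bracketed factor of the second derivative:
    # -2 + (1 - 2 r) ln(n-1) + 2 sqrt(n-1)
    return -2 + (1 - 2 * r) * math.log(n - 1) + 2 * math.sqrt(n - 1)

for n in [3, 4, 10, 50]:
    r0 = 0.5 + (math.sqrt(n - 1) - 1) / math.log(n - 1)
    # expect: ~0 at the root, positive below, negative above
    print(n, factor(n, r0),
          factor(n, r0 - 0.1) > 0,
          factor(n, r0 + 0.1) < 0)
\end{verbatim}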
Hence, we have from \eqref{eq:diff2_d_partial} that, if $n \ge 3$, then
\begin{itemize}
\item
$\frac{ \partial d(n, r) }{ \partial r }$ is strictly increasing for $r \in (-\infty, \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }]$ and
\item
$\frac{ \partial d(n, r) }{ \partial r }$ is strictly decreasing for $r \in [\frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }, +\infty)$.
\end{itemize}
Using this monotonicity, we now check the sign of $\frac{ \partial d(n, r) }{ \partial r }$.
Substituting $r = \frac{1}{2}$ into $\frac{ \partial d(n, r) }{ \partial r }$, we see that
\begin{align}
\left. \frac{ \partial d(n, r) }{ \partial r } \right|_{r = \frac{1}{2}}
& \overset{\eqref{eq:diff1_d_r}}{=}
\left. (\ln (n-1)) \left( \vphantom{\sum} - 2 (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}} \right) \right|_{r = \frac{1}{2}}
\\
& =
(\ln (n-1)) \left( \vphantom{\sum} - 2 (n-1) + (1 - 1) (n-1)^{1} \ln (n-1) + 2 (n-1)^{1} \right)
\\
& =
0 .
\label{eq:diff1_d_root1}
\end{align}
Moreover, we can transform the equation $\frac{ \partial d(n, r) }{ \partial r } = 0$ as follows:
\begin{align}
&&
\frac{ \partial d(n, r) }{ \partial r }
& =
0
\\
& \iff &
- 2 (n-1) + (1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}}
& =
0
\\
& \iff &
(1 - 2 r) (n-1)^{r} \ln (n-1) + 2 (n-1)^{r + \frac{1}{2}}
& =
2 (n-1)
\\
& \iff &
(n-1)^{r} ((1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}})
& =
2 (n-1)
\\
& \iff &
(1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}}
& =
2 (n-1)^{1-r}
\\
& \iff &
(1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}}
& =
2 \, \mathrm{e}^{\ln(n-1)^{1-r}}
\\
& \iff &
(1 - 2 r) \ln (n-1) + 2 (n-1)^{\frac{1}{2}}
& =
2 \, \mathrm{e}^{(1- r) \ln(n-1)}
\\
& \iff &
- 2 r \ln (n-1) + \ln(n-1) + 2 (n-1)^{\frac{1}{2}}
& =
2 \, \mathrm{e}^{(1- r) \ln(n-1)}
\\
& \iff &
r \ln (n-1) - \frac{1}{2} \ln(n-1) - (n-1)^{\frac{1}{2}}
& =
- \mathrm{e}^{(1- r) \ln(n-1)}
\\
& \iff &
\frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1}
& =
- \mathrm{e}^{(1- r) \ln(n-1)}
\\
& \iff &
\left( \frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1} \right) \mathrm{e}^{\frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1}}
& =
- \mathrm{e}^{(1- r) \ln(n-1)} \, \mathrm{e}^{\frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1}}
\\
& \iff &
\left( \frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1} \right) \mathrm{e}^{\frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1}}
& =
- \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}}
\label{eq:both_side_>=-1_2} \\
& \overset{\text{(a)}}{\iff} &
W_{0} \! \left( \! \left( \frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1} \right) \mathrm{e}^{\frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1}} \right)
& =
W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right)
\\
& \overset{\text{(b)}}{\iff} &
\frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1}
& =
W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right)
\\
& \iff &
r \ln (n-1)
& =
W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right)
\notag \\
&&& \qquad \qquad
+ \frac{1}{2} \ln (n-1) + \sqrt{n-1}
\\
& \iff &
r
& =
\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) },
\label{eq:diff1_d_root2}
\end{align}
where
\begin{itemize}
\item
(a) holds for $n \ge 2$ and $r \ge \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }$ since the domain of $W_{0}( \cdot )$ is the interval $[-\frac{1}{\mathrm{e}}, +\infty)$ and both sides of \eqref{eq:both_side_>=-1_2} are at least $- \frac{1}{\mathrm{e}}$ for $n \ge 2$ and $r \ge \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }$, i.e.,
\begin{align}
(\text{the left-hand side of \eqref{eq:both_side_>=-1_2}})
& =
\left( \frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1} \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \times
\exp \left( \frac{(2 r - 1) \ln (n-1)}{ 2 } - \sqrt{n-1} \right)
\\
& =
\left( r \ln (n-1) - \frac{1}{2} \ln (n-1) - \sqrt{n-1} \right)
\notag \\
& \qquad \qquad \qquad \quad \times
\exp \left( r \ln (n-1) - \frac{1}{2} \ln (n-1) - \sqrt{n-1} \right)
\\
& \overset{\text{(d)}}{\ge}
\left( \left( \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } \right) \ln (n-1) - \frac{1}{2} \ln (n-1) - \sqrt{n-1} \right)
\notag \\
& \qquad \times
\exp \left( \left( \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } \right) \ln (n-1) - \frac{1}{2} \ln (n-1) - \sqrt{n-1} \right)
\\
& =
\left( \frac{1}{2} \ln (n-1) + \sqrt{n-1} - 1 - \frac{1}{2} \ln (n-1) - \sqrt{n-1} \right)
\notag \\
& \qquad \quad \times
\exp \left( \frac{1}{2} \ln (n-1) + \sqrt{n-1} - 1 - \frac{1}{2} \ln (n-1) - \sqrt{n-1} \right)
\\
& =
(-1) \exp (-1)
\\
& =
- \frac{1}{\mathrm{e}} ,
\\
(\text{the right-hand side of \eqref{eq:both_side_>=-1_2}})
& =
- \exp \left( \frac{\ln (n-1)}{2} - \sqrt{n-1} \right)
\\
& \overset{\text{(e)}}{\ge}
- \exp \left( \frac{\ln_{(\frac{1}{2})} (n-1)}{2} - \sqrt{n-1} \right)
\\
& =
- \exp \left( (\sqrt{n-1} - 1) - \sqrt{n-1} \right)
\\
& =
- \frac{1}{\mathrm{e}} ,
\end{align}
where
\begin{itemize}
\item
(d) follows from the facts that $f(x) = x \, \mathrm{e}^{x}$ is strictly increasing for $x \ge -1$ and $r \ge \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }$, and
\item
(e) follows from the fact that, for a fixed $x \in (0, 1) \cup (1, +\infty)$, $f_{x}( \alpha ) = \ln_{\alpha} x$ is strictly decreasing for $\alpha \in (-\infty, +\infty)$ (see Lemma \ref{lem:IT_ineq}),
\end{itemize}
\item
(b) holds for $r \ge \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }$ since $W_{0}( x \, \mathrm{e}^{x} ) = x$ holds for $x \ge -1$.
\end{itemize}
Note that $\frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) } \le \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }$ for $n \ge 2$ with equality if and only if $n = 2$.
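Both roots \eqref{eq:diff1_d_root1} and \eqref{eq:diff1_d_root2} can be checked numerically; an illustrative Python sketch (assuming SciPy for $W_{0}$; not part of the proof) is:
\begin{verbatim}
import math
from scipy.special import lambertw

def dd_dr(n, r):
    # partial d / partial r as in (eq:diff1_d_r)
    return (math.log(n - 1)
            * (-2 * (n - 1)
               + (1 - 2 * r) * (n - 1) ** r * math.log(n - 1)
               + 2 * (n - 1) ** (r + 0.5)))

for n in [3, 4, 10, 50]:
    arg = -math.exp(0.5 * math.log(n - 1) - math.sqrt(n - 1))
    r_root = (0.5 + (lambertw(arg, 0).real + math.sqrt(n - 1))
              / math.log(n - 1))
    print(n, dd_dr(n, 0.5), dd_dr(n, r_root))  # both should be ~0
\end{verbatim}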
Since
\begin{itemize}
\item
if $n \ge 3$, then the following monotonicity properties hold (see Eq. \eqref{eq:diff2_d_partial}):
\begin{itemize}
\item
$\frac{ \partial d(n, r) }{ \partial r }$ is strictly increasing for $r \in (-\infty, \frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }]$ and
\item
$\frac{ \partial d(n, r) }{ \partial r }$ is strictly decreasing for $r \in [\frac{1}{2} + \frac{ \sqrt{n-1} - 1 }{ \ln (n-1) }, +\infty)$,
\end{itemize}
and
\item
the roots of the equation $\frac{ \partial d(n, r) }{ \partial r } = 0$ are at $r \in \{ \frac{1}{2}, \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) } \}$ (see Eqs. \eqref{eq:diff1_d_root1} and \eqref{eq:diff1_d_root2}),
\end{itemize}
we obtain
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial d(n, r) }{ \partial r } \right)
& =
\begin{cases}
1
& \mathrm{if} \ r \in (\frac{1}{2}, \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }) \ \mathrm{and} \ n \ge 3 , \\
0
& \mathrm{if} \ r \in \{ \frac{1}{2}, \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) } \} \ \mathrm{or} \ n = 2 , \\
-1
& \mathrm{if} \ r \in (-\infty, \frac{1}{2}) \cup (\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }, +\infty) \ \mathrm{and} \ n \ge 3 ,
\end{cases}
\label{eq:diff1_d_sign}
\end{align}
which implies that, if $n \ge 3$, then
\begin{itemize}
\item
$d(n, r)$ is strictly decreasing for $r \in (-\infty, \frac{1}{2}]$,
\item
$d(n, r)$ is strictly increasing for $r \in [\frac{1}{2}, \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }]$, and
\item
$d(n, r)$ is strictly decreasing for $r \in [\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }, +\infty)$.
\end{itemize}
Using this monotonicity and verifying the positivity of $d(n, 1)$ and $d(n, 2)$ for $n \ge 3$, we prove this lemma.
In order to do so, we first show the positivity of $d(n, 1)$ for $n \ge 3$.
Substituting $r = 1$ into $d(n, r)$, we see
\begin{align}
d(n, 1)
& \overset{\eqref{def:d_n_r}}{=}
\left. \left( \vphantom{\sum} (1 - 2 r) ((n-1) + (n-1)^{r}) \ln (n-1) + 2 ( (n-1)^{r} - (n-1)^{\frac{1}{2}} ) ( (n-1)^{\frac{1}{2}} + 1 ) \right) \right|_{r = 1}
\\
& =
(1 - 2) ((n-1) + (n-1)) \ln (n-1) + 2 ( (n-1) - (n-1)^{\frac{1}{2}} ) ( (n-1)^{\frac{1}{2}} + 1 )
\\
& =
(- 1) (2(n-1)) \ln (n-1) + 2 \sqrt{n-1} ( \sqrt{n-1} - 1 ) ( \sqrt{n-1} + 1 )
\\
& =
- 2 (n-1) \ln (n-1) + 2 \sqrt{n-1} ((n-1) - 1)
\\
& =
- 2 (n-1) \ln (n-1) + 2 \sqrt{n-1} (n-2)
\\
& =
2 \sqrt{n-1} \left( \vphantom{\sum} - \sqrt{n-1} \ln (n-1) + (n-2) \right)
\\
& =
2 \sqrt{n-1} \, d_{1}( n ) ,
\label{eq:d(n,1)}
\end{align}
where
\begin{align}
d_{1}( n )
\triangleq
- \sqrt{n-1} \ln (n-1) + (n-2) .
\end{align}
Since $2 \sqrt{n-1} > 0$ for $n > 1$, it is enough to check the positivity of $d_{1}( n )$ for $n \ge 3$ rather than \eqref{eq:d(n,1)}.
Hence, we analyze $d_{1}( n )$ for $n \ge 3$ as follows:
\begin{align}
d_{1}( 2 )
& =
\left. \left( \vphantom{\sum} - \sqrt{n-1} \ln (n-1) + (n-2) \right) \right|_{n = 2}
\\
& =
- \sqrt{1} \ln (1) + (2-2)
\\
& =
0 ,
\label{eq:d1_n2}
\\
\frac{ \mathrm{d} d_{1}(n) }{ \mathrm{d} n }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} - \sqrt{n-1} \ln (n-1) + (n-2) \right)
\\
& =
- \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1} \ln (n-1)) \right) + \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n-2) \right)
\\
& =
- \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1}) \right) \ln (n-1) - \sqrt{n-1} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\ln (n-1)) \right) + 1
\\
& =
- \left( \frac{ 1 }{ 2 \sqrt{n-1} } \right) \ln (n-1) - \sqrt{n-1} \left( \frac{ 1 }{ n-1 } \right) + 1
\\
& =
- \frac{ \ln (n-1) }{ 2 \sqrt{n-1} } - \frac{ 1 }{ \sqrt{n-1} } + 1
\\
& =
\frac{ - \ln (n-1) - 2 + 2 \sqrt{n-1} }{ 2 \sqrt{n-1} }
\\
& =
\frac{ - \ln (n-1) + 2 (\sqrt{n-1} - 1) }{ 2 \sqrt{n-1} }
\\
& =
\frac{ - \ln (n-1) + \ln_{(\frac{1}{2})} (n-1) }{ 2 \sqrt{n-1} }
\\
& \overset{\text{(a)}}{\ge}
\frac{ - \ln (n-1) + \ln (n-1) }{ 2 \sqrt{n-1} }
\\
& =
0 ,
\label{diff_d1}
\end{align}
where (a) holds with equality if and only if $n = 2$ since $\ln_{\alpha} x \ge \ln_{\beta} x$ for $\alpha < \beta$ and $x \in (0, +\infty)$ with equality if and only if $x = 1$.
Since
\begin{itemize}
\item
$d_{1}( 2 ) = 0$ (see Eq. \eqref{eq:d1_n2}) and
\item
$d_{1}( n )$ is strictly increasing for $n \ge 2$ (see Eq. \eqref{diff_d1}),
\end{itemize}
we have $d_{1}( n ) > 0$ for $n \ge 3$; and therefore, we obtain $d(n, 1) > 0$ for $n \ge 3$ from \eqref{eq:d(n,1)}.
Moreover, we next show the positivity of $d(n, 2)$ for $n \ge 3$.
Substituting $r = 2$ into $d(n, r)$ we see
\begin{align}
d(n, 2)
& \overset{\eqref{def:d_n_r}}{=}
\left. \left( \vphantom{\sum} (1 - 2 r) ((n-1) + (n-1)^{r}) \ln (n-1) + 2 ( (n-1)^{r} - (n-1)^{\frac{1}{2}} ) ( (n-1)^{\frac{1}{2}} + 1 ) \right) \right|_{r = 2}
\\
& =
(1 - 4) ((n-1) + (n-1)^{2}) \ln (n-1) + 2 ( (n-1)^{2} - (n-1)^{\frac{1}{2}} ) ( (n-1)^{\frac{1}{2}} + 1 )
\\
& =
- 3 (n-1) (1 + (n-1)) \ln (n-1) + 2 ( (n-1)^{\frac{5}{2}} + (n-1)^{2} - (n-1) - (n-1)^{\frac{1}{2}})
\\
& =
- 3 n (n-1) \ln (n-1) + 2 ( (n-1)^{\frac{1}{2}} ((n-1)^{2} - 1) + (n-1)((n-1) - 1))
\\
& =
- 3 n (n-1) \ln (n-1) + 2 ( (n-1)^{\frac{1}{2}} ((n^{2} - 2n + 1) - 1) + (n-1)(n-2))
\\
& =
- 3 n (n-1) \ln (n-1) + 2 ( (n-1)^{\frac{1}{2}} (n (n - 2)) + (n-1)(n-2))
\\
& =
- 3 n (n-1) \ln (n-1) + 2 (n-2) (n (n-1)^{\frac{1}{2}} + (n-1))
\\
& =
- 3 n (n-1) \ln (n-1) + 2 n (n-2) \sqrt{n-1} + 2 (n-1) (n-2) .
\label{eq:d(n,2)}
\end{align}
Note that
\begin{align}
d(2, 2)
& \overset{\eqref{eq:d(n,2)}}{=}
\left. \left( \vphantom{\sum} - 3 n (n-1) \ln (n-1) + 2 n (n-2) \sqrt{n-1} + 2 (n-1) (n-2) \right) \right|_{n = 2}
\\
& =
- 3 \cdot 2 (2-1) \ln 1 + 2 \cdot 2 (2-2) \sqrt{1} + 2 (2-1) (2-2)
\\
& =
0 .
\label{eq:d(2,2)}
\end{align}
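An illustrative numerical evaluation of the closed form \eqref{eq:d(n,2)} (plain Python; not part of the proof) already suggests the positivity of $d(n, 2)$ for $n \ge 3$ that the following derivative analysis establishes:
\begin{verbatim}
import math

def d_n_2(n):
    # d(n, 2) in the closed form (eq:d(n,2))
    return (-3 * n * (n - 1) * math.log(n - 1)
            + 2 * n * (n - 2) * math.sqrt(n - 1)
            + 2 * (n - 1) * (n - 2))

for n in [2, 3, 4, 10, 100]:
    # expect: 0 at n = 2 and positive values for n >= 3
    print(n, d_n_2(n))
\end{verbatim}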
Then, the derivatives of $d(n, 2)$ with respect to $n$ are as follows:
\begin{align}
\frac{ \mathrm{d} d(n, 2) }{ \mathrm{d} n }
& \overset{\eqref{eq:d(n,2)}}{=}
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} - 3 n (n-1) \ln (n-1) + 2 n (n-2) \sqrt{n-1} + 2 (n-1) (n-2) \right)
\\
& =
- 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n (n-1) \ln (n-1)) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n (n-2) \sqrt{n-1}) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((n-1) (n-2)) \right)
\\
& =
- 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} \ln (n-1) - n \ln (n-1)) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} \sqrt{n-1} - 2 n \sqrt{n-1}) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} - 3 n + 2) \right)
\\
& =
- 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} \ln (n-1)) \right) + 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n \ln (n-1)) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} \sqrt{n-1}) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- 4 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n \sqrt{n-1}) \right) + 2 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} - 3 n) \right)
\\
& =
- 3 \left( 2 n \ln (n-1) + \frac{n^{2}}{n-1} \right) + 3 \left( \ln (n-1) + \frac{ n }{ n-1 } \right) + 2 \left(2 n \sqrt{n-1} + \frac{ n^{2} }{ 2 \sqrt{n-1} } \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
- 4 \left( \sqrt{n-1} + \frac{ n }{ 2 \sqrt{n-1} } \right) + 2 (2 n - 3)
\\
& =
- 6 n \ln (n-1) - \frac{3 n^{2}}{n-1} + 3 \ln (n-1) + \frac{ 3 n }{ n-1 } + 4 n \sqrt{n-1} + \frac{ 2 n^{2} }{ 2 \sqrt{n-1} }
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- 4 \sqrt{n-1} - \frac{ 4 n }{ 2 \sqrt{n-1} } + 4 n - 6
\\
& =
3 (1 - 2 n) \ln (n-1) - \frac{3 n^{2} - 3 n }{n-1} + 4 (n-1) \sqrt{n-1} + \frac{ n^{2} - 2 n }{ \sqrt{n-1} } + 4 n - 6
\\
& =
3 (1 - 2 n) \ln (n-1) - \frac{3 n (n-1) }{ (n-1) } + 4 (n-1) \sqrt{n-1} + \frac{ n (n-2) }{ \sqrt{n-1} } + 4 n - 6
\\
& =
3 (1 - 2 n) \ln (n-1) - 3 n + 4 (n-1) \sqrt{n-1} + \frac{ n (n-2) }{ \sqrt{n-1} } + 4 n - 6
\\
& =
3 (1 - 2 n) \ln (n-1) + 4 (n-1) \sqrt{n-1} + \frac{ n (n-2) }{ \sqrt{n-1} } + n - 6
\label{eq:diff1_d(n,2)} \\
\frac{ \mathrm{d}^{2} d(n, 2) }{ \mathrm{d} n^{2} }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( 3 (1 - 2 n) \ln (n-1) + 4 (n-1) \sqrt{n-1} + \frac{ n (n-2) }{ \sqrt{n-1} } + n - 6 \right)
\\
& =
3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((1 - 2 n) \ln (n-1)) \right) + 4 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((n-1) \sqrt{n-1}) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{ n (n-2) }{ \sqrt{n-1} } \right) \right) + \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n - 6) \right)
\\
& =
3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((1 - 2 n) \ln (n-1)) \right) + 4 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((n-1)^{\frac{3}{2}}) \right) + \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{ n (n-2) }{ \sqrt{n-1} } \right) \right) + 1
\\
& =
3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (1 - 2 n) \right) \ln (n-1) + 3 (1 - 2 n) \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\ln (n-1)) \right) + 4 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } ((n-1)^{\frac{3}{2}}) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad
+ \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} - 2n) \right) \frac{1}{\sqrt{n-1}} + n (n-2) \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{1}{\sqrt{n-1}} \right) \right) + 1
\\
& =
3 ( -2 ) \ln (n-1) + 3 (1 - 2 n) \left( \frac{1}{n-1} \right) + 4 \left( \frac{3}{2} (n-1)^{\frac{1}{2}} \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ (2n - 2) \frac{1}{\sqrt{n-1}} + n (n-2) \left( - \frac{1}{2 (n-1)^{\frac{3}{2}}} \right) + 1
\\
& =
- 6 \ln (n-1) + \frac{3 (1 - 2 n)}{n-1} + 6 \sqrt{n-1} + \frac{2(n-1)}{\sqrt{n-1}} - \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } + 1
\\
& =
- 6 \ln (n-1) + \frac{3 (1 - 2 n)}{n-1} + 6 \sqrt{n-1} + 2 \sqrt{n-1} - \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } + 1
\\
& =
- 6 \ln (n-1) + \frac{3 (1 - 2 n)}{n-1} + 8 \sqrt{n-1} - \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } + 1 ,
\label{eq:diff2_d(n,2)} \\
\frac{ \mathrm{d}^{3} d(n, 2) }{ \mathrm{d} n^{3} }
& =
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} - 6 \ln (n-1) + \frac{3 (1 - 2 n)}{n-1} + 8 \sqrt{n-1} - \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } + 1 \right)
\\
& =
- 6 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\ln (n-1)) \right) + 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{1 - 2 n}{n-1} \right) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 8 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1}) \right) - \frac{1}{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{n (n-2)}{(n-1) \sqrt{n-1} } \right) \right)
\\
& =
- 6 \left( \frac{1}{n-1} \right) + 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{1 - 2 n}{n-1} \right) \right) + 8 \left( \frac{1}{2\sqrt{n-1}} \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } \right) \right)
\\
& =
- \frac{6}{n-1} + 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{1 - 2 n}{n-1} \right) \right) + \frac{4}{\sqrt{n-1}} - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } \right) \right)
\\
& =
- \frac{6}{n-1} + 3 \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (1 - 2 n) \right) \frac{1}{n-1} + 3 (1 - 2 n) \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{1}{n-1} \right) \right) + \frac{4}{\sqrt{n-1}}
\notag \\
& \qquad \qquad \qquad
- \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n^{2} - 2n) \right) \frac{1}{2 (n-1) \sqrt{n-1}} - n (n-2) \left( \frac{ \mathrm{d} }{ \mathrm{d} n } \left( \frac{1}{2 (n-1) \sqrt{n-1}} \right) \right)
\\
& =
- \frac{6}{n-1} + 3 ( - 2 ) \frac{1}{n-1} + 3 (1 - 2 n) \left( - \frac{1}{(n-1)^{2}} \right) + \frac{4}{\sqrt{n-1}}
\notag \\
& \quad
- ( 2n - 2 ) \frac{1}{2 (n-1) \sqrt{n-1}} - n (n-2) \left( - \frac{1}{(2 (n-1) \sqrt{n-1})^{2}} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (2 (n-1) \sqrt{n-1}) \right) \right)
\\
& =
- \frac{6}{n-1} - \frac{6}{n-1} + \frac{3 (2 n - 1)}{(n-1)^{2}} + \frac{4}{\sqrt{n-1}}
\notag \\
& \qquad \qquad
- \frac{1}{\sqrt{n-1}} + \frac{n (n-2)}{4 (n-1)^{3}} \left( \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (2 (n-1)) \right) \sqrt{n-1} + 2 (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1}) \right) \right)
\\
& =
- \frac{12}{n-1} + \frac{3 (2 n - 1)}{(n-1)^{2}} + \frac{3}{\sqrt{n-1}} + \frac{n (n-2)}{4 (n-1)^{3}} \left( (2) \sqrt{n-1} + 2 (n-1) \left( \frac{1}{2\sqrt{n-1}} \right) \right)
\\
& =
- \frac{12}{n-1} + \frac{3 (2 n - 1)}{(n-1)^{2}} + \frac{3}{\sqrt{n-1}} + \frac{n (n-2)}{4 (n-1)^{3}} ( 2 \sqrt{n-1} + \sqrt{n-1} )
\\
& =
- \frac{12}{n-1} + \frac{3 (2 n - 1)}{(n-1)^{2}} + \frac{3}{\sqrt{n-1}} + \frac{3 n (n-2)}{4 (n-1)^{2} \sqrt{n-1}}
\\
& =
3 \left( - \frac{4}{n-1} + \frac{2 n - 1}{(n-1)^{2}} + \frac{1}{\sqrt{n-1}} + \frac{n (n-2)}{4 (n-1)^{2} \sqrt{n-1}} \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ - 16 (n-1) \sqrt{n-1} + 4 (2n-1) \sqrt{n-1} + 4 (n-1)^{2} + n (n-2) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (- 16 (n-1) + 4 (2n-1)) \sqrt{n-1} + (4n^{2} - 8 n + 4) + (n^{2} - 2n) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (- 16 n + 16 + 8 n - 4) \sqrt{n-1} + (5 n^{2} - 10 n + 4) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (- 8 n + 12) \sqrt{n-1} + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ - 4 (2 n - 3) \sqrt{n-1} + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ - 4 (2 n - 3) (\sqrt{n-1} - 1) - 4 (2 n - 3) + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ - 2 (2 n - 3) \ln_{(\frac{1}{2})} (n-1) - 4 (2 n - 3) + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& \overset{\text{(a)}}{\ge}
\frac{ 3 }{ 4 } \left( \frac{ - 2 (2 n - 3) \ln_{0} (n-1) - 4 (2 n - 3) + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ - 2 (2 n - 3) ((n-1) - 1) - 4 (2 n - 3) + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ - 2 (2 n - 3) (n-2) - 4 (2 n - 3) + 5 n (n - 2) + 4 }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (- 2 (2 n - 3) + 5n) (n-2) - 4 ((2 n - 3) - 1) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (- 4 n + 6 + 5n) (n-2) - 4 (2 n - 4) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (n + 6) (n-2) - 8 (n-2) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ ((n+6) - 8) (n-2) }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\frac{ 3 }{ 4 } \left( \frac{ (n-2)^{2} }{ (n-1)^{2} \sqrt{n-1} } \right)
\\
& =
\begin{cases}
> 0
& \mathrm{if} \ n \ge 3 , \\
= 0
& \mathrm{if} \ n = 2 ,
\end{cases}
\label{eq:diff3_d(n,2)}
\end{align}
where (a) holds with equality if and only if $n = 2$ since $\ln_{\alpha} x \ge \ln_{\beta} x$ for $\alpha < \beta$ and $x \in (0, +\infty)$ with equality if and only if $x = 1$ (see Lemma \ref{lem:IT_ineq}).
Note that
\begin{align}
\left. \frac{ \mathrm{d} d(n, 2) }{ \mathrm{d} n } \right|_{n = 2}
& \overset{\eqref{eq:diff1_d(n,2)}}{=}
\left. \left( \frac{ 3 (1 - 2 n) \sqrt{n-1} \, \ln (n-1) + 5 n (n - 2) + 4 + (n - 6) \sqrt{n-1} }{ \sqrt{n-1} } \right) \right|_{n = 2}
\\
& =
\frac{ 3 (1 - 4) \sqrt{1} \, \ln (1) + 10 (0) + 4 + (-4) \sqrt{1} }{ \sqrt{1} }
\\
& =
4 - 4
\\
& =
0 ,
\label{eq:diff1_d(2,2)} \\
\left. \frac{ \mathrm{d}^{2} d(n, 2) }{ \mathrm{d} n^{2} } \right|_{n = 2}
& \overset{\eqref{eq:diff2_d(n,2)}}{=}
\left. \left( \vphantom{\sum} - 6 \ln (n-1) + \frac{3 (1 - 2 n)}{n-1} + 8 \sqrt{n-1} - \frac{n (n-2)}{2 (n-1) \sqrt{n-1} } + 1 \right) \right|_{n = 2}
\\
& =
\underbrace{ - 6 \ln 1 }_{ = 0 } + \frac{3 (1 - 4)}{1} + 8 \sqrt{1} - \underbrace{ \frac{2 (2-2)}{2 (2-1) \sqrt{1} } }_{ = 0 } + 1
\\
& =
-9 + 8 + 1
\\
& =
0 .
\label{eq:diff2_d(2,2)}
\end{align}
Since
\begin{itemize}
\item
$\frac{ \mathrm{d}^{2} d(n, 2) }{ \mathrm{d} n^{2} }$ is strictly increasing for $n \ge 2$ (see Eq. \eqref{eq:diff3_d(n,2)}) and
\item
$\left. \frac{ \mathrm{d}^{2} d(n, 2) }{ \mathrm{d} n^{2} } \right|_{n = 2} = 0$ (see Eq. \eqref{eq:diff2_d(2,2)}),
\end{itemize}
we get that
\begin{align}
\frac{ \mathrm{d}^{2} d(n, 2) }{ \mathrm{d} n^{2} } \ge 0
\label{eq:diff2_d(n,2)_p}
\end{align}
for $n \ge 2$ with equality if and only if $n = 2$.
Moreover, since
\begin{itemize}
\item
$\frac{ \mathrm{d} d(n, 2) }{ \mathrm{d} n }$ is strictly increasing for $n \ge 2$ (see Eq. \eqref{eq:diff2_d(n,2)_p}) and
\item
$\left. \frac{ \mathrm{d} d(n, 2) }{ \mathrm{d} n } \right|_{n = 2} = 0$ (see Eq. \eqref{eq:diff1_d(2,2)}),
\end{itemize}
we also get that
\begin{align}
\frac{ \mathrm{d} d(n, 2) }{ \mathrm{d} n } \ge 0
\label{eq:diff1_d(n,2)_p}
\end{align}
for $n \ge 2$ with equality if and only if $n = 2$.
Therefore, since
\begin{itemize}
\item
$d(n, 2)$ is strictly increasing for $n \ge 2$ (see Eq. \eqref{eq:diff1_d(n,2)_p}) and
\item
$d(2, 2) = 0$ (see Eq. \eqref{eq:d(2,2)}),
\end{itemize}
we obtain $d(n, 2) > 0$ for $n \ge 3$.
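As a quick numerical sanity check of this conclusion (illustrative only), substituting $n = 3$ into \eqref{eq:d(n,2)} yields
\begin{align}
d(3, 2)
=
- 18 \ln 2 + 6 \sqrt{2} + 4
\approx
0.0086
>
0 ,
\end{align}
so the inequality is strict but numerically tight at the smallest admissible $n$.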
So far, we have derived the following:
\begin{itemize}
\item
if $n \ge 3$, then the following monotonicity properties hold (see Eq. \eqref{eq:diff1_d_sign}):
\begin{itemize}
\item
$d(n, r)$ is strictly decreasing for $r \in (-\infty, \frac{1}{2}]$,
\item
$d(n, r)$ is strictly increasing for $r \in [\frac{1}{2}, \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }]$, and
\item
$d(n, r)$ is strictly decreasing for $r \in [\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }, +\infty)$,
\end{itemize}
\item
$d(n, 1) > 0$ for $n \ge 3$, and
\item
$d(n, 2) > 0$ for $n \ge 3$.
\end{itemize}
Using the above statements, we now show that, if $n \ge 3$, then $d(n, r) > 0$ for $1 \le r \le 2$.
Note that
\begin{align}
\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }
& \overset{\text{(a)}}{\ge}
\frac{1}{2} + \frac{ -1 + \sqrt{n-1} }{ \ln (n-1) }
\\
& =
\frac{1}{2} \left( 1 + \frac{ 2 (\sqrt{n-1} - 1) }{ \ln (n-1) } \right)
\\
& =
\frac{1}{2} \left( 1 + \frac{ \ln_{(\frac{1}{2})} (n-1) }{ \ln (n-1) } \right)
\\
& \overset{\text{(b)}}{\ge}
\frac{1}{2} \left( 1 + \frac{ \ln (n-1) }{ \ln (n-1) } \right)
\\
& =
1
\end{align}
for $n \ge 2$, where
\begin{itemize}
\item
(a) holds with equality if and only if $n = 2$ since $\left. W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) \right|_{n = 2} = -1$ and $W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right)$ is strictly increasing for $n \ge 2$, and
\item
(b) holds with equality if and only if $n = 2$ since $\ln_{\alpha} x \ge \ln_{\beta} x$ for $\alpha < \beta$ and $x \in (0, +\infty)$ with equality if and only if $x = 1$ (see Lemma \ref{lem:IT_ineq}).
\end{itemize}
When $\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) } \ge 2$, we readily get that $d(n, r) > 0$ for $n \ge 3$ and $1 \le r \le 2$ since
\begin{itemize}
\item
$d(n, r)$ is strictly increasing for $r \in [1, 2]$,
\item
$d(n, 1) > 0$, and $d(n, 2) > 0$.
\end{itemize}
We next prove that $d(n, r) > 0$ for $n \ge 3$ and $1 \le r \le 2$ when $1 < \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) } < 2$.
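(This case indeed occurs: for example, at $n = 3$ a numerical evaluation of the stationary point gives
\begin{align}
\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{2} - \sqrt{2}} \right) + \sqrt{2} }{ \ln 2 }
\approx
1.57
\in
(1, 2) ,
\end{align}
where the approximate value of the Lambert $W_{0}$ function is computed numerically.)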
Suppose, to the contrary, that there exists a value $r^{\prime} > 1$ such that $d(n, r^{\prime}) \le 0$.
Since
\begin{itemize}
\item
$d(n, r)$ is strictly increasing for $r \in [1, \frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }]$ and
\item
$d(n, 1) > 0$,
\end{itemize}
the value $r^{\prime}$ must be greater than $\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }$.
However, since
\begin{itemize}
\item
$d(n, r)$ is strictly decreasing for $r \in [\frac{1}{2} + \frac{ W_{0} \! \left( - \mathrm{e}^{\ln \sqrt{n-1} - \sqrt{n-1}} \right) + \sqrt{n-1} }{ \ln (n-1) }, 2]$ and
\item
$d(n, 2) > 0$,
\end{itemize}
the value $r^{\prime}$ must be greater than $2$.
Hence, there does not exist $r^{\prime} \in [1, 2]$ such that $d(n, r^{\prime}) \le 0$.
Therefore, we have
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} d(n, r) \right)
=
1
\label{eq:d_positive}
\end{align}
for $n \ge 3$ and $1 \le r \le 2$.
Summarizing the above results, we obtain
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} g(n, z, \textstyle{\frac{\ln(n-1)}{2 \ln z}}) |_{z = (n-1)^{r}} \right)
& \overset{\eqref{eq:g_ln(n-1)/2ln(z)_(n-1)^r}}{=}
\underbrace{ \operatorname{sgn} \! \left( \frac{ 1 }{ 2 r ((n-1) + (n-1)^{r}) \ln (n-1) } \right) }_{=1} \, \cdot \; \operatorname{sgn} \! \left( \vphantom{\sum} d(n, r) \right)
\\
& \overset{\eqref{eq:d_positive}}{=}
1
\end{align}
for $n \ge 3$ and $1 \le r \le 2$.
Therefore, $g(n, z, \textstyle{\frac{\ln(n-1)}{2 \ln z}})$ is always strictly positive for $n \ge 3$ and $z \in [n-1, (n-1)^{2}]$.
That concludes the proof of Lemma \ref{lem:ln(n-1)/2ln(z)}.
\end{IEEEproof}
\if0
\section{Proof of Lemma \ref{lem:diff_ln(n-1)/2ln(z)}}
\label{app:diff_ln(n-1)/2ln(z)}
\begin{IEEEproof}[Proof of Lemma \ref{lem:diff_ln(n-1)/2ln(z)}]
Substituting $\alpha = \alpha_{1}(n, z) = \frac{ \ln (n-1) }{ 2 \ln z }$ into $\frac{ \partial g(n, z, \alpha) }{ \partial z }$, we have
\begin{align}
&
\left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = \frac{\ln (n-1)}{2 \ln z}}
\notag \\
& \ \ \overset{\eqref{eq:diff_g_z}}{=}
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} (1-\alpha) z^{-\alpha} - (n-1) \alpha (z^{1-\alpha} + z^{\alpha-1}) + 2 (n-1) + (1 - \alpha) z^{\alpha} \right] \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-\alpha} - z^{-1}) - (n-1) (z^{\alpha-1} - z^{1-\alpha}) + z - z^{\alpha} \right] \right) \right|_{\alpha = \frac{\ln (n-1)}{2 \ln z}}
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} \left( 1 - \frac{\ln (n-1)}{2 \ln z} \right) z^{-\frac{\ln (n-1)}{2 \ln z}} - (n-1) \left( \frac{\ln (n-1)}{2 \ln z} \right) (z^{1-\frac{\ln (n-1)}{2 \ln z}} + z^{\frac{\ln (n-1)}{2 \ln z}-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) + \left( 1 - \frac{\ln (n-1)}{2 \ln z} \right) z^{\frac{\ln (n-1)}{2 \ln z}} \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{2} (z^{-\frac{\ln (n-1)}{2 \ln z}} - z^{-1}) - (n-1) (z^{\frac{\ln (n-1)}{2 \ln z}-1} - z^{1-\frac{\ln (n-1)}{2 \ln z}}) + z - z^{\frac{\ln (n-1)}{2 \ln z}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{2} \left( 1 - \frac{\ln (n-1)}{2 \ln z} \right) (n-1)^{-\frac{1}{2}} - (n-1) \left( \frac{\ln (n-1)}{2 \ln z} \right) ((n-1)^{- \frac{1}{2}} z + (n-1)^{\frac{1}{2}} z^{-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) + \left( 1 - \frac{\ln (n-1)}{2 \ln z} \right) (n-1)^{\frac{1}{2}} \right] \ln z
\notag \\
& \left. \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} ((n-1)^{-\frac{1}{2}} - z^{-1}) - (n-1) ((n-1)^{\frac{1}{2}} z^{-1} - (n-1)^{- \frac{1}{2}} z) + z - (n-1)^{\frac{1}{2}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{\frac{3}{2}} \left( 1 - \frac{\ln (n-1)}{2 \ln z} \right) - (n-1)^{\frac{1}{2}} \left( \frac{\ln (n-1)}{2 \ln z} \right) (z + (n-1) z^{-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) + \left( 1 - \frac{\ln (n-1)}{2 \ln z} \right) (n-1)^{\frac{1}{2}} \right] \ln z
\notag \\
& \left. \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} ((n-1)^{-\frac{1}{2}} - z^{-1}) - (n-1)^{\frac{1}{2}} ((n-1) z^{-1} - z) + z - (n-1)^{\frac{1}{2}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} (n-1)^{\frac{3}{2}} \left( \ln z - \frac{\ln (n-1)}{2} \right) - (n-1)^{\frac{1}{2}} \left( \frac{\ln (n-1)}{2} \right) (z + (n-1) z^{-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) \ln z + \left( \ln z - \frac{\ln (n-1)}{2} \right) (n-1)^{\frac{1}{2}} \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} ((n-1)^{-\frac{1}{2}} - z^{-1}) - (n-1)^{\frac{1}{2}} ((n-1) z^{-1} - z) + z - (n-1)^{\frac{1}{2}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \frac{1}{2} \left[ \vphantom{\sum} (n-1)^{\frac{3}{2}} ( 2 \ln z - \ln (n-1) ) - (n-1)^{\frac{1}{2}} (\ln (n-1)) (z + (n-1) z^{-1})
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 4 (n-1) \ln z + (2 \ln z - \ln (n-1)) (n-1)^{\frac{1}{2}} \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{2} ((n-1)^{-\frac{1}{2}} - z^{-1}) - (n-1)^{\frac{1}{2}} ((n-1) z^{-1} - z) + z - (n-1)^{\frac{1}{2}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \frac{1}{2} \left[ \vphantom{\sum} 2 (n-1)^{\frac{3}{2}} \ln z - (n-1)^{\frac{3}{2}} \ln (n-1) - (n-1)^{\frac{1}{2}} z \ln (n-1) - (n-1)^{\frac{3}{2}} z^{-1} \ln (n-1)
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ 4 (n-1) \ln z + 2 (n-1)^{\frac{1}{2}} \ln z - (n-1)^{\frac{1}{2}} \ln (n-1) \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} (n-1)^{\frac{3}{2}} - (n-1)^{2} z^{-1} - (n-1)^{\frac{3}{2}} z^{-1} + (n-1)^{\frac{1}{2}} z + z - (n-1)^{\frac{1}{2}} \right] \right)
\\
& \quad =
\frac{ 1 }{ ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \frac{1}{2} \left[ \vphantom{\sum} 2 (n-1)^{\frac{1}{2}} \left( \vphantom{\sum} (n-1) + 2 (n-1)^{\frac{1}{2}} + 1 \right) \ln z - (n-1)^{\frac{1}{2}} \left( \vphantom{\sum} (n-1) + 1 \right) \ln (n-1)
\right. \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\qquad \quad \vphantom{\sum}
- (n-1)^{\frac{1}{2}} z \ln (n-1) - (n-1)^{\frac{3}{2}} z^{-1} \ln (n-1) \right]
\notag \\
& \left. \qquad \qquad \qquad \qquad
- \left[ \vphantom{\sum} (n-1)^{\frac{1}{2}} \left( \vphantom{\sum} (n-1) - 1 \right) - (n-1)^{\frac{3}{2}} \left( \vphantom{\sum} (n-1)^{\frac{1}{2}} + 1 \right) z^{-1} + \left( \vphantom{\sum} (n-1)^{\frac{1}{2}} + 1 \right) z \right] \right)
\\
& \quad =
\frac{ 1 }{ 2 z ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \qquad \times
\left( \left[ \vphantom{\sum} 2 (n-1)^{\frac{1}{2}} ((n-1)^{\frac{1}{2}} + 1)^{2} z \ln z - n (n-1)^{\frac{1}{2}} z \ln (n-1)
- (n-1)^{\frac{1}{2}} z^{2} \ln (n-1) - (n-1)^{\frac{3}{2}} \ln (n-1) \right]
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} 2 (n-1)^{\frac{1}{2}} (n-2) z - 2 (n-1)^{\frac{3}{2}} \left( \vphantom{\sum} (n-1)^{\frac{1}{2}} + 1 \right) + 2 \left( \vphantom{\sum} (n-1)^{\frac{1}{2}} + 1 \right) z^{2} \right] \right)
\\
& \quad =
\frac{ 1 }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \left[ \vphantom{\sum} 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z - n (n-1) z \ln (n-1)
- (n-1) z^{2} \ln (n-1) - (n-1)^{2} \ln (n-1) \right] \right. \notag \\
& \left. \qquad \qquad \qquad \qquad \quad
- \left[ \vphantom{\sum} 2 (n-1) (n-2) z - 2 (n-1)^{2} ( \sqrt{n-1} + 1 ) + 2 z^{2} \sqrt{n-1} ( \sqrt{n-1} + 1 ) \right] \right)
\\
& \quad =
\frac{ 1 }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \vphantom{\sum} 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z - n (n-1) z \ln (n-1) - (n-1) z^{2} \ln (n-1) - (n-1)^{2} \ln (n-1)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad
- \vphantom{\sum} 2 (n-1) (n-2) z + 2 (n-1)^{2} ( \sqrt{n-1} + 1 ) - 2 z^{2} ( (n-1) + \sqrt{n-1} ) \right)
\\
& \quad =
\frac{ 1 }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \vphantom{\sum} - \left( \vphantom{\sum} (n-1) \ln (n-1) + 2 ( (n-1) + \sqrt{n-1} ) \right) z^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z
\right. \notag \\
& \left. \qquad \quad \
- \left( \vphantom{\sum} n (n-1) \ln (n-1) + 2 (n-1) (n-2) \right) z - (n-1)^{2} \ln (n-1) + 2 (n-1)^{2} ( \sqrt{n-1} + 1 ) \right)
\\
& \quad =
\frac{ 1 }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} }
\notag \\
& \quad \qquad \times
\left( \vphantom{\sum} - \left( \vphantom{\sum} (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} \right) z^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad
- (n-1) \left( \vphantom{\sum} n \ln (n-1) + 2 (n-2) \right) z - (n-1)^{2} \left( \vphantom{\sum} \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) \right) \right)
\\
& \quad =
\frac{ y(n, z) }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} } ,
\label{eq:diff_gz_ln(n-1)/2ln(z)}
\end{align}
where
\begin{align}
y(n, z)
& \triangleq
- ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z
\notag \\
& \qquad \qquad \qquad
- (n-1) ( n \ln (n-1) + 2 (n-2) ) z - (n-1)^{2} ( \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) ) .
\label{def:y}
\end{align}
To complete the proof, we show the negativity of the right-hand side of \eqref{eq:diff_gz_ln(n-1)/2ln(z)}.
Then, since
\begin{align}
\frac{ 1 }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} }
>
0
\end{align}
for $n \ge 2$ and $z \in (0, 1) \cup (1, +\infty)$, it is enough to check the negativity of $y(n, z)$ for $n \ge 3$ and $z \in (0, 1) \cup (1, +\infty)$ rather than \eqref{eq:diff_gz_ln(n-1)/2ln(z)}.
We calculate the derivatives of $y(n, z)$ with respect to $z$ as follows:
\begin{align}
\frac{ \partial y(n, z) }{ \partial z }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} - ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \vphantom{\sum}
- (n-1) ( n \ln (n-1) + 2 (n-2) ) z - (n-1)^{2} ( \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) ) \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} - ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
- (n-1) ( n \ln (n-1) + 2 (n-2) ) z \right)
\\
& =
- ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z^{2}) \right) + 2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z \ln z) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
- (n-1) ( n \ln (n-1) + 2 (n-2) ) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z) \right)
\\
& =
- ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) (2z) + 2 (n-1) (\sqrt{n-1} + 1)^{2} (\ln z + 1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
- (n-1) ( n \ln (n-1) + 2 (n-2) )
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ 2 (n-1) (\sqrt{n-1} + 1)^{2} - (n-1) ( n \ln (n-1) + 2 (n-2) )
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ (n-1) (2 (\sqrt{n-1} + 1)^{2} - n \ln (n-1) - 2 (n-2) )
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (n-1) (2 ((n-1) + 2 \sqrt{n-1} + 1) - n \ln (n-1) - 2 (n-2) )
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (n-1) (2n - 2 + 4 \sqrt{n-1} + 2 - n \ln (n-1) - 2n + 4 )
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4 ) ,
\label{eq:diff1_y} \\
\frac{ \partial^{2} y(n, z) }{ \partial z^{2} }
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} - 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \vphantom{\sum}
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4 ) \right)
\\
& =
\frac{ \partial }{ \partial z } \left( \vphantom{\sum} - 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z \right)
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (z) \right) + 2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} z } (\ln z) \right)
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) + 2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{1}{z} \right) ,
\label{eq:diff2_y} \\
\frac{ \partial^{3} y(n, z) }{ \partial z^{3} }
& =
\frac{ \partial }{ \partial z } \left( - 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) + 2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{1}{z} \right) \right)
\\
& =
\frac{ \partial }{ \partial z } \left( 2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{1}{z} \right) \right)
\\
& =
2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{ \mathrm{d} }{ \mathrm{d} z } \left( \frac{1}{z} \right) \right)
\\
& =
2 (n-1) (\sqrt{n-1} + 1)^{2} \left( - \frac{1}{z^{2}} \right)
\\
& =
- \frac{2 (n-1) (\sqrt{n-1} + 1)^{2}}{z^{2}}
\\
& \overset{\text{(a)}}{<}
0 ,
\label{eq:diff3_y}
\end{align}
where (a) holds for $n \ge 2$ and $z \in (0, +\infty)$.
It follows from \eqref{eq:diff3_y} that $\frac{ \partial^{2} y(n, z) }{ \partial z^{2} }$ is strictly decreasing for $z \in (0, +\infty)$.
On the other hand, we can solve the equation $\frac{ \partial^{2} y(n, z) }{ \partial z^{2} } = 0$ with respect to $z$ as follows:
\begin{align}
&&
\frac{ \partial^{2} y(n, z) }{ \partial z^{2} }
& =
0
\\
& \overset{\eqref{eq:diff2_y}}{\iff} &
\frac{ 2 (n-1) (\sqrt{n-1} + 1)^{2} }{ z }
& =
2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} )
\\
& \iff &
z
& =
\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} } .
\label{eq:diff2_y_root}
\end{align}
Since
\begin{itemize}
\item
$\frac{ \partial^{2} y(n, z) }{ \partial z^{2} }$ is strictly decreasing for $z \in (0, +\infty)$ (see Eq. \eqref{eq:diff3_y}) and
\item
the root of $\frac{ \partial^{2} y(n, z) }{ \partial z^{2} } = 0$ is $z = \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }$ (see Eq. \eqref{eq:diff2_y_root}),
\end{itemize}
we get that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} y(n, z) }{ \partial z^{2} } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (0, \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }) , \\
0
& \mathrm{if} \ z = \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} } , \\
-1
& \mathrm{if} \ z \in (\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }, +\infty) ,
\end{cases}
\label{eq:diff2_y_sign}
\end{align}
which implies that
\begin{itemize}
\item
$\frac{ \partial y(n, z) }{ \partial z }$ is strictly increasing for $z \in (0, \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }]$ and
\item
$\frac{ \partial y(n, z) }{ \partial z }$ is strictly decreasing for $z \in [\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }, +\infty)$.
\end{itemize}
Then, note that
\begin{align}
\sqrt{n-1}
\le
\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }
\le
n-1
\label{ineq:diff2_y_root}
\end{align}
for $n \ge 2$ with equality on both sides if and only if $n = 2$.
We now verify that the inequalities \eqref{ineq:diff2_y_root} hold.
We first show the following chain of equations:
\begin{align}
\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }
& =
\frac{ \sqrt{n-1} (\sqrt{n-1} + 1)^{2} }{ \sqrt{n-1} \, (\ln (n-1) + 2) + 2 }
\\
& =
\sqrt{n-1} \left( \frac{ (\sqrt{n-1} + 1)^{2} }{ \sqrt{n-1} \, (\ln (n-1) + 2) + 2 } \right)
\\
& =
\sqrt{n-1} \left( \frac{ (n-1) + 2 \sqrt{n-1} + 1 }{ \sqrt{n-1} \, \ln (n-1) + 2 \sqrt{n-1} + 2 } \right)
\\
& =
\sqrt{n-1} \left( \frac{ n + 2 \sqrt{n-1} }{ \sqrt{n-1} \, \ln (n-1) + 2 \sqrt{n-1} + 2 } \right) .
\label{ineq:diff2_y_root_left}
\end{align}
Hence, to prove the left-hand inequality of \eqref{ineq:diff2_y_root}, it is enough to show that
\begin{align}
\frac{ n + 2 \sqrt{n-1} }{ \sqrt{n-1} \, \ln (n-1) + 2 \sqrt{n-1} + 2 }
\ge 1
\label{eq:fraction_diff2_y_root}
\end{align}
for $n \ge 2$ with equality if and only if $n = 2$.
Then, the gap between the numerator and the denominator of the left-hand side of \eqref{eq:fraction_diff2_y_root} is
\begin{align}
\underbrace{ (n + 2 \sqrt{n-1} ) }_{\substack{\text{the numerator of} \\ \text{the left-hand side of \eqref{eq:fraction_diff2_y_root}}}} - \quad \underbrace{ (\sqrt{n-1} \, \ln (n-1) + 2 \sqrt{n-1} + 2) }_{\substack{\text{the denominator of} \\ \text{the left-hand side of \eqref{eq:fraction_diff2_y_root}}}}
& =
n - \sqrt{n-1} \, \ln (n-1) - 2 .
\label{eq:gap_fraction_diff2_y_root}
\end{align}
We readily see that it is enough to check the nonnegativity of \eqref{eq:gap_fraction_diff2_y_root} rather than \eqref{eq:fraction_diff2_y_root}.
Hence, we analyze the right-hand side of \eqref{eq:gap_fraction_diff2_y_root} as follows:
\begin{align}
\left. \left( \vphantom{\sum} n - \sqrt{n-1} \, \ln (n-1) - 2 \right) \right|_{n = 2}
& =
2 - \sqrt{1} \, \ln 1 - 2
\\
& =
0 ,
\\
\frac{ \mathrm{d} }{ \mathrm{d} n } \left( \vphantom{\sum} n - \sqrt{n-1} \, \ln (n-1) - 2 \right)
& =
\left( \frac{ \mathrm{d} }{ \mathrm{d} n } (n) \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1} \, \ln (n-1)) \right)
\\
& =
1 - \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\sqrt{n-1}) \right) \ln (n-1) - \sqrt{n-1} \left( \frac{ \mathrm{d} }{ \mathrm{d} n } (\ln (n-1)) \right)
\\
& =
1 - \left( \frac{1}{2 \sqrt{n-1}} \right) \ln (n-1) - \sqrt{n-1} \left( \frac{1}{n-1} \right)
\\
& =
1 - \frac{\ln (n-1)}{2 \sqrt{n-1}} - \frac{1}{\sqrt{n-1}}
\\
& =
\frac{ 2 \sqrt{n-1} - \ln (n-1) - 2 }{ 2 \sqrt{n-1} }
\\
& =
\frac{ 2 (\sqrt{n-1} - 1) - \ln (n-1) }{ 2 \sqrt{n-1} }
\\
& =
\frac{ \ln_{(\frac{1}{2})} (n-1) - \ln (n-1) }{ 2 \sqrt{n-1} }
\\
& \overset{\text{(a)}}{\ge}
\frac{ \ln (n-1) - \ln (n-1) }{ 2 \sqrt{n-1} }
\\
& =
0 ,
\end{align}
where (a) holds with equality if and only if $n = 2$ since $\ln_{\alpha} x \ge \ln_{\beta} x$ for $\alpha < \beta$ and $x \in (0, +\infty)$ with equality if and only if $x = 1$ (see Lemma \ref{lem:IT_ineq}).
Thus, we have
\begin{align}
\underbrace{ \left( \vphantom{\sum} n - \sqrt{n-1} \, \ln (n-1) - 2 \right) }_{\text{the right-hand side of \eqref{eq:gap_fraction_diff2_y_root}}}
\ge
0
\label{ineq:gap_fraction_diff2_y_root}
\end{align}
for $n \ge 2$ with equality if and only if $n = 2$, which implies \eqref{eq:fraction_diff2_y_root}.
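(For orientation, an illustrative numerical evaluation: at $n = 3$ the left-hand side of \eqref{eq:fraction_diff2_y_root} equals
\begin{align}
\frac{ 3 + 2 \sqrt{2} }{ \sqrt{2} \, \ln 2 + 2 \sqrt{2} + 2 }
\approx
\frac{ 5.828 }{ 5.809 }
\approx
1.003
>
1 ,
\end{align}
so the inequality is numerically tight near the boundary case $n = 2$.)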
Therefore, we obtain
\begin{align}
\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }
& \overset{\eqref{ineq:diff2_y_root_left}}{=}
\sqrt{n-1} \left( \frac{ n + 2 \sqrt{n-1} }{ \sqrt{n-1} \, \ln (n-1) + 2 \sqrt{n-1} + 2 } \right)
\\
& \overset{\eqref{eq:fraction_diff2_y_root}}{\ge}
\sqrt{n-1} ,
\end{align}
which implies the left-hand inequality of \eqref{ineq:diff2_y_root}.
Moreover, the right-hand inequality of \eqref{ineq:diff2_y_root} can be proved as follows:
\begin{align}
\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }
& =
(n-1) \left( \frac{ (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} } \right)
\\
& =
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} } \right)
\\
& \overset{\text{(a)}}{\le}
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ (n-1) \left( \left( 1 - \frac{1}{n-1} \right)+ 2 \right) + 2 \sqrt{n-1} } \right)
\\
& =
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ (n-1) \left( 3 - \frac{1}{n-1} \right) + 2 \sqrt{n-1} } \right)
\\
& =
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ 3 (n-1) - 1 + 2 \sqrt{n-1} } \right)
\\
& =
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ 3 n - 4 + 2 \sqrt{n-1} } \right)
\\
& =
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ n + 2(n-2) + 2 \sqrt{n-1} } \right)
\\
& \le
(n-1) \left( \frac{ n + 2 \sqrt{n-1} }{ n + 2 \sqrt{n-1} } \right)
\\
& =
n-1 ,
\end{align}
where (a) follows from the fact that $\ln x \ge 1 - \frac{1}{x}$ for $x > 0$ with equality if and only if $x = 1$.
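As an illustrative check of the two-sided bound \eqref{ineq:diff2_y_root}, at $n = 3$ the middle expression evaluates numerically to
\begin{align}
\frac{ 2 (\sqrt{2} + 1)^{2} }{ 2 (\ln 2 + 2) + 2 \sqrt{2} }
\approx
1.419 ,
\end{align}
which indeed lies between $\sqrt{n-1} = \sqrt{2} \approx 1.414$ and $n - 1 = 2$.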
We next consider the sign of $\frac{ \partial y(n, z) }{ \partial z }$.
Substituting $z = \sqrt{n-1}$ into $\frac{ \partial y(n, z) }{ \partial z }$, we get
\begin{align}
\left. \frac{ \partial y(n, z) }{ \partial z } \right|_{z = \sqrt{n-1}}
& \overset{\eqref{eq:diff1_y}}{=}
\left( \vphantom{\sum} - 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4 ) \right) \right|_{z = \sqrt{n-1}}
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) \sqrt{n-1} + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln (\sqrt{n-1})
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4 )
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) \sqrt{n-1}
\notag \\
& \qquad \qquad \qquad \qquad
+ 2 (n-1) (\sqrt{n-1} + 1)^{2} \left( \frac{1}{2} \ln (n-1) \right)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4 )
\\
& =
- 2 ((n-1) \ln (n-1) + 2 (n-1) + 2 \sqrt{n-1} ) \sqrt{n-1}
\notag \\
& \qquad \qquad \qquad
+ (n-1) ((n-1) + 2 \sqrt{n-1} + 1) \ln (n-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad
+ (4 (n-1) \sqrt{n-1} - n (n-1) \ln (n-1) + 4 (n-1) )
\\
& =
- 2 (n-1) \sqrt{n-1} \, \ln (n-1) - 4 (n-1) \sqrt{n-1} - 4 (n-1)
\notag \\
& \qquad \qquad \qquad
+ n (n-1) \ln (n-1) + 2 (n-1) \sqrt{n-1} \, \ln (n-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad
+ 4 (n-1) \sqrt{n-1} - n (n-1) \ln (n-1) + 4 (n-1)
\\
& =
0 .
\label{eq:diff1_y_root}
\end{align}
Moreover, substituting $z = n-1$ into $\frac{ \partial y(n, z) }{ \partial z }$, we get
\begin{align}
\left. \frac{ \partial y(n, z) }{ \partial z } \right|_{z = n-1}
& \overset{\eqref{eq:diff1_y}}{=}
\left( \vphantom{\sum} - 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln z
\right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4 ) \right) \right|_{z = n-1}
\\
& =
- 2 ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) (n-1) + 2 (n-1) (\sqrt{n-1} + 1)^{2} \ln (n-1)
\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ (n-1) (4 \sqrt{n-1} - n \ln (n-1) + 4)
\\
& =
(n-1) \left( \vphantom{\sum} - 2 ((n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1}) + 2 (\sqrt{n-1} + 1)^{2} \ln (n-1)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ (4 \sqrt{n-1} - n \ln (n-1) + 4) \right)
\\
& =
(n-1) \left( \vphantom{\sum} - 2 (n-1) (\ln (n-1) + 2) - 4 \sqrt{n-1} + 2 ((n-1) + 2 \sqrt{n-1} + 1) \ln (n-1)
\right. \notag \\
& \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vphantom{\sum}
+ 4 \sqrt{n-1} - n \ln (n-1) + 4 \right)
\\
& =
(n-1) \left( \vphantom{\sum} - 2 (n-1) \ln (n-1) - 4 (n-1)
\right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ 2 (n + 2 \sqrt{n-1}) \ln (n-1) - n \ln (n-1) + 4 \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} - 2 (n-1) + 2 (n + 2 \sqrt{n-1}) - n \right) - 4 (n-1) + 4 \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} - 2 n + 2 + 2 n + 4 \sqrt{n-1} - n \right) - 4 n + 4 + 4 \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} 2 + 4 \sqrt{n-1} - n \right) - 4 n + 8 \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} 4 \sqrt{n-1} - (n-2) \right) - 4 (n - 2) \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} 4 \sqrt{n-1} - \ln_{(0)} (n-1) \right) - 4 (n - 2) \right)
\\
& \overset{\text{(a)}}{\le}
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} 4 \sqrt{n-1} - \ln_{(\frac{1}{2})} (n-1) \right) - 4 (n - 2) \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} 4 \sqrt{n-1} - 2 (\sqrt{n-1} - 1) \right) - 4 (n - 2) \right)
\\
& =
(n-1) \left( (\ln (n-1)) \left( \vphantom{\sum} 4 \sqrt{n-1} - 2 \sqrt{n-1} + 2 \right) - 4 (n - 2) \right)
\\
& =
(n-1) \left( 2 (\ln (n-1)) \left( \vphantom{\sum} \sqrt{n-1} + 1 \right) - 4 (n - 2) \right)
\\
& =
2 (n-1) \left( \left( \vphantom{\sum} \sqrt{n-1} + 1 \right) \ln (n-1) - 2 (n - 2) \right)
\\
& =
2 (n-1) \left( \vphantom{\sum} \sqrt{n-1} \, \ln (n-1) + \ln (n-1) - 2 (n - 2) \right)
\\
& =
2 (n-1) \left( \vphantom{\sum} \right. - \underbrace{ \left( \vphantom{\sum} n - \sqrt{n-1} \, \ln (n-1) - 2 \right) }_{\text{the right-hand side of \eqref{eq:gap_fraction_diff2_y_root}}} + \ln (n-1) - (n - 2) \left. \vphantom{\sum} \right)
\\
& \overset{\text{(b)}}{\le}
2 (n-1) (\ln (n-1) - (n - 2))
\\
& \overset{\text{(c)}}{\le}
2 (n-1) (((n-1)-1) - (n-2))
\\
& =
2 (n-1) ((n-2) - (n-2))
\\
& =
0 ,
\label{eq:diff1_y_z=n-1}
\end{align}
where
\begin{itemize}
\item
(a) holds with equality if and only if $n = 2$ since $\ln_{\alpha} x \ge \ln_{\beta} x$ for $\alpha < \beta$ and $x \in (0, +\infty)$ with equality if and only if $x = 1$ (see Lemma \ref{lem:IT_ineq}),
\item
(b) follows from \eqref{ineq:gap_fraction_diff2_y_root}, and
\item
(c) follows from the fact that $\ln x \le x - 1$ for $x > 0$ with equality if and only if $x = 1$.
\end{itemize}
Using the above results, we now determine the sign of $\frac{ \partial y(n, z) }{ \partial z }$.
Since
\begin{itemize}
\item
$\frac{ \partial y(n, z) }{ \partial z }$ is strictly increasing for $z \in (0, \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }]$ (see Eq. \eqref{eq:diff2_y_sign}),
\item
$\left. \frac{ \partial y(n, z) }{ \partial z } \right|_{z = \sqrt{n-1}} = 0$ for $n \ge 2$ (see Eq. \eqref{eq:diff1_y_root}), and
\item
$\sqrt{n-1} < \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }$ for $n \ge 3$ (see Eq. \eqref{ineq:diff2_y_root}),
\end{itemize}
we can see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial y(n, z) }{ \partial z } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (\sqrt{n-1}, \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }] , \\
0
& \mathrm{if} \ z = \sqrt{n-1} , \\
-1
& \mathrm{if} \ z \in (0, \sqrt{n-1})
\end{cases}
\label{eq:diff1_y_fraction}
\end{align}
for $n \ge 3$.
Moreover, since
\begin{itemize}
\item
$\left. \frac{ \partial y(n, z) }{ \partial z } \right|_{z = \frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }} > 0$ for $n \ge 3$ (see Eq. \eqref{eq:diff1_y_fraction}),
\item
$\frac{ \partial y(n, z) }{ \partial z }$ is strictly decreasing for $z \in [\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }, +\infty)$ (see Eq. \eqref{eq:diff2_y_sign}),
\item
$\left. \frac{ \partial y(n, z) }{ \partial z } \right|_{z = n-1} < 0$ for $n \ge 3$ (see Eq. \eqref{eq:diff1_y_z=n-1}), and
\item
$\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} } < n-1$ for $n \ge 3$ (see Eq. \eqref{ineq:diff2_y_root}),
\end{itemize}
it follows from the intermediate value theorem that, for any $n \ge 3$, there exists $\eta( n ) \in (\frac{ (n-1) (\sqrt{n-1} + 1)^{2} }{ (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} }, n-1)$ such that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial y(n, z) }{ \partial z } \right)
=
\begin{cases}
1
& \mathrm{if} \ z \in (\sqrt{n-1}, \eta(n)) , \\
0
& \mathrm{if} \ z \in \{ \sqrt{n-1}, \eta(n) \} , \\
-1
& \mathrm{if} \ z \in (0, \sqrt{n-1}) \cup (\eta(n), +\infty) ,
\end{cases}
\label{eq:diff1_y_sign}
\end{align}
which implies that
\begin{itemize}
\item
$y(n, z)$ is strictly decreasing for $z \in (0, \sqrt{n-1}]$,
\item
$y(n, z)$ is strictly increasing for $z \in [\sqrt{n-1}, \eta(n)]$, and
\item
$y(n, z)$ is strictly decreasing for $z \in [\eta(n), +\infty)$.
\end{itemize}
Finally, we show the negativity of $y(n, z)$ with $z = n-1$ for $n \ge 3$.
Substituting $z = n-1$ into $y(n, z)$, we get
\begin{align}
y(n, n-1)
& \overset{\eqref{def:y}}{=}
\left( \vphantom{\sum} - ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) z^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} z \ln z
\right. \notag \\
& \left. \left. \vphantom{\sum} \qquad
- (n-1) ( n \ln (n-1) + 2 (n-2) ) z - (n-1)^{2} ( \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) ) \right) \right|_{z = n-1}
\\
& =
- ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) (n-1)^{2} + 2 (n-1) (\sqrt{n-1} + 1)^{2} (n-1) \ln (n-1)
\notag \\
& \qquad \quad
- (n-1) ( n \ln (n-1) + 2 (n-2) ) (n-1) - (n-1)^{2} ( \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) )
\\
& =
- (n-1)^{2} ( (n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1} ) + 2 (n-1)^{2} (\sqrt{n-1} + 1)^{2} \ln (n-1)
\notag \\
& \qquad \qquad \qquad
- (n-1)^{2} ( n \ln (n-1) + 2 (n-2) ) - (n-1)^{2} ( \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) )
\\
& =
(n-1)^{2} \left( \vphantom{\sum} - ((n-1) (\ln (n-1) + 2) + 2 \sqrt{n-1}) + 2 (\sqrt{n-1} + 1)^{2} \ln (n-1)
\right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad
- ( n \ln (n-1) + 2 (n-2) ) - ( \ln (n-1) - 2 ( \sqrt{n-1} + 1 ) ) \right)
\\
& =
(n-1)^{2} \left( \vphantom{\sum} - (n-1) \ln (n-1) - 2 (n-1) - 2 \sqrt{n-1} + 2 ((n-1) + 2 \sqrt{n-1} + 1) \ln (n-1)
\right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- n \ln (n-1) - 2 (n-2) - \ln (n-1) + 2 \sqrt{n-1} + 2 \right)
\\
& =
(n-1)^{2} \left( \vphantom{\sum} - n \ln (n-1) + \ln (n-1) - 2 (n-1) + 2 n \ln (n-1) + 4 \sqrt{n-1} \ln (n-1)
\right. \notag \\
& \left. \vphantom{\sum} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- n \ln (n-1) - 2 (n-2) - \ln (n-1) + 2 \right)
\\
& =
(n-1)^{2} \left( \vphantom{\sum} - 2 (n-1) + 4 \sqrt{n-1} \ln (n-1) - 2 (n-2) + 2 \right)
\\
& =
(n-1)^{2} \left( \vphantom{\sum} - 2 (n-2) + 4 \sqrt{n-1} \ln (n-1) - 2 (n-2) \right)
\\
& =
(n-1)^{2} \left( \vphantom{\sum} 4 \sqrt{n-1} \ln (n-1) - 4 (n-2) \right)
\\
& =
4 (n-1)^{2} \left( \vphantom{\sum} \sqrt{n-1} \ln (n-1) - (n-2) \right)
\\
& =
- 4 (n-1)^{2} \underbrace{ \left( \vphantom{\sum} n - \sqrt{n-1} \ln (n-1) - 2 \right) }_{\text{the right-hand side of \eqref{eq:gap_fraction_diff2_y_root}}}
\\
& \overset{\text{(a)}}{\le}
0 ,
\label{eq:y_z=n-1}
\end{align}
where (a) follows from \eqref{ineq:gap_fraction_diff2_y_root}.
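For instance (an illustrative evaluation only), at $n = 3$ the closed form in the second-to-last line gives
\begin{align}
y(3, 2)
=
- 16 \left( \vphantom{\sum} 1 - \sqrt{2} \, \ln 2 \right)
\approx
- 0.316
<
0 ,
\end{align}
consistent with \eqref{eq:y_z=n-1}.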
Since
\begin{itemize}
\item
$y(n, z)$ is strictly decreasing for $z \in [n-1, +\infty)$ (see Eq. \eqref{eq:diff1_y_sign}) and
\item
$y(n, n-1) < 0$ for $n \ge 3$ (see Eq. \eqref{eq:y_z=n-1}),
\end{itemize}
we obtain
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} y(n, z) \right)
=
-1
\label{eq:y_sign}
\end{align}
for $n \ge 3$ and $z \in [n-1, +\infty)$.
Therefore, we have
\begin{align}
\operatorname{sgn} \! \left( \left. \frac{ \partial g(n, z, \alpha) }{ \partial z } \right|_{\alpha = \frac{\ln (n-1)}{2 \ln z}} \right)
& \overset{\eqref{eq:diff_gz_ln(n-1)/2ln(z)}}{=}
\underbrace{ \operatorname{sgn} \! \left( \frac{ 1 }{ 2 z \sqrt{n-1} \, ((n-1) + z)^{2} (\ln z)^{2} } \right) }_{ = 1 } \, \cdot \; \operatorname{sgn} \! \left( \vphantom{\sum} y(n, z) \right)
\\
& \overset{\eqref{eq:y_sign}}{=}
-1
\end{align}
for $n \ge 3$ and $z \in [n-1, +\infty)$.
That concludes the proof of Lemma \ref{lem:diff_ln(n-1)/2ln(z)}.
\end{IEEEproof}
\fi
\section{Introduction}
The next generation of cosmology experiments~\cite{cmbs4,desi1,desi2,euclid,lsst,so,spherex} are aimed at exploring some of the most exciting questions in fundamental science -- the twin mysteries of dark energy and dark matter and the origin of primordial fluctuations, along with the use of cosmology as a probe of particle physics (e.g., studies of the neutrino sector or the nature of dark matter). Interpreting the results of many of these experiments, spanning measurements across multiple temporal epochs and length scales, involves solving an inverse problem, where given the observational results one wishes to unearth the details of the underlying physics. Modeling the effects of changes in parameter values as well as in the physical assumptions and establishing a direct connection to the observations across multiple surveys is a complex and challenging task. Cosmological simulations are the only way to approach this problem, simultaneously addressing the myriad issues associated with dynamical complexity, cross-correlations, and strict requirements on error control.
The required ability to create simulated ``virtual universes'' on demand is the fundamental computational challenge faced by the Cosmic Frontier. Indeed, it is not an exaggeration to say that the ultimate scientific success of the next generation of sky surveys hinges critically on the success of the underlying modeling and simulation effort.
The generation of these virtual universes can be accomplished in different ways \cite{somerville,Vogelsberger:2019ynw}. Large gravity-only simulations~\cite{angulo_hahn} are used as the backbone for building sky maps that closely resemble the observations from large surveys~\cite{cosmodc2,dc2}. This approach requires careful modeling to establish the ``galaxy-halo'' connection~\cite{Wechsler2018}. The modeling strategies range from simple methods that take limited information into account and rely on empirical modeling assumptions to elaborate schemes that try to model galaxy formation processes as closely as possible but without directly modeling computationally expensive gas physics and feedback effects~\cite{somerville}. Hydrodynamics simulations attempt to model galaxy formation in cosmological volumes including gas physics and feedback effects~\cite{Vogelsberger:2019ynw}. They employ phenomenological subgrid models whenever the dynamical range needed to resolve the physics of interest is too vast to start from first principles. The ultimate aim is to advance these different methods such that they all converge to the same answer -- faithfully describing our Universe in all observable wavebands.
With the advent of exascale computing resources~\cite{ecp}, several opportunities will arrive, but taking full advantage of them will not be straightforward. The high-performance computing (HPC) system architectures, associated software ecosystem, and data infrastructure will be substantially different from that of the previous generation. Adjusting to this computational environment, along with its variety and rapid evolution, will require special attention and substantial human resources. The resolution and volume of gravity-only simulations will enable the creation of ever more detailed synthetic sky catalogs. Hydrodynamics simulations in large cosmological volumes with a rich set of well-tuned subgrid models will be feasible. These simulations will allow us to study and mitigate possible systematic effects that might obscure fundamental physics insights. Synthetic skies will be developed across multiple wavebands and surveys (e.g., Ref.~\cite{dc2}). In order to realize this vision, we have to fully exploit the next generation of HPC resources for these large-scale simulations, and develop efficient analysis approaches to connect the simulations closely to observational data. The large data sets may require additional dedicated data-intensive computing resources to run complicated analysis workflows (potentially including cloud access).
\section{Numerical Simulations}
Numerical simulations play a critical role in delivering Cosmic Frontier science, both as the means to formulate precise theoretical predictions for different cosmological and astrophysical models, and also in evaluating and interpreting the capabilities of current and planned experiments. For optical surveys, the chain begins with a large cosmological simulation into which galaxies and quasars (along with their individual properties) are placed using semi-analytic or halo-based models. A synthetic sky is then created by adding realistic object images and colors and by including the local solar and galactic environment. Propagation of this sky ``image'' through the atmosphere, the telescope optics, detector electronics, and the data management and analysis systems constitutes an end-to-end simulation of the survey. A sufficiently detailed simulation of this type can serve a large number of purposes such as identifying possible sources of systematic errors and investigating strategies for correcting them and for optimizing survey design (in area, depth, and cadence). The effects of systematic errors on the analysis of the data can also be investigated; given the very low level of statistical errors in current and next-generation precision cosmology experiments, and the precision with which deviations from $\Lambda$CDM are to be measured, this is an absolutely essential task.
\begin{itemize}
\item {\bf N-body simulations.}
Gravity is the dominant force on large scales, and dark matter outweighs baryons by roughly a factor of five to one. Thus N-body simulations accurately describe matter fluctuations from the largest observable scales down to scales deep into the nonlinear regime. Due to their computational efficiency, conventional N-body simulations (i.e., those treating cold dark matter models and some variants thereof) cover a wide dynamic range (Gpc to kpc, allowing coverage of survey-size volumes), with relative ease. It should be noted, however, that multi-Gpc-scale simulations at high mass resolution are still significantly expensive, even on exascale resources.
N-body simulations have essentially no free parameters, and when properly designed, can reach sub-percent accuracy over a wide dynamic range. A significant part of our current knowledge of nonlinear structure formation has been a direct byproduct of advances in N-body techniques. In the near future, survey-scale simulation suites are likely to be dominated by N-body simulations, although some large-volume hydrodynamic simulations will begin to appear at a reasonable mass resolution for the baryonic component.
The key shortcoming of the N-body approach is that the physics of the baryonic sector is not accounted for; thus, many of the directly observable quantities are derived in somewhat heuristic ways by adding a number of modeling or nuisance parameters. Galaxies in N-body simulations are usually reconstructed by applying additional modeling on top of a simulation, such as the halo occupation distribution (HOD) \cite{Berlind2002}, sub-halo abundance matching (SHAM) \cite{Kravtsov2004, Hearin2013}, or semi-analytic modeling (SAM) schemes (for a description of many SAM approaches, see for example Ref.~\cite{Knebe2015}); a minimal illustrative sketch of the HOD approach is given just after this list.
\item {\bf Hydrodynamical Simulations.}
The primary role of hydrodynamical simulations in cosmology is to provide a reasonably accurate description of the distribution of baryons, to quantify the effects of baryons on various probes of large-scale structure (e.g., galaxy clustering, weak and strong lensing, matter-galaxy cross-correlations, redshift-space distortions, Lyman-$\alpha$ forest, SZ signal, 21cm and other line intensity mapping signals), and to provide useful results for the distribution and properties of galaxies, groups, and clusters. Because the final results depend strongly on the choices made for parameterized subgrid models, there is substantial variability in the robustness of the results, depending on the nature of the cosmic probe under consideration. There is, therefore, considerable interest in melding the results of hydrodynamic simulations with empirical modeling of galaxy properties, in order to produce a set of predictive forward models that can plausibly cover a wide range of physical galaxy formation scenarios. These phenomenological models parameterize baryonic effects in a form that can be used directly in constraining the dark sector.
The exascale systems that will be available shortly -- and in the second half of the decade, post-exascale computing resources -- will allow hydrodynamical simulations to become significantly more useful in cosmology (as compared to qualitative interpretation of astronomical observations). In particular, it is expected that there will be a coming together of very small scale, high resolution simulations that currently aim to study the details of galaxy formation at the level of individual objects with simulations that aim to model billions of galaxies. The hope is that this confluence of methods, combined with new observations, will significantly improve the robustness of the obtained results.
\item {\bf Beyond $\Lambda$CDM Simulations.} Although $\Lambda$CDM has been very successful on large scales, the fact that dark energy is not theoretically understood and that at small scales different dark matter models may have different signatures that will be observationally accessible has motivated the development of simulations in different directions. Modified gravity simulations typically involve the solution of a nonlinear variant of the Poisson equation and are therefore significantly more expensive than traditional (N-body) Vlasov-Poisson solvers. Different dark matter models may require the addition of new treatments of local interactions, or may not be accessible to an N-body approach at all (as in the case of fuzzy dark matter models). For further details on the last topic, we refer the reader to a related White Paper on simulations focusing on dark matter~\cite{DMsimsWP}.
\item{\bf Radiative Transfer Simulations.}
Radiative transfer is playing an increasingly important role in astrophysics and cosmology. It is especially important for modeling reionization, which is thought to have occurred when the earliest generations of galaxies created photo-ionized bubbles that grew and merged until overlapping completely. In addition to driving reionization itself, the ionizing radiation from these galaxies also affected their own evolution, as well as the density structure of the intergalactic medium between them. This resulted in a complex feedback loop in which small-scale effects were tightly coupled to radiation originating from a multitude of galaxies over vast cosmological volumes, e.g., Refs.~\cite{2015MNRAS.449.4380R,2022arXiv220205869L}. The demands on the dynamic range and accuracy of radiative transfer simulations will increase dramatically as observations reach further into the epoch of reionization.
\end{itemize}
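To make the galaxy--halo connection concrete, the short sketch below (referenced from the N-body item above) illustrates how an HOD prescription populates a halo catalog with galaxies. It uses a commonly adopted functional form, an error-function step for central galaxies and a power law for satellites; all parameter values, function names, and the convention of conditioning satellites on the presence of a central are illustrative placeholder assumptions, not a fit to any survey or a description of a specific production pipeline.
\begin{verbatim}
import numpy as np
from scipy.special import erf

# Illustrative HOD parameters (placeholders, not fits to any data set).
LOG_M_MIN = 12.0    # log10 halo mass at which half of halos host a central
SIGMA_LOGM = 0.25   # width of the central-galaxy step
LOG_M0 = 12.2       # log10 satellite cutoff mass
LOG_M1 = 13.3       # log10 mass scale yielding about one satellite per halo
ALPHA = 1.0         # slope of the satellite power law

def mean_centrals(log_m):
    """Mean central occupation: smooth step in log halo mass."""
    return 0.5 * (1.0 + erf((np.asarray(log_m) - LOG_M_MIN) / SIGMA_LOGM))

def mean_satellites(log_m):
    """Mean satellite occupation (given a central): power law above a cutoff."""
    m = 10.0 ** np.asarray(log_m)
    excess = np.clip(m - 10.0 ** LOG_M0, 0.0, None)
    return (excess / 10.0 ** LOG_M1) ** ALPHA

def populate(halo_log_masses, seed=42):
    """Draw integer galaxy counts: Bernoulli centrals, Poisson satellites.
    Requiring a central before placing satellites is one common convention;
    variants exist in the literature."""
    rng = np.random.default_rng(seed)
    has_cen = rng.random(len(halo_log_masses)) < mean_centrals(halo_log_masses)
    n_sat = rng.poisson(mean_satellites(halo_log_masses)) * has_cen
    return has_cen.astype(int), n_sat

# Example: populate a toy halo catalog with masses drawn uniformly in log M.
halos = np.random.default_rng(1).uniform(11.0, 15.0, size=100_000)
cen, sat = populate(halos)
print("centrals:", cen.sum(), "satellites:", sat.sum())
\end{verbatim}
In production settings, such prescriptions are calibrated against measured clustering statistics, and the drawn counts are combined with halo positions and velocity profiles to build mock catalogs.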
\subsection{Target Probes and Observables}
While simulations geared towards a particular probe satisfying the requirements of a specific survey have well-defined road maps, simulations capable of describing more than one probe, especially probes spanning more than one experiment, are less well developed and less widely discussed in the community. Since an immense amount of cosmological and astrophysical information could be extracted from combinations of observables from different surveys, the development of such simulations is of great interest.
\begin{itemize}
\item {\bf Galaxy clustering/lensing, cluster clustering/lensing/counts.} These are the key observables for photometric galaxy surveys (see \cite{LSST2012, Euclid2011, WFIRST2019}). While the 2-point correlation function has been commonly used to measure these observables \cite{DES2018}, there has been an increased interest in applying higher-point correlation functions \cite{McBride2011} as well as alternate summary statistics that capture higher-order information \cite{Pratten2012, Liu2016, Banerjee2021} to extract additional signal from non-Gaussian density fields. Suites of simulations are needed to make predictions for these summary statistics since equivalent analytical frameworks do not exist. Additionally, multi-wavelength simulations of galaxies will allow us to test our detection/deblending pipelines, improve photometric redshift error estimations and validate shape measurements through cross-correlations \cite{dc2, Rhodes2017}.
\item {\bf Spectroscopic galaxies.}\footnote{While physically there are no differences between photometric and spectroscopic galaxies, we separate these here since their implementations in simulations are significantly different.} Spectroscopic instruments \cite{Takada2014, Schlegel2019, Bundy2019, MSE2019, Ellis2019} measure redshifts, radial velocities, gas dynamics and chemical compositions of galaxies. Cosmological information will be extracted through Baryonic Acoustic Oscillation (BAO) and redshift space distortions (RSD) measurements \cite{GilMarin2020}. These galaxies are ideal for galaxy–galaxy lensing analyses \cite{Heymans2021}, as well as for calibrating photometric redshifts using the clustering redshift technique \cite{Davis2017, VanDenBusch2020}. When correlated with CMB temperature maps, the distribution of gas in low mass systems can be mapped out by using the kinematic Sunyaev Zel’dovich (kSZ) effect \cite{Schaan2016, Hill2016, Smith2018}. Additionally, the Lyman-$\alpha$ forest can be used to measure the three-dimensional power spectrum out to intermediate redshifts \cite{Bautista2017}, which can be correlated with galaxy/CMB lensing \cite{Doux2016}.
\item {\bf CMB Lensing.} Lensing of the cosmic microwave background (CMB) measures the integrated mass between the last scattering surface and us. Experiments such as CMB-S4 will produce clean (i.e., polarization-based) maps of the integrated mass at high detection significance \cite{Abazajian2016}. Since the signal is sensitive to the full redshift range of the observable Universe, it is correlated with all of the other probes listed \cite{Omori2019, Omori2019b} (a minimal numerical sketch of how such cross-spectra are computed appears after this list). It is especially useful for weighing distant objects that are beyond the redshift ranges accessible through optical weak lensing \cite{Geach2019}.
\item {\bf Lyman-$\alpha$ forest.} Experiments such as DESI \cite{desi1} will observe the Lyman-$\alpha$ forest in the spectra of distant quasars at $2 \lesssim z \lesssim 4$. Statistical properties of the Lyman-$\alpha$ forest can be used to constrain thermal properties of the intergalactic medium \cite{Walther2019} and cosmological parameters \cite{NPD2015, Bautista2017}. The Lyman-$\alpha$ signal originates in low density regions, and thus probes different parts of the universe than most other probes.
\item {\bf tSZ/kSZ Effects.} Both the thermal (tSZ) and kinematic Sunyaev Zel’dovich (kSZ) effects are sensitive to the distribution of gas in the Universe. The SZ effects are strongly correlated with the locations of high gas densities such as in galaxy clusters, and are hence correlated with lensing \cite{Osato2020} and X-ray \cite{Hurier2014} observations. The kSZ signal can effectively probe the early universe as it also correlates with the ionization pattern of the intergalactic gas during reionization \cite{Park2013}.
\item {\bf CIB.} The cosmic infrared background (CIB) consists of emission from dusty star forming galaxies at $z \sim 2$. The CIB is highly correlated with CMB lensing since their redshift kernels overlap well, and therefore the CIB has been used to delens the CMB \cite{Larsen2016, Carron2017}. The number counts and clustering measurements of these infrared galaxies as well as their properties such as stellar mass, star formation rate, dust mass, and metallicity can give us insights into galaxy evolution \cite{Maniyar2018, Simpson2020}, and are strongly related to the characterization of galaxies at lower redshifts \cite{Behroozi2013}.
\item {\bf X-ray maps.} Experiments such as eROSITA \cite{Merloni2012} will measure $\mathcal{O}(10^5)$ clusters of galaxies and about 3 million active galactic nuclei over the full sky. By exploiting the tight correlation between X-ray emission and mass, X-ray observations can be used to calibrate mass estimates of SZ-selected clusters \cite{Bulbul2019}.
\item {\bf Line intensity mapping.} Experiments such as SPHEREx \cite{spherex} and SKA \cite{SKA2020} will map out the density field at $0.5 \lesssim z \lesssim 3$. While the treatment of foregrounds is anticipated to be challenging, density fluctuations of the dark ages could be measured cleanly by cross-correlating with CMB lensing maps~\cite{Tanaka2019}.
\item{{\bf High-redshift 21-cm.}}
Interferometers such as HERA~\cite{DeBoer:2016tnn}, or the SKA~\cite{Mellema:2012ht}, aim to give us access to 3D maps of the universe during cosmic dawn and reionization ($z \approx 5-30$), by using the 21-cm line of hydrogen. These maps track the density of hydrogen, processed by a factor that depends on its spin temperature and ionized fraction~\cite{Furlanetto:2006jb,Pritchard:2011xb}. As such, they provide invaluable information on the thermal and ionization state of the IGM at high redshifts, which can be used to learn about dark matter, as it can cool the gas~\cite{Munoz:2018pzp}, heat/ionize it~\cite{Lopez-Honorez:2016sur,Liu:2018uzy}, or delay structure formation~\cite{Sitwell:2013fpa,Munoz:2019hjh}.
\end{itemize}
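As a deliberately simplified illustration of how cross-correlations between the probes above are predicted from a matter power spectrum (obtained, for instance, from simulations or an emulator), the following Python sketch evaluates an angular cross power spectrum in the Limber approximation. This is our own illustration; the kernel arrays \texttt{W1} and \texttt{W2} and the power-spectrum callable \texttt{P\_of\_k\_chi} are placeholders to be supplied by the user, not part of any particular survey pipeline.
\begin{verbatim}
import numpy as np

def limber_cross_cl(ell, chi, W1, W2, P_of_k_chi):
    """Angular cross spectrum in the Limber approximation:
       C_ell = int dchi W1(chi) W2(chi) / chi^2 * P(k=(ell+1/2)/chi, chi).
    chi: comoving-distance grid; W1, W2: projection kernels on that grid;
    P_of_k_chi(k, chi): matter power spectrum, e.g. from an emulator."""
    k = (ell + 0.5) / chi
    integrand = W1 * W2 / chi**2 * P_of_k_chi(k, chi)
    return np.trapz(integrand, chi)
\end{verbatim}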
\subsection{Modeling Challenges}
Cosmological simulations have a well-established history, going back about half a century to when computers first became powerful enough to enable very early studies of structure formation in the universe~\cite{peebles}. Since then, progress has been rapid, and cosmological simulations now rank among the most complex and computationally challenging problems for HPC systems. This situation is likely to remain unchanged for the foreseeable future. Below we describe some of the modeling challenges that are being faced in the area of cosmological simulations. This is by no means a complete list, but it is generally representative of the type of advances that are needed.
\begin{itemize}
\item {\bf Volume/Resolution/Number of simulations.} A challenging aspect in generating simulations that encompass multiple probes is the computational cost, as the base simulation needs to meet the precision requirements of all the individual observables.
In addition, some of the observables (such as tSZ/kSZ or Lyman-$\alpha$) require hydrodynamical simulations, which are computationally demanding. For estimating covariance matrices, where a large number of realizations is essential, approximate methods or machine learning techniques \cite{Troster2019} will be required to accelerate the simulation procedure. In specific cases, the modeling of certain observables confronts a very large dynamic range. For instance, the 21-cm signal depends on X-ray and UV photons with long mean free paths ($\sim$ Gpc), whereas the first galaxies formed in very small haloes (with $M_h\sim 10^6 M_\odot$). In these examples, detailed hydrodynamical simulations cannot cover large-enough volumes while reaching small-enough halo masses~\cite{Kannan:2021xoz}. Semi-numerical simulations (such as {\tt 21cmFAST}~\cite{Mesinger:2010ne,Munoz:2021psm}), which rely on sub-grid models, are instead commonly used. Detailed calibrations and comparisons between these different approaches are currently lacking, and will be critical for interpreting upcoming data.
\item {\bf Consistent galaxy formation model.} Connecting galaxy properties to the underlying dark matter structure in a way that reproduces observed correlations between multi-wavelength observables is a major challenge. Sufficiently robust hydrodynamical simulations are capable of making predictions for such correlations, but are too expensive to run in large volumes. As such, development of galaxy formation models that can be applied to dark-matter-only simulations, while accounting for the correlations between neutral and ionized gas, stars and dust in galaxies and galaxy clusters, will be necessary.
Multi-probe simulations should also offer predictions for the intrinsic shapes of galaxies, another example of a very small-scale observable that is extremely difficult to model. The correlations of these shapes, known as intrinsic alignments (IA), are an important systematic effect for next-generation weak lensing surveys, but they also contain information on galaxy formation and fundamental physics. Currently, galaxy shapes are either drawn from semi-empirical models, which require both high mass resolution simulations and extensive observations \cite{Joachimi2013}, or are obtained from hydrodynamical simulations \cite{Bate2020, Tenneti2021}, which are computationally infeasible to run at the required volumes. New techniques to rapidly assign realistic shapes to galaxies without incurring significant additional computational costs should be explored as an alternative, and outputs stored from future simulations should include the required quantities.
\item{\bf Neutrinos.} Neutrino oscillation measurements have shown that at least two of the three mass eigenstates of the Standard Model neutrinos are massive \cite{Zyla:2020zbs}. Massive neutrinos produce scale-dependent suppression of cosmic structures, with the largest effects on small scales, allowing for constraints on the total mass of neutrinos from cosmological measurements. Simulating massive neutrinos, which make up a non-negligible fraction of the total energy budget of the Universe, can be challenging since they decouple when relativistic, and have a free streaming scale of $\sim \mathcal O (1 h^{-1}{\rm Gpc})$. On smaller scales, their thermal velocity distribution needs to be accounted for in a structure formation calculation, unlike the CDM component. In an N-body approach, therefore, the six-dimensional distribution function of neutrinos needs to be sampled, i.e., at each location an ensemble of neutrino particles should be initialized with momenta drawn from a Fermi-Dirac distribution (a minimal sampling sketch in Python appears after this list). This is a fully non-linear approach and represents a ``gold standard'' in the field, but suffers from Poisson noise unless the number of neutrino particles is prohibitively large \cite{Banerjee2018}. Some recent approaches to reducing this noise include better sampling of neutrino momentum directions \cite{Banerjee2018}, hybrid fluid and N-body techniques \cite{2016JCAP...11..015B}, and sampling only the deviations from the linear solution with particles \cite{Elbers2021}. A computationally more efficient, albeit approximate, method is to model massive neutrinos with a linear or perturbative approach, which is then added to the large-scale non-linear gravitational potential in a simulation (see, e.g., Refs.~\cite{Brandbyge2009,2013MNRAS.428.3375A,Upadhye2016,Senatore2017}). Depending on the mass of the individual neutrino species being simulated, this approximation eventually breaks down at sufficiently late times and sufficiently small scales, since it lacks the nonlinear evolution of neutrino perturbations as well as the back-reaction of non-linear matter on the neutrinos. For small neutrino masses, which are usually of primary interest, these effects may not be significant \cite{Pedersen2021, Bayer2021}, but they need to be calibrated carefully, depending on precision targets set by the sensitivity of future surveys.
\item {\bf Ray tracing.} With currently available ray tracing algorithms (see e.g.~Ref.~\cite{Hilbert2020}), it is computationally infeasible to cover both the large volume required by future weak lensing surveys, and yet maintain the accuracy at small scales required for strong lensing. Therefore, we must develop a multi-resolution ray tracing algorithm that will effectively cover the two regimes.
\item {\bf Baryonic effects in large-volume simulations.} Baryonic feedback effects are known to alter the local matter density and hence the weak lensing observables \cite{Schneider2019, Chung2020}. This is one of the leading systematic effects in cosmic shear analyses that is limiting the extraction of information from small scale measurements~\cite{Huang2021}. Modeling gas dynamics and feedback is also a crucial aspect of predicting the SZ signals, which depend on the ionized gas density/temperature at small scales \cite{Shaw2012,Park2018}. Therefore, these effects must be included in the modeling for future analyses. While attempts have already been made in existing hydrodynamical simulations, the predictions vary significantly due to the lack of predictive control over the relevant astrophysical processes.
\end{itemize}
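To make the particle (``gold standard'') treatment of neutrinos discussed above concrete, the snippet below -- our own minimal sketch, not drawn from any production code -- initializes thermal neutrino momenta by inverse-CDF sampling of the relativistic Fermi-Dirac distribution, with isotropic directions. The present-day neutrino temperature $T_\nu \approx 1.95\,$K $\approx 1.68\times10^{-4}\,$eV and all function names are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def sample_neutrino_momenta(n_particles, T_nu_eV=1.68e-4, seed=0):
    """Draw momenta (in eV) from f(p) ~ p^2 / (exp(p/T) + 1), the
    relativistic Fermi-Dirac distribution, with isotropic directions."""
    rng = np.random.default_rng(seed)
    # Tabulate the CDF on a dimensionless grid x = p / T.
    x = np.linspace(1e-4, 20.0, 4096)
    pdf = x**2 / (np.exp(x) + 1.0)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    # Inverse-CDF sampling of the momentum magnitudes.
    p_mag = np.interp(rng.random(n_particles), cdf, x) * T_nu_eV
    # Isotropic unit vectors for the momentum directions.
    mu = rng.uniform(-1.0, 1.0, n_particles)          # cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    s = np.sqrt(1.0 - mu**2)
    dirs = np.stack([s * np.cos(phi), s * np.sin(phi), mu], axis=1)
    return p_mag[:, None] * dirs      # shape (n_particles, 3)
\end{verbatim}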
\section{New Physics Modeling Needs}
Cosmological simulations need to advance in terms of increased resolution, larger volumes, and better treatments of known physics, as described in the previous section. Additionally, as the observational reach of the surveys expands, they can be used to explore previously unconsidered physics regimes. It is therefore only natural that simulation methods be developed to model these new probes.
\begin{itemize}
\item {\bf Modified Gravity.}
The current theory of gravity is given by Einstein's theory of General Relativity (GR), and the leading explanation for the observed accelerated expansion of the universe is the cosmological constant, $\Lambda$, which is supported by all current observations. While the cosmological constant is a mathematically simple idea, it is extremely unnatural from a theoretical physics standpoint \cite{Weinberg1989}. Alternative models for accelerated expansion roughly split into two categories: dark energy and modified gravity. Dark energy models add an energy component with equation of state $w \neq -1$, and these are straightforward to simulate using virtually any existing code, as this amounts to a modification of the background expansion rate. The situation is quite different with modified gravity models. There are many proposed models (see for example Ref.~\cite{Clifton2012}), and they generically modify the Poisson equation in a non-trivial way by introducing non-linearity. This makes modified gravity simulations much more computationally expensive than $\Lambda$CDM\xspace simulations, although algorithmic improvements and physical approximations used in recent modified gravity codes help reduce the cost \cite{Bose2017,Arnold2019,Ruan2021,Hernandez2022}. An additional complication is that different models of modified gravity generally require different simulation setups. Nevertheless, understanding the theory of gravity remains a fundamental question in physics, and testing gravity on cosmological scales (for a comprehensive review, see Ref.~\cite{Ishak2019}) will continue to be one of the primary science goals of the upcoming generation of large-scale structure observations \cite{Belgacem2019,Alam2021}.
\item {\bf Dark Matter Models.}
Most cosmological N-body and hydrodynamical simulations have focused on modeling particle dark matter that is cold, collisionless, and stable, as part of the $\Lambda$CDM paradigm. However, since the microphysical nature of dark matter, or even possibly a complicated dark sector, remains unclear, particle theorists have proposed a landscape of dark matter candidates with masses spanning tens of orders of magnitude~\cite{Battaglieri:2017aum}. Many of these candidates demand simulation approaches different from those used for cold dark matter (CDM), given their different properties. Below we list some examples of dark matter candidates and their related simulation demands or challenges, which are further summarized in a companion white paper focusing on cosmological simulations for dark matter physics \cite{DMsimsWP}.
\begin{itemize}
\item {\bf Warm dark matter.} Warm dark matter (WDM) is a family of models with sizable thermal motions, in between those of CDM and (ruled-out) hot dark matter at the first epoch of structure formation. It is associated with a free-streaming length that washes out structures below that length, leading to a cutoff in the matter power spectrum~\cite{Bode:2000gq}. Examples of WDM include sterile neutrinos and gravitinos from SUSY theories~\cite{Viel:2005qj,Drewes:2016upu}. WDM has been constrained from the Lyman-$\alpha$ forest~\cite{Irsic:2017ixq}, Milky Way subhalos, and the 21-cm signal~\cite{Schneider:2018xba}. But many of those studies suffer from systematic uncertainties related to baryons. For example, the constraint from Lyman-$\alpha$ forest data strongly depends on the modeling of the intergalactic medium, such as its temperature fluctuations~\cite{Hui:2016ltb}. Dedicated hydrodynamic simulations will be helpful to reduce these systematic uncertainties.
\item {\bf Interacting dark matter.} Interacting dark matter (IDM) refers to candidates that interact strongly with Standard Model particles such as protons, neutrons, or electrons. For some parts of the parameter space, the interaction is so strong that IDM cannot be probed by direct-detection experiments, due to the overburden from the Earth's atmosphere or crust. Therefore, cosmological observations -- including the CMB, the Lyman-$\alpha$ forest, and Milky Way subhalos -- are among the most sensitive probes of IDM~\cite{Buen-Abad:2021mvc}.
\item {\bf Self-interacting dark matter.} Self-interactions are ubiquitous for dark matter models, especially when dark matter is part of a dark sector. Sizable self-interactions of dark matter, with cross section strength of $\mathcal{O}(1\,\text{cm}^2/\text{g})$, may address the so-called ``small-scale problems'' of $\Lambda$CDM while keeping its success in predicting the large-scale structure~\cite{Tulin:2017ara}. The self-interactions of dark matter can be diverse -- e.g., elastic, dissipative, velocity-dependent, or forward interactions -- but many numerical studies only capture a small subset with phenomenological descriptions. The central region of self-interacting dark matter halos may experience dramatic changes in structure due to gravothermal collapse or dark matter--ordinary matter interactions. But this region is often omitted in numerical simulations because of the high computational cost.
\item {\bf Dissipative dark matter.} If dark matter is connected to other light dark sector particles, its self-interactions can emit those particles and become dissipative. Similar to ordinary matter, dissipative dark matter could experience cooling and heating through interactions with the environment. As a sub-component of the total dark matter, it could also fragment into dark clumps or form dark disks if the cooling effect is strong. Dedicated hydrodynamic simulations are needed to study dissipative dark matter.
\item {\bf Decaying dark matter.} Dark matter particles can be long-lived yet unstable. They could decay into Standard Model particles (e.g., sterile neutrinos) or other dark matter/dark sector particles on long time scales. High-resolution cosmological N-body simulations are often employed in studies of decaying dark matter~\cite{Wang:2014ina,Hubert:2021khy, Mau:2022sbf}. Hydrodynamical simulations are also needed to study the impacts of processes such as baryonic feedback~\cite{Wang:2014ina,Hubert:2021khy}.
\item {\bf Ultralight dark matter (fuzzy dark matter).} Dark matter can be made of ultralight scalar, pseudo-scalar, or vector particles. They collectively behave as a classical wave given their high occupation number~\cite{Hui:2021tkt}. Ultralight dark matter candidates are featured in many beyond-Standard-Model scenarios as the pseudo-Nambu-Goldstone bosons of broken symmetries. Examples include fuzzy dark matter~\cite{Hu:2000ke,Marsh:2015xka,Hui:2016ltb}, QCD axions, axion-like particles, and dark photon dark matter. Ultralight dark matter suppresses small structures on scales below the de Broglie wavelength. Thus it can be probed by observations such as the Lyman-$\alpha$ forest, Milky Way subhalos, or the formation of the first galaxies at cosmic dawn~\cite{Irsic:2017yje,Schutz:2020jox,Munoz:2019hjh,Jones:2021mrs}.
Numerical simulations for ultralight dark matter include: 1) direct solution of the Schr\"{o}dinger–Poisson equations that govern the evolution of the wave function of the ultralight dark matter~\cite{Schive:2014hza,Schive:2014dra,Veltmaat:2018dfz}; 2) N-body simulations based on the Schr\"{o}dinger-Vlasov correspondence (for scales much greater than the de Broglie wavelength)~\cite{Widrow:1993qq}; 3) fluid simulations based on the Madelung-transformed Schr\"{o}dinger–Poisson equations~\cite{Niemeyer:2019aqm}. (A toy numerical sketch of approach 1 appears after this list.) Few simulations of ultralight dark matter go beyond dark-matter-only setups to include baryons. Additionally, higher-resolution simulations over wider ranges of parameters are needed.
The cosmological evolution of QCD axion dark matter can be classified into two scenarios: (a) the Peccei-Quinn symmetry is broken before or during cosmic inflation, and (b) it is broken after inflation. While the production process of axion dark matter in scenario (a) is relatively easy to model, that in scenario (b) requires numerical simulations, given the production of topological defects in the intermediate stage. Dedicated high-resolution numerical simulations have recently been developed to accurately track the axion dark matter abundance~\cite{Gorghetto:2018myk,Eggemeier:2019khm,Gorghetto:2020qws,Buschmann:2021sdq} in scenario (b).
\item {\bf Ultraheavy dark matter.} Dark matter can be made of ultraheavy objects
with masses from around the Planck mass to solar masses. Examples of ultraheavy dark matter include primordial black holes, massive compact halo objects, exotic compact objects~\cite{Giudice:2016zpa}, and dark matter blobs~\cite{Diamond:2021dth}. In the absence of strong self-interactions or interactions with ordinary matter, probes of ultraheavy dark matter are limited to gravitational probes such as micro-lensing or gravitational waves produced by merging binaries. These probes can be strongly affected by the distribution of ultraheavy dark matter~\cite{Carr:2021bzv, Giudice:2016zpa,Diamond:2021dth}, motivating dedicated numerical studies of the clustering of ultraheavy dark matter.
\item {\bf Multiple dark matter components.} Given the landscape of the dark matter candidates, it is easy to imagine that the dark matter consists of multiple components. For example, CDM can be the dominant component, while other dark matter candidates are sub-dominant. Examples of this scenario include axiverse~\cite{Arvanitaki:2009fg} and cannibal dark matter~\cite{Carlson:1992fn}. An important question is the distribution of the sub-dominant component inside dark matter halos. Just as for the distribution of baryonic matter and dark matter, the distribution of the sub-dominant component inside the halo may not be a simple re-scaling of the dominant component. Dedicated numerical simulations are needed to pin down the distribution of the sub-dominant components and make the predicted signatures reliable (e.g.~\cite{Anderhalden:2012qt,Banerjee:2022era}).
\end{itemize}
\end{itemize}
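To make the ultralight dark matter discussion above concrete (see the pointer in that item), here is a toy, one-dimensional kick-drift-kick step for the Schr\"{o}dinger--Poisson system using a pseudo-spectral method, in units with $\hbar/m = 1$ and a rescaled gravitational coupling. Production codes are three-dimensional and far more sophisticated; this sketch, ours alone, only illustrates the numerical idea.
\begin{verbatim}
import numpy as np

def sp_step(psi, dt, L=1.0, G4pi=1.0):
    """One kick-drift-kick step of Schroedinger-Poisson in a 1D
    periodic box of size L, in units with hbar/m = 1."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

    def kick(psi, dt):
        rho = np.abs(psi)**2
        rho_k = np.fft.fft(rho - rho.mean())
        # Solve V'' = G4pi * (rho - <rho>) in Fourier space.
        V_k = np.where(k != 0.0,
                       -G4pi * rho_k / np.maximum(k**2, 1e-30), 0.0)
        V = np.real(np.fft.ifft(V_k))
        return psi * np.exp(-1j * V * dt)

    psi = kick(psi, dt / 2)                                # half kick
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))  # drift
    return kick(psi, dt / 2)                               # half kick
\end{verbatim}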
There are other new ideas on dark matter, motivated by modified gravity theories, such as superfluid dark matter~\cite{Khoury:2021tvy} and the apparent dark matter from entropic gravity~\cite{Verlinde:2016toy}. Many of those models still lack dedicated cosmological simulations or simulations with baryons.
\section{Statistical Inference and Simulation Suites}
The current cosmological Standard Model, $\Lambda$CDM, is an excellent fit to the data but has several theoretical shortcomings and is generally perceived, much like the particle physics Standard Model, to possess only a transitory existence, to be eventually replaced by a more complete description. But because $\Lambda$CDM is so successful, deviations from it will be subtle and difficult to nail down. Consequently, the next generation of cosmology experiments will be driven not only by the accumulation of statistics but also by the need to understand, mitigate, and control systematic uncertainties. To make substantial headway in the latter task, the ability to create detailed and realistic ``virtual universes'' on demand is gaining central importance, so much so that the ultimate scientific success of upcoming sky surveys hinges critically on the success of the modeling and simulation effort.
Scientific inference with sky surveys is a statistical inverse problem, where, given a set of measurement results, one attempts to fit a class of physical models to the data (which include models for the observational process), and to infer the values of the model parameters. Typically, such analyses require many evaluations over a very large number of ``virtual universes''. The main difficulty lies in the fact that producing each virtual universe requires, in principle, an extremely expensive numerical simulation carried out at high fidelity. Emulators are effectively fast surrogate models that can be used as an alternative route to solving this inverse problem \cite{Heitmann2006}.
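As a minimal illustration of the surrogate-model idea -- assuming \texttt{scikit-learn} is available, and with a cheap stand-in function in place of expensive simulation outputs -- the following sketch trains a Gaussian-process emulator on a small design of parameter points and then queries fast predictions with uncertainty estimates; everything here (design, kernel, stand-in function) is hypothetical.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical design: simulation outputs y (e.g. a power-spectrum
# amplitude) at points theta in a rescaled 2D parameter space.
rng = np.random.default_rng(1)
theta = rng.uniform(size=(40, 2))        # e.g. (Omega_m, sigma_8), rescaled
y = np.sin(3.0 * theta[:, 0]) + theta[:, 1]**2   # stand-in for expensive runs

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
    normalize_y=True)
gp.fit(theta, y)

# The emulator replaces new simulation runs inside a likelihood analysis:
mean, std = gp.predict(np.array([[0.5, 0.5]]), return_std=True)
\end{verbatim}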
\subsection{Emulating the Observable Universe}
The importance of providing predictions for cosmological surveys via emulators is now widely recognized; emulation-based predictions have become very popular over the last few years~\cite{Lawrence2010,Heitmann2014,Lawrence2017,Bocquet2020,McClintock2019,Zhai2019,McClintock2019b,Nishimichi2019,Kobayashi2020,Mootoovaloo2020,Heydenreich2021,Knabenhans2021,Takhtaganov2021}. It is now possible to carry out high-quality simulation suites that provide the input to the emulators for a range of cosmological statistics, such as halo mass functions, matter power spectra, and halo bias. (Some of the emulators have gone beyond $\Lambda$CDM as well.) However, in order to fully integrate the emulators into the analysis frameworks used by the surveys to extract cosmological parameters, it is very desirable to create emulators connected directly to the survey observables. As a concrete example, the cluster mass function is commonly used to derive cosmological constraints, but it is not a quantity that can be easily extracted from the observations. Measurements, including weak lensing shear, do not directly provide masses, but rather approximations or proxies for an idealized ``cluster mass''. The translation from the observable to the mass function from simulations introduces additional uncertainties into the derived cosmological parameters, and could be avoided if emulators were to directly predict the quantity of interest, which is the cluster abundance, measured in a way that is most relevant to how the survey is actually carried out.
This level of forward modeling would involve new simulation and analysis efforts and the development of more flexible and sophisticated emulation approaches. Given that a successful implementation can potentially eliminate a major source of uncertainty and bias, this is clearly a worthwhile step. With a broad enough simulation footprint, such an approach would also enable easier connections across measurements carried out in different wavebands.
\subsection{Extending the Physics Content of Simulations}
Most emulator efforts so far have relied upon gravity-only simulations. These are an order of magnitude less expensive than hydrodynamic simulations, and yet carrying out a high-quality suite of gravity-only simulations has only become possible in the last few years due to increases in available computing resources. For hydrodynamics simulations, such campaigns are still out of reach because of the much larger time to solution per simulation. Additionally,
hydrodynamics simulations have many modeling (nuisance) parameters and uncertainties, increasing the design space for the emulation and in turn increasing the number of simulations to be carried out. (For gravity-only simulations, after having established criteria for precision simulations, only the fundamental cosmological parameters have to be varied, keeping the simulation campaign size manageable.) With the advent of exascale supercomputing resources (and beyond) in the coming years, and employing strategies such as multi-fidelity simulations, this problem can be significantly reduced, assuming the hydrodynamics codes can take full advantage of the new generation of architectures. If this turns out to be the case, many opportunities open up: Emulators can be built to investigate and optimize subgrid model parameters, be deployed to gain a better understanding of the interplay of subgrid model and cosmology parameters, and to directly predict observational quantities for different surveys accessing different wavelength regimes.
\subsection{Robust Error Estimation and Parameter Exploration}
Error estimation with emulators is a potentially very powerful avenue of research but remains to be properly realized in many of the current generation of emulators. Partly this is because error estimation is inherently difficult, and partly because the methods used have been too informal and insufficiently sharp. For example, there is no rigorous theory for error convergence, and systematically handling discrepancy between model predictions and observational measurements remains an open problem. Because the formal statistical uncertainties in the observations are shrinking with time, the onus is on modeling systematic errors, including the errors in the emulators. This area is relatively little studied in the statistics literature, although there are some useful investigations in discrepancy modeling~\cite{Brynjarsdottir2014}; the power of the results obtained, however, is relatively limited.
Another problem is that the dynamic range in cosmology is vast and it is computationally impractical to model all the relevant processes via a first principles approach. Consequently, some of the inputs in the subgrid models must be empirical, based on known results from observations. This adds another layer of complexity to error estimation because the proper treatment of such evidence in a cosmological analysis potentially requires a separate set of investigations for each empirical input. However, we note that continuous inclusion of observational data in emulator construction will be helpful in reducing the volume of parameter space that needs to be explored. (As mentioned previously, multi-fidelity simulations are also useful here in minimizing the amount of computational work.) Adaptive sampling methods are very useful in time-domain applications and they can be easily transplanted to cosmology, provided error analyses can continue to be undertaken in a robust manner.
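As a sketch of the adaptive sampling idea in this context -- reusing the hypothetical Gaussian-process emulator from the earlier snippet -- one simple strategy proposes the next simulation design point where the emulator's predictive uncertainty is largest:
\begin{verbatim}
import numpy as np

def next_design_point(gp, bounds, n_candidates=4096, rng=None):
    """Propose the next simulation at the point of maximum GP
    predictive standard deviation.  gp: a fitted
    GaussianProcessRegressor; bounds: (d, 2) array of ranges."""
    rng = rng or np.random.default_rng()
    cand = rng.uniform(bounds[:, 0], bounds[:, 1],
                       size=(n_candidates, len(bounds)))
    _, std = gp.predict(cand, return_std=True)
    return cand[np.argmax(std)]
\end{verbatim}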
\section{Future of High Performance Computing}
\vspace{-0.3cm}
\subsection{Next-generation Supercomputing Platforms}
\vspace{-0.2cm}
The arrival of the first generation of exascale supercomputers, Aurora and Frontier, at the Leadership Computing Facilities at Argonne and Oak Ridge National Laboratories provides an extraordinary opportunity to push scientific simulations to the next level. In cosmology, they enable two classes of simulations relevant to cosmological surveys: gravity-only simulations with unprecedented volume coverage and resolution and hydrodynamics simulations with exceptionally detailed and realistic modeling of baryonic physics in the Universe. The Exascale Computing Project (ECP) led by the Office of Advanced Scientific Computing Research (ASCR) in collaboration with other DOE science program offices has made tremendous strides to prepare scientific applications for these resources and will continue to do so~\cite{ecp}. As part of this effort, important challenges have been identified, including the efficient use of computational accelerators, effective memory access patterns, performance portable programming models and scalable algorithms. For gravity-only simulations, some of these challenges have already been successfully addressed by a subset of codes~\cite{Habib2016, Potter2017, Garrison2021}. For hydrodynamics simulations, these challenges are far more complex, but are being tackled by different codes \cite[for example, see][]{Sexton2021, Springel2021, Frontiere2022}. Continuous developments on these fronts are extremely important in order to enable full use of exascale systems and the ones that will follow them.
\subsection{Scalable Analysis Approaches}
\vspace{-0.2cm}
Scalable analysis approaches are as important as the development of the simulation codes. In principle, many petabytes of data can easily be generated by current and next-generation HPC systems; in practice, however, storage capacities are limited, and the handling and processing of very large data sets require large supercomputing resources in their own right. Consequently, carefully designed analysis routines have to be instantiated on-the-fly while the simulation codes themselves are running (``in situ'' analysis). The development of these analysis routines faces the same challenges as the simulation codes, and scalability and efficient usage of the available architectures are mandatory \cite{foresight2020, foresightGPU}. A successful cosmological simulation program therefore needs to ensure that the development of the codes and the analysis routines go hand in hand. In particular, these analysis routines can leverage a tight coupling with simulation codes to best balance the available memory and compute capabilities of the supercomputing system. This co-development of simulation codes and analysis routines is complicated by the fact that cosmological simulations aim to provide predictions for a wide range of observations. A carefully orchestrated analysis approach has to be developed with cosmological surveys in mind -- close collaboration between simulators and observers is essential for its success.
\subsection{Verification and Validation}
\vspace{-0.2cm}
The accuracy requirements for cosmological simulations are stringent. As outlined above, the simulations provide the foundation for the analysis of current and next-generation cosmological surveys. Given the aim to constrain, e.g., dark energy parameters at the percent level, simulations and the coupled analysis and modeling approaches have to deliver results at least at the same level of accuracy, and better, if possible. The community has made good progress with regard to code verification in the last few years by carrying out rigorous comparison projects \cite{Heitmann2005,Schneider2016,Onions2012,Agertz2007} and convergence studies \cite{Heitmann2008}. However, not all differences between the codes and analysis tools have been fully resolved and/or understood. In particular, in the area of hydrodynamic simulations, much more work is needed to obtain the desired levels of robustness, although there is recent evidence of progress in this direction~\cite{nifty,frontiere}.
Validation (confirming the accuracy of the simulation predictions by direct comparison against observations) is another crucial area that requires a concerted effort between different code and analysis development teams and observers. The upcoming surveys will provide a rich data set for this effort. A delicate issue is how to control errors coming from empirical modeling used within the setup of the simulations. The detailed connection between the simulations and the survey observables has to be tightened up considerably as this is the most problematic aspect of the validation program from the simulation perspective.
\section{Conclusion}
Cosmological surveys carried out over the next decade are poised to make discoveries that will either extend or confirm the $\Lambda$CDM model. Both alternatives are significant -- in the first instance, observational input in finding ``Beyond $\Lambda$CDM'' corrections is clearly of fundamental importance, and in the second instance there will be a sharp reduction in the number of possible alternatives to the model, with ramifications for future tests and other investigations. To achieve the level of accuracy that is desired, close coupling to a state-of-the-art simulation campaign that not only provides a complete and robust modeling platform for each survey but also provides a capability to simultaneously model a number of observations from a range of facilities, including their cross-correlations, is required. Next-generation HPC systems promise to provide a capability that can help achieve these goals; getting to the desired results will require a concerted effort in implementing new algorithms/models, and evolving the simulation codes and associated analysis tools. Additionally, close collaborations with survey teams will be an essential element for success.
Over the course of the next decade, we expect exciting discoveries to be made by combining and analyzing data sets from large-scale structure, CMB, and line intensity mapping experiments. In preparation, we must develop simulations with a broad set of observables, including their correlations, in order to conduct these analyses. However, despite their importance, resources to aid in developing these simulations and associated analysis methods have been scarce, since 1) they do not belong to a specific collaboration/telescope, and 2) generating fully coherent simulations requires expertise from various disparate areas. This situation will have to evolve in a positive direction if we are to achieve the full scientific potential of future surveys.
\bibliographystyle{unsrt}
In this paper, we explore the connections among
the theories of first-order rigidity of bar and joint
frameworks (and associated structures) in various metric
geometries extracted from the underlying projective space of
dimension $n$, or
$\mathbb R^{n+1}.$ The standard examples include Euclidean space,
elliptical (or spherical) space, hyperbolic space, and a metric
on the exterior of hyperbolic space.
In his book, Pogorelov explored more general issues of
uniqueness, and local uniqueness of realizations in these
standard spaces, with some first-order correspondences as
corollaries \cite{pogo}.
In this paper, we will take the opposite tack --
beginning directly with the first-order theory.
We believe this presents a more transparent and accessible
starting point for the correspondences. In a second paper, we
will use the additional technique of `averaging' in combination
with the first-order results to transfer results about pairs of
objects with identical distance constraints in one space to
corresponding pairs in a second space.
Like Pogorelov (and perhaps for related reasons) we will begin
with the correspondence between the theory in
elliptical or spherical space and the theory in
Euclidean space (\bb S \ref{equivalence S -> E}).
This correspondence of configurations
is direct -- using gnomic projection (or central projection) from
the upper half sphere to the corresponding Euclidean space.
This correspondence between spherical frameworks and their
central projections into the plane is also embedded in previous
studies of frameworks in dimension $d$ and their one point cones
into dimension $d+1$ \cite{Wh2}.
With a firm grounding for the first-order rigidity in
spherical space, it is simpler to work from the spherical
$n$-space to the other metrics extracted from the underlying
$\mathbb R^{n+1}$ (\bb S\ref{equivalence in other geometries}).
The correspondence works for any metric of
the form $\langle p,q \rangle = \sum_{i=1}^{n+1} a_{i} p_i q_i$, $a_{i} \neq 0$,
in addition to the special case of Euclidean space (with
$a_{n+1}=0$). It has a particularly simple form, for selected
normalizations of the rays as points in the space, such as
$\langle p,p \rangle = \pm 1$, which is the form we present.
Having examined the theory of first-order motions, we pause to
present the motions as the solutions to a matrix equation
$R_X (G,p) x = 0$ for the metric space $X$ (\bb S\ref{rigidity matrix}).
In this
setting, we have the equivalent theory of static rigidity working
with the row space and row dependences (the self-stresses) of
these matrices, instead of the column dependencies (the
motions). The correspondence is immediate, but it takes a
particular nice form for the `projective' models in Euclidean
space of the standard metrics. In this setting, the rigidity
correspondence is a simple matrix multiplication:
$$
R_X (G,p) [T_{XY}] = R_Y (G,p)
$$
for the same underlying configuration $p$, where $[T_{XY}]$ is
a block diagonal matrix with a block entry for each vertex,
based on how the sense of `perpendicular' is twisted at that
location from one metric to the other.
As a consequence of this simple correspondence of matrices,
we see that row dependencies (the static self-stresses) are
completely unchanged by the switch in metric. As a byproduct
of this static correspondence, there is a correspondence for
the first-order rigidity of the structures with inequalities,
the tensegrity frameworks, which are well understood as a
combination of first-order theory and self-stresses of the
appropriate signs for the edges with pre-assigned
inequality constraints.
As this shared underlying statics hints, there is a
shared underlying projective theory of statics
(and associated first-order kinematics)
\cite{CrapoWhiteley}.
We will not present that theory here but we note
the projective invariance, in all the metrics, of the
first-order and static theories
(\S7).
There are various
extensions that follow from this underlying projective
theory, such as inclusion of `vertices at infinity' in
Euclidean space \cite{CrapoWhiteley},
and the
possibility that polarity has a role to play (see below).
As an application of these correspondences, we consider a
classical theory of rigidity for polyhedra -- the theorems of
Cauchy, Alexandrov, and the associated theory of Andreev.
This theory provides theorems about the first-order
rigidity of convex polyhedra and convex polytopes with
either rigid faces, or $2$-faces triangulated with bars and
joints in dimensions
$d\geq 3$, in Euclidean space. Since the basic concepts of
convexity transfer among the metrics (if we remove the equator
on the sphere, or the corresponding line at infinity in
Euclidean space), this first-order and static theory immediately
transfers to identical theorems in the other metric spaces
(\bb S\ref{andreev}). There are some first-order extensions of Cauchy's
Theorem to versions of local convexity, which will automatically
extend to the various metrics and on through to hyperplanes and
angles, giving additional generalizations.
Moreover,
this theory for hyperplanes and angles will be projectively
invariant, if we are careful with the transfer of concepts such
as `convexity' through the projective transformations.
In hyperbolic space, there is a correspondence between rigidity
of `bar-and-joint frameworks' with vertices and distance constraints
in the exterior hyperbolic space (or ideal points) and planes
and angle constraints in the interior hyperbolic space.
We present this correspondence directly, although it can be
viewed as a polarity about the absolute.
With this correspondence, the first-order Cauchy theory in
exterior hyperbolic space gives a first-order theory for planes
and angles in hyperbolic space. This result turns out to be a
generalization of the
first-order version of Andreev's Theorem. In this setting, the
constraint that angles be less than $\pi/2$ disappears and the
angles have the full range of angles in a convex polyhedron ($<
\pi$).
Moreover, as this hints, there is a correspondence, via
spherical polarity, which connects the first-order Cauchy
Theorem in the spherical or elliptic space with an
Andreev style first-order theorem for planes and angles of a
simple convex polytope in elliptical geometry (\bb S\textbf{none}).
The effect of polarity in Euclidean space is drastically different.
It has an interesting, and distinctive interpretations in
dimensions $d=2$ and $d=3$ \cite{Wh5, Wh6}.
The general problem of characterizing which graphs have some
(almost all) realizations in $d$-space as first-order rigid
frameworks is hard for dimensions $d\geq 3$. With these
correspondences, we realize that this problem is identical in
all the metric spaces and we will not get additional leverage
by comparing first-order behaviour under the various metrics.
On the other hand, in general geometric constraint programming
in fields such as CAD, there is an interest in more general
systems of geometric objects and general constraints. For
example, circles of variable radii with angles of intersection
as constraints are of interest in CAD. As people familiar
with hyperbolic geometry may realize, these are equivalent,
both at first order and at all orders, to planes and angles in
hyperbolic 3-space. The correspondence presented here provides
the final step in the correspondence between circles and angles
in the plane and points and distances in Euclidean $3$-space
\cite{SaliolaWhiteley}.
The basic first-order correspondence among metrics should
extend to differentiable surfaces from these discrete
structures. The major difference here is that static rigidity
and first-order rigidity are distinct concepts in this
setting, which corresponds to infinite matrices. Still, the
correspondence should apply to both theories, and all the
metrics.
\section{First-Order Rigidity in $\mathbb E^n$}
\subsection{Euclidean $n$-space} Let $\mathbb E^n$ denote the set of vectors
in $\mathbb R^{n+1}$ with $x_{n+1} = 1$,
$$\mathbb E^n = \{ x \in \mathbb R^{n+1} \mid \textbf{e} \cdot x = 1 \},$$
where $\textbf{e} = (0,0, \ldots, 1) \in \mathbb R^{n+1}$.
An $m$-plane of $\mathbb E^n$ is the intersection of $\mathbb E^n$ with an
$(m+1)$-subspace of $\mathbb R^{n+1}$. The distance between $x, y \in \mathbb E^n$ is
$d_{\mathbb E}(x,y) = |x - y| = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}$.
\subsection{Frameworks and rigidity in $\mathbb E^n$}
A graph $G = (V, E)$ consists of a finite vertex set
$V = \{1, 2, \ldots, v\}$ and an edge set $E$,
where $E$ is a collection of unordered pairs of
vertices. A \emph{bar-and-joint framework} $G(p)$ in $\mathbb E^n$ is a
graph $G$ together with a map $p: V \to \mathbb E^n$.
Let $p_i$ denote $p(i)$.
A \emph{motion} of the framework $G(p)$ is a continuous family of
functions $p(t): V \rightarrow \mathbb E^n$ with $p(0) = p$ such that for $\{i,
j\} \in E$, $d_{\mathbb E}(p_i(t), p_j(t)) = c_{ij}$, where $c_{ij}$ is a
constant, for all $t$. A framework is \emph{rigid} if all motions are
\emph{trivial}: for each $t$, there is a rigid motion $A_t$ of $\mathbb E^n$,
such that $A_t(p_i) = p_i(t)$, for all $i \in V$.
\subsection{Motivation for first-order rigidity.} \label{motivationE}
Suppose $p(t)$ is a motion of the framework $G(p)$ in $\mathbb E^n$
differentiable at $t = 0$.
Since $d_{\mathbb E}(p_i(t), p_j(t)) = c_{ij}$ for each $\{i,j\} \in E$,
the derivative of $p(t)$ must satisfy
$$(p_i - p_j) \cdot (p'_i(0) - p'_j(0)) = 0,$$
where $x \cdot y$ denotes the Euclidean inner product of the vectors
$x$ and $y$. Since the framework lies in $\mathbb E^n$ during the motion
($p_k(t) \in \mathbb E^n$ for all $k \in V$), $p_k(t)$ satisfies
$\textbf{e} \cdot p_k(t) = 1$ for all $k \in V$. Hence its derivative satisfies
$$\textbf{e} \cdot p'_i(0) = 0$$
for each $i \in V$. This motivates the following definition.
\subsection{First-order rigidity in $\mathbb E^n$.} \label{f.o.m in E}
A \emph{first-order motion}
of the framework $G(p)$ in $\mathbb E^n$ is a map $u: V \to \mathbb R^{n+1}$
satisfying, for each $\{i,j\} \in E$ and $k \in V$,
\begin{equation} \label{equations for Euclidean f.o.m.}
(p_i - p_j) \cdot (u_i - u_j) = 0 \quad\text{and}\quad
\textbf{e} \cdot u_k = 0,
\end{equation}
where $u_i$ denotes $u(i)$.
\vspace{-2em}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.90]{images/figure1.ps}
\caption{$u$ is a first-order motion if $(p_i - p_j) \cdot u_i =
(p_i - p_j) \cdot u_j$ for all edges $\{i,j\}$. That is, the
projection of $u_i$ onto $p_i - p_j$ must equal the projection of
$u_j$ onto $p_i - p_j$.}
\label{figure: first-order motion}
\end{center}
\end{figure}
A \emph{trivial first-order motion}
of $\mathbb E^n$ is a map $u: \mathbb E^n \to \mathbb R^{n+1}$ satisfying
\begin{equation*}
(x - y) \cdot (u(x) - u(y)) = 0 \quad\text{and}\quad
\textbf{e} \cdot u(z) = 0,
\end{equation*}
for all $x$, $y$ and $z$ in $\mathbb E^n$.
%
%
$G(p)$ is \emph{first-order
rigid} in $\mathbb E^n$ if all the first-order motions of the framework
$G(p)$ are restrictions of trivial first-order motions of $\mathbb E^n$.
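Since the conditions in (\ref{equations for Euclidean f.o.m.}) are linear in $u$, the first-order motions form the kernel of a linear system, and first-order rigidity can be tested numerically by a rank computation. The following Python sketch is our own illustration, not part of the theory above: it works directly with the $n$ affine coordinates of $\mathbb E^n$, so that the condition $\textbf{e} \cdot u_k = 0$ is automatic, and it assumes the joints affinely span $\mathbb E^n$, in which case the trivial motions form a space of dimension $n(n+1)/2$.
\begin{verbatim}
import numpy as np

def first_order_motions_dim(p, edges):
    """Dimension of the space of first-order motions of G(p) in
    Euclidean n-space.  p: (v, n) array of joints; edges: pairs (i, j)."""
    v, n = p.shape
    R = np.zeros((len(edges), v * n))
    for row, (i, j) in enumerate(edges):
        d = p[i] - p[j]
        R[row, i*n:(i+1)*n] = d        # (p_i - p_j) . u_i
        R[row, j*n:(j+1)*n] = -d       # -(p_i - p_j) . u_j
    return v * n - np.linalg.matrix_rank(R)

def is_first_order_rigid(p, edges):
    """Valid when the joints affinely span E^n, so that the trivial
    motions have dimension n(n+1)/2."""
    v, n = p.shape
    return first_order_motions_dim(p, edges) == n * (n + 1) // 2

# A triangle in the plane is first-order rigid (kernel dimension 3).
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert is_first_order_rigid(tri, [(0, 1), (1, 2), (0, 2)])
\end{verbatim}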
\subsection{Remark} Any smooth family of rigid motions of $\mathbb E^n$ yields a trivial
first-order motion of a given framework: the isometries restrict to a
motion of the framework whose derivative satisfies the equations in
(\ref{equations for Euclidean f.o.m.}).
\subsection{Remark} First-order rigidity is a good indicator of
rigidity: first-order rigidity implies rigidity, but not conversely.
\section{First-Order Rigidity in $\mathbb S_+^n$}
\subsection{Spherical $n$-Space.} Let $\mathbb S_+^n$ denote the upper hemisphere
of the unit sphere in $\mathbb R^{n+1}$,
\[
\mathbb S_+^n = \{ x \in \mathbb R^{n+1} \mid x \cdot x = 1, \textbf{e} \cdot x > 0 \}.
\]
An $m$-plane of $\mathbb S_+^n$ is the intersection of $\mathbb S_+^n$ with an
$(m+1)$-subspace of $\mathbb R^{n+1}$. The distance between two points
$x, y \in \mathbb S_+^n$ is given by the angle subtended by the vectors
$x$ and $y$, $d_{\mathbb S_+}(x,y) = \text{arccos}(x \cdot y)$.
\subsection{Frameworks and rigidity in $\mathbb S_+^n$}
A \emph{bar-and-joint framework} $G(p)$ in $\mathbb S_+^n$ is a
graph $G$ together with a map $p: V \to \mathbb S_+^n$.
A \emph{motion} of the framework $G(p)$ in $\mathbb S_+^n$ is a continuous family of
functions $p(t): V \rightarrow \mathbb S_+^n$ with $p(0) = p$ such that for $\{i,
j\} \in E$, $d_{\mathbb S_+}(p_i(t), p_j(t)) = c_{ij}$, where $c_{ij}$ is a
constant, for all $t$. A framework is \emph{rigid} if all motions are
\emph{trivial}: for each $t$, there is a rigid motion $A_t$ of $\mathbb S_+^n$,
such that $A_t(p_i) = p_i(t)$, for all $i \in V$.
\subsection{Motivation for first-order rigidity in $\mathbb S_+^n$.}
To extend the definitions of first-order motion and first-order rigidity
to frameworks in $\mathbb S_+^n$, mimic the motivation presented in section
\ref{motivationE}. If $p(t)$ is a motion of a framework $G(p)$ in
$\mathbb S_+^n$, then for all $t$ and $\{i,j\} \in E$,
$$d_{\mathbb S_+}(p_i(t), p_j(t)) = c_{ij},$$
where each $c_{ij}$ is a constant,
and for all $t$ and $k \in V$,
$$p_k(t) \cdot p_k(t) = 1.$$
Equivalently, for all $t$, $\{i,j\} \in E$ and $k \in V$,
$$p_i(t) \cdot p_j(t) = \cos c_{ij},$$
$$p_k(t) \cdot p_k(t) = 1.$$
If the motion $p(t)$ is differentiable at $t=0$, then $p(t)$ must satisfy,
$$p_i \cdot p'_j(0) + p'_i(0) \cdot p_j = 0,$$
$$p_k \cdot p'_k(0) = 0.$$
This leads to the following definition.
\subsection{First-Order Rigidity in $\mathbb S_+^n$.} A \emph{first-order motion}
of the framework $G(p)$ in $\mathbb S_+^n$ is a map $u: V \to \mathbb R^{n+1}$
satisfying, for each $\{i,j\} \in E$ and for each $k \in V$,
\begin{equation} \label{f.o.m in S}
p_i \cdot u_j + p_j \cdot u_i = 0
\quad\text{and}\quad
p_k \cdot u_k = 0.
\end{equation}
A \emph{trivial first-order motion} of $\mathbb S_+^n$ is
a map $u: \mathbb S_+^n \to \mathbb R^{n+1}$ satisfying
\begin{equation*}
x \cdot u(y) + y \cdot u(x) = 0 \quad\text{and}\quad
z \cdot u(z) = 0,
\end{equation*}
for all $x$, $y$ and $z$ in $\mathbb S_+^n$.
%
%
The framework $G(p)$ is
\emph{first-order rigid} in $\mathbb S_+^n$
if all first-order motions of $G(p)$ are restrictions
of trivial first-order motions.
\subsection{Remark} Note that the equations in (\ref{f.o.m in S}) are equivalent
to the following conditions,
$$
(p_i - p_j) \cdot (u_i - u_j) = 0 \quad\text{and}\quad p_k \cdot u_k = 0,
$$
since expanding $(p_i - p_j) \cdot (u_i - u_j)$ and using
$p_i \cdot u_i = p_j \cdot u_j = 0$ leaves exactly
$-(p_i \cdot u_j + p_j \cdot u_i)$. These conditions are similar to the
equations defining first-order rigidity in $\mathbb E^n$.
\subsection{Remark} If $G(p)$ is a bar-and-joint framework in $\mathbb S_+^n$, then
the graph obtained from $G$ by adjoining a new vertex with edges incident
with all vertices of $G$, together with the map
$\widehat p: V \cup \{v+1\} \to \mathbb R^{n+1}$ given by
$$ \widehat p(i) = \begin{cases}
p(i) & \text{if } i \neq v + 1 \\
0 & \text{if } i = v+1 \\
\end{cases},$$
is first-order rigid as a Euclidean framework in $\mathbb R^{n+1}$ iff $G(p)$ is
first-order rigid in $\mathbb S_+^{n}$. That is, frameworks in $\mathbb S_+^n$ can be
modeled by the cone on the same framework in Euclidean $(n+1)$-space.
\section{Equivalence of First-Order Rigidity in $\mathbb S_+^n$ and $\mathbb E^n$.}
\label{equivalence S -> E}
This section presents two maps, a map carrying a framework $G(p)$ in
$\mathbb S_+^n$ into a framework $G(q)$ in $\mathbb E^n$, and a map carrying the
first-order
motions of $G(p)$ into first-order motions of $G(q)$. The latter
map carries trivial
first-order motions of $\mathbb S_+^n$ to trivial first-order motions
of $\mathbb E^n$, yielding the
result $G(p)$ is first-order rigid iff $G(q)$ is first-order rigid.
\subsection{Mapping frameworks and first-order motions}
\label{mapping motions}
If $G(p)$ is a framework in $\mathbb S_+^n$,
then $G(\psi \circ p)$ is a framework in $\mathbb E^n$, where
$\psi : \mathbb S_+^n \rightarrow \mathbb E^n$ is given by $\psi(x) = x/(\textbf{e} \cdot x)$.
The inverse of $\psi$ is given by
$\psi^{-1}(x) = x / \sqrt{ x \cdot x }$.
\begin{figure}[htb]
\begin{center}
\includegraphics*{images/figure2.ps}
\caption{Mapping first-order motions of a framework in $\mathbb S^n_+$
to first-order motions of a framework in $\mathbb E^n$.}
\label{figure:sphere to plane}
\end{center}
\end{figure}
If $u$ is a first-order motion of the framework $G(p)$ in $\mathbb S_+^n$, let
$\varphi$ denote the map
$$ \varphi : u_i \mapsto
\frac{1}{\textbf{e} \cdot p_i} \left(u_i
- (u_i \cdot \textbf{e}) \textbf{e} \right).
$$
If $G(q)$ is a framework
in $\mathbb E^n$ with first-order motion $v$, then $\varphi^{-1}$ is given
by
$$\varphi^{-1}: v_i \mapsto
\frac{1}{\sqrt{q_i \cdot q_i}} \left(
v_i - (v_i \cdot q_i) \textbf{e}
\right).
$$
Observe that $\varphi$ and $\varphi^{-1}$ map into the appropriate
tangent spaces: $\psi^{-1}(q_i) \cdot \varphi^{-1}(v_i) = 0$
and $\varphi(u_i) \cdot \textbf{e} = 0.$
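The theorem in the next subsection asserts that $\varphi$ carries first-order motions to first-order motions. The following small numerical check -- our own construction, for a single bar on $\mathbb S_+^2$ -- assigns tangent velocities at the two joints, corrects them to satisfy (\ref{f.o.m in S}), and verifies the Euclidean condition after projecting with $\psi$ and $\varphi$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def psi(x):                      # gnomonic projection, x / (e . x)
    return x / x[-1]

def phi(u, p):                   # the map on first-order motions
    e = np.zeros_like(u); e[-1] = 1.0
    return (u - u[-1] * e) / p[-1]

# Two (generic, non-parallel) joints on the upper hemisphere of S^2.
p = rng.normal(size=(2, 3)); p[:, -1] = np.abs(p[:, -1])
p /= np.linalg.norm(p, axis=1, keepdims=True)

# Tangent vectors (p_i . u_i = 0), then correct u_1 so that
# p_0 . u_1 + p_1 . u_0 = 0 holds for the single edge {0, 1}.
u = rng.normal(size=(2, 3))
u -= np.sum(u * p, axis=1, keepdims=True) * p
c = p[0] @ p[1]
w = (p[0] - c * p[1]) / (1.0 - c**2)   # tangent at p_1 with p_0 . w = 1
u[1] -= (p[0] @ u[1] + p[1] @ u[0]) * w

# The projected velocities satisfy the Euclidean first-order condition.
q0, q1 = psi(p[0]), psi(p[1])
v0, v1 = phi(u[0], p[0]), phi(u[1], p[1])
print((q0 - q1) @ (v0 - v1))           # numerically zero
\end{verbatim}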
\subsection{Theorem} \label{theoremSE}
$u$ is a first-order motion of the framework $G(p)$ in $\mathbb S_+^n$ iff
$\varphi \circ u$ is a first-order motion of the framework $G(\psi \circ p)$
in $\mathbb E^n$. Moreover, $u$ is a trivial first-order motion iff
$\varphi \circ u \circ \psi^{-1}$ is a trivial first-order motion.
\medskip\noindent\emph{Proof}. \ Note that
\begin{align} \label{expansion}
(\psi(p_i) - \psi(p_j)) \cdot (\varphi(u_i) - \varphi(u_j))
=
\frac{p_i \cdot u_i}{(\textbf{e} \cdot p_i)^2}
- \frac{p_i \cdot u_j + p_j \cdot u_i}{(\textbf{e} \cdot p_i)(\textbf{e} \cdot p_j)}
+ \frac{p_j \cdot u_j}{(\textbf{e} \cdot p_j)^2}.
\end{align}
If $u$ is a first-order motion of $G(p)$, then $u_i \cdot p_i = 0$
for all $i \in V$, and $p_i \cdot u_j + p_j \cdot u_i = 0$
for all $\{i,j\} \in E$. By (\ref{expansion}),
$(\psi(p_i) - \psi(p_j)) \cdot (\varphi(u_i) - \varphi(u_j)) = 0$
for all $\{i,j\} \in E$.
The definition of $\varphi$ ensures that $\varphi(u_i) \cdot \textbf{e} = 0.$
Therefore, $\varphi \circ u$ is a first-order motion of $G(\psi \circ p)$.
Conversely,
suppose $\varphi \circ u$ is a first-order motion of $G(\psi \circ p)$.
Then for all $\{i,j\} \in E$, $(\psi(p_i) - \psi(p_j)) \cdot (\varphi(u_i) - \varphi(u_j)) = 0$.
The observation at the end of section \ref{mapping motions} gives that
$p_i \cdot u_i = \psi^{-1}(\psi(p_i)) \cdot \varphi^{-1}(\varphi(u_i)) = 0$
for all $i \in V$. Equation (\ref{expansion}) reduces to
$p_i \cdot u_j + p_j \cdot u_i = 0.$
So $u$ is a first-order motion of $G(p)$.
Suppose $u$ is a trivial first-order motion. Then
$x \cdot u(x) = 0$ for all $x \in \mathbb S_+^n$ and
$x \cdot u(y) + y \cdot u(x) = 0$ for all
$x, y \in \mathbb S_+^n$. Let $v: \mathbb E^n \to \mathbb R^{n+1}$
denote the composition $\varphi \circ u \circ \psi^{-1}$. If $\widehat x, \widehat y
\in \mathbb E^n$ with $x$ denoting $\psi^{-1}(\widehat x)$ and $y$ denoting
$\psi^{-1}(\widehat y)$, then (\ref{expansion}) gives
$$
(\widehat x - \widehat y) \cdot (v(\widehat x) - v(\widehat y)) =
\frac{
x \cdot u(x)}{(\textbf{e} \cdot x)^2}
- \frac{x \cdot u(y) + y \cdot u(x)}
{(\textbf{e} \cdot x)(\textbf{e} \cdot y)}
+ \frac{y \cdot u(y)}{(\textbf{e} \cdot y)^2} = 0.
$$
So $v$ is a trivial first-order motion. The converse follows similarly.
\medskip\noindent\textbf{Corollary. } $G(p)$ is first-order rigid in $\mathbb S_+^n$ iff
$G(\psi \circ p)$ is first-order rigid in $\mathbb E^n$.
\subsection{Remark} $\mathbb S_+^n$ versus $\mathbb S^n$: Given a discrete
framework, there exists a rotation of the $n$-sphere such that no vertex
of the framework lies on the equator of the sphere. Therefore, we need
not restrict our frameworks to a hemisphere.
\section{Equivalence of First-Order Rigidity in Other Geometries.}
\label{equivalence in other geometries}
\subsection{Geometries.} For
$x$, $y \in \mathbb R^{n+1}$, let $\langle x, y \rangle_k$ denote
the function
$$
\langle x, y \rangle_k = x_1y_1 + \cdots + x_{n-k+1}y_{n-k+1} -
x_{n-k+2}y_{n-k+2} - \cdots - x_{n+1}y_{n+1},
$$
and let $X_{c,k}^n$ denote the set,
$$
X_{c,k}^n = \{ x \in \mathbb R^{n+1} \mid \langle x, x \rangle_k = c, x_{n+1} > 0 \},
$$
for some constant $c \neq 0$ and $k \in \mathbb N$. We write $X^n$ to simplify
notation, if $c$ and $k$ are understood. If $k=1$ and $c=-1$, then $X^n$ is
\emph{hyperbolic space}, $\mathbb H^n$. If $k=1$ and $c=1$, then $X^n$ is
\emph{exterior hyperbolic space}, $\mathbb D^n$. Spherical space $\mathbb S_+^n$
is the case $k=0$, $c=1$. Note that $\mathbb E^n \neq X^n$ for
any choice of $c$ and $k$.
\subsection{Remark} In more generality we can replace $\langle x, y \rangle_k$
with
$$
\langle x, y \rangle = a_1x_1y_1 + \cdots + a_{n+1}x_{n+1}y_{n+1},
$$
where $a_i \neq 0$ for all $i$, with the exception of Euclidean space:
$a_1 = a_2 = \cdots = a_n = 1$ and $a_{n+1} = 0$.
\subsection{First-order rigidity in $X^n$}
\label{first-order rigidity in X}
A metric $d_X$ can be placed on $X^n$ so that $d_X(x,y)$ is a function
of $\langle x, y \rangle_k$. A sufficient condition for the distance $d_X(x,y)$
to remain constant is that $\langle x, y \rangle_k$ remain constant.
Therefore, the same analysis motivates the following extensions of the
definitions of first-order rigidity to $X^n$.
A \emph{bar-and-joint framework} $G(p)$ in $X^n$
is a graph $G$ together with a map $p: V \rightarrow X^n$.
A \emph{first-order motion} of the
framework $G(p)$ in $X^n$ is a map $u: V \rightarrow \mathbb R^{n+1}$ satisfying
for each $\{i,j\} \in E$,
\begin{equation} \label{eqn_oneX}
\langle p_i, u_j \rangle_k + \langle p_j, u_i \rangle_k = 0,
\end{equation}
and for each $i \in V$,
\begin{equation} \label{eqn_twoX}
\langle p_i, u_i \rangle_k = 0.
\end{equation}
A \emph{trivial first-order motion} of $X^n$ is a map $u: X^n \to \mathbb R^{n+1}$
satisfying
$$
\langle x, u(y) \rangle_k + \langle y, u(x) \rangle_k = 0
\quad\text{and}\quad
\langle z, u(z) \rangle_k = 0
$$
for all $x,y,z \in X^n$. $G(p)$ is \emph{first-order rigid} in $X^n$ if
all first-order motions of $G(p)$ are the restrictions of trivial
first-order motions of $X^n$.
\subsection{$X^n$ and $\mathbb E^n$.}
In section \ref{equivalence S -> E} we established the
equivalence between first-order rigidity in $\mathbb E^n$ and
first-order rigidity in $\mathbb S_+^n$. We need only demonstrate the
equivalence holds between the first-order rigidity theories of
$X^n$ and $\mathbb S_+^n$.
\subsection{$X^n$ and $\mathbb S_+^n$}
Let $\psi_{\mathbb S_+}: X^n \rightarrow \mathbb S_+^n$ denote the map $x \mapsto x /
\sqrt{x \cdot x}$, and let $\varphi_{\mathbb S_+}$ denote the map
$$
\varphi_{\mathbb S_+}: u_i \mapsto \frac{J_k(u_i)}{\sqrt{p_i \cdot p_i}},
$$
where $J_k(x) = (x_1, \cdots, x_{n-k+1}, -x_{n-k+2}, \cdots, -x_{n+1})$.
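Note that $J_k$ merely reverses the sign of the last $k$ coordinates, so
$J_k \circ J_k$ is the identity and, expanding in coordinates,
$x \cdot J_k(y) = \langle x, y \rangle_k$; this identity is the computational
engine of the proof below.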
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.40]{images/figure3.ps}
\caption{Mapping a bar-and-joint framework from the spherical plane
$\mathbb S_+^2$ into the hyperbolic plane $\mathbb H^2$.}
\label{figure: transfering frameworks}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics{images/figure4.ps}
\caption{Mapping first-order motions of a framework in $\mathbb S^n_+$
to first-order motions of a framework in $\mathbb H^n$.}
\label{figure: hyperboloid to sphere}
\end{center}
\end{figure}
\subsection{Theorem} $G(p)$ is first-order rigid in $X^n$ iff
$G(\psi_{\mathbb S_+} \circ p)$ is first-order rigid in $\mathbb S_+^n$.
\medskip\emph{Pf}. \ Since $\langle x, y \rangle_k = x \cdot J_k(y)$, we have
\begin{align*}
\left(\psi_{\mathbb S_+}(p_i) - \psi_{\mathbb S_+}(p_j)\right) \cdot
\left(\varphi_{\mathbb S_+}(u_i) - \varphi_{\mathbb S_+}(u_j)\right)
= \frac{\langle p_i, u_i \rangle_k}{p_i \cdot p_i} -
\frac{\langle p_i, u_j \rangle_k + \langle p_j, u_i \rangle_k}
{\sqrt{p_i \cdot p_i}\sqrt{p_j \cdot p_j}} +
\frac{\langle p_j, u_j \rangle_k}{p_j \cdot p_j}.
\end{align*}
As in the proof of Theorem \ref{theoremSE}, the above equation and the
definitions of $\psi_{\mathbb S_+}$ and $\varphi_{\mathbb S_+}$ give that
$\varphi_{\mathbb S_+} \circ u$ is a first-order motion of $G(\psi_{\mathbb S_+}
\circ p)$ iff $u$ is a first-order motion of $G(p)$.
It is clear that trivial motions of $\mathbb S_+^n$ map to trivial motions
of $X^n$. However, a trivial motion of $X^n$ maps onto a ``trivial
motion'' of a proper subset of $\mathbb S_+^n$. The following fact finishes
the proof.
\medskip\noindent\textbf{Fact.} Given a first-order motion $u$ of
$K_{n+1}$, the complete graph on $n+1$ vertices in $\mathbb E^n$, there
exists a unique trivial first-order motion of $\mathbb E^n$ extending $u$.
\medskip
(This result and the equivalence of the first-order theories of $\mathbb
E^n$ and $\mathbb S_+^n$ give the corresponding result for $\mathbb S_+^n$,
which was needed to finish the proof of the preceding theorem.)
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=1.25]{images/figure5.ps}
\caption{Mapping first-order motions of a framework in $\mathbb S^n_+$
to first-order motions of a framework in $\mathbb E^n$.}
\label{figure: hyperbolic to plane}
\end{center}
\end{figure}
\subsection{Remark} There is no obstruction to defining a framework with
vertices in $\mathbb H^n$ and $\mathbb D^n$: the equations defining first-order
motions provide formal constraints between these vertices, although the
geometric interpretations of these constraints may not be obvious. In
general, the theorem holds for frameworks with vertices on the surface
$\langle x, x \rangle_k = \pm 1$, but not with vertices on $\langle x, x \rangle_k = 0$.
\section{The Rigidity Matrix}
\label{rigidity matrix}
\subsection{Projective models of $X^n$} The projective model of $X^n$ is
the subset of $\mathbb E^n$ obtained by projecting from the origin
the points of $X^n$ onto $\mathbb E^n$,
$$
\left\{ \frac{1}{\textbf{e} \cdot x}\ x \ \Big| \
x \in X^n \right\} \subset \mathbb E^n.
$$
The projective model of hyperbolic $n$-space $\mathbb H^n$ is the interior
of the unit $n$-ball $B^n$ of $\mathbb E^n$ and the projective model of
exterior hyperbolic $n$-space $\mathbb D^n$ is the exterior of $B^n$. The
unit $(n-1)$-sphere $S^{n-1}$ is the \emph{absolute}, the points at
infinity of hyperbolic geometry. Spherical $n$-space is modeled
projectively by $\mathbb E^n$.
Since we are now restricting our attention to points in $\mathbb E^n$, we
identify $\mathbb E^n$ with $\mathbb R^n$ and write $PX^n$ to denote the
projective model of $X^n$ as a subset of $\mathbb R^n$. Distance in $PX^n$
is calculated by normalizing the points
into $X^n$ and applying the definition of distance in $X^n$. For example,
the distance between points $x$ and $y$ in $P\mathbb S_+^n$ (so $x,y \in \mathbb R^n$)
is
$$
d_{P\mathbb S_+}(x,y) = \arccos \left(\frac{ 1 + x \cdot y }
{\sqrt{ 1 + x \cdot x} \sqrt{ 1 + y \cdot y}}\right),
$$
and for points $x$ and $y$ in $P\mathbb H^n$,
$$
d_{P\mathbb H}(x,y) = \text{arccosh} \left(\frac{ 1 - x \cdot y }
{\sqrt{ 1 - x \cdot x} \sqrt{ 1 - y \cdot y}}\right).
$$
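As a quick check of these formulas, take $x$ to be the origin of $\mathbb R^n$.
Then
$$
d_{P\mathbb S_+}(0,y) = \arccos \frac{1}{\sqrt{1 + y \cdot y}} = \arctan |y|,
$$
so all of $\mathbb R^n$ lies within spherical distance $\pi/2$ of the origin,
while
$$
d_{P\mathbb H}(0,y) = \text{arccosh} \frac{1}{\sqrt{1 - y \cdot y}}
$$
is finite precisely when $|y| < 1$, that is, inside the absolute.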
\subsection{The rigidity matrix of a framework}
A first-order motion $u: V \to \mathbb R^n$ of the framework $G(p)$ in $\mathbb R^n$,
satisfies
\begin{equation*}\label{equation euclidean fom}
(p_i - p_j) \cdot (u_i - u_j) = 0.
\end{equation*}
This system of homogeneous linear equations, indexed by the edges of $G$,
induces a linear transformation with matrix $R_{\mathbb E}(G,p)$, called
the \emph{rigidity matrix} of $G(p)$,
$$
R_{\mathbb E}(G,p) = \bordermatrix{
& & i & \cdots & j & & \cr
& & \vdots & & \vdots & & \cr
\{i,j\} & \cdots & p_i - p_j & \cdots & p_j - p_i & \cdots & \cr
& & \vdots & & \vdots & & \cr}.
$$
The kernel of $R_{\mathbb E}(G,p)$ is precisely the space of first-order motions
of $G(p)$.
A first-order motion $u: V \to \mathbb R^n$ of the framework $G(p)$ in $P\mathbb H^n$,
$P\mathbb D^n$ or $P\mathbb S^n$ satisfies
\begin{equation*}\label{equation non-euclidean fom}
k_{ij} \cdot u_i + k_{ji} \cdot u_j = 0,
\end{equation*}
where $k_{ij}$ is
$$
k_{ij} =
\begin{cases}
\left(\frac{1 - p_i \cdot p_j}{1 - p_i \cdot p_i}\right) p_i - p_j,
& \text{ for } P\mathbb H^n \text{ or } P\mathbb D^n \\
\left(\frac{1 + p_i \cdot p_j}{1 + p_i \cdot p_i}\right) p_i - p_j,
& \text{ for } P\mathbb S_+^n
\end{cases}.
$$
The matrix of the linear transformation induced by this system of linear
equations is the \emph{rigidity matrix} $R_{X}(G,p)$ of $G(p)$,
$$
R_{X}(G,p) = \bordermatrix{
& & i & \cdots & j & & \cr
& & \vdots & & \vdots & & \cr
\{i,j\} & \cdots & k_{ij} & \cdots & k_{ji} & \cdots & \cr
& & \vdots & & \vdots & & \cr}.
$$
Note that $k_{ij}$ depends on $X$.
\begin{figure}[htb]
\begin{center}
\includegraphics{images/figure6.ps}
\caption{A visual summary of the equivalence of first-order rigidity
in the projective models of hyperbolic geometry $H$,
spherical geometry $S$ and Euclidean geometry $E$. Here $T_{SE}$
denotes the linear transformation $T_1(G,p)$ defined in the text,
$T_{ES}$ the inverse of $T_{SE}$.}
\label{figure: summary of theorem}
\end{center}
\end{figure}
\subsection{Transforming rigidity matrices} Let $T_K(G,p)$ denote the matrix
$$
T_K(G,p) =
\left[\begin{array}{cccc}
T_{p_1} & 0 & 0 & 0 \\
0 & T_{p_2} & 0 & 0 \\
0 & 0 & \ddots & 0 \\
0 & 0 & 0 & T_{p_{v}}
\end{array}\right],
$$
where $T_{p_k} = I + K (p_k^{(i)}p_k^{(j)})$ ($I$ is the $n
\times n$ identity matrix and $(p_k^{(i)}p_k^{(j)})$ is the $n \times n$
matrix with $p_k^{(i)}p_k^{(j)}$ as entry $(i,j)$, where $p_k^{(i)}$ is
the $i$-th component of $p_k$). For example, for
$n=3$ and $p_{k} = (x_1, x_2, x_3)$,
$$
T_{p_k} =
\left[\begin{array}{ccc}
1 + Kx_1^2 & Kx_1x_2 & Kx_1x_3 \\
Kx_1x_2 & 1 + Kx_2^2 & Kx_2x_3 \\
Kx_1x_3 & Kx_2x_3 & 1 + Kx_3^2
\end{array}\right].
$$
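Since $T_{p_k} = I + K\, p_k p_k^{T}$ is a rank-one perturbation of the
identity, its eigenvalues are $1 + K(p_k \cdot p_k)$, with eigenvector $p_k$,
and $1$ with multiplicity $n-1$ on the orthogonal complement of $p_k$. In
particular $\det(T_{p_k}) = 1 + K(p_k \cdot p_k)$, the fact used in parts (2)
and (3) of the following theorem.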
\medskip\noindent\textbf{Theorem. } Let $G(p)$ be a framework with $p \in \mathbb R^n$.
Then
\begin{enumerate}
\item
$T_K(G,p)$ satisfies
\begin{align*}
R_{P\mathbb H}(G,p) \times T_{-1}(G,p) = R_{\mathbb E}(G,p)
\quad\text{and}\quad
R_{P\mathbb S_+}(G,p) \times T_{1}(G,p) = R_{\mathbb E}(G,p);
\end{align*}
\item
$G(p)$ is first-order rigid in $P\mathbb S_+^n$ iff
$G(p)$ is first-order rigid in $P\mathbb E^n;$
\item
$G(p)$ is first-order rigid in $P\mathbb H^n \cup P\mathbb D^n$ iff
$G(p)$ is first-order rigid in $P\mathbb E^n$ and $p_i \cdot p_i \neq 1$
for all $i \in V$ (no vertex is on the absolute).
\end{enumerate}
\medskip\emph{Pf}. \ (1) Since $T_{p_i}$ multiplies only the columns corresponding to vertex
$i$, we need only verify $k_{ij} \times T_{p_i} = p_i - p_j$.
This is a straightforward calculation,
\begin{eqnarray*}
& &
k_{ij} \times \left(\text{column } \ell \text{ of } T_{p_i} \right) \\
& = &
\left(\frac{1 + K (p_i \cdot p_j) }{1 + K (p_i \cdot p_i)}
p_i - p_j\right) \cdot \left(\textbf{e}_{\ell} + K p_i^{(\ell)}
p_i\right)
\\
& = &
\left(\frac{1 + K (p_i \cdot p_j) }{1 + K (p_i \cdot p_i)}\right)
\left( p_i \cdot \textbf{e}_{\ell} + K p_i^{(\ell)} (p_i \cdot p_i)\right)
- \left( p_j^{(\ell)} + K p_i^{(\ell)} (p_j \cdot p_i) \right)
\\
& = &
\left(\frac{1 + K (p_i \cdot p_j) }{1 + K (p_i \cdot p_i)}\right)
\left( 1 + K (p_i \cdot p_i) \right) p_i^{(\ell)}
- \left( p_j^{(\ell)} + K p_i^{(\ell)} (p_j \cdot p_i) \right)
\\
& = &
\left( {1 + K (p_i \cdot p_j) } \right) p_i^{(\ell)}
- \left( p_j^{(\ell)} + K p_i^{(\ell)} (p_j \cdot p_i) \right)
\\
& = &
p_i^{(\ell)} + K p_i^{(\ell)} (p_i \cdot p_j)
- p_j^{(\ell)} - K p_i^{(\ell)} (p_j \cdot p_i)
\\
& = &
p_i^{(\ell)} - p_j^{(\ell)},
\end{eqnarray*}
which is entry $\ell$ of $p_i - p_j$.
(2), (3): Since the determinant of $T_K(G,p)$ is the product
$\prod_{i=1}^v \det(T_{p_i})$ and
$$
\det(T_{p_i}) = 1 + K(p_i \cdot p_i),
$$
the dimension of the vector space of first-order motions of $G(p)$
is the same in each geometry iff $1 + K(p_i \cdot p_i) \neq 0$ for
all $i \in V$.
\subsection{Remark} It is well-known that the rank of the rigidity
matrix, and thus first-order rigidity, of a framework in $\mathbb E^n$ is
invariant under projective transformations of $\mathbb E^n$. Due to the
equivalence of first-order theories, the same is true of frameworks in
$X^n$. (In fact, there exists an underlying projective theory.)
Intuitively at least, this projective invariance suggests the
equivalences presented in this paper since all the geometries discussed
can be obtained from projective geometry by choosing an appropriate set
of transformations.
\begin{figure}[htb]
\begin{center}
\includegraphics[trim=100 275 100 275]{images/figure7.ps}
\caption{A visual summary of the
underlying projective theory: hyperbolic space $H$, Euclidean space $E$
and spherical space $S$ can be realized as \emph{subgeometries} of
projective geometry.}
\label{figure: summary of projective theory}
\end{center}
\end{figure}
\section{The First-Order Uniqueness Theorems of Andreev and Cauchy-Dehn}
\label{andreev}
An immediate consequence of the equivalence of these first-order rigidity
theories is the ability to transfer results between the theories.
\subsection{The Cauchy-Dehn Theorem}
The Cauchy-Dehn theorem for polytopes in $\mathbb E^n$ states that a convex,
triangulated polytope in $\mathbb E^n$, $n \geq 3$, is first-order rigid.
Before the generalization of this theorem can be stated, convexity in $X^n$
needs to be defined. A set $S \subset X^n$ is \emph{convex} if, for any
line $L$ of $X^n$, $L \cap S$ is connected. Therefore, $S \subset X^n$
is convex iff $\psi_{\mathbb E}(S) \subset \mathbb E^n$ is convex.
\medskip\noindent\textbf{Theorem. } (Cauchy-Dehn)
A convex, triangulated polytope $P$ in $X^n$, $n \geq 3$,
is first-order rigid.
\subsection{A first-order version of Andreev's uniqueness theorem}
If $p$ denotes a point of $\mathbb D^n$, then the set of points $x$ in
$\mathbb R^{n+1}$ satisfying $\langle p, x \rangle_{1} = 0$ (orthogonal in the
hyperbolic sense) defines a unique hyperplane of $\mathbb R^{n+1}$ through
the origin. Therefore, to each point of $p$, there corresponds a
unique hyperplane of $\mathbb H^n$,
$$
P = \{ x \in \mathbb H^n \mid \langle p, x \rangle_1 = 0 \},
$$
and conversely.
If $q$ is another point of $\mathbb D^n$ with $Q$ the corresponding hyperplane
of $\mathbb H^n$, the angle of intersection of the hyperplanes $P$ and $Q$
is defined to be $\arccos( \langle p, q \rangle_1 )$. So
equations~(\ref{eqn_oneX}) and~(\ref{eqn_twoX})
defining a first-order motion $u$ of a framework $G(p)$
in $\mathbb D^n$,
$$
\langle p_i, u_j \rangle_k + \langle p_j, u_i \rangle_k = 0
\quad\text{and}\quad
\langle p_i, u_i \rangle_k = 0,
$$
are precisely the conditions defining a ``first-order motion''
of a collection of planes under angle constraints (a bar-and-joint
framework is merely a collection of points under distance constraints).
Polyhedra with fixed dihedral angles are examples of such objects.
Under this point-plane correspondence of $\mathbb D^n$ and $\mathbb H^n$, the
Cauchy-Dehn theorem for $\mathbb D^n$ gives a first-order version of Andreev's
uniqueness theorem. Indeed, under polarity a simple, convex polytope in $\mathbb H^n$
corresponds to a triangulated, convex polytope in $\mathbb D^n$. We use \emph{stiff} to
denote the analogous definition of first-order rigid.
\medskip\noindent\textbf{Theorem. } (Andreev)
If $M$ is a simple, convex polytope in $\mathbb H^n$, $n \geq 3$,
then $M$ is stiff.
\subsection{Remark} The usual hypothesis of Andreev's theorem requires the
polytope $M$ to have dihedral angles not exceeding $\pi/2$. This
supposition implies $M$ is simple.
\subsection{Remark} The point-plane correspondence described above is
known as polarity. There is a version of this result for the sphere that
requires a more detailed discussion of polarity on the sphere.
\section{Introduction}
\label{sec:intro}
The development of devices employing protein transport through complex\
geometries is a critical problem with a great deal left to be\
understood. An accurate simulation model of proteins that allows\
efficient simulations of large systems ($>50$~nm) would provide a\
platform for computational studies to obtain a deeper understanding\
of methods for protein detection~\cite{davenport2012,dekker2013},\
analysis~\cite{fologea2007}, drug delivery~\cite{lesinski2005}, and\
molecular separation~\cite{safty2011}, as well as processes on long\
time scales, such as protein oligomerization and the effects of\
polymer-protein conjugation on transport.
The motion of colloid particles is governed by surface\
interactions, volume exclusions, hydrodynamic interactions, and\
electrostatics~\cite{russel1989}. In confined spaces, such as within nanoporous\
media, there is a large surface area to volume ratio, and all\
aforementioned interactions become enhanced due to close proximities\
of immersed particles and boundaries. These enhanced interactions lead to\
complexity in the transport properties of colloids such as\
proteins. For complex geometries the mathematical description of a\
system becomes a major hurdle to understanding. A modern review of\
approximations has been published by Dechadilok and Deen~\cite{deen2006}.
Modeling is an effective way to understand the influence of\
environmental factors on transport, and here we choose molecular\
dynamics (MD) as our method. MD studies of protein diffusion\
have been limited mainly to the cytoplasm\
~\cite{frembgen-kesner2009,ridgway2008,ando2010,dlugosz2011,mereghetti2012}\
or lipid membranes~\cite{javanainen2013,goose2013}. A limited number\
of studies simulating protein diffusion for separation and detection\
exist. In order to capture events that require long time scales with\
all-atom MD, such as nanopore translocation, steered trajectories are\
often required~\cite{kannam2014}. An attempt to capture the rotational\
diffusion tensor through autocorrelation measurements proved to require\
trajectories outside the timescales available to all-atom methods\
~\cite{wong2008}. Liang et al.~\cite{liang2012} simulated\
coarse-grained MARTINI models~\cite{monticelli2008} of human serum\
albumin and bovine hemoglobin proteins sorbing to ion-exchange\
chromatographic media, obtaining 4.80~$\mu$s of simulated time.\
Zavadlav et al.~\cite{zavadlav2014} demonstrated a protein solvated\
in water described by adaptive resolution, in which water was modeled\
as individual molecules close to the protein and as four molecules per\
interaction site at longer distances. More extreme coarse-graining\
models include single-bead models of proteins~\cite{lee2012,tringe2015chemphys},\
$\alpha$-helix models~\cite{javidpour2009,yang2010}, or the use of\
combinations of continuum methods and explicit MD particles~\cite{jubery2012}.
This study seeks a model with a consistent set of input\
parameters that accurately reproduces the diffusive transport of\
proteins (sans hydrophobic and electrostatic interactions).\
An accurate and efficient\
non-steered MD model of proteins that\
includes all major interactions is the eventual goal.\
While atomic resolution is a desirable\
goal in the modeling of soft materials in complex geometries,\
accessible time and length scales limit the efficient implementation of\
a sufficiently detailed model. Therefore, coarse-graining is required\
to obtain quantitative descriptions.
We begin with a spherical model and subsequently augment the validation\
using ellipsoidal geometries. Modeling proteins as ellipsoids is a common\
tactic in experimental~\cite{roosen-runge2011,yaroslav2006}\
and modeling~\cite{kovalenko2006,schluttig2010} contexts.
The intended use of the presented coarse-grained model is to increase\
the understanding in mass-transport processes in proteins. As such,\
measurement of diffusive transport is the method by which we validate\
the model. We discuss diffusion in the context of translation and\
rotation. The average properties of the Brownian motion of a particle\
are described by the diffusivity, the relationship between the driving\
force of thermal energy and the frictional resistance to motion~\cite{einstein1905}.\
Diffusion is often measured experimentally using, for example,\
particle tracking~\cite{han2006}, fluorescence correlation\
spectroscopy~\cite{magde1972}, scattering~\cite{roosen-runge2011}, or NMR\
relaxation~\cite{yaroslav2006}.
Without knowledge of the orientational\
configuration of the body, diffusive motion appears isotropic in the\
static laboratory coordinate frame (lab-frame). The lab-frame\
diffusivity is characterized by a scalar, $D$. Diffusivity can be more\
completely described as a tensor $\overline{\overline{D}}$. The
diffusivity tensor is a symmetric 3-by-3 matrix, and in the coordinate\
frame defined by the principal axes of the particle body, also known\
as the body-frame, the diffusivity tensor becomes diagonal:
\begin{equation}
\label{eq:diffusion_tensor}
\overline{\overline{D}} = \overline{\overline{\mu}}k_{B}T
\end{equation}
\begin{equation}
\label{eq:diffusion_tensor_diagonal}
\begin{pmatrix}
D_{x^{b}} & 0 & 0 \\[0.3em]
0 & D_{y^{b}} & 0 \\[0.3em]
0 & 0 & D_{z^{b}} \\[0.3em]
\end{pmatrix} =
\begin{pmatrix}
1/\xi_{col,x^{b}} & 0 & 0 \\[0.3em]
0 & 1/\xi_{col,y^{b}} & 0 \\[0.3em]
0 & 0 & 1/\xi_{col,z^{b}} \\[0.3em]
\end{pmatrix} k_{B}T
\end{equation}
\noindent
where $\overline{\overline{\mu}}$ is the mobility tensor and\
$\xi_{col,x^{b}}$, $\xi_{col,y^{b}}$, $\xi_{col,z^{b}}$ are the resistance\
values in the body-frame that correspond to the principal diffusivity\
values $D_{x^{b}}$, $D_{y^{b}}$, $D_{z^{b}}$. For spherical particles,\
all diagonal elements are equal and the tensor collapses to a scalar value.
Many processes on the mass-transport timescale of proteins rely on\
anisotropic diffusion. The timescale for crossover from anisotropic\
to isotropic diffusion depend on the rotational dynamics of the\
particle~\cite{han2006}. For proteins, this timescale is on the order\
of microseconds, therefore, the translation of a protein through a\
pore may be dominated by anisotropic diffusion or, in the case of small\
pores, require a body-frame rotation to proceed. Information on anisotropic rotation\
is needed to understand NMR relaxation~\cite{yaroslav2006},\
fluorescence spectroscopy, and protein oligomerization~\cite{kuttner2005}.\
The rotational reorientation needed to enter a small pore may help\
explain the anomalous pore diffusivity of proteins~\cite{ku2004}. Ensuring a\
complete description of the correct diffusion tensor for particles in our model is\
therefore critical.
We validate the anisotropic motion based on the\
calculations of Perrin~\cite{perrin1934,perrin1936} and Happel \&\
Brenner~\cite{happelbrenner1983}. In those works, the authors\
investigated the flow field past an ellipsoid and determined the\
disturbance to flow has the same form as a sphere with a radius\
$R_{eff}$. The relationship between this effective radius $R_{eff}$ and\
the dimensions of the ellipsoid result are the geometric factors that\
we validate our results against in this study.
The motion of a colloid particle in a pore of comparable size is\
affected by the hydrodynamic coupling enforced by no-slip boundary conditions between the particle\
and the wall~\cite{deen2006}. The result is that the resistance to motion will be\
higher. The increased resistance is called enhanced drag and (for a sphere) depends on\
the direction of motion, the ratio of particle size to pore size,\
and the distance to the pore wall. Many protein separation processes involve\
concentration gradient-driven transport through pores. We therefore\
seek to validate the diffusive transport of colloid particles through\
cylindrical pores and reproduce the enhanced drag calculated previously~\cite{higdon1995}.
The present study simulates protein-like colloid particles using the\
raspberry model scheme~\cite{dunweg2004} in combination with a fast\
hydrodynamic solver, the lattice Boltzmann (LB) method~\cite{succi,dunweg2008}, to\
explore the model at the protein scale and to study anisotropic\
transport due to particle aspect ratio and confinement. This contribution\
also sets the foundation for future model development that\
will include electrostatic as well as hydrophobic interactions.
\subsection{Hydrodynamic Interactions}
\label{sec:hydrodynamics}
The straightforward way to model hydrodynamic interactions is\
through the explicit treatment of water. In large systems, most of the\
computational effort is spent on water. Many methods have been devised to\
describe the effect of fluid interactions on particle motion\
without paying the cost of an explicit solvent, such as dissipative\
particle dynamics~\cite{dunweg2003} and Stokesian dynamics~\cite{brady1988}. In order to successfully model hydrodynamic interactions,\
the algorithm must exhibit Galilean invariance and recover\
the Navier-Stokes equations~\cite{succi}.
Originally developed to simulate lattice-gas automata~\cite{mcnamara1988}, the lattice Boltzmann method is a fluid-phase model\
constructed from a discretization of the Boltzmann transport equation\
in space and time. LB reproduces the incompressible Navier-Stokes equation for mass\
and momentum transport at long length and timescales. The relaxation\
of fluid degrees of freedom in liquid systems is much faster than the\
transport of particles, and this separation of timescales allows\
the microscopic details of the fluid to be neglected~\cite{succi}. Compared to explicit\
solvents, LB is computationally efficient.
In LB the fluid ``lives'' as density packets on a three dimensional\
square grid of nodes \textit{aka} the lattice. The fluid follows a\
two step scheme: the streaming step followed by the collision step.\
In the streaming step, fluid moves to neighboring nodes along\
discrete velocity links. In the collision step,\
inbound fluid densities at every node exchange momentum and relax\
towards an equilibrium that represents the Maxwell-Boltzmann velocity\
distribution. The simplest way to model relaxation is the lattice\
Bhatnagar, Gross, and Krook~\cite{LBGK1954} method, where the\
collision operator is simply a $1/\tau$ term, and the fluid relaxes over a single time scale.\
In the modern multi-relaxation time scheme~\cite{lallemand2003}, the\
collision operator includes the hydrodynamic moments built into\
independent modes which improves stability.
The incorporation of immersed particles in LB fluid began with Ladd\
et al.~\cite{ladd1994} and Aidun et al.~\cite{aidun1998}. The\
particles were discretized on the lattice as solid nodes immersed in\
fluid nodes, and the lattice was updated between fluid and solid states as the particles translated\
and rotated through the system. In order to resolve the lubrication\
forces that arise between surfaces in close contact,\
a formalism was developed to explicitly add these\
interactions~\cite{nguyen2002}. Recent examples of the discretized\
particle method for ellipsoids include modeling of red blood\
cell dynamics~\cite{janoschek2010} and of colloids at fluid-fluid interfaces~\cite{gunther2013,davies2014}.
LB was first joined with MD in 1999 by Ahlrichs and\
D\"{u}nweg~\cite{ahlrichs1999} by coupling the fluid to the individual\
monomers of a polymer chain. The force of the fluid is imposed upon\
the MD beads using a modified Langevin equation
\begin{equation}
\label{eq:latticeboltzmannfriction}
\overline{F}_{fl}=- \xi_{bead} \left [\overline{v}-\overline{u}(\overline{r},t) \right ] + \overline{f}_s,
\end{equation}
\noindent
where $\xi_{bead}$ is the resistance factor, $\overline{v}$ the\
particle velocity, $\overline{u}(\overline{r},t)$ the fluid velocity\
interpolated at the given position $\overline{r}$ between the nearest\
grid points, and $\overline{f}_s$ a noise term that follows the\
fluctuation dissipation theorem. The bead resistance $\xi_{bead}$\
is a tunable parameter corresponding to friction.
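As an illustrative sketch (not the actual ESPResSo implementation), the
coupling force of Equation~\ref{eq:latticeboltzmannfriction} can be written
in a few lines of Python, assuming a periodic velocity field \texttt{u\_grid}
stored on the lattice, trilinear interpolation to the bead position, and a
Gaussian noise term obeying the fluctuation-dissipation theorem:
\begin{verbatim}
import numpy as np

def interp_velocity(u_grid, r, a_grid):
    """Trilinear interpolation of u_grid[ix, iy, iz, 0:3] at the
    off-lattice position r, with periodic boundaries assumed."""
    n = np.array(u_grid.shape[:3])
    s = r / a_grid
    i0 = np.floor(s).astype(int)
    w1 = s - i0                      # fractional distance to lower node
    w0 = 1.0 - w1
    u = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (w0[0], w1[0])[dx] * (w0[1], w1[1])[dy] \
                    * (w0[2], w1[2])[dz]
                ix, iy, iz = (i0 + (dx, dy, dz)) % n
                u += w * u_grid[ix, iy, iz]
    return u

def coupling_force(xi_bead, v_bead, r_bead, u_grid, a_grid, dt, kT, rng):
    u = interp_velocity(u_grid, r_bead, a_grid)
    drag = -xi_bead * (v_bead - u)                # frictional term
    noise = np.sqrt(2.0 * xi_bead * kT / dt) * rng.standard_normal(3)
    return drag + noise                           # drag plus f_s
\end{verbatim}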
The LB equation can be run in a stochastic or deterministic\
(noise-free) system, depending on the focus of the investigation. LB\
allows hydrodynamic interactions to be included in MD\
simulations at low computational cost. It is inherently parallelizable\
due to the grid representation of fluid populations, and dramatic\
improvement to speed can achieved using GPU\
processing~\cite{lbgpu2012} which is especially suited to mesh systems\
like LB. Boundary conditions are enforced at low cost by assigning\
nodes as ``boundary nodes.'' The populations streaming to boundary nodes\
are reflected in the next step, known as the ``bounce-back'' rule.
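A minimal sketch of streaming plus the full-way bounce-back rule, on a toy
one-dimensional three-velocity lattice (ESPResSo's LB uses a
three-dimensional lattice, but the rule is identical link by link):
\begin{verbatim}
import numpy as np

nx = 16
c = np.array([-1, 0, 1])           # velocity links
opp = np.array([2, 1, 0])          # index of the reversed link
f = np.ones((3, nx))               # populations f_q(x)
solid = np.zeros(nx, dtype=bool)
solid[0] = True                    # a single boundary node

def stream(f):
    # move each population one node along its link (periodic wrap)
    return np.stack([np.roll(f[q], c[q]) for q in range(3)])

def bounce_back(f):
    # on boundary nodes every inbound population is reversed, so it
    # streams straight back where it came from on the next step
    g = f.copy()
    g[:, solid] = f[opp][:, solid]
    return g

f = bounce_back(stream(f))
\end{verbatim}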
\subsection{Colloid particle model}
\label{sec:colloidmodel}
Several models for protein coarse graining currently\
exist, including MARTINI~\cite{monticelli2008},\
shape based coarse graining~\cite{arkhipov2006}, and the models listed previously. A review of methods\
has been given by Baaden~\cite{baaden2013}.
When hydrodynamics is modeled with LB, the raspberry colloid model~\cite{dunweg2004} is\
a practical solution for simulating coarse-grained bodies in MD. A raspberry is a\
rigid body with an outer shell of MD beads spread evenly over a surface. Each surface bead\
represents an Oseen force in the LB field. Integrating singularity forces\
over the surface of a sphere recovers the Stokes force, similarly to boundary element methods.\
This rectifies the difference between point forces and distributed\
surface forces when coupling LB to MD. The surface beads are virtual\
particles with respect to a bead at the center of mass of the colloid.\
This requires the body to be rigid but allows rotational degrees of\
freedom and reduced computational time, since only the center bead is\
integrated in the Verlet scheme. The raspberry was first introduced by\
Lobaskin and D\"{u}nweg~\cite{dunweg2004} as hollow spheres\
and has since been constructed as a filled body in Fischer et al. and de Graff et al.~\cite{fischer2015rasp2layer1st,degraff2015rasp2layerconfined}.\
The body filling beads are also virtualized with respect to the center\
of mass bead. Figure \ref{figure:rasp} shows different geometries of raspberries.
\begin{figure*}
\centering
\begin{tabular}{m{2.9cm} m{5.6cm} m{3.8cm} }
\resizebox{1.0\hsize}{!}{\includegraphics{./images/ellipsoids/sphere_crop2}} &
\resizebox{1.0\hsize}{!}{\includegraphics{./images/prolate_axes_red}} &
\resizebox{1.0\hsize}{!}{\includegraphics{./images/oblate_axes_red}}
\end{tabular}
\caption{Raspberry colloid particles. Left: sphere $(a=b=c)$. Middle: prolate ellipsoid $(a>b=c)$. Right: oblate ellipsoid $(a<b=c)$. Cyan beads are volume-filling sites (shown only in the sphere) and red beads are hydrodynamic sites only. Blue lines indicate the semi-axes $a$, $b$, and $c$. At the center of mass of each raspberry is a black bead with respect to which all other particles in the raspberry are virtualized.}
\label{figure:rasp}
\end{figure*}
\section{Methods}
\label{sec:meths}
\subsection{Raspberry construction}
\label{sec:raspconstruction}
Raspberry colloids are constructed in an MD-style scheme using a combination of two forces.\
The first is a harmonic potential between the surface bead and the\
coordinate on the surface with the shortest distance to the bead. The\
second is a Weeks-Chandler-Andersen \cite{wca1971}\
potential that is applied pairwise between the surface beads. When\
integrated in an MD style scheme, the sum of these forces induces the\
beads to spread out evenly over the surface. The spherical raspberry\
colloid particles in this study were constructed using ESPResSo \cite{espresso2013}\
in the same manner as de Graff et al.~\cite{degraff2015rasp2layerconfined}.\
We construct non-spherical raspberry colloid particles here for the first time.\
For a non-spherical body, the construction is more complicated and we\
describe the method in the SI.
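The essence of the construction can be sketched in Python. The snippet below
is a simplified stand-in for the SI procedure, not a reproduction of it: it
replaces the true nearest-surface-point calculation with a radial projection
(exact for spheres) and uses illustrative parameter values.
\begin{verbatim}
import numpy as np

def project(p, abc):
    # radial projection onto the ellipsoid with semi-axes abc = (a, b, c);
    # exact for a sphere, an approximation to the nearest point otherwise
    return p / np.linalg.norm(p / abc)

def wca_force(rij, sigma=1.0, eps=1.0):
    r2 = float(rij @ rij)
    if r2 >= 2.0 ** (1.0 / 3.0) * sigma ** 2:   # cutoff at 2^(1/6) sigma
        return np.zeros(3)
    sr6 = (sigma ** 2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2 * rij

def relax_beads(abc, n_beads, k_surf=20.0, gamma=0.05, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([project(p, abc) for p in rng.normal(size=(n_beads, 3))])
    for _ in range(steps):
        # harmonic pull toward the surface plus pairwise WCA repulsion
        F = k_surf * (np.array([project(p, abc) for p in x]) - x)
        for i in range(n_beads):
            for j in range(i + 1, n_beads):
                fij = wca_force(x[i] - x[j])
                F[i] += fij
                F[j] -= fij
        x += gamma * F              # overdamped relaxation step
    return x
\end{verbatim}
The competition between the two forces is what spreads the beads evenly over
the surface.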
All ellipsoids constructed and simulated in this study were ellipsoids\
of revolution, with two degenerate axis lengths $(a,b=c)$.\
Eight prolate $(a > b = c)$ and eight oblate $(a < b = c)$ ellipsoidal raspberry\
colloids of anisotropy $0.1 \leq \phi=\frac{a}{b} \leq 10.0$ were constructed:\
$\phi = \frac{1}{10}$, $\frac{1}{5}$, $\frac{1}{4}$, $\frac{1}{3}$,\
$\frac{1}{2}$, $\frac{2}{3}$, $\frac{4}{5}$, $\frac{9}{10}$,\
$\frac{10}{9}$, $\frac{5}{4}$, $\frac{3}{2}$, 2, 3, 4, 5, 10.\
All ellipsoids were of volume $(4/3)\pi (4.25)^{3}$\ nm$^{3}$.\
For all raspberries, two surface shell layers were overlaid. The outer\
serves as the hydrodynamic layer and the inner layer as the\
excluded volume layer. During construction the inner layer beads were\
placed at a radius 0.5~nm less than the radius of the outer layer from the center of the\
colloid. The inner layer was assigned a Lennard-Jones radius of 0.5~nm\
in MD. For ellipsoids the inner layer shell was constructed\
as $(a-0.5,b-0.5,c-0.5)$\ nm. This allowed us to place hydrodynamic\
sites at the same location as the edge of the excluded volume of the\
raspberries. Filling the raspberries with interior hydrodynamic coupling points\
resulted in a model that obeyed Faxen's Law for enhanced drag between parallel plates\
in de Graff et al.~\cite{degraff2015rasp2layerconfined}\
and consequently all raspberries discussed here were filled with\
interior beads.
\subsection{Simulation methods}
\label{sec:simmthds}
All simulations were performed using the MD suite ESPResSo version\
3.2.0, development code version 678-g31c4458~\cite{espresso2013}.\
The hydrodynamic interactions were\
computed using the lattice Boltzmann (LB) method implemented on graphics\
processing units (GPUs)~\cite{lbgpu2012}. Each simulation was performed\
on the Surface machine at Lawrence Livermore National Laboratories on\
eight Intel Xeon E5-2670 processors and one NVIDIA Tesla K40m GPU.\
\subsection{Simulation parameters}
\label{sec:simparams}
A unit system relevant to protein-sized\
colloids particles was established: $M_0=100$~amu, $\sigma=1$~nm,\
$\epsilon=k_{B}298$~K, $t_0=\sigma \sqrt{ \epsilon/M_0 }=6.352$~ps. The integration\
time step was $\Delta t = 0.01~t_0$.
In ESPResSo, the input parameters\
to LB are bead resistance constant $\xi_{bead}$ (eq.~\ref{eq:latticeboltzmannfriction}),\
fluid density $\rho$, lattice spacing\
$a_{grid}$, kinematic fluid viscosity $\nu$ and the fluid time\
step $\Delta t_{LB}$. The density and kinematic viscosity were determined from water at 298~K:\
$\rho = $~997~kg$/$m$^{3} =$~6.004~$M_0/\sigma^3$, $\nu= 8.934\times10^{-7}$~m$^{2}/$s~$=5.677\sigma^2 / t_0$ \cite{sengers1986}.\
The bead resistance was varied as discussed later. The bead\
resistance and viscosity were scaled such that the actual simulated\
viscosity was $\nu=0.8~ \sigma^2 / t_0$\
in order to increase the simulated time. Transport measurements\
using normal and scaled parameters were compared to validate the low viscosity system. The fluid update time\
was chosen to be $0.02~t_0$, or twice the MD time step.\
At each integration step, the forces and torques from the virtual\
beads are computed and applied to the non-virtual center of mass bead\
in the body-frame defined by the principal axes of the particle. The\
positions of the center of mass bead were updated using the Velocity\
Verlet algorithm. The rotations of the center of mass bead were\
handled using a quaternion. This quaternion $\overline{q}(t)$\
represents a rotation from the initial configuration to the current\
configuration at time $t$. The quaternions were updated using a\
formalism of the Velocity Verlet algorithm for rotations that\
includes time derivatives for the quaternion~\cite{omelyan1998,martys1999},\
which is effectively an angular velocity. The positions of the virtual\
beads are subsequently updated with respect to the position and\
quaternion of the center of mass bead.
Two types of stochastic transport simulations were performed: infinite dilution\
and confinement. In the infinite dilution simulations, a single\
raspberry colloid was simulated in lattice Boltzmann fluid. The\
fluid was allowed to warm up over 1000 integration steps. After the\
warm up, recordings were made every 200 integration steps.
The position of the center of mass bead, the position of a reference bead on\
the surface, and a quaternion with respect to initial orientation\
were recorded. Multiple runs were performed for a total of $2 \times 10^{5}$
recordings or $2.541$~$\mu$s. Infinite dilution conditions were determined using finite\
size scaling (see Section~\ref{sec:methslabdt}). The box sizes for all ellipsoids were cubic with 27, 32,\
38, 48, 55, 64, 76, 94~nm long sides. For spheres the boxes were similarly\
scaled relative to the size of the raspberry. The lab-frame translation, lab-frame rotation, body-frame\
translation, and body-frame rotation diffusivity were determined where appropriate.
Transport under confinement was simulated by observing the translation of a colloid within a\
cylindrical pore. Spherical raspberries were of size $R_{rasp}=4.25$~nm. Multiple pore\
sizes were simulated such that the non-dimensional pore size was\
$\lambda=R_{rasp}/R_{pore}=$~0.1, 0.2, 0.3, 0.4, 0.5. The simulation box size in the $x$ and $y$ directions depended\
on the pore size. In the $z$ direction the box\
size was 85~nm. Beads defining the pore surface were positioned\
5~nm apart, center-to-center distance, around the pore.\
The Lennard-Jones radius was $r_p=$ 6.25~nm for the pore beads and $r_r=$ 0.5~nm for the\
volume surface layer of colloid beads. The interaction potential between these particles was\
defined as a purely repulsive WCA potential with $\sigma_{WCA}= r_r + r_p$\
and $\epsilon = k_{B}298$~K. Lattice Boltzmann boundary nodes were positioned one half grid spacing,\
0.5~nm, outside the pore wall in order to create a hydrodynamic barrier\
at the pore wall. After the fluid was warmed, recordings of the position of the center of mass bead\
were taken every 200 integration steps for a total of $1.2 \times 10^{6}$ recordings.\
The motion of the colloid particle was analyzed in the pore-frame\
which will be explained in a later section.
In a third type of simulation, Stokes flow over a raspberry was used to calculate colloid resistance.\
Colloid particles were held fixed in the center of a\
cubic simulation box with length 50~nm with lattice Boltzmann boundary nodes along the\
$+y$ and $-y$ faces of the simulation box with fluid velocity imposed\
as $\overline{v}=[0,0,2.20]$\ m/s at these boundaries. The thermostat was turned off. This set-up\
created a Galilei transform of a particle moving through a quiescent\
fluid in a wide channel with particle Reynolds number\
$Re=0.0135$. After the flow was allowed to develop fully, the force\
on each bead on the raspberry colloid was recorded. The mean of\
100 force measurements was recorded. Finite size scaling was\
performed to ensure that the proximity to the boundary did not affect\
the results. The colloid resistance was determined based on Equation~\ref{eq:stokesflow}:
\begin{equation}
\label{eq:stokesflow}
\sum_{beads} \overline{F}_{bead} = -\xi_{col} \overline{v}
\end{equation}
\noindent
where the force on every bead was added to determine the total\
frictional force experienced by the raspberry. Increasing values\
of bead resistance were used. This colloid resistance $\xi_{col}$ was\
compared to the resistance determined from stochastic transport\
simulations.
\subsection{Lab-frame translational diffusivity}
\label{sec:methslabdt}
Lab-frame translational diffusivities were determined by the slope of the mean squared\
displacement of position versus lag time:
\begin{equation}
\label{eq:dt_lab}
D_{t}=\lim_{\tau\rightarrow \infty} \frac{MSD}{2d\tau}=\lim_{\tau\rightarrow \infty} \frac{\left<(\overline{r}(t)-\overline{r}(t+\tau))^{2}\right>}{6\tau}
\end{equation}
\noindent
where $\tau$ represents the lag time.
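In practice Equation~\ref{eq:dt_lab} amounts to a linear fit of the MSD
against lag time; a minimal Python version (lag range and fit window chosen
for illustration) is:
\begin{verbatim}
import numpy as np

def lab_frame_Dt(traj, dt_rec, max_lag=200):
    """Translational diffusivity from traj[n_frames, 3] via the MSD
    slope; dt_rec is the time between recordings."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                    for l in lags])
    slope = np.polyfit(lags * dt_rec, msd, 1)[0]  # linear regime assumed
    return slope / 6.0                            # MSD = 6 D tau in 3D
\end{verbatim}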
In order to eliminate the hydrodynamic interactions of the colloids\
with their images, finite size scaling was used. The result of Hasimoto~\cite{hasimoto1959}\
for forces on periodic arrays of spheres in fluid flow can be used to\
determine diffusivity at infinite dilution:
\begin{equation}
\label{eq:hasimoto}
D^{s}(L)=D^{0}-\frac{2.837k_{B}T}{6\pi\eta}\frac{1}{L}
\end{equation}
\noindent
where $D^s(L)$ is the self-diffusivity at a given box size $L$, $D^{0}$ is the\
diffusivity in the limit of infinite dilution, and\
$\eta$ the dynamic viscosity. Eight\
different simulation sizes were used and the intercept of $D^s$\
versus $L^{-1}$ gave the diffusivity at infinite dilution $D^{0}$.
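The extrapolation itself is a one-line linear fit following
Equation~\ref{eq:hasimoto}; as a sketch, with hypothetical array names:
\begin{verbatim}
import numpy as np

def extrapolate_D0(box_L, D_self):
    """Intercept of D(L) against 1/L following the Hasimoto form."""
    slope, intercept = np.polyfit(1.0 / np.asarray(box_L),
                                  np.asarray(D_self), 1)
    return intercept          # infinite-dilution diffusivity D^0
\end{verbatim}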
The colloid resistance to translation $\xi^{t}_{col}$ was determined\
from the Einstein equation
\begin{equation}
\label{eq:einstein_trans}
D^{0}_{t}= \frac{k_{B}T}{\xi^{t}_{col}}.
\end{equation}
The colloid resistance to diffusive motion at infinite dilution was\
determined for colloids at $R_{rasp}=3.25$~nm and\
$5.25$~nm at various bead resistance values. A trend line was established\
for $R_{rasp}$ versus $\xi^{t}_{col}$ for spherical raspberry\
particles of size $R_{rasp}=$2.25, 2.75, 3.25, 3.75,\
4.25, 4.75, 5.25, 5.75~nm based on a least squares fit.\
The bead resistance values for these trendlines were 50 and 1000~$M_{0}/t_{0}$.
For ellipsoidal colloids, the trendline was used to find a translational effective\
hydrodynamic radius $R_{eff,t}$ and the geometric factor $A({\phi})$ for resistance due to\
the anisotropy was determined from
\begin{equation}
\label{eq:perrin_a}
A({\phi}) = \frac{a}{R_{eff,t}}
\end{equation}
\noindent
and compared to the analytical values by Perrin \cite{perrin1934,perrin1936}.\
The formula for the Perrin factor is given in the SI.\
The error was determined by applying Equation~\ref{eq:hasimoto} to the\
standard deviation of the separate simulation runs at each box size.
\subsection{Lab-frame rotational diffusivity}
\label{sec:methslabdr}
Lab-frame rotational diffusivities were determined by the slope of the\
mean squared displacement of angle versus lag time:
\begin{equation}
\label{eq:dr_lab}
D_{r}=\lim_{\tau\rightarrow \infty} \frac{MSD}{2d\tau}=\lim_{\tau\rightarrow \infty} \frac{\left<\Delta \Theta(t+\tau)^{2}\right>}{6\tau}
\end{equation}
The angular displacement was determined by measuring the vector\
pointing from the center of mass bead to the reference bead on the surface\
of the raspberry particle. This vector $\overline{p}(t)$ changes direction with the rotation of the\
rigid raspberry body, and the angular displacement $\Delta \Theta$\
is determined from
\begin{equation}
\label{eq:delta_theta}
\Delta \Theta = \cos^{-1} { \left ( \hat{p}(t) \cdot \hat{p} (t+\tau) \right ) }
\end{equation}
\noindent
where $\hat{p}$ is the unit vector of $\overline{p}$. For our box sizes, the periodic image does not affect the\
rotational motion and $D_{r}$ was taken as the average of all runs\
at all box sizes.
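A sketch of this measurement in Python, assuming the rows of
\texttt{p\_vecs} are the recorded center-to-reference vectors:
\begin{verbatim}
import numpy as np

def rotational_msd(p_vecs, lags):
    """Angular MSD <dTheta(tau)^2> from the body-fixed reference
    vector p(t), per Eq. (delta_theta); one recorded vector per row."""
    p = p_vecs / np.linalg.norm(p_vecs, axis=1, keepdims=True)
    msd = []
    for l in lags:
        cosang = np.clip(np.sum(p[l:] * p[:-l], axis=1), -1.0, 1.0)
        msd.append(np.mean(np.arccos(cosang) ** 2))
    return np.array(msd)      # slope / 6 gives D_r per Eq. (dr_lab)
\end{verbatim}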
The colloid resistance to rotation $\xi^{r}_{col}$ was determined similarly\
to the case of translation and a trendline was established for\
$R^{3}_{col}$ versus $\xi^{r}_{col}$ for the spherical colloids.
For ellipsoidal colloids, the trendline was used to find a rotational effective\
hydrodynamic radius $R_{eff,r}$ and the geometric factor $B({\phi})$ for resistance due to\
the anisotropy was determined from
\begin{equation}
\label{eq:perrin_b}
B({\phi}) = \left ( \frac{a}{R_{eff,r}} \right ) ^{3}
\end{equation}
\noindent
and compared to the analytical values by Perrin \cite{perrin1934,perrin1936}.\
The formula for the Perrin factor is given in the SI.
The error was determined from the standard deviation of all\
separate simulation runs at all box sizes.
\subsection{Body-frame translational diffusivity}
\label{sec:methsbfdt}
An anisotropic particle has anisotropic resistance to motion through a fluid. A prolate ellipsoid with semi-axes $(a > b = c)$ should have\
higher diffusivity along the principal axis $a$ compared to $b$ and $c$.\
An oblate ellipsoid with semi-axes $(a < b = c)$ should have\
lower diffusivity along the principal axis $a$ compared to $b$ and $c$.\
The anisotropic geometry shifts the translational effective hydrodynamic radius\
when the body translates along the individual principal axes of the ellipsoid.\
Figure~\ref{figure:rasp} shows the principal axes of a prolate and an oblate ellipsoid.\
We examined motion in the body-frame of a raspberry colloid\
particle. The principal axes along the semi-axes $a$, $b$, and $c$\
define the body-frame as $x_{b}$, $y_{b}$, and $z_{b}$.
As the colloid particle rotates, the body-frame rotates with it.\
We sought to observe the effects of anisotropy on the body-frame diffusive motion and\
compare the results to the analytical values of Happel and Brenner \cite{happelbrenner1983}.
We followed the method of Han et al.~\cite{han2006} to develop a body-frame trajectory and analyze\
body-frame transport. The body-frame is related to the lab-frame\
by a rotation. Han used a two\
dimensional body-frame rotation to experimentally track ellipsoidal\
particle motion confined to two dimensions.\
We require a three dimensional\
rotation which was made straightforward by the use of quaternions.\
ESPResSo tracked the quaternion that would generate the rotation from the initial\
configuration of the colloid particle to the present orientation. We therefore\
used the inverse quaternion to evaluate motion along the body axes.\
The lab-frame displacement over one measurement interval is
\begin{equation}
\label{eq:labdisp}
\delta \overline{X}(t_n) = \overline{X}(t_n) - \overline{X}(t_{n-1}).
\end{equation}
For an inverse quaternion $\overline{q}^{-1}(t) = (w,-x,-y,-z)$ at\
time $t$ the rotation
\begin{equation}
\label{eq:quat}
\delta \overline{X}^{b}_n = \overline{q}^{-1} \delta \overline{X_n} \overline{q}
\end{equation}
\noindent
gives the body-frame displacement over one measurement interval,\
$\delta \overline{X}^{b}_n$. The\
quaternion rotation is given explicitly in the SI.
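For reference, the conjugation of Equation~\ref{eq:quat} takes only a few
lines when written out; a sketch, treating the displacement as a pure
quaternion:
\begin{verbatim}
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions a = (w, x, y, z), b likewise
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def to_body_frame(dX_lab, q):
    """Rotate a lab-frame displacement into the body frame via
    q^{-1} dX q, with q = (w, x, y, z) the orientation quaternion."""
    q_inv = np.array([q[0], -q[1], -q[2], -q[3]]) / (q @ q)
    v = np.concatenate(([0.0], dX_lab))   # pure quaternion (0, dX)
    return quat_mul(quat_mul(q_inv, v), q)[1:]
\end{verbatim}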
The total body-frame displacement for time $t_{n}$ was built from a\
summation over all previous body-frame displacements:
\begin{equation}
\label{eq:bodyframedisplacement}
\overline{X}^{b}(t_{n}) = \sum\limits_{k=1}^n \delta \overline{X}^{b}(t_{k})
\end{equation}
\noindent
and the body-frame displacement between time $t$ and lag time $\tau$ is given by\
equation~\ref{eq:bodyframetrajectory}:
\begin{equation}
\label{eq:bodyframetrajectory}
\Delta \overline{X}^{b}(t+\tau) = \overline{X}^{b}(t+\tau) - \overline{X}^{b}(t).
\end{equation}
Mean squared displacements of the trajectories along the principal axes defined by the semi-axes\
$a$,\ $b$,\ and $c$ were analyzed to determine $D_{t,x^{b}_{i}}$, the\
translational diffusivity along body axis $x^{b}_{i} \in \{x^{b},y^{b},z^{b}\}$:
\begin{equation}
\label{eq:dt_body}
D_{t,x^{b}_{i}}=\lim_{\tau\rightarrow \infty} \frac{MSD}{2d\tau}=\lim_{\tau\rightarrow \infty} \frac{\left<(\Delta x^{b}_{i}(t+\tau))^{2}\right>}{2\tau}.
\end{equation}
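Combining the pieces, a sketch of the body-frame analysis in Python (reusing
\texttt{to\_body\_frame} from the sketch above; which endpoint quaternion to
pair with each displacement interval is a discretization choice):
\begin{verbatim}
import numpy as np

def body_frame_Dt(traj_lab, quats, dt_rec, max_lag=200):
    """Per-axis body-frame diffusivities from lab-frame positions
    traj_lab[n, 3] and orientation quaternions quats[n, 4]."""
    dX = np.diff(traj_lab, axis=0)
    steps = np.array([to_body_frame(d, q)
                      for d, q in zip(dX, quats[:-1])])
    Xb = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])
    lags = np.arange(1, max_lag + 1)
    D = np.empty(3)
    for ax in range(3):
        msd = np.array([np.mean((Xb[l:, ax] - Xb[:-l, ax]) ** 2)
                        for l in lags])
        D[ax] = np.polyfit(lags * dt_rec, msd, 1)[0] / 2.0
    return D                   # MSD = 2 D tau along each axis
\end{verbatim}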
The trendline of $\xi^{t}_{col}$ versus $R_{rasp}$ was used to find a translational effective\
hydrodynamic radius $R_{eff,x^{b}_{i},t}$ for motion along the $x^{b}_{i}$\
body axis. The relationship between effective size and\
actual size of the colloid particle is shown as a geometric factor.
For prolate ellipsoids, the geometric factor $E({\phi})$ on resistance to motion along the long singular ($a$) axis was determined from
\begin{equation}
\label{eq:happel_e}
E({\phi}) = \frac{R_{eff,x^{b},t}}{c}
\end{equation}
\noindent
and the geometric factor $F({\phi})$ on resistance to motion along the short degenerate ($b$,\ $c$) axes was determined from
\begin{equation}
\label{eq:happel_f}
F({\phi}) = \frac{R_{eff,y^{b}z^{b},t}}{c} .
\end{equation}
For oblate ellipsoids, the geometric factor $G({\phi})$ on resistance to motion along the short singular axis ($a$) was determined from
\begin{equation}
\label{eq:happel_g}
G({\phi}) = \frac{R_{eff,x^{b},t}}{c}
\end{equation}
\noindent
and the geometric factor $H({\phi})$ on resistance to motion along the long degenerate axes ($b$,\ $c$) was determined from
\begin{equation}
\label{eq:happel_H}
H({\phi}) = \frac{R_{eff,y^{b}z^{b},t}}{c} .
\end{equation}
$E({\phi})$,\ $F({\phi})$,\ $G({\phi})$,\ and $H({\phi})$ were compared to the analytical values\
calculated by Happel and Brenner~\cite{happelbrenner1983} which are given in the SI. It should\
be noted that the body frame geometric factors in this section and Section~\ref{sec:methsbfrot}\
are reciprocal to the factors in Section~\ref{sec:methslabdr} and are\
given with respect to $c$ as opposed to $a$.
\subsection{Body-frame rotational diffusivity}
\label{sec:methsbfrot}
The body-frame rotational diffusivity was measured for ellipsoidal\
colloids. A body-frame rotation about the body-frame axis $x^{b}_{i}$\
between time $t-1$ and time $t$ is defined\
as the two dimensional projection of the three dimensional\
lab-frame rotation between time $t-1$ and time $t$ onto the plane perpendicular\
to $x^{b}_{i}$ at time $t-1$. For example, rotation about the\
$x^{b}$ axis, or the rotation about the singular axis $a$, is\
determined by projecting the full rotation onto the $(y^{b}z^{b})_{t-1}$ plane.\
The method for determining body-frame angular displacement can be found\
in the SI.
The total body-frame angular displacement about the $x^{b}_{i}$ body axis for time $t_{n}$ is built from a\
summation over all previous body-frame displacements:
\begin{equation}
\label{eq:bodyframerotdisplacement}
\Theta_{x^{b}_{i}}(t_{n}) = \sum\limits_{k=1}^n \delta \Theta_{x^{b}_{i}}(t_{k})
\end{equation}
\noindent
and the body-frame angular displacement between time $t$ and lag time $\tau$ was defined analogously to\
equation~\ref{eq:bodyframetrajectory}:
\begin{equation}
\label{eq:bodyframerottrajectory}
\Delta \Theta_{x^{b}_{i}}(t+\tau) = \Theta_{x^{b}_{i}}(t+\tau) - \Theta_{x^{b}_{i}}(t).
\end{equation}
Mean squared displacements of the trajectories for the semi-axes\
$a$,\ $b$,\ and $c$ were analyzed to determine $D_{r,x^{b}_{i}}$, the\
rotational diffusivity about body axis $x^{b}_{i}$.
\begin{equation}
\label{eq:dr_body}
D_{r,x^{b}_{i}}=\lim_{\tau\rightarrow \infty} \frac{MSD}{2d\tau}=\lim_{\tau\rightarrow \infty} \frac{\left<(\Delta \Theta_{x^{b}_{i}}(t+\tau))^{2}\right>}{(3/2)2\tau}.
\end{equation}
The $(3/2)$ factor in the denominator of Equation \ref{eq:dr_body}\
was required for lab-frame $D_{r}$ and body-frame $D_{r}$ values to\
agree and is a result of projecting three dimensional motion into\
two dimensions.
The trendline of $\xi^{r}_{col}$ versus $R^{3}$ was used to find a rotational effective\
hydrodynamic radius $R_{eff,x^{b}_{i},r}$ for motion about the $x^{b}_{i}$\
body axis. For prolate ellipsoids, the geometric factor $I({\phi})$ on resistance to motion about the singular ($a$) axis was determined from
\begin{equation}
\label{eq:dr_bf_i}
I({\phi}) = \left ( \frac{R_{eff,x^{b},r}}{c} \right )^{3}
\end{equation}
\noindent
and the geometric factor $J({\phi})$ on resistance to motion about the degenerate ($b$,\ $c$) axes was determined from
\begin{equation}
\label{eq:dr_bf_j}
J({\phi}) = \left ( \frac{R_{eff,y^{b}z^{b},r}}{c} \right )^{3}.
\end{equation}
For oblate ellipsoids, the geometric factor $K({\phi})$ on resistance to motion about the singular axis was determined from
\begin{equation}
\label{eq:dr_bf_k}
K({\phi}) = \left ( \frac{R_{eff,x^{b},r}}{c} \right )^{3}
\end{equation}
\noindent
and the geometric factor $L({\phi})$ on resistance to motion about the degenerate axes was determined from
\begin{equation}
\label{eq:dr_bf_l}
L({\phi}) = \left ( \frac{R_{eff,y^{b}z^{b},r}}{c} \right )^{3}.
\end{equation}
\subsection{Colloid transport under confinement inside cylindrical pores}
\label{sec:methsenhdrag}
The analysis of transport of\
the colloid in a cylindrical pore was performed by a transformation\
from motion in the lab-frame into motion in the\
pore-frame $\overline{X}^{p} \in \{x^{p},y^{p},z^{p}\}$. As the colloid\
translates within the pore, the pore-frame rotates such that the\
$x^{p}$ axis is equivalent to the radial coordinate, $z^{p}$ is\
equivalent to the axial coordinate, and $y^{p}$ is orthogonal\
to $x^{p}$ and $z^{p}$. The axial coordinate $z^{p}$ requires no\
conversion. The pore coordinate frame is illustrated in Figure~\ref{figure:pore_coords}.
\begin{figure}
\centering
\resizebox{0.5\hsize}{!}{\includegraphics{./images/pore_coords}}
\caption{Pore-frame coordinate system, adapted from \cite{higdon1995}.}
\label{figure:pore_coords}
\end{figure}
The method for transformation into the pore-frame $\overline{X}^{p}$\
is given in the SI. The total pore-frame displacement along the\
$x^{p}_{i}$ pore-frame axis for time $t_{n}$ is built from a\
summation over all previous displacements:
\begin{equation}
\label{eq:poreframedisplacement}
\overline{X}^{p}(t_{n}) = \sum\limits_{k=1}^n \delta \overline{X}^{p}(t_{k})
\end{equation}
\noindent
and the pore-frame displacement between time $t$ and lag time $\tau$ was defined analogously to\
equation~\ref{eq:bodyframetrajectory}:
\begin{equation}
\label{eq:poreframetrajectory}
\Delta \overline{X}^{p}(t+\tau) = \overline{X}^{p}(t+\tau) - \overline{X}^{p}(t) .
\end{equation}
The total pore frame displacement contains the full trajectory in the\
moving pore-frame, where the $x^{p}$ direction is always the line\
between the radial center, the colloid particle, and the wall. The\
$y^{p}$ direction is always normal to this and the axial $z^{p}$\
direction, and each displacement is analyzed with respect to the\
pore-frame. Each frame in the trajectory was recorded according to\
its radial coordinate and a subtrajectory starting with this frame\
was built from the subsequent displacements of the colloid in the\
pore-frame. The mean squared displacement for each bin was built\
from all the subtrajectories of that bin and the slope of the mean\
squared displacement was used to determine the pore frame\
translational diffusivity at that bin. The lowest twenty-five lag\
times in the linear regime of the mean squared displacement for\
each bin were used for the diffusivity calculation. The pore-frame\
diffusivity values were normalized by the unbound lab-frame\
diffusivity at the respective friction to give enhanced drag\
to finally be compared with enhanced drag data from the literature.
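A condensed Python sketch of this binned analysis (bin edges, lag window,
and array names are illustrative):
\begin{verbatim}
import numpy as np

def binned_pore_Dt(Xp, r_coord, r_bins, dt_rec, n_lags=25):
    """Radially binned pore-frame diffusivities; Xp[n, 3] is the
    accumulated pore-frame trajectory and r_coord[n] the radial
    coordinate of the colloid at each frame."""
    which = np.digitize(r_coord, r_bins)
    lags = np.arange(1, n_lags + 1)
    D = np.full((len(r_bins) + 1, 3), np.nan)
    for b in np.unique(which):
        starts = np.where(which == b)[0]       # frames seeding this bin
        starts = starts[starts < len(Xp) - n_lags]
        if starts.size == 0:
            continue
        for ax in range(3):
            msd = np.array([np.mean((Xp[starts + l, ax]
                                     - Xp[starts, ax]) ** 2)
                            for l in lags])
            D[b, ax] = np.polyfit(lags * dt_rec, msd, 1)[0] / 2.0
    return D
\end{verbatim}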
Higdon and Muldowney~\cite{higdon1995} calculated the enhanced drag of a sphere in cylindrical\
pores at off-center nondimensional radial coordinates and nondimensional\
pore sizes using a spectral boundary\
element method. They defined enhanced drag $S_{x^{p}_{i}}$ as\
\begin{equation}
\label{eq:enhanced_drag_force}
S_{x^{p}_{i}} = \frac{F_{x^{p}_{i}}}{F_{\infty}}
\end{equation}
\noindent
where $F_{x^{p}_{i}}$ is the force of the fluid in response to motion\
along the $x^{p}_{i}$ direction in the pore and $F_{\infty}$ is the\
force of the fluid in response to motion in an unbounded system.\
In the present study enhanced drag is equivalently defined as
\begin{equation}
\label{eq:enhanced_drag_diffusion}
S_{x^{p}_{i}} = \frac{D_{0}}{D_{x^{p}_{i}}}
\end{equation}
\noindent
where $D_{x^{p}_{i}}$ is the diffusivity of the particle along the\
$x^{p}_{i}$ direction in the pore-frame and $D_{0}$ is the\
diffusivity of the particle at infinite dilution. The error was\
determined from the standard deviation of the mean squared displacement\
normalized by the square root of the number of independent measurements in each bin,\
making the error bars invisible on the plots.
\section{Results}
\label{sec:rslts}
\subsection{Bead resistance}
\label{rsltsfriction}
The input friction value for bead resistance in Equation~\ref{eq:latticeboltzmannfriction}\
has an empirical effect on the relationship between MD beads and LB fluid~\cite{dunweg2008}.\
Figure~\ref{figure:friction_trans} shows the translational resistance of the colloid particle determined in the Stokes\
flow simulations and in the infinite dilution simulations. These\
simulations relate the bead resistance from Equation~\ref{eq:latticeboltzmannfriction}\
to the colloid resistance.\
The colloid resistance $\xi^{t}_{col}$ is determined from Equation~\ref{eq:einstein_trans} for the diffusion simulations and from Equation~\ref{eq:stokesflow} for the Stokes flow simulations.
\begin{figure}
\centering
\begin{tabular}{c c}
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/2016_friction_325_filled_trans} } &
\resizebox{0.48\hsize}{!}{\includegraphics{./images/2016_friction_575_filled_trans} }\\
\end{tabular}
\caption{Raspberry colloid translational resistance as a function of bead resistance from Stokes flow and lab-frame translational diffusion results. a) $R=3.25$\ nm. b) $R=5.75$\ nm. Red triangles: colloid resistance from Stokes flow simulation. Blue squares: colloid resistance from translational diffusion.}
\label{figure:friction_trans}
\end{figure}
As the bead resistance\
is increased, the colloid translational resistance increases\
steeply initially but then reaches a maximum value around $\xi_{bead}=1000~M_{0}/t_{0}=26.14\times 10^{-12}$~kg/s. The raspberry allows
LB fluid to pass through because each MD bead\
is only a point particle to LB. The raspberry becomes more water-tight\
to LB fluid as the bead resistance increases.\
Becoming completely water-tight and having no-slip boundary conditions should result in a colloid resistance\
equal to the Stokes result $\xi^{t}_{col}=6\pi\eta R$. Figure~\ref{figure:friction_trans} shows the\
limiting resistance value for the raspberries\
to be $28.3\%$ and $25.3\%$ higher than the Stokes result for $R_{rasp}=3.25$~nm and $R_{rasp}=5.75$~nm, respectively.
\begin{figure}
\centering
\begin{tabular}{c c}
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/2016_friction_325_filled_rot}} &
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/2016_friction_575_filled_rot} }\\
\end{tabular}
\caption{Colloid rotational resistance as a function of bead resistance from lab-frame rotational diffusion results on raspberry colloid particles. a) $R=3.25$~nm. b) $R=5.75$~nm. }
\label{figure:friction_rot}
\end{figure}
Figure~\ref{figure:friction_rot} shows the rotational resistance of the colloid particle\
determined from the diffusion. The\
colloid rotational resistance approaches a limiting value, but the input\
bead resistance required is higher for colloid rotational resistance than colloid translational resistance.\
Larger bead resistance values were not allowed due to numerical instabilities\
in the simulations. The maximum value obtained for colloid rotational resistance was $146\%$ and $102\%$ greater than the\
Stokes-Einstein-Debye result $\xi^{r}_{col}=8\pi\eta R^{3}$ for $R_{rasp}$=3.25~nm and $R_{rasp}$=5.75~nm, respectively.
The bead resistance $\xi_{bead}=$50 $M_{0}/t_{0}$ resulted in translational\
colloid resistance values close to the Stokes result for the two\
spherical raspberries of different size. Transport behavior may also depend on the
leakiness of the colloid; therefore, the three resistance values under
investigation are $\xi_{bead}=50$, $200$, and $1000~M_{0}/t_{0}$,
which we call low, mid and high bead resistance.
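The Stokes and Stokes-Einstein-Debye reference values used throughout this comparison follow directly from $\xi^{t}_{col}=6\pi\eta R$ and $\xi^{r}_{col}=8\pi\eta R^{3}$; a minimal numerical check, with a water-like viscosity assumed as input, is:
\begin{verbatim}
import numpy as np

eta = 1.0e-3                                # Pa s, assumed viscosity

def stokes_translation(R):                  # 6 pi eta R, in kg/s
    return 6.0 * np.pi * eta * R

def sed_rotation(R):                        # 8 pi eta R^3, in kg m^2/s
    return 8.0 * np.pi * eta * R**3

for R in (3.25e-9, 5.75e-9):                # raspberry radii in m
    print(R, stokes_translation(R), sed_rotation(R))
\end{verbatim}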
\subsection{Spheres: Radius}
\label{sec:rsltsradius}
The relationships between colloid radius and translational and rotational\
resistance were established for two resistance values: $\xi_{bead}=$ 50 and 1000 $M_{0}/t_{0}$.\
Figure~\ref{figure:radius} presents these data. All raspberry particles\
show linear relations between $\xi^{t}_{col}$ and $R_{rasp}$ as well as between\
$\xi^{r}_{col}$ and $R_{rasp}^{3}$.
\begin{figure}
\centering
\begin{tabular}{c c}
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/2016_radius_xi50_xi1000_t_int} } &
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/2016_radius_xi50_xi1000_r_int} }\\
\end{tabular}
\caption{Colloid translational (a) and rotational (b) resistance as a function of colloid radius. Red triangles represent the results of low bead resistance, blue squares represent high bead resistance. The black lines represent the Stokes-Einstein result for translational resistance and the Stokes-Einstein-Debye result for rotational resistance, respectively, and red and blue lines represent the trendlines for low and high bead resistance, respectively.}
\label{figure:radius}
\end{figure}
Figure~\ref{figure:trendlines} gives a bar chart representation of\
the least squares fit equations of the lines in Figure~\ref{figure:radius}\
between $\xi^{t}_{col}$ and $R_{rasp}$ and between $\xi^{r}_{col}$ and\
$R_{rasp}^{3}$. The theoretical slopes are $6 \pi \eta$ for
translation and $8 \pi \eta$ for rotation. The fitted slopes were
greater than the theoretical values for
each configuration, and the fitted lines did not pass through the origin. For
translation, the slopes for both bead resistances were similar, and the
intercept for low resistance was an order of magnitude larger than for high resistance.
For rotation, the slope of high bead resistance was 24.2\% higher than\
low bead resistance, and the intercepts were within an order of magnitude.
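The trendlines are ordinary least-squares fits of $\xi^{t}_{col}$ against $R_{rasp}$ and of $\xi^{r}_{col}$ against $R_{rasp}^{3}$. A minimal sketch is given below; the radii are hypothetical, and the synthetic resistances are generated from the high bead resistance fit coefficients of Table 1 of the SI rather than taken from the raw simulation data.
\begin{verbatim}
import numpy as np

R = np.array([2.00e-9, 3.25e-9, 4.50e-9, 5.75e-9])   # m, hypothetical
xi_t = 2.046e-2 * R - 1.716e-12                      # kg/s
xi_r = 4.173e-2 * R**3 + 2.640e-28

slope_t, icpt_t = np.polyfit(R, xi_t, 1)             # xi_t vs R
slope_r, icpt_r = np.polyfit(R**3, xi_r, 1)          # xi_r vs R^3

def hydrodynamic_radius(xi_measured):
    """Invert the translational trendline to calibrate a designed
    radius against the hydrodynamic radius."""
    return (xi_measured - icpt_t) / slope_t
\end{verbatim}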
\begin{figure}
\centering
\begin{tabular}{c c}
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/rasp_slopes_t} } &
\resizebox{0.48\hsize}{!}{ \includegraphics{./images/rasp_slopes_r} }\\
\end{tabular}
\caption{Fits for colloid resistance to translational and rotational lab-frame transport trends, comparing the Stokes-Einstein and Stokes-Einstein-Debye predictions with simulation results at different friction values. a: translation; Red bar: slope $\times 10^{2}$; Blue bar: intercept $\times 10^{12}$. b: rotation; Red bar: slope $\times 10^{2}$; Blue bar: intercept $\times 10^{29}$. Units of slopes are kg~s$^{-1}$~m$^{-1}$; units of intercepts are kg~s$^{-1}$ for translation and kg~m$^{2}$~s$^{-1}$ for rotation.}
\label{figure:trendlines}
\end{figure}
The nature of LB leads to a renormalization of the hydrodynamic radius\
of MD beads in LB~\cite{ahlrichs1999,dunweg2008}.\
With raspberry particles, the designed radius and hydrodynamic radius are offset as shown by these data and explored by Fischer et al.~\cite{fischer2015rasp2layer1st}.\
Within the renormalized system, however, the
raspberry particles are defined by a transport frame that is
self-consistent. The data below will show that the resistances of
nonspherical particles are also consistent within this frame.
The equations for the trend lines between $\xi^{t}_{col}$ and\
$R_{rasp}$ and between $\xi^{r}_{col}$ and $R_{rasp}^{3}$ for low\
bead resistance and high bead resistance are given in Table 1 of\
the SI. If the methods and parameters in this study are repeated,\
these trendlines will be useful as a systematic calibration between\
designed radius (including effective radii for nonspherical bodies)\
and hydrodynamic radius.
\subsection{Spherical colloid transport under confinement}
\label{sec:rsltsenhdrag}
Enhanced drag on spherical colloids due to confinement in cylindrical\
pores was simulated and analyzed in the pore-frame. Enhanced drag values\
depend on direction within the pore-frame, radial coordinate, and\
relative colloid size. The nondimensional radial coordinate is defined as
\begin{equation}
\label{eq:beta_definition}
\beta=\frac{d}{R_{pore}-R_{rasp}}
\end{equation}
\noindent
where $d$ is the distance from the center of the pore to the center of\
the raspberry. The nondimensional colloid size is defined as
\begin{equation}
\label{eq:lambda_definition}
\lambda=R_{rasp}/R_{pore}.
\end{equation}
Figure \ref{figure:enhanced_drag} shows the enhanced drag in the $x^{p}$, $y^{p}$, and\
$z^{p}$ directions. The results of simulations for each pore size are shown\
in the same color as points connected by solid lines across the $\beta$
axis. The dotted lines are the enhanced drag results of Higdon and\
Muldowney~\cite{higdon1995}.
We find artificially high values of $D_{x^{p}}$ at small
values of $\beta$, where the colloid is close to the pore center,
because motion close to the pore center is biased into the $x^{p}$
direction. The result is that the measured enhanced drag is too low in the
$x^{p}$ direction and too high in the $y^{p}$ direction.
Enhanced drag generally increases as $\lambda$ and $\beta$ increase in the simulations.\
For the $x^{p}$ direction of motion, the colloid particles\
approach the pore wall head-on. The $x^{p}$ data\
are shown in Figure~\ref{figure:enhanced_drag}.a, \ref{figure:enhanced_drag}.d, \ref{figure:enhanced_drag}.g. The\
transport of the raspberries shows close agreement with the\
enhanced drag values of Higdon and Muldowney.\
As the bead resistance increased, the enhanced drag data of Higdon and
Muldowney were reproduced increasingly well. The enhanced drag increased up to large values of $\beta$,
where the colloid was so close to the pore wall that there were few\
fluid lattice nodes between the raspberry surface and boundary nodes. Enhanced drag\
tracked most closely to the reference values in the $y^{p}$\
direction (Figure~\ref{figure:enhanced_drag}.b, \ref{figure:enhanced_drag}.e, \ref{figure:enhanced_drag}.h) with high bead resistance, as well.
For the $z^{p}$ axial direction of motion (Figure~\ref{figure:enhanced_drag}.c, \ref{figure:enhanced_drag}.f, \ref{figure:enhanced_drag}.i) resistance was increased for\
larger colloid to pore ratios. As bead resistance increases the enhanced drag increases,\
which is consistent with the $x^{p}$ and $y^{p}$ directions.
The axial enhanced drag $S_{z}$ is lower
than Higdon and Muldowney's calculations for small pores, i.e., $\lambda > 0.2$.
The minima of the reference values for $S_{z}$ are at off-center\
coordinates. The pressure force is maximum at the center\
and minimum at the pore wall, the lubrication force is minimum at the\
center and maximum at the pore wall, and the sum is minimized\
at an off-center position. This phenomenon is similar to the well-known\
Segr\'{e}-Silberberg effect\cite{segre1961}, in which lateral\
($x^{p}$ direction) lift forces move particles into off-center\
positions but here the particle and fluid are not at the same\
velocity. As the pore size decreases, the location
$\beta$ and the depth of the minimum in $S_{z}$ increase. This behavior
was not captured by the simulations.
We also ran pore confinement simulations with raspberries that did\
not have the extra hydrodynamic-only layer, in order to show the\
importance of the extra hydrodynamic coupling points in reproducing\
enhanced drag. Since the two-layer filled raspberry colloid particles\
showed the highest agreement with the spectral boundary element data\
of Higdon and Muldowney, these single layer filled raspberry\
simulations were conducted at high friction coupling values. These\
confinement data are presented in Figure 4 of the SI and are lower\
in enhanced drag than the two layer filled raspberry particles. This\
supports our claim that raspberry particles should have outer
surface beads at the design radius.
The resistance value under further investigation is $\xi_{bead}=$ 1000 $M_{0}/t_{0}$.\
Based on the enhanced drag data the raspberries are least leaky to LB at this high resistance value.\
Within the renormalized frame described in Section ~\ref{sec:rsltsradius},\
the high bead resistance gives consistent transport behavior exhibiting\
enhanced drag within cylindrical pores.
\begin{figure*}
\begin{tabular}{|c c c|}
\hline
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi50_2layer_filled_x_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi50_2layer_filled_y_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi50_2layer_filled_z_HM_D1-eps-converted-to.pdf}}\\
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi200_2layer_filled_x_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi200_2layer_filled_y_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{\includegraphics{./images/xi200_2layer_filled_z_HM_D1-eps-converted-to.pdf}}\\
\resizebox{0.3\hsize}{!}{\includegraphics{./images/xi1000_2layer_filled_x_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{\includegraphics{./images/xi1000_2layer_filled_y_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{\includegraphics{./images/xi1000_2layer_filled_z_HM_D1-eps-converted-to.pdf}}\\
\hline
\end{tabular}
\caption{Enhanced drag due to confinement within cylindrical pores. Red: $\lambda=0.1$. Orange: $\lambda=0.2$. Yellow: $\lambda=0.3$. Green: $\lambda=0.4$. Blue: $\lambda=0.5$. Top row (a,b,c): $\xi_{bead}=50M_{0}t^{-1}$. Middle row (d,e,f): $\xi_{bead}=200M_{0}t^{-1}$. Bottom row (g,h,i): $\xi_{bead}=1000M_{0}t^{-1}$. Left column (a,d,g): enhanced drag in $x^{p}$ direction. Middle column (b,e,h): enhanced drag in $y^{p}$ direction. Right column (c,f,i): enhanced drag in $z^{p}$ direction.}
\label{figure:enhanced_drag}
\end{figure*}
\subsection{Ellipsoids: lab-frame diffusion}
\label{sec:rsltslabdt}
The lab-frame translational and rotational diffusivities were\
determined via simulation for filled ellipsoidal raspberries of\
constant volume. After diffusivities were converted to translational and rotational\
resistances, the translational and rotational effective radii of the ellipsoids were calculated\
from the trendlines of $\xi^{t}_{col}$ versus $R_{rasp}$ and $\xi^{r}_{col}$ versus $R_{rasp}^{3}$.\
The lab-frame geometric factors $A(\phi)$ and $B(\phi)$ of the colloid\
particles were determined using equations~\ref{eq:perrin_a} and~\ref{eq:perrin_b},\
respectively.
Perrin~\cite{perrin1934,perrin1936} calculated the relationship between the translational and rotational effective radius $R_{eff}$, the aspect ratio $\phi$,\
and the singular semi-axis length $a$. The formulas can be found in the SI.\
The analytical geometric resistance factors\
were calculated by Perrin via averaging the friction over the entire body of the ellipsoid.\
Figure~\ref{figure:labframe_ellipsoids} shows these curves with respect to $\phi$. In prolate ellipsoids, $a$ is greater than\
$R_{eff}$, hence, $A(\phi)$ and $B(\phi)$ are both greater than 1.0. In oblate ellipsoids,\
$a$ is less than $R_{eff}$, hence, $A(\phi)$ and $B(\phi)$ are less than 1.0.
The lab-frame geometric factors $A(\phi)$ and $B(\phi)$ of the sixteen\
ellipsoids plus the curves for Perrin factors are presented in Figure~\ref{figure:labframe_ellipsoids}. Each of the sixteen\
ellipsoids are represented by one red circle and one blue square\
located at the same $\phi$ position on the plot. The translational\
factors $A(\phi)$ of these colloid particles are in excellent agreement
with the analytical result.\
The rotational factor $B(\phi)$ is also in excellent agreement with\
Perrin's result for oblate ellipsoids and prolate ellipsoids for\
$\phi<3$. For prolate ellipsoids with $\phi>3$ the rotational\
diffusivity values are lower than Perrin's calculations.
\begin{figure}
\centering
\resizebox{0.47\hsize}{!}{\includegraphics{./images/labframe_ab_data_filled}}
\caption{Lab-frame transport geometric factors. The red line and blue line are the geometric factor values calculated by Perrin and represent A and B, respectively. Red circles and blue circles are the lab-frame geometric factors for the sixteen simulated ellipsoidal raspberries and represent A and B, respectively.}
\label{figure:labframe_ellipsoids}
\end{figure}
\subsection{Ellipsoids: body-frame diffusion}
\label{sec:rsltsbodyframe}
The body-frame translational and rotational diffusivities were\
determined for filled ellipsoidal raspberries of\
constant volume. The translational diffusivity of the ellipsoids in the body-frame\
followed the finite size scaling of $D_{s}$ versus $L^{-1}$.\
After conversion of the diffusivity values to colloid resistances, the $R_{eff,x^{b}_{i},t}$ for translation along the body axes\
and $R_{eff,x^{b}_{i},r}$ for rotation about the body axes were determined via the\
trendlines of $\xi^{t}_{col}$ versus $R_{rasp}$ and $\xi^{r}_{col}$ versus $R_{rasp}^{3}$.
Happel and Brenner determined the geometric factors that define the\
relationship between an ellipsoid's actual body size $c$, aspect ratio\
$\phi$, and translational effective radius $R_{eff,x^{b}_{i},t}$ along\
the body-axes of an ellipsoid of revolution
~\cite{happelbrenner1983}. Perrin determined the\
geometric factors that determine the rotational effective radius\
along different body-axes. The formulas can be found in the SI.
Figure~\ref{figure:bftransport}.a gives the body-frame translational geometric\
factors $E$, $F$, $G$, $H$ of the sixteen ellipsoidal colloid particles\
plus the curves for the Happel-Brenner factors.\
The translational effective radii of prolate ellipsoids are larger
than the degenerate semi-axis length $c$, hence, $E$ and $F$ are greater than 1.0.
The translational effective radii of oblate ellipsoids are smaller
than the degenerate semi-axis length $c$, hence, $G$ and $H$ are less than 1.0. All
ellipsoids agree very well with the Happel-Brenner factors.\
These data support the raspberry model for future applications of\
protein transport where anisotropy may help to explain open questions in protein\
separation~\cite{ku2004}.
\begin{figure}
\begin{tabular}{c c}
\resizebox{0.47\hsize}{!}{ \includegraphics{./images/bodyframe_efgh_data_smbox_filled}} &
\resizebox{0.47\hsize}{!}{\includegraphics{./images/bodyframe_ijkl_data_smbox_filled}} \\
\end{tabular}
\caption{Body-frame transport geometric factors. a) Translation geometric factors. Red line, green line, blue line, and gray line are the geometric factor values calculated by Happel and Brenner and represent $E$, $F$, $G$, and $H$, respectively. Red triangles, green x's, blue circles, and gray squares are the body-frame translational geometric factors for the simulated ellipsoidal raspberries and represent $E$, $F$, $G$, and $H$, respectively. b) Rotation geometric factors. Red triangles, green x's, blue circles, and gray squares are the body-frame rotational geometric factors for the simulated ellipsoidal raspberries and represent $I$, $J$, $K$, and $L$, respectively.}
\label{figure:bftransport}
\end{figure}
The projection method to calculate body-frame angular displacement was tested for spheres to validate\
this method and can be found in the SI.\
Figure~\ref{figure:bftransport}.b gives the body-frame rotational geometric factors $I$, $J$, $K$, $L$ determined by simulation.\
The rotational effective radii of prolate ellipsoids are larger
than the degenerate semi-axis length $c$, hence, $I$ and $J$ are greater than 1.0.
The ratio of the rotational effective radii $\frac{R_{eff,a,r}}{R_{eff,bc,r}}=\left(\frac{I}{J}\right)^{1/3}$\
monotonically decreases with the aspect ratio of prolate ellipsoids.\
All ellipsoids agree very well with the Perrin factors~\cite{perrin1934,perrin1936}.
At long times, anisotropic transport reverts to isotropic transport\
via the diffusive rotation of an ellipsoid~\cite{han2006}. The time\
scale at which this occurs is $\tau_{\Theta}=\frac{1}{2D^{r}}=2.32$~ns--$1.16$~$\mu$s
for the ellipsoidal raspberries in this study. In principle this\
crossover could be calculated by measuring many displacement\
trajectories by using one body-frame at a time to construct a full\
trajectory. Since these simulations are 2.541~$\mu$s in length,\
diffusion statistics using mean squared displacements at lag times
beyond a few nanoseconds are very poor. These measurements are\
therefore beyond the scope of this study. Interesting dissipative
couplings between translational and rotational motion have also been
observed for ellipsoids~\cite{han2006}; however, they manifest on the
same $\tau_{\Theta}$ time scale and are beyond our reach.
\section{Conclusion}
\label{sec:conc}
The raspberry model in LB was investigated for protein-sized colloid\
particles. As bead resistance was increased, the overall colloid\
resistance to motion plateaued and attained a limiting value. The\
limiting colloid resistance to translation and rotation were higher\
than the predictions of the Stokes-Einstein and Stokes-Einstein-Debye\
relationships. The raspberries with high bead resistance were\
the least penetrable to LB as evidenced by significant enhanced drag under\
confinement in cylindrical pores. The enhanced drag was correct at high\
resistance values for coordinates more than a few LB grid spaces from the\
pore wall.
Since anisotropy has a pronounced effect on resistance to motion,\
ellipsoidal raspberries of aspect ratios between 0.1 and 10 were\
constructed and their transport was simulated. The Perrin and Happel-Brenner factors of these simulated\
colloids showed that the raspberry model reproduces the correct\
hydrodynamic resistances in the lab-frame and the body-frame.\
The raspberry model has been shown to be applicable to nonspherical colloid\
particles and appropriate for reproducing the hydrodynamics of protein-sized particles.\
It now allows us to go forward and use such a model in more complex\
environments where analytical calculations are not possible. Also more\
complex rigid shapes are now needed.
\section{Acknowledgements}
Many helpful discussions with Ron Phillips, Pieter Stroeve, Joe\
Tringe, and Jonathan Higdon are gratefully acknowledged. We thank the UC Office of the\
President Labfee program (grant number 12-LR-237353) for financial\
support. We also thank Lawrence Livermore National Lab for allowing us\
access to their computer cluster.
\newpage
\section{SUPPLEMENTARY INFORMATION}
\section{Details for ellipsoidal raspberry construction}
\label{ellsurfconstruction}
See main text, Sec. 2.1.
The type of surface dictates the complexity of the method required to calculate\
the bead-surface force. In the degenerate case of a sphere, a harmonic potential
\begin{equation}
\label{eq:harmonic_sphere}
U=k(r-R_{rasp})^2
\end{equation}
between the surface bead and the bead at the center of the sphere\
is sufficient to determine the force directed normal\
to the surface ~\cite{degraff2015rasp2layerconfined}.
In the case of an anisotropic body,\
such as an ellipsoid of revolution $(a,b=c)$, where $a$, $b$, $c$ represent the lengths of the\
ellipsoidal semi-axes, the surface coordinate $\overline{r}_{s}=[x_{s}, y_{s}, z_{s}]$\
and the normal direction $\overline{n}$\
depend on polar and azimuthal location, as shown in a 2-D representation in\
Figure~\ref{figure:ell_surf}. For some particle coordinate $\overline{r}$ the surface coordinate $\overline{r}_{s}$ is therefore unknown and must be\
calculated numerically, which we describe here.
\begin{figure}
\centering
\begin{tabular}{c c}
\resizebox{0.28\hsize}{!}{ \includegraphics{./images/sphere_coords} } &
\resizebox{0.48\hsize}{!}{\includegraphics{./images/ell_coords} }\\
\end{tabular}
\caption{Cross section of sphere, left, and ellipsoid of revolution, right.\
For an ellipsoid, the vector $\overline{r}-\overline{r}_{s}$ must\
be determined in order to impose a surface restraining force on a bead at position\
$\overline{r}$ towards surface position $\overline{r}_{s}$.}
\label{figure:ell_surf}
\end{figure}
We start with the analytical description of the surface:
\begin{equation}
\label{eq:ellsurfacecoords}
\frac{x_s^2}{a^2}+\frac{y_s^2}{b^2}+\frac{z_s^2}{c^2} = 1
\end{equation}
From Figure 2 in the main text,
\begin{equation}
\label{eq:nandr}
\frac{\overline{n}}{\|\overline{n}\|}=\frac{\overline{r}-\overline{r}_{s}}{\|\overline{r}-\overline{r}_{s}\|}.
\end{equation}
The normal vector $\overline{n}$ is also defined by the surface:
\begin{equation}
\label{eq:surfacegradient}
\overline{n} = \nabla S = \left (\frac{2x_s}{a^2}\right)\overline{e}_x+\left (\frac{2y_s}{b^2}\right)\overline{e}_y+\left (\frac{2z_s}{c^2}\right)\overline{e}_z
\end{equation}
A parameter $t$ is defined:
\begin{equation}
\label{eq:tdefine}
t \equiv 2 \frac{\|\overline{r}-\overline{r}_{s}\|}{\|\overline{n}\|}
\end{equation}
and substituted into Equation \ref{eq:nandr},
\begin{equation}
\label{eq:trearrange}
\frac{t}{2} \overline{n} = \overline{r}-\overline{r}_{s}
\end{equation}
The surface gradient components in Equation \ref{eq:surfacegradient} are\
substituted into Equation \ref{eq:trearrange}:
\begin{equation}
\frac{t}{2} \left [ \left (\frac{2x_s}{a^2}\right)\overline{e}_x+ \left (\frac{2y_s}{b^2}\right)\overline{e}_y + \left (\frac{2z_s}{c^2}\right)\overline{e}_z \right ] = \left (x-x_s\right)\overline{e}_x + \left (y-y_s\right)\overline{e}_y + \left (z-z_s\right)\overline{e}_z
\end{equation}
where the coordinates of a bead are $\overline{r}=[x,y,z]$.\
The relationship between the coordinates of the bead and the coordinates\
to the closest surface position are now established:
\begin{equation}
\label{eq:xtoxs}
x=x_s \left ( \frac{t}{a^2}+1\right), y=y_s \left ( \frac{t}{b^2}+1\right), z=z_s \left ( \frac{t}{c^2}+1\right)
\end{equation}
By rearranging and inserting Equation \ref{eq:xtoxs} into Equation
\ref{eq:ellsurfacecoords}, the function $F(t)$ is determined:
\begin{equation}
\label{eq:bisection}
F(t) = \left ( \frac{xa}{t+a^2}\right)^2+\left ( \frac{yb}{t+b^2}\right)^2+\left ( \frac{zc}{t+c^2}\right)^2 -1 =0
\end{equation}
which can be solved using a bisection method. The largest value of
$t$ for which $\lvert F(t) \rvert < tol$, where
$tol = 10^{-6}$, gives $\overline{r}_{s}$ through
Equation \ref{eq:xtoxs}. Equation \ref{eq:bisection} was solved for
each surface bead at each integration step to determine the\
surface constraining force during raspberry construction.
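A minimal numerical sketch of this nearest-surface-point search is given below. Here \texttt{brentq}, a bracketing root finder, stands in for the bisection described above, and the bead is assumed to lie outside the surface so that $F(0)>0$ and the largest root can be bracketed on $t \ge 0$; the construction script itself may differ in these details.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def closest_surface_point(r, a, b, c, tol=1e-6):
    """Nearest point r_s on x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
    for a bead at r, via the largest root of F(t) = 0."""
    x, y, z = r
    def F(t):
        return ((x*a/(t + a*a))**2 + (y*b/(t + b*b))**2
                + (z*c/(t + c*c))**2 - 1.0)
    t_hi = max(a, b, c)**2 + np.dot(r, r)
    while F(t_hi) > 0.0:          # expand until the bracket changes sign
        t_hi *= 2.0
    t = brentq(F, 0.0, t_hi, xtol=tol)
    return np.array([x/(t/a**2 + 1.0),
                     y/(t/b**2 + 1.0),
                     z/(t/c**2 + 1.0)])

# the restraining force on the bead then acts along r_s - r
\end{verbatim}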
Ellipsoidal colloids were built in a script that integrated surface beads using the\
Leapfrog algorithm with a damping coefficient of $\eta = 1000$,\
Lennard-Jones size $\sigma = 1.4$, Lennard-Jones energy $\epsilon = 0.1$,\
time step $\Delta t = 0.001$. Typically $10^{7}$ integration steps were\
necessary to create a raspberry with evenly spaced surface beads. The\
surface density for all ellipsoids was one bead per nm$^{2}$ of surface area.
\section{Spheres: Radius}
\label{sec:radiusfit}
See main text, Sec. 3.2.
\begin{table*}
\centering
\begin{tabular}{lll}
\hline\noalign{\smallskip}
configuration & $\xi^{t}_{col}$ vs. $R_{rasp}$ & $\xi^{r}_{col}$ vs. $R_{rasp}^{3}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\scriptsize theoretical &\scriptsize $\xi^{t}_{col}=6\pi\eta R_{rasp}=1.679\times10^{-2}R_{rasp}$ &\scriptsize $\xi^{r}_{col}=8\pi\eta R_{rasp}^{3}=2.224\times10^{-2}R_{rasp}^{3}$ \\
\noalign{\smallskip}
\scriptsize low $\xi_{bead}$ &\scriptsize $\xi^{t}_{col}=2.047\times10^{-2}R_{rasp} - 1.424\times10^{-11}$ &\scriptsize $\xi^{r}_{col}=3.133\times10^{-2}R_{rasp}^{3} -9.539\times10^{-29}$ \\
\noalign{\smallskip}
\scriptsize high $\xi_{bead}$ &\scriptsize $\xi^{t}_{col}=2.046\times10^{-2}R_{rasp} - 1.716\times10^{-12}$ &\scriptsize $\xi^{r}_{col}=4.173\times10^{-2}R_{rasp}^{3} + 2.640\times10^{-28}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\caption{Fits for colloid resistance to translational and rotational lab-frame transport. Units of slopes are kg~s$^{-1}$~m$^{-1}$. Units of intercepts are kg~s$^{-1}$ for translation and kg~m$^{2}$~s$^{-1}$ for rotation.}
\label{tab:radiusfits}
\end{table*}
\section{Perrin factors for lab-frame transport of ellipsoids}
\label{sec:perrinfactors}
See main text, Sec. 2.4, 2.5, 3.4.
For an ellipsoid with semi-axis lengths $(a,b=c)$ and aspect ratio\
$\phi=a/b$, these factors are \cite{perrin1934,perrin1936}
\begin{equation}
\label{eq:a_phi}
A(\phi) =\frac{a}{R_{eff,t}} = \frac{1}{\sqrt{|1-\phi^{-2}|}}\begin{cases} \arctan\sqrt{\phi^{-2}-1} & \text{for oblate: }\phi<1\\
\ln\left ( \phi+\sqrt{\phi^{2}-1} \right ) & \text{for prolate: }\phi>1 \end{cases}
\end{equation}
\begin{equation}
\label{eq:b_phi}
B(\phi) = \left ( \frac{a}{R_{eff,r}} \right )^3 =\frac{1+3\phi^{-2}A(\phi)}{2\phi^{-2}(1+\phi^{-2})}.
\end{equation}
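These factors are straightforward to evaluate numerically; the following sketch is a direct transcription of equations~\ref{eq:a_phi} and~\ref{eq:b_phi} (both functions reduce to 1 at the sphere limit $\phi=1$):
\begin{verbatim}
import numpy as np

def perrin_A(phi):
    """A(phi) = a / R_eff,t, Eq. (a_phi)."""
    if phi == 1.0:
        return 1.0                          # sphere limit
    pref = 1.0 / np.sqrt(abs(1.0 - phi**-2))
    if phi < 1.0:                           # oblate
        return pref * np.arctan(np.sqrt(phi**-2 - 1.0))
    return pref * np.log(phi + np.sqrt(phi**2 - 1.0))   # prolate

def perrin_B(phi):
    """B(phi) = (a / R_eff,r)^3, Eq. (b_phi)."""
    return ((1.0 + 3.0 * phi**-2 * perrin_A(phi))
            / (2.0 * phi**-2 * (1.0 + phi**-2)))
\end{verbatim}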
\section{Happel-Brenner factors for body-frame translation of ellipsoids}
\label{sec:happelbrennerfactors}
See main text, Sec. 2.6, 3.5.
Note that the following equations are with respect to the degenerate\
axes $b$ and $c$, whereas the lab-frame equations for $A(\phi)$ and $B(\phi)$\
are written with respect to $a$. We have reported the equations as\
Happel and Brenner originally presented them \cite{happelbrenner1983}.
For translation along the singular axis of a prolate ellipsoid,
\begin{equation}
\label{eq:happelbrenner_e}
E(\phi) = \frac{R_{eff,x^{b},t}}{c} = \frac{8}{3}~ \frac{1}{ -\frac{2\phi}{\phi^{2}-1} + \frac{2\phi^{2}-1}{(\phi^{2}-1)^{3/2}} \ln \left ( \frac{\phi+\sqrt{\phi^{2}-1}}{\phi-\sqrt{\phi^{2}-1}} \right )}.
\end{equation}
For translation along the degenerate axes of a prolate ellipsoid,
\begin{equation}
\label{eq:happelbrenner_f}
F(\phi)=\frac{R_{eff,y^{b}z^{b},t}}{c}=\frac{8}{3}~ \frac{1}{ \frac{\phi}{\phi^{2}-1} + \frac{2\phi^{2}-3}{(\phi^{2}-1)^{3/2}} \ln \left ( \phi+\sqrt{\phi^{2}-1} \right )}.
\end{equation}
For translation along the singular axis of an oblate ellipsoid,
\begin{equation}
\label{eq:happelbrenner_g}
G(\phi)=\frac{R_{eff,x^{b},t}}{c}=\frac{8}{3}~ \frac{1}{ \frac{2\phi}{1-\phi^{2}} + \frac{2-4\phi^{2}}{(1-\phi^{2})^{3/2}} \tan^{-1} \left ( \frac{\sqrt{1-\phi^{2}}}{\phi} \right )}.
\end{equation}
For translation along the degenerate axis of an oblate ellipsoid,
\begin{equation}
\label{eq:happelbrenner_h}
H(\phi)=\frac{R_{eff,y^{b}z^{b},t}}{c}=\frac{8}{3}~ \frac{1}{ -\frac{\phi}{1-\phi^{2}} + \frac{3-2\phi^{2}}{(1-\phi^{2})^{3/2}} \sin^{-1} \left ( \sqrt{1-\phi^{2}} \right )}.
\end{equation}
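The four factors are transcribed below as a direct implementation of equations~\ref{eq:happelbrenner_e}--\ref{eq:happelbrenner_h}; all four functions approach 1 at the sphere limit $\phi \to 1$.
\begin{verbatim}
import numpy as np

def hb_E(phi):            # prolate (phi > 1), singular axis
    s = np.sqrt(phi**2 - 1.0)
    return (8.0/3.0) / (-2.0*phi/s**2
            + (2.0*phi**2 - 1.0)/s**3 * np.log((phi + s)/(phi - s)))

def hb_F(phi):            # prolate (phi > 1), degenerate axes
    s = np.sqrt(phi**2 - 1.0)
    return (8.0/3.0) / (phi/s**2
            + (2.0*phi**2 - 3.0)/s**3 * np.log(phi + s))

def hb_G(phi):            # oblate (phi < 1), singular axis
    s = np.sqrt(1.0 - phi**2)
    return (8.0/3.0) / (2.0*phi/s**2
            + (2.0 - 4.0*phi**2)/s**3 * np.arctan(s/phi))

def hb_H(phi):            # oblate (phi < 1), degenerate axes
    s = np.sqrt(1.0 - phi**2)
    return (8.0/3.0) / (-phi/s**2
            + (3.0 - 2.0*phi**2)/s**3 * np.arcsin(s))
\end{verbatim}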
\section{Body-frame translational displacement}
\label{bftransdisplacement}
See main text, Sec. 2.6.
For an inverse quaternion $\overline{q}^{-1}(t) = (w,-x,-y,-z)$ at\
time $t$ the rotation can be written as a vector-matrix multiplication:
\begin{equation}
\label{eq:bodydisp}
\delta \overline{X}^{b}_n = \overline{\overline{R}}^{-1} \delta \overline{X}_{n}
\end{equation}
where $\overline{\overline{R}}$ represents the rotation matrix. The\
notation is written as inverse because the inverse\
quaternion transformation is required here. The matrix is
\begin{equation}
\label{eq:quatmatrix}
\overline{\overline{R}}^{-1}=
\begin{pmatrix}
1 - 2y^{2} - 2z^{2} & 2xy - 2zw & 2xz + 2yw \\[0.3em]
2xy + 2zw & 1 - 2x^{2} - 2z^{2} & 2yz - 2xw \\[0.3em]
2xz - 2yw & 2yz + 2xw & 1 - 2x^{2} - 2y^{2} \\
\end{pmatrix}.
\end{equation}
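A compact sketch of this transformation is given below; it transcribes the matrix of Equation \ref{eq:quatmatrix} for a quaternion stored as $(w,x,y,z)$, and whether the matrix acts as $\overline{\overline{R}}$ or $\overline{\overline{R}}^{-1}$ on a given data set depends on the orientation convention of the stored quaternions.
\begin{verbatim}
import numpy as np

def R_inv_from_quaternion(q):
    """Matrix of Eq. (quatmatrix) for q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y]])

def body_frame_displacement(q_t, dX_lab):
    """Eq. (bodydisp): delta X^b = R^{-1} delta X."""
    return R_inv_from_quaternion(q_t) @ np.asarray(dX_lab)
\end{verbatim}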
\section{Body-frame angular displacement}
\label{sec:bfangulardisplacement}
See main text, Sec. 2.7, 3.5.
\begin{figure}[h]
\centering
\begin{tabular}{m{48mm} m{48mm} m{48mm} }
\resizebox{1.0\hsize}{!}{ \includegraphics{./images/bfrot1}} &
\resizebox{1.0\hsize}{!}{ \includegraphics{./images/bfrot2} }&
\resizebox{1.0\hsize}{!}{ \includegraphics{./images/bfrot3} }\\
\end{tabular}
\caption{Projection method for measurement of body-frame rotation about $y^{b}$ between times $t$ and $t-1$. Left: Body frame axes for the $t$ and $t-1$ shapshot. Middle: Projections $x^{b}_{P}(t)$ and $z^{b}_{P}(t)$ of the $x^{b}_{t}$ and $z^{b}_{t}$ axes into the plane normal to $y^{b}_{t-1}$. Right: Same as middle image with perspective aligned with the $y^{b}_{t-1}$ axis, showing the angles between the $x^{b}_{P}(t)$ and $x^{b}_{t-1}$ which determines the angular displacement $\delta \Theta_{y^{b}}(t)$.}
\label{figure:bfrot_axes}
\end{figure}
The body-frame rotations can be seen in Figure \ref{figure:bfrot_axes}.\
Quaternions were used to calculate the body-frame\
rotational motion.
The body-frame $x^{b}$ axis unit vector at time $t$ is
\begin{equation}
\label{eq:bodyframeunitvectorcalc}
\hat{i}^{b}_{t}= q_{t}\ \hat{i}\ q^{-1}_{t}=\overline{\overline{R}}\ \hat{i}
\end{equation}
and similarly calculated\
for the body-frame unit vectors $\hat{j}^{b}_{t} $ and $\hat{k}^{b}_{t} $. In\
order to observe a body-frame rotation about the $x^{b}$ axis between time $t-1$ and time $t$, the\
$\hat{k}^{b}_{t}$ vector was projected onto the $\hat{j}^{b}_{t-1}\hat{k}^{b}_{t-1}$
plane:
\begin{equation}
\hat{k}_{P}^{b}(t)=P_{\hat{j}^{b}\hat{k}^{b}_{t-1}}(\hat{k}_{t}^{b}) = \hat{k}_{t}^{b} - \hat{i}_{t-1}^{b}\left(\hat{k}_{t}^{b} \cdot \hat{i}_{t-1}^{b}\right).
\end{equation}
These projection vectors remain in the lab-frame, but in order to use\
arctangent to calculate the $\delta\Theta_{x^{b}}$, or the angle between\
$\hat{k}_{P}^{b}(t)$ and $\hat{k}^{b}_{t-1}$, transformation into the frame\
where $\hat{i}_{t-1}^{b}=\{1,0,0\}$,\ $\hat{j}_{t-1}^{b}=\{0,1,0\}$,\
$\hat{k}_{t-1}^{b}=\{0,0,1\}$ is required:
\begin{equation}
\hat{k}_{P}''(t) = \bar{\bar{A}}_{t-1}\ \hat{k}_{P}^{b}(t)
\end{equation}
The rotation matrix $\bar{\bar{A}}_{t-1}$ is a standard rotation\
between two Cartesian coordinate frames:
\begin{equation}
\bar{\bar{A}}_{t-1}=
\begin{pmatrix}
\hat{i}_{t-1}^{b} \cdot \hat{i} & \hat{i}_{t-1}^{b} \cdot \hat{j} & \hat{i}_{t-1}^{b} \cdot \hat{k} \\[0.3em]
\hat{j}_{t-1}^{b} \cdot \hat{i} & \hat{j}_{t-1}^{b} \cdot \hat{j} & \hat{j}_{t-1}^{b} \cdot \hat{k} \\[0.3em]
\hat{k}_{t-1}^{b} \cdot \hat{i} & \hat{k}_{t-1}^{b} \cdot \hat{j} & \hat{k}_{t-1}^{b} \cdot \hat{k} \\
\end{pmatrix}
\end{equation}
where $\hat{i}=\{1,0,0\}$,\ $\hat{j}=\{0,1,0\}$,\
$\hat{k}=\{0,0,1\}$ are the unit vector lab-frame axes.
The angular displacement $\delta \Theta_{x^{b}}$ between time $t$ and
$t-1$ is calculated via the arctangent function:\
\begin{equation}
\delta \Theta_{x^{b}}(t)=\arctan \left ( \frac{\hat{k}_{P2}''(t)}{\hat{k}_{P3}''(t)} \right )
\end{equation}
where $P2$ and $P3$ refer to the 2nd and 3rd components of the\
rotated-projected vector, respectively. For displacements about the\
$y^{b}$ and $z^{b}$ axes, we calculate them using
\begin{equation}
\delta \Theta_{y^{b}}(t)=\arctan \left ( \frac{\hat{i}_{P3}''(t)}{\hat{i}_{P1}''(t)} \right )
\end{equation}
\begin{equation}
\delta \Theta_{z^{b}}(t)=\arctan \left ( \frac{\hat{j}_{P1}''(t)}{\hat{j}_{P2}''(t)} \right ).
\end{equation}
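A sketch of the full projection calculation for one axis is given below, reusing \texttt{R\_inv\_from\_quaternion} from the previous snippet; \texttt{arctan2} is used in place of the arctangent to retain the sign of the displacement, and the sign conventions should be checked against the stored quaternions.
\begin{verbatim}
import numpy as np

def delta_theta_x(q_prev, q_curr):
    """Body-frame angular displacement about x^b between t-1 and t."""
    A_prev = R_inv_from_quaternion(q_prev)      # body axes as rows, A_{t-1}
    R_curr = R_inv_from_quaternion(q_curr).T    # columns: i^b_t, j^b_t, k^b_t
    i_prev = A_prev[0, :]                       # i^b_{t-1} in the lab frame
    k_curr = R_curr[:, 2]                       # k^b_t   in the lab frame
    k_proj = k_curr - i_prev * (k_curr @ i_prev)   # project out i^b_{t-1}
    k_pp = A_prev @ k_proj                      # rotate into t-1 body frame
    return np.arctan2(k_pp[1], k_pp[2])         # components 2 and 3
\end{verbatim}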
\section{Perrin factors for body-frame rotation of ellipsoids}
\label{sec:perrinbodyrotation}
See main text, Sec. 2.7, 3.5.
Perrin calculated effective radii for rotation of a triaxial ellipsoid\
with semi-axis lengths $(a,b,c)$~\cite{perrin1934,perrin1936}. Note that the equations\
collapse for an ellipsoid of revolution $(a,b=c)$.\
For rotation about the singular axis of a prolate ellipsoid,
\begin{equation}
\label{eq:dr_bf_i}
I({\phi}) = \left ( \frac{R_{eff,x^{b},r}}{c} \right )^{3}
\end{equation}
For rotation about the degenerate axes of a prolate ellipsoid,
\begin{equation}
\label{eq:dr_bf_j}
J({\phi}) = \left ( \frac{R_{eff,y^{b}z^{b},r}}{c} \right )^{3}
\end{equation}
For rotation about the singular axis of an oblate ellipsoid,
\begin{equation}
\label{eq:dr_bf_k}
K({\phi}) = \left ( \frac{R_{eff,x^{b},r}}{c} \right )^{3}
\end{equation}
For rotation about the degenerate axes of an oblate ellipsoid,
\begin{equation}
\label{eq:dr_bf_l}
L({\phi}) = \left ( \frac{R_{eff,y^{b}z^{b},r}}{c} \right )^{3}
\end{equation}
The radii are calculated as
\begin{equation}
\label{eq:r_rot_a}
R_{eff,x^{b},r} = \left ( \frac{2}{3Q} \right )^{1/3}
\end{equation}
\begin{equation}
\label{eq:r_rot_bc}
R_{eff,y^{b}z^{b},r} = \left ( \frac{2}{3}~ \frac{a^{2} + c^{2}}{a^{2}P + c^{2}Q} \right )^{1/3}
\end{equation}
\begin{equation}
\label{eq:P}
P = \int^{\infty}_{0} \frac{ds}{(c^{2} +s)(a^{2} + s)^{3/2}}
\end{equation}
\begin{equation}
\label{eq:Q}
Q = \int^{\infty}_{0} \frac{ds}{(c^{2} +s)^{2}\sqrt{(a^{2} + s)}}
\end{equation}
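The integrals $P$ and $Q$ are easily evaluated by numerical quadrature; a minimal sketch for an ellipsoid of revolution with singular semi-axis $a$ and degenerate semi-axes $b=c$ (the sphere limit $a=c$ returns the input radius for both cases):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def perrin_rotation_radii(a, c):
    """Effective radii for rotation, Eqs. (P), (Q) and (r_rot)."""
    P = quad(lambda s: 1.0/((c*c + s)*(a*a + s)**1.5), 0.0, np.inf)[0]
    Q = quad(lambda s: 1.0/((c*c + s)**2*np.sqrt(a*a + s)), 0.0, np.inf)[0]
    R_singular = (2.0/(3.0*Q))**(1.0/3.0)               # about x^b = a
    R_degenerate = (2.0/3.0*(a*a + c*c)
                    /(a*a*P + c*c*Q))**(1.0/3.0)        # about y^b, z^b
    return R_singular, R_degenerate
\end{verbatim}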
\section{Pore-frame coordinate determination}
\label{poreframedetermination}
See main text, Sec. 2.8, 3.3.
The pore-frame coordinates of the raspberry are determined by\
translating the lab-frame origin to the center of the pore
\begin{equation}
x^{c} = (x-0.5L_{xy})
\end{equation}
\begin{equation}
y^{c} = (y-0.5L_{xy})
\end{equation}
\begin{equation}
\theta = \arctan \left ( \frac{y^{c}}{x^{c}} \right ) .
\end{equation}
Over one interval,
\begin{equation}
\label{eq:xcenter}
\delta \overline{X}^{c}(t_{n})=
\begin{pmatrix}
x^{c}(t_{n})-x^{c}(t_{n-1}) \\[0.3em]
y^{c}(t_{n})-y^{c}(t_{n-1}) \\[0.3em]
z(t_{n})-z(t_{n-1}) \\
\end{pmatrix} .
\end{equation}
The pore-frame displacement is the result of a rotation
\begin{equation}
\delta \overline{X}^{p}(t_{n}) = \overline{\overline{P}}\ \delta \overline{X}^{c}(t_{n})
\end{equation}
via the following transformation matrix
\begin{equation}
\label{eq:porematrix}
\overline{\overline{P}}=
\begin{pmatrix}
\cos(\theta) & \sin(\theta) & 0 \\[0.3em]
-\sin(\theta) & \cos(\theta) & 0 \\[0.3em]
0 & 0 & 1 \\
\end{pmatrix} .
\end{equation}
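A minimal sketch of the full transformation from a pair of lab-frame positions to a pore-frame displacement is given below; \texttt{arctan2} replaces the arctangent to resolve the quadrant of $\theta$.
\begin{verbatim}
import numpy as np

def pore_frame_displacement(r_prev, r_curr, L_xy):
    """Rotate one lab-frame displacement into the pore frame.

    r_* = (x, y, z) positions; the pore axis runs along z through
    the center of the square cross-section of side L_xy.
    """
    xc = r_prev[0] - 0.5*L_xy
    yc = r_prev[1] - 0.5*L_xy
    theta = np.arctan2(yc, xc)        # radial direction at time t-1
    dX = np.asarray(r_curr) - np.asarray(r_prev)
    P = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [ 0.0,           0.0,           1.0]])
    return P @ dX                     # (x^p radial, y^p tangential, z^p)
\end{verbatim}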
\section{Body-frame rotational diffusion of spheres}
\label{sec:bfrotspheres}
See main text, Sec. 3.5.
\begin{figure}[h!]
\centering
\resizebox{0.5\hsize}{!}{ \includegraphics{./images/2016_radius_xi1000_rot_lab_body} }
\caption{Comparison of body-frame and lab-frame rotational resistance of filled spherical raspberries for different radii. Red, green, and blue x's represent colloid resistance to rotation about initial (arbitrary) body axes $(a=b=c)$ determined by the body-axis projection method. Gray x's represent resistance to rotation in the lab-frame.}
\label{figure:bfrotsphere}
\end{figure}
Figure \ref{figure:bfrotsphere} shows the rotational resistance of spherical raspberries\
versus size and validates the body-axis projection method for body-frame\
rotation. The mean-squared
rotations about the $a$, $b$, and $c$ body axes are equivalent
for all spheres.
\section{Enhanced drag for spheres with no outer layer}
\label{sec:enhdragnolayer}
\begin{figure*}[h!]
\begin{tabular}{|c c c|}
\hline
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi1000_1layer_filled_x_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi1000_1layer_filled_y_HM_D1-eps-converted-to.pdf}} &
\resizebox{0.3\hsize}{!}{ \includegraphics{./images/xi1000_1layer_filled_z_HM_D1-eps-converted-to.pdf}}\\
\hline
\end{tabular}
\caption{Enhanced drag due to confinement within cylindrical pores, no outer layer, high bead resistance. Red: $\lambda=0.1$. Orange: $\lambda=0.2$. Yellow: $\lambda=0.3$. Green: $\lambda=0.4$. Blue: $\lambda=0.5$.}
\label{figure:enhanced_drag_nolayer}
\end{figure*}
\section{Introduction}
\subsection{Motivation}
The electric power grid is recognized as the greatest engineering achievement of the 20th century. In recent years, it has been experiencing a transformation to an even more complicated
system with an increased number of distributed energy sources and more active and less predictable load endpoints. At the same time, intermittent renewable generation introduces high uncertainty into system operation and may compromise power system stability and security. The existing control operations and modeling approaches, which were largely developed several decades ago for the much more predictable operation of a vertically integrated utility with no fluctuating generation, need to be reassessed and adapted to more stressed operating conditions \cite{camacho2011control, zhao2014design, sharma2015smart,Chen2013139,Xu2016478}. In particular, operating reserves \cite{Wood1996}, traditionally put in place to maintain power system frequency in the presence of uncertainties in production and demand, face limitations in the current grid paradigm. First, the increased uncertainty in production requires new ways of dimensioning the reserves available to the operator at any given moment. Second, because of the substantially higher stochastic component in the current and future power system operation, the power grid becomes increasingly vulnerable to large disturbances, which can eventually lead to major outages. Such events evolve on time scales much faster than what the secondary or tertiary frequency control can handle.
Hence, emergency control, i.e., quick actions to recover the stability of a power grid under critical contingency, is required.
Currently, emergency control of power grids is largely based on remedial actions, special protection schemes (SPS), and load shedding \cite{Vittal2003}, which aim to quickly rebalance power and hopefully stabilize the system. Although these emergency control schemes make the electrical power grid reasonably stable to disturbances, their drawbacks are twofold.
First, some of these emergency actions rely on interrupting electrical service to customers. The unexpected service loss is extremely harmful to customers since it may lead to enormous economic damage, e.g., it is reported that the economic cost of power interruptions in the US is about $\$79$ billion per year \cite{Lawrence05cost}. Second, protective devices are usually only effective for individual elements, but less effective in preventing the whole grid from collapse. Recent major blackouts exhibit the inability of operators to prevent the grid from cascading failures \cite{2003blackout}, regardless of the good performance of individual protective devices. The underlying reason is the lack of coordination among protective devices, which makes them incapable of maintaining the stability of the whole grid. These drawbacks call for system-level, cost-effective solutions to the emergency control of power grids.
On the other hand, new generations of smart electronic devices provide fast actuation to smart power grids. Advanced transmission resources for active and reactive power flow control
are gradually installed into the system and are expected to be widely available in the future. Besides shunt compensation (switched reactors and capacitors, Static Var Compensators, and STATCOMs), over the last decades a large number of Phase-Shifting Transformers (PSTs) have been installed in power systems all over the world, while a gradually increasing installation of Thyristor-Controlled Series Capacitors (TCSCs) has also been observed. Both of these devices can be represented by a variable susceptance (for PST modeling see, e.g., \cite{ENTSOE_PST}). At the same time, HVDC lines and HVDC back-to-back converters are installed at several locations, which can also be used for power flow and voltage control.
Motivated by the aforementioned observations, this paper aims to extract more value out of the existing fast-acting controllable grid elements to quickly stabilize the power grid when it is about to lose synchronism after experiencing contingencies (but the voltage is still well-supported). In particular, through the use of PSTs, TCSCs, or HVDC, we propose to adjust selected susceptances and/or power injections in the transmission system to control the post-fault dynamics and thereby stabilize the power system. In the rest of this paper, we will refer to all these devices as FACTS devices.
One of the most remarkable technical difficulties in realizing such a control scheme is that the post-fault dynamics of a power grid possess multiple equilibrium points, each of which has its own stability region (SR), i.e., the set of states from which the post-fault dynamics will converge to the equilibrium point. If the fault-cleared state stays outside the stability region of the stable equilibrium point (SEP), then the post-fault dynamics will result in an unstable condition and, eventually, may lead to major failures.
Real-time direct time-domain simulation, which exploits advances in computational hardware, can
perform an accurate assessment of the post-fault transient dynamics following contingencies. However, it does not suggest how to properly design emergency control actions that are guaranteed to drive critical/emergency states back to a stable operating condition.
\begin{figure}[h!]
\centering
\includegraphics[width = 3.2in]{StructuralControl}
\caption{Stability-driven smart transmission control: the fault-cleared state is made stable by changing the stable equilibrium point (SEP) through adjusting the susceptances of the network transmission lines.}
\label{fig.EmergencyControl_Idea}
\end{figure}
\subsection{Novelty}
To deal with this technical difficulty, we propose a structural control paradigm to drive post-fault dynamics from critical fault-cleared states to the desired stable equilibrium point. In particular, we will change the transmission line susceptances and/or power injection setpoints to obtain a new stable equilibrium point such that
the fault-cleared state is guaranteed to stay strictly inside the stability region of this new equilibrium point, as shown in Fig. \ref{fig.EmergencyControl_Idea}. Hence, under the new post-fault dynamics, the system trajectory will converge from the fault-cleared state to the new equilibrium point. If this new equilibrium point stays inside the stability region of the original equilibrium point, then we recover the original line susceptances/power injections and the system state will automatically converge from the new equilibrium point to the original equilibrium point. Otherwise, this convergence can be performed through a sequence of new transmission control actions which drive the system state to the original equilibrium point through a sequence of other equilibrium points, as shown in Fig. \ref{fig.EmergencyControl_EquilibriumSelection}.
It is worth noting that the proposed control scheme is a new control paradigm which is unusual in classical control systems theory. Indeed, in the proposed control paradigm, we drive the system from the initial state (i.e., the fault-cleared state) to the desired equilibrium point by relocating its equilibrium point and the corresponding stability region. This setup is unusual from the classical control theory point of view where the equilibrium point is usually assumed to be unchanged under the effects of control inputs.
Compared to the existing control methods, the proposed emergency control method has several advantages, including:
\begin{itemize}
\item [i)] It belongs to the family of special protection schemes, thus being much faster than secondary and tertiary frequency controls, and making it suitable to handle emergency situations.
\item [ii)] It avoids load shedding which causes damages and severe economic loss to consumers.
\item [iii)] The investment for the proposed control is minor since we only employ already-installed FACTS devices to change the line impedance/power injection and relocate the equilibrium point.
\item [iv)] It avoids continuous measurement of the power system state, reducing the resources needed for data storage and processing. The last feature distinguishes the proposed structural control paradigm from other link control methods \cite{PRL.96.164102}, in which the system state is continuously measured and the links are adjusted continuously.
\end{itemize}
To guarantee the convergence of the post-fault dynamics under control, we utilize our recently introduced Lyapunov function family-based transient stability certificate \cite{VuTuritsyn:2015TAC,VuTuritsyn:2014}. This stability certificate gives us sufficient conditions to assess whether given post-fault dynamics will converge from a given initial state to a given equilibrium point. In this paper, we construct a new family of Lyapunov functions that are convex and fault-dependent, balancing the trade-off between computational complexity and conservativeness of the stability certificate.
A similar idea using a control Lyapunov function for power systems was investigated in \cite{ghandhari2001control}, yet it relies on continuous measurement and control design to change the power injections.
On the practical implementation of the proposed control approach, we note that it may be dangerous if some step went wrong during the whole emergency control procedure, e.g., due to failure of the corresponding FACTS devices. This is at the same degree of risk that the grid would experience in case of protection equipment malfunctions during faults. Operators are familiar with such risks, and there are standardized procedures to ensure the reliable operation of protection relays, e.g., periodic checks, tests, etc. Similar procedures should be followed to ensure the reliable operation of FACTS devices during emergencies.
In addition, we expect that the proposed approach will act as a complement to other emergency control actions. Finally, it is worth noting that in the proposed approach, we only change the susceptances of the transmission lines in the allowable range of FACTS devices, while the selected lines remain in service and the network structure is unchanged. This is different from the line switching approach, which may cause oscillatory behavior after the switching action.
The paper is structured as follows. Section \ref{sec.model}
recalls the structure-preserving model of power
systems and formulates the emergency control problem of
power grids. In Section \ref{sec:LFF}, we construct a new convex, fault-dependent
Lyapunov function family for stability analysis.
In Section \ref{sec.PostfaultControl}, we design the emergency controls and propose the procedure for
remedial actions. Section \ref{sec.simulations}
numerically illustrates the effectiveness of the proposed emergency control action, and Section \ref{sec.conclusion} concludes the paper.
\section{Network Model and Emergency Control Problem}
\label{sec.model}
\subsection{Network Model}
In this paper, we consider power systems under critical situations when the buses' phasor angles may significantly fluctuate but the buses' voltages are still well-supported and maintained. For such situations, we utilize the standard
structure-preserving model to describe the dynamics of generators and frequency-dependent dynamic loads in
power systems \cite{bergen1981structure}. This model naturally
incorporates the dynamics of the generators' rotor angle as well as the response of
load power output to frequency deviation.
Mathematically, the grid is described by an undirected graph
$\mathcal{A}(\mathcal{N},\mathcal{E}),$ where
$\mathcal{N}=\{1,2,\dots,|\mathcal{N}|\}$ is the set of buses and
$\mathcal{E} \subseteq \mathcal{N} \times \mathcal{N}$ is the set
of transmission lines connecting those buses. Here, $|A|$ denotes
the number of elements in set $A.$ The sets of generator buses
and load buses are denoted by $\mathcal{G}$ and $\mathcal{L}$. We assume that the grid is lossless with
constant voltage magnitudes $V_k, k\in \mathcal{N},$ and the
reactive powers are ignored. Then, the structure-preserving model of the system is given by \cite{bergen1981structure}:
\begin{subequations}
\label{eq.structure-preserving}
\begin{align}
\label{eq.structure-preserving1}
m_k \ddot{\delta_k} + d_k \dot{\delta_k} + \sum_{j \in
\mathcal{N}_k} a_{kj} \sin(\delta_k-\delta_j) = &P_{m_k}, k \in \mathcal{G}, \\
\label{eq.structure-preserving2}
d_k \dot{\delta_k} + \sum_{j \in
\mathcal{N}_k} a_{kj} \sin(\delta_k-\delta_j) = &-P^0_{d_k}, k \in \mathcal{L},
\end{align}
\end{subequations}
where equation \eqref{eq.structure-preserving1} represents
the dynamics at generator buses and equation
\eqref{eq.structure-preserving2} the dynamics at load buses.
In these equations, with $k \in \mathcal{G},$ then
$m_k>0$ is the generator's dimensionless moment of inertia, $d_k>0$ is the term representing primary frequency
controller action on the governor, and $P_{m_k}$ is the input shaft power producing the mechanical torque acting on the rotor of the
$k^{th}$ generator. With $k \in \mathcal{L},$ then $d_k>0$ is the constant frequency coefficient of load and
$P^0_{d_k}$ is the nominal load.
Here, $a_{kj}=V_kV_jB_{kj},$ where $B_{kj}$ is the
(normalized) susceptance of the transmission line $\{k,j\}$ connecting the $k^{th}$ bus and $j^{th}$ bus,
$\mathcal{N}_k$ is the set of neighboring buses of the $k^{th}$
bus. Note that the system described by equation \eqref{eq.structure-preserving}
has many stationary points $\delta_k^*$ that are characterized, however,
by the angle differences $\delta_{kj}^*=\delta_k^*-\delta_j^*$
(for a given $P_k$) that solve the following system of power flow-like equations:
\begin{align}
\label{eq.SEP}
\sum_{j \in
\mathcal{N}_k} a_{kj} \sin(\delta_{kj}^*) =P_{k}, k \in \mathcal{N},
\end{align}
where $P_k=P_{m_k}, k \in \mathcal{G},$ and $P_k=-P^0_{d_k}, k \in
\mathcal{L}.$
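For illustration, the model \eqref{eq.structure-preserving} can be integrated directly. The sketch below uses a hypothetical two-generator, one-load network with made-up parameters (not a benchmark system); it is intended only to show how equations \eqref{eq.structure-preserving1}-\eqref{eq.structure-preserving2} translate into a simulation.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical 3-bus system: buses 0, 1 are generators, bus 2 a load
m = np.array([0.2, 0.2])                  # generator inertias
d = np.array([0.1, 0.1, 0.05])            # damping / load coefficients
P = np.array([0.5, 0.3, -0.8])            # injections, sum(P) = 0
a = {(0, 2): 1.0, (1, 2): 1.0, (0, 1): 0.5}   # a_kj = Vk Vj Bkj

def flows(delta):
    f = np.zeros(3)
    for (k, j), akj in a.items():
        s = akj * np.sin(delta[k] - delta[j])
        f[k] += s
        f[j] -= s
    return f

def rhs(t, y):
    # y = [delta_0, delta_1, delta_2, omega_0, omega_1]
    delta, omega = y[:3], y[3:]
    f = flows(delta)
    ddelta = np.array([omega[0], omega[1],
                       (P[2] - f[2]) / d[2]])     # load bus dynamics
    domega = (P[:2] - f[:2] - d[:2] * omega) / m  # generator dynamics
    return np.concatenate([ddelta, domega])

sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(5), rtol=1e-8)
\end{verbatim}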
\subsection{Emergency Control Problem}
\label{sec.formulation}
In normal conditions, a power grid operates at a stable equilibrium point of
the pre-fault dynamics. Under emergency situations,
the system evolves according to the fault-on dynamics laws and moves away
from the pre-fault equilibrium point to a fault-cleared state $\delta_0$.
After the fault is cleared, the system evolves according to the post-fault dynamics described by equation \eqref{eq.structure-preserving}. Assume that these post-fault dynamics possess a stable operating condition $\delta^*_{\bf origin}$ with its own stability region.
The critical situations considered in this paper are when the fault-on trajectory is leaving polytope $\Pi/2$ defined by inequalities $|\delta_{kj}| \le \pi/2, \forall \{k,j\} \in \mathcal{E},$ i.e., the fault-cleared state $\delta_0$ stays outside polytope $\Pi/2.$ In normal power systems, protective devices will be activated to disconnect faulted lines/nodes, which will isolate the fault and prevent the post-fault dynamics from instability (this would usually happen at some point beyond a voltage angle difference $\pi/2$).
To avoid disconnecting lines/nodes, our emergency control objective is to stabilize the post-fault dynamics by steering them from
the fault-cleared state $\delta_0$ to the stable equilibrium point $\delta^*_{\bf origin},$ which, e.g., may be an optimum point of some optimal power flow (OPF) problem.
To achieve this, we consider adjusting the post-fault dynamics through adjusting the susceptance of some selected transmission lines and/or changing power injections. These changes can be implemented by the FACTS devices available on power transmission grids.
The rationale of this control is based on the observation illustrated in Fig. \ref{fig.EmergencyControl_Idea} that, by appropriately changing the structure of power systems, we can obtain new post-fault dynamics with a new equilibrium point whose region of attraction contains the fault-cleared state $\delta_0$, and therefore, the new post-fault dynamic is stable.
Formally, we consider the following control design problem:
\begin{itemize}
\item [(\textbf{P})] \textbf{Structural Emergency Control Design:} \emph{Given a fault-cleared state $\delta_0$ and the stable equilibrium point $\delta^*_{\bf origin},$ determine the feasible values for susceptances of selected transmission lines and/or feasible power injection such that the post-fault dynamics are driven from the fault-cleared state $\delta_0$ to the original post-fault equilibrium point $\delta^*_{\bf origin}$.}
\end{itemize}
In the next section, we will present the stability certificate for given post-fault dynamics, which will be instrumental in designing a structural emergency control solving problem $(\textbf{P})$ in Section \ref{sec.PostfaultControl}.
\section{Fault-Dependent Transient Stability Certificate}
\label{sec:LFF}
In this section, we recall the Lyapunov function family
approach for transient stability analysis \cite{VuTuritsyn:2014,VuCCT2016}. Then, we construct a new set of fault-dependent Lyapunov functions that are convex and result in an easy-to-verify stability certificate. This set of Lyapunov functions balances the tradeoff between computational tractability and conservativeness of the stability certificate.
\subsection{The Lyapunov Function Family Approach}
In the LFF approach (see \cite{VuTuritsyn:2014,VuCCT2016} for details), the nonlinear couplings and the linear model
are separated, and we obtain an equivalent representation of \eqref{eq.structure-preserving} as
\begin{equation}\label{eq.Bilinear}
\dot x = A x - B F(C x).
\end{equation}
For the system defined by \eqref{eq.Bilinear}, the LFF approach
proposes to use the family of Lyapunov functions given by:
\begin{align} \label{eq.Lyapunov}
V(x) = \frac{1}{2}x^\top Q x - \sum_{\{k,j\}\in \mathcal{E}}
K_{\{k,j\}} \left(\cos\delta_{kj}
+\delta_{kj}\sin\delta_{kj}^*\right),
\end{align}
in which the diagonal, nonnegative matrices $K, H$ and the
symmetric, nonnegative matrix $Q$ satisfy the following linear
matrix inequality (LMI):
\begin{align}
\label{eq.QKH}
\left[ \begin{array}{ccccc}
A^\top Q+QA & R \\
R^\top & -2H\\
\end{array}\right] &\le 0,
\end{align}
with $R = QB-C^\top H-(KCA)^\top$. The classical energy function
is just one element of the large cone of all possible Lyapunov
functions corresponding to a solution of LMI \eqref{eq.QKH}: $Q=\emph{\emph{diag}}(0,...,0,m_1,...,m_m,0,...,0)$,
$K=S,$ and $H=0$.
Then, we can prove that an estimation for the region of attraction
of the equilibrium point is given by
\begin{align}\label{eq.invariant}
\mathcal{R_P} = \left\{x \in\mathcal{P}: V(x) < V_{\min}(\mathcal{P})\right\},
\end{align}
where the polytope $\mathcal{P}$ is
defined by inequalities $|\delta_{kj}+\delta_{kj}^*| \le \pi,
\forall \{k,j\} \in \mathcal{E}$, and $V_{\min}(\mathcal{P})$ is the minimum value of $V(x)$ over the flow-out boundary of polytope $\mathcal{P}$.
Finally, to determine if the post-fault dynamics are stable, we
check to see if the fault-cleared state $x_0$ is inside the stability
region estimate $\mathcal{R_P}.$
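In practice, a member of this family can be found by semidefinite programming. The sketch below is a minimal CVXPY formulation and is not the solver used in \cite{VuTuritsyn:2014}; the matrices $A$, $B$, $C$ of the bilinear representation \eqref{eq.Bilinear} are assumed to be given, and the SCS solver and the positive-definiteness margin \texttt{eps} are arbitrary choices.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def find_lyapunov(A, B, C, eps=1e-6):
    """Feasibility SDP for LMI (QKH): one (Q, K, H) of the family."""
    n, m = B.shape
    Q = cp.Variable((n, n), symmetric=True)
    K = cp.Variable(m, nonneg=True)           # diagonal entries of K
    H = cp.Variable(m, nonneg=True)           # diagonal entries of H
    R = Q @ B - C.T @ cp.diag(H) - (cp.diag(K) @ C @ A).T
    M = cp.bmat([[A.T @ Q + Q @ A, R],
                 [R.T, -2.0 * cp.diag(H)]])
    cons = [0.5 * (M + M.T) << 0,             # symmetrized LMI (QKH)
            Q >> eps * np.eye(n)]
    cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
    return Q.value, K.value, H.value
\end{verbatim}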
\subsection{The Fault-Dependent Convex Lyapunov Function}
\label{sec:certificate}
A property of the Lyapunov
function $V(x)$ defined in equation \eqref{eq.Lyapunov} is that it may be nonconvex in polytope $\mathcal{P}$, making it computationally complicated to
calculate the minimum value $V_{\min}(\mathcal{P})$. One way to get the convex Lyapunov function is to restrict the state inside the polytope defined by inequalities $|\delta_{kj}|\le \pi/2.$ However, this Lyapunov function can only certify stability for fault-cleared states with phasor differences less than $\pi/2.$
To certify stability for fault-cleared states staying outside polytope $\Pi/2,$ which likely happens in emergency situations, we construct a family of fault-dependent convex Lyapunov functions.
Assume that the fault-cleared state $x_0$ has a number of phasor differences larger than $\pi/2.$ Usually, this happens when the phasor angle at a node becomes significantly large, making the phasor difference associated with it larger than $\pi/2.$
Without loss of generality, we assume that $|\delta_{ij}(0)|>\pi/2, \forall j\in\mathcal{N}_i$ at some given node $i\in\mathcal{N}$. Also, it still holds that $|\delta_{ij}(0)+\delta_{ij}^*|\le\pi$
for all $j\in\mathcal{N}_i.$ Consider polytope $\mathcal{Q}$ defined by inequalities
\begin{align}
\label{eq.Qmatrix}
|\delta_{ij}+\delta_{ij}^*| &\le\pi, \forall j\in\mathcal{N}_i, \nonumber \\
|\delta_{kj}| &\le\pi/2, \forall j\in\mathcal{N}_k, \forall k \neq i.
\end{align}
Hence, the fault-cleared state is inside polytope $\mathcal{Q}.$ Inside polytope $\mathcal{Q},$ consider the Lyapunov function family \eqref{eq.Lyapunov}
where the matrices $Q,K \ge 0$ satisfy the following LMIs:
\begin{align}
\label{eq.NewQKH}
\left[ \begin{array}{ccccc}
A^\top Q+QA & R \\
R^\top & -2H\\
\end{array}\right] &\le 0, \\
\label{eq.NewQKH1}
Q- \sum_{j\in \mathcal{N}_i}K_{\{i,j\}}C_{\{i,j\}}^\top C_{\{i,j\}} &\ge 0,
\end{align}
where $C_{\{i,j\}} $ is the row of matrix $C$ that corresponds to the row containing $K_{\{i,j\}} $ in the diagonal matrix $K.$
From \eqref{eq.Qmatrix} and \eqref{eq.NewQKH1}, we can see that the Hessian of the Lyapunov function inside $\mathcal{Q}$ satisfies
\begin{align}
H(V(x))&=Q + \sum_{\{k,j\}\in \mathcal{E}}K_{\{k,j\}}C_{\{k,j\}}^\top C_{\{k,j\}}\cos\delta_{kj} \nonumber \\&
\ge Q+ \sum_{j\in \mathcal{N}_i}K_{\{i,j\}}C_{\{i,j\}}^\top C_{\{i,j\}}\cos \delta_{ij} \nonumber \\&
\ge Q- \sum_{j\in \mathcal{N}_i}K_{\{i,j\}}C_{\{i,j\}}^\top C_{\{i,j\}} \ge 0.
\end{align}
As such, the Lyapunov function is convex inside polytope $\mathcal{Q}$
and thus, the corresponding minimum value $V_{\min}(\mathcal{Q}),$ defined over the flow-out boundary of $\mathcal{Q},$ can be calculated in polynomial time.
Also, the corresponding estimate for region of attraction is
given by
\begin{align}\label{eq.RoAestimate}
\mathcal{R_Q} = \left\{x \in\mathcal{Q}: V(x) < V_{\min}\right\},
\end{align}
with
\begin{align}
\label{eq.Vmin2} V_{\min}=V_{\min}(\mathcal{Q})=\mathop {\min}\limits_{x \in
\partial\mathcal{Q}^{out}} V(x).
\end{align}
The convexity of $V(x)$ in polytope $\mathcal{Q}$ allows us to quickly compute the minimum value $V_{\min}$ and come up with an easy-to-verify stability certificate. Therefore, by exploiting properties of the fault-cleared state, we have a family of fault-dependent Lyapunov functions that balance the tradeoff between computational complexity and conservativeness. It is worth noting that though the Lyapunov function is fault-dependent, we only need information for the fault-cleared states instead of the full fault-on dynamics.
Another point we should note is that LMIs \eqref{eq.NewQKH}-\eqref{eq.NewQKH1} provide us with a family of Lyapunov functions guaranteeing the stability of the post-fault dynamics. For a given fault-cleared state, we can find the most suitable function in this family to certify its stability. The adaptation algorithm is similar to that in \cite{VuTuritsyn:2014}, with the only difference being the addition of inequality \eqref{eq.NewQKH1}, i.e., $Q- \sum_{j\in \mathcal{N}_i}K_{\{i,j\}}C_{\{i,j\}}^\top C_{\{i,j\}} \ge 0.$ More details can be found in Appendix \ref{appendix}.
\section{Structural Emergency Control Design}
\label{sec.PostfaultControl}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{EmergencyControl_EquilibriumSequence}
\caption{Selection of the sequence of stable equilibrium points $\delta^*_{i}, i=1,...,N,$ such that the fault-cleared state
is driven through the sequence of equilibrium points back to the original equilibrium point $\delta^*_{\bf origin}$.}
\label{fig.EmergencyControl_EquilibriumSelection}
\end{figure}
In this section, we solve the post-fault emergency control problem $\textbf{(P)}.$ As illustrated in Fig. \ref{fig.EmergencyControl_EquilibriumSelection}, to render the post-fault dynamics from the fault-cleared state $x_0$ to the equilibrium point $\delta^*_{\bf origin},$ we will find a sequence of stable equilibrium points $\delta^*_1,...,\delta^*_N$ with their corresponding region of attractions ${\bf SR_1,...,SR_N}$ such that
\begin{align}
x_0 \in {\bf SR_1}, \delta_1^*\in {\bf SR_2},..., \delta_{N-1}^*\in {\bf SR_{N}},\delta_N^* \in {\bf SR_{origin}}.
\end{align}
Then, the post-fault dynamics can be attracted from the fault-cleared state $x_0$ to the original equilibrium point $\delta^*_{\bf origin}$ through a sequence
of appropriate structural changes in the power network. In this section, we will show that we only need to determine a finite number of equilibrium points through solving convex optimization problems.
Recall that, the equilibrium point $\delta^*$ is a solution to the power flow-like equations:
\begin{align}
\label{eq.PFE}
\sum_{j\in \mathcal{N}_k}V_kV_jB_{kj}\sin \delta^*_{{kj}}=P_k, \forall k \in \mathcal{N}.
\end{align}
As such, the sequence of equilibrium points $\delta^*_1,...,\delta^*_N$ can be obtained by appropriately changing the susceptances $\{B_{kj}\}$ of the transmission lines or by changing the power injection $P_k$.
In the following, we will design the first equilibrium point $\delta^*_1$ by changing the selected line susceptances/power injection, and then design the other equilibrium points $\delta^*_2,...,\delta^*_N$ by only adjusting the
susceptances of selected transmission lines. We note that, in each control step, the susceptances of transmission lines or the power injections will only be changed one time. This scheme eliminates the need for the continuous measurement and continuous control actuation required in traditional feedback control practices.
Designing the first equilibrium point $\delta^*_1$ to drive the system from an unstable state (i.e., the fault-cleared state $x_0$) to the stable state $\delta_1^*$ will be performed in a way that differs from designing the other equilibrium points which serve to drive the system from the stable state $\delta_1^*$ to the original stable state $\delta^*_{\bf origin}.$
\subsection{Design the first equilibrium point $\delta^*_1$ by changing the transmission susceptances}
We need to find new susceptances of the transmission lines such that the stability region ${\bf SR_1}$ of the new equilibrium point $\delta_1^*$ contains $x_0.$
Consider the energy function in the Lyapunov function family \eqref{eq.Lyapunov}:
\begin{align}
\label{eq.energy}
V(x) &=\sum_{k \in \mathcal{N}}\frac{m_k \dot{\delta}_k^2}{2} - \sum_{\{k,j\}\in \mathcal{E}}B_{kj}V_kV_j(\cos \delta_{kj}+\delta_{kj}\sin\delta_{1_{kj}}^*)\nonumber\\&
=\sum_{k \in \mathcal{N}}\frac{m_k \dot{\delta}_k^2}{2} - \sum_{\{k,j\}\in \mathcal{E}}B_{kj}V_kV_j\cos \delta_{kj} -\sum_{k\in\mathcal{N}}P_k\delta_k.
\end{align}
We will find $\{B_{kj}\}$ such that $x_0 \in \mathcal{R}_{\mathcal{Q}}(\delta_1^*),$ i.e., $x_0\in \mathcal{Q}$ and $V(x_0)<V_{\min}.$ Note that, $V(x_0)$ is a linear function of $\{B_{kj}\}.$ Generally, $V_{\min}$ is a nonlinear function of $\{B_{kj}\}$. However, if we use the lower bound of $V_{\min}$ \cite{VuTuritsyn:2014}, we can have a bound $V_{\min}^{lower}$ that is linear in $\{B_{kj}\}.$ Then, the condition $V(x_0)<V_{\min}^{lower}$ is a linear matrix inequality, and thus can be solved quickly by convex optimization solvers to obtain a feasible solution of $V(x_0)<V_{\min}$.
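A minimal sketch of this design step is given below, assuming the affine expansions $V(x_0)=a_0+a^\top b$ and $V_{\min}^{lower}=c_0+c^\top b$ in the vector $b$ of adjustable susceptances have been assembled beforehand; the coefficient names, box bounds, and margin are hypothetical.
\begin{verbatim}
import cvxpy as cp

def susceptances_for_first_equilibrium(a0, a, c0, c,
                                       b_min, b_max, margin=1e-3):
    # Feasibility problem: enforce V(x0) < V_min^lower, with both
    # sides affine in the adjustable susceptances b.
    b = cp.Variable(len(a))
    cons = [a0 + a @ b <= c0 + c @ b - margin,
            b >= b_min, b <= b_max]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return b.value if prob.status == cp.OPTIMAL else None
\end{verbatim}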
\subsection{Design the first equilibrium point $\delta_1^*$ by changing the power injections}
Another way to design $\delta^*_1$ is by changing the power injection. The post-fault dynamics are locally stable when the equilibrium point stays inside the polytope defined by the inequalities $|\delta_{kj}|<\pi/2$ \cite{Dorfler:2013}. To make the post-fault dynamics stable, we can place the equilibrium point
far away from the margin $|\delta_{kj}|=\pi/2,$ i.e., making the phasor differences $\delta_{kj}$ near $0.$ As such, to search for the equilibrium point $\delta^*_1$ such that $x_0\in {\bf SR_1},$ we will find the equilibrium point $\delta^*_1$ such that its phasor differences are as small in magnitude as possible.
We recall from \cite{Dorfler:2013} that, for almost all power systems, to make sure $|\delta^*_{kj}|<\gamma <\pi/2$, we need
\begin{align}
\label{eq.SynchronizationCondition}
\|L^{\dag}p\|_{\mathcal{E},\infty} \le \sin\gamma.
\end{align}
Here, $L^\dag$ is the pseudoinverse of the network Laplacian
matrix, $p=[P_1,...,P_{|\mathcal{N}|}]^\top,$ and
$\|x\|_{\mathcal{E},\infty}=\max_{\{i,j\}\in
\mathcal{E}}|x(i)-x(j)|.$
Therefore, to make the phasor differences of the equilibrium point $\delta^*_1$ as small as possible, we will find the power injection $P_k$ such that
$\|L^{\dag}p\|_{\mathcal{E},\infty}$ is as small as possible, i.e., we minimize $\|L^{\dag}p\|_{\mathcal{E},\infty}.$
Note that, with fixed susceptances, the Laplacian matrix $L^\dag$ is fixed. As such, minimizing $\|L^{\dag}p\|_{\mathcal{E},\infty}$
over all possible power injections is a linear optimization problem.
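The sketch below illustrates this minimization with \texttt{cvxpy} for a network with Laplacian $L$; the split between fixed and controllable injections and the lossless balance constraint $\sum_k P_k = 0$ are our own modeling assumptions.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def minimize_edge_norm(L, edges, p_fixed, controllable):
    # Minimize ||L^+ p||_{E,inf} over the controllable injections,
    # holding the remaining injections at their given values.
    Ldag = np.linalg.pinv(L)          # pseudoinverse of the Laplacian
    n = L.shape[0]
    p = cp.Variable(n)
    t = cp.Variable()
    cons = [p[i] == p_fixed[i]
            for i in range(n) if i not in controllable]
    cons += [cp.sum(p) == 0]          # lossless power balance
    theta = Ldag @ p                  # affine in p
    cons += [cp.abs(theta[i] - theta[j]) <= t for (i, j) in edges]
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve()
    return p.value, t.value
\end{verbatim}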
After designing the first equilibrium point $\delta^*_1,$ we can check if $x_0 \in {\bf SR_1}$ by applying the stability certificate presented in the previous section. In particular, given the equilibrium point $\delta^*_1$ and the fault-cleared state $x_0,$ we can adapt the Lyapunov function family to find a suitable function $V(x)$ such that
$V(x_0)<V_{\min}.$ An adaptation algorithm similar to the one introduced in \cite{VuTuritsyn:2014} can find such a Lyapunov function after a finite number of steps.
We summarize the procedure as follows.
{\bf Procedure 1.}
\begin{itemize}
\item Minimize $\|L^{\dag}p\|_{\mathcal{E},\infty}$ over the power injection space (a linear program);
\item Calculate the new equilibrium point from the optimum value of the power injection;
\item Given the new equilibrium point, utilize the adaptation algorithm to search for a Lyapunov function that can certify stability for the fault-cleared state $x_0.$
\end{itemize}
\subsection{Design the other equilibrium points by changing the susceptances of transmission lines}
Now, given the equilibrium points $\delta^*_1$ and $\delta^*_{\bf origin},$ we will design a sequence of stable equilibrium points $\delta^*_2,...,\delta^*_N$ such that
$\delta_1^*\in {\bf SR_2},..., \delta_{N-1}^*\in {\bf SR_{N}},\delta^*_N \in {\bf SR_{origin}}.$ Since all of these stable equilibrium points stay inside polytope $\Pi/2,$ this design can be feasible.
{\bf Case 1:} The number of transmission lines that we can change is larger than the number of buses $|\mathcal{N}|$ (i.e., the number of lines with FACTS/PST devices available is larger than $|\mathcal{N}|$), and there are no constraints on the corresponding susceptances. Then, given the equilibrium point $\delta^*,$ it is possible to solve equation \eqref{eq.PFE} with the varying susceptances as variables. Now, we can choose the sequence of stable equilibrium points $\delta^*_2,...,\delta^*_N$ equi-spaced between the equilibrium points $\delta^*_1$ and $\delta^*_{\bf origin},$ and find the corresponding susceptances. Then we use the stability certificate presented in Section III to check if $\delta_1^*\in {\bf SR_2},..., \delta_{N-1}^*\in {\bf SR_{N}},\delta^*_N \in {\bf SR_{origin}}.$
{\bf Case 2:} The number of transmission lines that we can change is smaller than the number of buses $|\mathcal{N}|,$ or there are some constraints on the corresponding susceptances. Then, it is not always possible to find the suitable susceptances satisfying equation \eqref{eq.PFE} from the given equilibrium point $\delta^*.$
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{EmergencyControl_EquilibriumLocalization}
\caption{Localization of $\delta^*_{i}$ as the closest point to $\delta_{i-1}^*$ that stays inside the ball around $\delta_{\bf origin}^*$ with the radius
$d_{i-1}(\delta_{i-1}^*,\delta_{\bf origin}^*)-d$. The minimization of the distance is taken over all the reachable susceptance values of the selected transmission lines. Here, minimizing the distance between $\delta^*_{i}$ and $\delta^*_{i-1}$ enables the convergence from $\delta^*_{i-1}$ to $\delta^*_{i}.$ The constraint that $\delta^*_{i}$ stays in the ball will make sure that the distance from the designed equilibrium point to $\delta_{\bf origin}^*$ is decreasing, and eventually, the equilibrium point stays close enough to $\delta_{\bf origin}^*$ such that the system will converge from this equilibrium point to $\delta_{\bf origin}^*$.}
\label{fig.EmergencyControl_EquilibriumLocalization}
\end{figure}
In each step, to allow the convergence from $\delta_{i-1}^*$ to $\delta_i^*,$ we will search over all reachable susceptance values of the selected transmission lines for the one that minimizes the distance from $\delta_{i-1}^*$ to $\delta_i^*$. At the same time, we will make the distance
from these equilibrium points to the original equilibrium point $\delta^*_{\bf origin}$ strictly decreasing to make sure that we only need to design a finite number of equilibrium points. Intuitively, the localization of the equilibrium point $\delta^*_{i}$ is shown in Fig. \ref{fig.EmergencyControl_EquilibriumLocalization}. Accordingly, for the reachable set of transmission susceptances, we define $\delta^*_2$ as the closest possible equilibrium point to $\delta^*_1$ such that the distance between $\delta_2^*$ and $\delta_{\bf origin}^*$ satisfies
\begin{align}
\label{eq.DistanceCondition}
d_2(\delta^*_2,\delta^*_{\bf origin})\le d_1(\delta_1^*,\delta^*_{\bf origin})-d,\end{align}
where $d>0$ is a constant. Similarly, $\delta^*_3$ is the closest possible equilibrium point to $\delta^*_2,$ and satisfies
\begin{align}
\label{eq.DistanceCondition1}d_3(\delta^*_3,\delta^*_{\bf origin}) \le d_2(\delta_2^*,\delta^*_{\bf origin})-d,\end{align}
and so on.
Here, $d>0$ is a sufficiently small constant chosen such that the convergence from $\delta_{i-1}^*$ to $\delta_{i}^*$ is satisfied for all $i=2,...,N$,
and $d_i(\delta^*_i,\delta)$ is the distance from $\delta$ to the equilibrium point $\delta_i^*$, which is defined via $\{B_{kj}^{(i)}\},$ i.e.,
\begin{align*}
d_i(\delta^*_i,\delta) &= \sum_{k \in \mathcal{N}} \big(\sum_{j\in \mathcal{N}_k}V_kV_jB_{kj}^{(i)}(\sin\delta_{i_{kj}}^*-\sin\delta_{{kj}})\big)^2 \nonumber \\&
= \sum_{k \in \mathcal{N}} \big(P_k-\sum_{j\in \mathcal{N}_k}V_kV_jB_{kj}^{(i)}\sin\delta_{{kj}}\big)^2.
\end{align*}
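For reference, this distance translates directly into a few lines of code; the dictionary-based storage of the line susceptances is an illustrative choice.
\begin{verbatim}
import numpy as np

def equilibrium_distance(P, V, B, delta, neighbors):
    # d_i(delta_i^*, delta): squared power mismatch at the state
    # delta under the susceptances B defining delta_i^*.
    d = 0.0
    for k in range(len(P)):
        flow = sum(V[k] * V[j] * B[frozenset((k, j))]
                   * np.sin(delta[k] - delta[j])
                   for j in neighbors[k])
        d += (P[k] - flow) ** 2
    return d
\end{verbatim}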
Note that, with $d=0,$ the trivial solution to all of the above optimization problems is $\delta_N^* \equiv ... \equiv \delta_2^* \equiv \delta_1^*,$
and the convergence from $\delta_{i-1}^*$ to $\delta_{i}^*$ is automatically satisfied.
Nonetheless, since each of the equilibrium points has a nontrivial stability region, there exists a sufficiently small $d>0$ such that
the convergence from $\delta_{i-1}^*$ to $\delta_{i}^*$ must still be satisfied for all $i=2,...,N.$
On the other hand, since $d_i(\delta^*_i,\delta^*)$ is a quadratic function of $\{B_{kj}^{(i)}\},$ defining $\delta^*_2,...,\delta^*_N$ can be described by the following quadratically constrained quadratic program (QCQP) in $\{B_{kj}^{(i)}\}:$
\begin{align}
\label{eq.DefiningEquilibrium}
&\min_{\{B^{(i)}_{kj}\}} d_i(\delta^*_i,\delta^*_{i-1}) \\
{\bf s.t.\;\; } & d_i(\delta^*_i,\delta^*_{\bf origin}) \le d_{i-1}(\delta^*_{i-1},\delta^*_{\bf origin})-d \nonumber.
\end{align}
In optimization problem \eqref{eq.DefiningEquilibrium}, $d_{i-1}(\delta^*_{i-1},\delta^*_{\bf origin})$ is a constant obtained from the previous step.
Note that, the condition $d_i(\delta^*_i,\delta^*_{\bf origin}) \le d_{i-1}(\delta^*_{i-1},\delta^*_{\bf origin})-d$ will probably place $\delta^*_i$
between $\delta^*_{i-1}$ and $\delta^*_{\bf origin},$ which will automatically guarantee that $\delta^*_i$ stays inside polytope $\Pi/2$. Also, since the equilibrium points stay strictly inside polytope $\Pi/2,$ the objective function $d_i(\delta^*_i,\delta^*_{i-1})$ and the constraint function
$d_i(\delta^*_i,\delta^*_{\bf origin})$ are strictly convex functions of $\{B_{kj}^{(i)}\}.$ As such, QCQP \eqref{eq.DefiningEquilibrium} is convex and can be quickly solved using convex optimization solvers.
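A sketch of QCQP \eqref{eq.DefiningEquilibrium} in \texttt{cvxpy} follows, assuming the mismatch at a fixed state $\delta$ has been assembled offline as $r(\delta)-M(\delta)b$, with $b$ the vector of controllable susceptances; the names $M$, $r$, and the box bounds are hypothetical.
\begin{verbatim}
import cvxpy as cp

def next_equilibrium(M_prev, r_prev, M_orig, r_orig,
                     rhs, b_min, b_max):
    # min d_i(., delta_{i-1})  s.t.  d_i(., delta_origin) <= rhs,
    # where d_i(., delta) = || r(delta) - M(delta) b ||^2.
    b = cp.Variable(M_prev.shape[1])
    obj = cp.sum_squares(r_prev - M_prev @ b)
    cons = [cp.sum_squares(r_orig - M_orig @ b) <= rhs,
            b >= b_min, b <= b_max]
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return b.value, obj.value
\end{verbatim}
Here \texttt{rhs} stands for the constant $d_{i-1}(\delta^*_{i-1},\delta^*_{\bf origin})-d$ known from the previous step.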
When all of these optimization problems are feasible, then with $d>0$ from Eqs. \eqref{eq.DistanceCondition}-\eqref{eq.DistanceCondition1}, we have
\begin{align}
d_1(\delta_1^*,\delta^*_{\bf origin})& \ge d_2(\delta_2^*,\delta^*_{\bf origin})+d \ge...\nonumber \\&\ge d_N(\delta_N^*,\delta^*_{\bf origin})+(N-1)d
\nonumber \\&\ge (N-1)d.
\end{align}
As such, $N\le 1+ (d_1(\delta_1^*,\delta^*_{\bf origin})/d),$ and hence, there is only a finite number of equilibrium points $\delta_2^*,...,\delta_N^*$ that
we need to determine.
\subsection{Structural remedial actions}
We propose the following procedure of emergency controls to render post-fault dynamics from critical fault-cleared states to the desired stable equilibrium point.
\begin{itemize}
\item {\bf Initialization:} Check if the given fault-cleared state $\delta_0$ stays inside the stability region of the original equilibrium point $\delta^*_{\bf origin}$ by utilizing the stability certificate in Section \ref{sec:certificate}. If not, go to {\bf Step 1}, otherwise end.
\item {\bf Step 1:} Fix the susceptances and change the power injection such that the fault-cleared state $\delta_0$ stays inside the stability region ${\bf SR_1}$ of the new equilibrium point $\delta^*_1.$ The post-fault dynamics with power injection control will converge from the fault-cleared state $\delta_0$ to the equilibrium point $\delta_1^*.$ Recover the power injections after the post-fault dynamics converge to $\delta_1^*.$
Check whether $\delta^*_1$ stays in the stability region of the original equilibrium point $\delta^*_{\bf origin}$
by using the Lyapunov function stability certificate. If this holds true, then the post-fault dynamics will converge from the new equilibrium point to the original equilibrium point. If not, then go to Iterative Steps.
\item {\bf Iterative Steps:} Determine the transmission susceptances such that the sequence of stable equilibrium points $\delta^*_2,...,\delta^*_N$ satisfies that $\delta_1^*\in {\bf SR_2},..., \delta_{N-1}^*\in {\bf SR_{N}},\delta^*_N \in {\bf SR_{origin}}.$
Apply consecutively the susceptance changes on the transmission lines to render the post-fault dynamics from $\delta^*_1$ to $\delta^*_N.$
\item {\bf Final Step:} Restore the susceptances to the original susceptances. Then, the post-fault dynamics will automatically converge from $\delta^*_N$ to the original equilibrium point $\delta^*_{\bf origin}$
since $\delta^*_N \in {\bf SR_{origin}}.$
\end{itemize}
\section{Numerical Validation}
\label{sec.simulations}
\subsection{Kundur 9-Bus 3-Generator System}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{SLD}
\caption{A 3-generator 9-bus system with frequency-dependent dynamic
loads.} \label{fig.3generator9bus}
\end{figure}
Consider the 9-bus 3-machine system depicted in Fig.
\ref{fig.3generator9bus} with 3 generator buses and 6 load buses.
The susceptances of the transmission lines are as follows
\cite{Anderson:2003}:
$B_{14}=17.3611$ p.u., $B_{27}=16.0000$ p.u., $B_{39}=17.0648$ p.u.,
$B_{45}=11.7647$ p.u., $B_{57}=6.2112$ p.u., $B_{64}=10.8696$ p.u.,
$B_{78}=13.8889$ p.u., $B_{89}=9.9206$ p.u., $B_{96}=5.8824$ p.u.
The parameters for the generators are $m_1=0.1254, m_2=0.034, m_3=0.016, d_1=0.0627, d_2=0.017, d_3=0.008.$ For simplicity, we take $d_k=0.05, k=4,\dots,9.$
Assume that the fault trips the line between buses $5$ and $7$ and causes the power injections to vary. When the fault is cleared, this line is re-closed. We also assume fluctuations of the generation (possibly due to renewables) and load such that the bus
voltages $V_k$, mechanical inputs $P_{m_k}$, and steady state load
$-P_{d_k}^0$ of the post-fault dynamics after clearing the fault are given in Tab. \ref{tab.data9bus}. The stable
operating condition is calculated
as $\delta_{\bf origin}^*=[-0.1629\;
0.4416\;
0.3623\;
-0.3563\;
-0.3608\;
-0.3651\;
0.1680\;
0.1362\;
0.1371]^\top, \dot{\delta}_{\bf origin}^*=0.$ However, the fault-cleared state, with angles
$\delta_0=[0.025 \;-0.023\; 0.041\; 0.012\; -2.917\; -0.004\; 0.907\; 0.021\; 0.023]^\top$ and generator angular velocities $[-0.016\; -0.021\; 0.014]^\top,$ stays outside polytope $\Pi/2.$ With our adaptation algorithm, we do not find a suitable Lyapunov function certifying the convergence of this fault-cleared state to the original equilibrium point $\delta_{\bf origin}^*,$ so this fault-cleared state may be unstable. We will design emergency control actions to bring the
post-fault dynamics from the possibly unstable fault-cleared state to the equilibrium point $\delta_{\bf origin}^*.$ All the convex optimization problems associated with the design will be solved with the CVX software.
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{NoControl_Angle}
\caption{Bus angular dynamics when the proposed control is not employed.} \label{fig.NoControl_Angle}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{|c|c|c|}
\hline
Node & V (p.u.) & $P_k$ (p.u.) \\
\hline
1 & 1.0284 & 3.6466 \\
2 & 1.0085 & 4.5735 \\
3 & 0.9522 & 3.8173 \\
4 & 1.0627 & -3.4771 \\
5 & 1.0707 & -3.5798 \\
6 & 1.0749 & -3.3112 \\
7 & 1.0490 & -0.5639 \\
8 & 1.0579 & -0.5000 \\
9 & 1.0521 & -0.6054 \\
\hline
\end{tabular}
\caption{Bus voltages, mechanical inputs, and static
loads.}\label{tab.data9bus}
\end{table}
\subsubsection{Designing the first equilibrium point}
Assume that the three generators 1-3 are dispatchable and terminal loads at buses 4-6 are controllable, while terminal loads at the other buses are fixed. We design the first equilibrium point by changing the power injections of the three generators 1-3 and load buses 4-6. With the original power injection,
$\|L^{\dag}p\|_{\mathcal{E},\infty}=0.5288.$ Using CVX software to minimize $\|L^{\dag}p\|_{\mathcal{E},\infty},$ we obtain the new power injections at buses 1-6 as follows: $P_1= 0.5890, P_2=
0.5930, P_3=
0.5989, P_4=
-0.0333, P_5=
-0.0617,$ and $P_6=
-0.0165.$ Accordingly,
the minimum value of $\|L^{\dag}p\|_{\mathcal{E},\infty}=0.0350 < \sin(\pi/89).$ Hence, the first equilibrium point obtained from equation \eqref{eq.SEP} will stay in the polytope defined by the inequalities $|\delta_{kj}|\le \pi/89, \forall \{k,j\}\in\mathcal{E},$ and can be approximated by
$\delta^*_1 \approx L^{\dag}p=[ 0.0581\;
0.0042\;
0.0070\;
0.0271\;
0.0042\;
0.0070\;
-0.0308\;
-0.0486\;
-0.0281]^\top$.
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{InjectionControl_Angle}
\caption{Effect of power injection control: Convergence of bus angles from the fault-cleared state to $\delta_1^*$ in the post-fault dynamics.} \label{fig.InjectionControl_Angle}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{InjectionControl_Distance}
\caption{Effect of power injection control: the convergence of the distance $D_1(t)$ to $0$. Here, the Euclidean distance $D_1(t)$ between a post-fault state and the first equilibrium point $\delta_1^*$
is defined as $D_1(t)=\sqrt{\sum_{i=2}^{9} (\delta_{i1}(t)-\delta_{1_{i1}}^*)^2}$.}
\label{fig.InjectionControl_Distance}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{InjectionControl_Frequency}
\caption{Effect of power injection control: Convergence of generator frequencies to the base value.}
\label{fig.InjectionControl_Frequency}
\end{figure}
Next, we apply the fault-dependent stability certificate in Section III.B. With the new equilibrium point $\delta_1^*$, we have a family of Lyapunov functions satisfying LMIs \eqref{eq.NewQKH}-\eqref{eq.NewQKH1}. Using the adaptation algorithm presented in \cite{VuTuritsyn:2014}, after some steps we find that there is a Lyapunov function in this family such that $V(x_0)<V_{\min}.$ As such, when we turn on the new power injections, the post-fault dynamics are stable and the post-fault trajectory will converge from the fault-cleared state $x_0$ to the new equilibrium point $\delta^*_1$. After that, we switch power injections back to the original values.
\subsubsection{Designing the other equilibrium points by changing transmission susceptances}
Using the adaptation algorithm, we do not find a suitable Lyapunov function certifying that $\delta_1^* \in {\bf SR_{origin}}$. As such, the
new equilibrium point $\delta_1^*$ may stay outside the stability region of the original equilibrium point $\delta^*_{\bf origin}$. We design the impedance adjustment controllers to render the post-fault dynamics from the new equilibrium point back to the original equilibrium point.
Assume that the impedances of transmission lines $\{1,4\}, \{2,7\}, \{3,9\}$ can be adjusted by FACTS devices integrated with these lines. The distance from the first equilibrium point to the original equilibrium point is calculated as $d_1(\delta^*_1,\delta^*_{\bf origin})=70.6424.$ Let $d = d_1(\delta^*_1,\delta^*_{\bf origin})/2 +1=36.3212,$ and solve the following convex QCQP with variable $B^{(2)}_{14}, B^{(2)}_{27},$ and $B^{(2)}_{39}:$
\begin{align}
\label{eq.DesignSecondEquilibrium}
&\min_{\{B^{(2)}_{kj}\}} d_2(\delta^*_2,\delta^*_{1}) \\
{\bf s.t.\;\; } & d_2(\delta^*_2,\delta^*_{\bf origin}) \le d_{1}(\delta^*_{1},\delta^*_{\bf origin})-d= 34.3212. \nonumber
\end{align}
Solving this convex QCQP problem, we obtain the new susceptances of transmission lines $\{1,4\},\{2,7\}, \{3,9\}$ as $B^{(2)}_{14}=33.4174$ p.u.,
$B^{(2)}_{27}=22.1662$ p.u., and $B^{(2)}_{39}=24.3839$ p.u., with which the distances from the second equilibrium point to the first equilibrium point and to the original equilibrium point are given by $d_2(\delta^*_2,\delta^*_{1})= 60.9209$ and $d_2(\delta^*_2,\delta^*_{\bf origin})=34.3212.$ Using the adaptation algorithm, we can check that $\delta^*_1 \in {\bf SR}_2$ and $\delta^*_2 \in {\bf SR}_{\bf origin}.$
\subsubsection{Simulation results}
When there is no control in use, the post-fault dynamics evolve as in Fig. \ref{fig.NoControl_Angle}, in which we can see that the angle of load bus 5 deviates significantly from those of the other buses, with angular differences larger than 6. This implies that the post-fault dynamics evolve to a different equilibrium point instead of the desired stable equilibrium point $\delta_{\bf origin}^*,$ where the angular differences are all smaller than 0.6.
We subsequently perform the following control actions:
\begin{itemize}
\item [(i)] Changing the power injections of generators 1-3 and controllable load buses 4-6 to $P_1= 0.5890, P_2=
0.5930, P_3=
0.5989, P_4=
-0.0333, P_5=
-0.0617, P_6=
-0.0165.$ From Figs. \ref{fig.InjectionControl_Angle} and \ref{fig.InjectionControl_Distance}, it can be seen that the bus angles of the post-fault dynamics converge to the equilibrium point of the controlled post-fault dynamics, which is the first equilibrium point $\delta_1^*.$ In Fig. \ref{fig.InjectionControl_Frequency}, we can see that the generator frequencies converge to the nominal frequency, implying that the post-fault dynamics converge to the stable equilibrium point $\delta_1^*.$ However, the frequencies fluctuate considerably. The fluctuation arises because we change the power injection only one time and let the post-fault dynamics evolve autonomously to the designed equilibrium point $\delta_1^*.$ This is different from using AGC, where the fluctuation of the generator frequencies is minor but the frequency must be continuously measured and the control continuously updated.
\item [(ii)] To recover the resources spent on the power injection control, we switch the power injections back to the original values. At the same time, we change the susceptances of transmission lines $\{1,4\}, \{2,7\},$ and $\{3,9\}$ to $B^{(2)}_{14}=33.4174$ p.u.,
$B^{(2)}_{27}=22.1662$ p.u., and $B^{(2)}_{39}=24.3839$ p.u. The system trajectories will converge from the first equilibrium point $\delta_1^*$ to the second equilibrium point $\delta_2^*$, as shown in Figs. \ref{fig.ImpedanceControl_Angle}-\ref{fig.ImpedanceControl_Frequency}. Similar to the power injection control, in this case we also observe the fluctuation of generator frequencies, which is the result of the one-time change of line susceptances and the autonomous post-fault dynamics after this change.
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{ImpedanceControl_Angle}
\caption{Effect of susceptance control: Convergence of bus angles from $\delta_1^*$ to the second equilibrium point $\delta_2^*$ in the post-fault dynamics.} \label{fig.ImpedanceControl_Angle}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{ImpedanceControl_Distance}
\caption{Effect of susceptance control: the convergence of the distance $D_2(t)$ to $0$. Here, the Euclidean distance $D_2(t)$ between a post-fault state and the second equilibrium point $\delta_2^*$
is defined as $D_2(t)=\sqrt{\sum_{i=2}^{9} (\delta_{i1}(t)-\delta_{2_{i1}}^*)^2}$.} \label{fig.ImpedanceControl_Distance}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{ImpedanceControl_Frequency}
\caption{Effect of susceptance control: Convergence of generator frequencies to the base value.}
\label{fig.ImpedanceControl_Frequency}
\end{figure}
\item [(iii)] Switch the susceptances of transmission lines $\{1,4\}, \{2,7\},$ and $\{3,9\}$ to the original values. The system trajectories will autonomously converge from the second equilibrium point to the original equilibrium point $\delta^*_{\bf origin},$ as shown in Fig. \ref{fig.Autonomous_Distance}.
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{Autonomous_Distance}
\caption{Autonomous dynamics when we switch the line susceptances to the original values: the convergence of the distance $D_{\bf origin}(t)$ to $0$. Here, the Euclidean distance $D_{\bf origin}(t)$ between a post-fault state and the original equilibrium point $\delta_{\bf origin}^*$
is defined as $D_{\bf origin}(t)=\sqrt{\sum_{i=2}^{9} (\delta_{i1}(t)-\delta_{{\bf origin}_{i1}}^*)^2}$.} \label{fig.Autonomous_Distance}
\end{figure}
\end{itemize}
\subsection{Scalability demonstration on 118 bus system}
The scalability of the proposed control design depends on minimizing $\|L^{\dag}p\|_{\mathcal{E},\infty}$ to find the optimum power injections $p^*$ and solving the quadratically
constrained quadratic program (QCQP) \eqref{eq.DefiningEquilibrium} to find the optimum line susceptances. Minimizing $\|L^{\dag}p\|_{\mathcal{E},\infty}$ is a linear problem and can be solved extremely fast even with a high number of variables. The QCQP \eqref{eq.DefiningEquilibrium} is a convex problem, and can also be solved quickly in large power systems if we have a small number of susceptance variables.
To clearly demonstrate the scalability of the proposed control method to large scale power systems, we utilize the modified IEEE 118-bus test case \cite{118bus}, in which 54 buses are generator buses and the other 64 are load buses, as shown in Fig. \ref{fig.IEEE118}. The data is taken directly from the test files \cite{118bus}, unless otherwise specified. The damping and inertia are not given in the test files and thus are randomly selected in the following ranges: $m_i \in [0.02,0.04], \forall i \in \mathcal{G},$ and $d_i\in [0.01,0.02], \forall i \in \mathcal{N}.$
The grid originally contains 186 transmission lines. We eliminate 9 lines whose susceptance is zero, and combine 7 lines $\{42,49\},\{49,54\},\{56,59\},\{49,66\},\{77,80\},$ $\{89,90\},$ and $\{89,92\},$ each of which consists of two parallel transmission lines in the test files \cite{118bus}. Hence, the grid is reduced to 170 transmission lines connecting 118 buses. Assume that we can use the integrated FACTS devices to change the susceptances of the 3 transmission lines $\{19,34\}, \{69,70\},$ and $\{99,100\},$ which connect generators in the different Zones 1, 2, and 3. These transmission lines may have strong effects on keeping the synchronization of the whole system.
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{IEEE118}
\caption{IEEE 118-bus test case}
\label{fig.IEEE118}
\end{figure}
We renumber the generator buses as $1-54$ and the load buses as $55-118$. Assume that each of the first ten generator buses increases its power injection by $0.01$ p.u. and each of the first ten load buses decreases its power injection by $0.01$ p.u., which results in an equilibrium point with $\|L^{\dag}p\|_{\mathcal{E},\infty}=0.8383.$ This equilibrium point stays near the stability margin $\delta_{kj}=\pi/2,$ and is only weakly stable. As a result, the fault-cleared state $\delta_{fault-cleared}$ does not stay inside the stability region of this equilibrium point, as can be seen from Fig. \ref{fig.118bus_NoControl}, which shows that the uncontrolled post-fault dynamics converge to an equilibrium point with some angular differences larger than $\pi$.
Assume that we can control the power generation at generator buses $1-20,$ that the loads at buses $55-64$ are deferrable, and that the terminal loads at the other buses are fixed. We design the first equilibrium point by changing the power injections of the generators 1-20 and load buses 55-64. Using the CVX software to minimize $\|L^{\dag}p\|_{\mathcal{E},\infty},$ after less than 1 second, we obtain the optimum power injections at these controllable buses with
the minimum value of $\|L^{\dag}p\|_{\mathcal{E},\infty}=0.0569 < \sin(\pi/55).$ Accordingly, the new equilibrium point $\delta_1^*$ is strongly stable since it stays far away from the stability margin $\delta_{kj}=\pi/2.$ The controlled post-fault dynamics converge from the fault-cleared state to the designed equilibrium point, as shown in Fig. \ref{fig.118bus_InjectionControl}.
Now, we change the susceptances of the above selected transmission lines, which are $\{9,16\}, \{30,31\}$, and $\{44,45\}$ in the new order. Using the CVX software on a normal laptop to solve the convex QCQP with variable set $\mathcal{B}=\{B^{(2)}_{\{9,16\}}>0, B^{(2)}_{\{30,31\}}>0,B^{(2)}_{\{44,45\}}>0 \},$
\begin{align}
&\min_{\mathcal{B}} d_2(\delta^*_2,\delta^*_{1}) \\
{\bf s.t.\;\; } & d_2(\delta^*_2,\delta^*_{\bf origin}) \le d_{1}(\delta^*_{1},\delta^*_{\bf origin})-0.001, \nonumber
\end{align}
we obtain the optimum susceptances of transmission lines $\{9,16\}, \{30,31\}$, and $\{44,45\}$ in less than one second:
$B^{(2)}_{\{9,16\}}=0.0005$ p.u., $B^{(2)}_{\{30,31\}}=0.0008$ p.u., and $B^{(2)}_{\{44,45\}}=0.0012$ p.u. Therefore, the proposed control method can quickly determine the optimum values of both power injection and susceptance controls, and hence, it is suitable for handling faults in large scale power systems.
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{118bus_NoControl}
\caption{Bus angle differences in the post-fault dynamics when the proposed control is not applied.} \label{fig.118bus_NoControl}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width = 3.2in]{118bus_InjectionControl}
\caption{Convergence of bus angle differences in the controlled post-fault dynamics to the designed equilibrium point.} \label{fig.118bus_InjectionControl}
\end{figure}
\section{Conclusions}
\label{sec.conclusion}
This paper proposed a novel emergency control paradigm for power grids by exploiting the transmission facilities widely available on the grids. In particular, we formulated a control problem to recover the transient stability of power systems by adjusting the transmission susceptances of the post-fault dynamics such that a given fault-cleared state, which originally could lead to unstable dynamics, will be attracted to the post-fault equilibrium point. To solve this problem, we extended our recently introduced Lyapunov function family-based transient stability certificate \cite{VuTuritsyn:2014,VuTuritsyn:2015TAC} to a new set of convex fault-dependent functions. Applying this stability certificate, we determined suitable amounts of transmission susceptance/power injection to be adjusted in critical/emergency situations. We showed that the considered control design can be quickly performed through solving a number of linear and convex optimization problems in the form of SDP and convex QCQP. The advantage of the proposed control is that the transmission line's susceptance or power injection only needs to be adjusted one time in each step, and hence, unlike in the classical feedback control setup, no continuous measurement is required.
To make the proposed emergency control scheme applicable in practice, we need to take into account the computation and regulation delays, either by scanning contingencies offline and calculating the emergency actions beforehand, or by allowing a specific delay time for computation. Also, the variations of load and generation during this delay time should be considered.
On the theoretical side, several questions are still open not only for power grids, but also for general complex networks:
\begin{itemize}
\item Which transmission lines are the most suitable for susceptance adjustment such that we can drive the post-fault dynamics from a given initial state to the desired equilibrium point?
\item Given a grid, what is the minimum number of lines required to adjust susceptances to obtain the control objective? How many equilibrium points should be designed?
\item What are the emergency situations where the proposed control scheme is not effective? Can the proposed control scheme be extended to deal with situations of voltage instability?
\end{itemize}
Finally, the installation of FACTS devices is certainly associated with non-negligible costs for the power system stakeholders. However, this paper does not advocate the installation of new FACTS devices solely for emergency control. It rather proposes the use of existing FACTS devices, e.g. PSTs, TCSCs, or HVDC, to assist in emergency control situations. For example, a large number of PSTs have been installed in several power systems for power flow control. HVDC lines and back-to-back converters are becoming more and more widespread in systems in Europe, the US, and Asia. In this paper, we propose to use only a number of these already installed devices, in order to ensure power system stability in emergency situations. The proposed method can also be combined with transmission line switching, an approach already used by operators to ensure power system security or minimize losses. This will, however, lead to a mixed-integer optimization problem, instead of the convex QCQP optimization problem as in Section IV.C of this paper. In that case, convex relaxations should be considered to make the control design computationally tractable \cite{6502760}.
\section{Appendix}
\subsection{Adaptation algorithm to find suitable Lyapunov function}
\label{appendix}
The family of Lyapunov functions characterized by the matrices $Q,K$ satisfying LMIs \eqref{eq.NewQKH}-\eqref{eq.NewQKH1} allows us to find a
Lyapunov function that is best suited for a given fault-cleared state
$x_{0}$ or family of initial states. In the
following, we propose a simple algorithm for the
adaptation of Lyapunov functions to a given initial state $x_0$ (similar to that in \cite{VuTuritsyn:2014}).
\vskip 0.2cm
\noindent Let $\epsilon$ be a positive constant.
\begin{itemize}
\item[$-$] \emph{Step 1:} Find $Q^{(1)}, K^{(1)}$ by solving LMIs \eqref{eq.NewQKH}-\eqref{eq.NewQKH1}.
Calculate $V^{(1)}(x_{0})$ and $V^{(1)}_{\min}$.
\item[$-$] \emph{Step $k$:} If $x_{0} \notin
\mathcal{R}(Q^{(k-1)},K^{(k-1)}),$ (i.e., $V^{(k-1)}(x_{0})\ge V^{(k-1)}_{\min}$), then find matrices $Q^{(k)}, K^{(k)}$ by solving the
following LMIs:
\begin{align}
&\left[\begin{array}{ccccc}
A^\top Q^{(k)}+Q^{(k)}A & R \\
R^\top & -2H^{(k)}\\
\end{array}\right] \le 0, \nonumber \\
& Q^{(k)}- \sum_{j\in \mathcal{N}_i}K^{(k)}_{\{i,j\}}C_{\{i,j\}}^\top C_{\{i,j\}} \ge 0, \nonumber\\
& V^{(k)}(x_0) \le V^{(k-1)}_{\min}-\epsilon, \nonumber
\end{align}
with $R = Q^{(k)}B-C^\top H^{(k)}-(K^{(k)}CA)^\top.$ Note that, $V^{(k)}(x_0)$ is a linear function of $Q^{(k)}, K^{(k)}.$
\end{itemize}
With this algorithm, we have
\begin{align}
V^{(k-1)}_{\min} &\le V^{(k-1)}(x_{0}) \le V^{(k-2)}_{\min}-\epsilon
\le ... \le V^{(1)}_{\min}-(k-2)\epsilon.
\end{align}
Since $V^{(k-1)}_{\min}$ is lower bounded, this algorithm will terminate
after a finite number of steps. There are then two alternative
exits. If $V^{(k)}(x_{0}) < V^{(k)}_{\min},$ then the Lyapunov
function is identified. Otherwise, the value of $\epsilon$ is
reduced by a factor of $2$ until a valid Lyapunov function is
found. Therefore, whenever the stability certificate of the given
initial condition exists, this algorithm possibly finds it after a
finite number of iterations.
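For concreteness, the adaptation loop can be sketched as follows; the callables \texttt{solve\_lmis}, \texttt{V\_at}, and \texttt{V\_min} abstract the SDP solve of the LMIs above and the evaluation of $V(x_{0})$ and $V_{\min}$, and their interfaces are our own illustrative assumptions.
\begin{verbatim}
def adapt_lyapunov(solve_lmis, V_at, V_min, x0,
                   eps=1e-2, max_iter=50):
    # Step 1: any member of the family (no extra cut).
    Q, K = solve_lmis(cut=None)
    for _ in range(max_iter):
        if V_at(Q, K, x0) < V_min(Q, K):
            return Q, K                 # certificate found
        # Step k: add the linear cut V(x0) <= V_min_prev - eps.
        trial = solve_lmis(cut=V_min(Q, K) - eps)
        if trial is None:               # cut infeasible:
            eps /= 2.0                  # halve eps and retry
            continue
        Q, K = trial
    return None                         # no certificate found
\end{verbatim}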
\bibliographystyle{IEEEtran}
\section{Introduction}
Bimetallic nanoparticles are of fundamental interest for electrocatalysis applications as they exhibit emergent catalytic properties driven by ligand, strain, and ensemble effects that are absent in monometallic catalysts. Because of their profound impact on catalytic performance, significant attention has been devoted to understanding the roles of these promotional effects in enhancing electrocatalytic kinetics over bimetallic surfaces. It is generally believed that the observed performance enhancements arise from a beneficial modification of the local surface electronic structure, directly influencing the adsorption energies of key reaction intermediates in addition to leading to shifts in the surface Fermi level. Among these promotional mechanisms, ensemble effects are unique in that they rely on a distribution of different types of atomic ensembles along the catalyst surface to effectively activate electrochemical processes.
The prototypical example of a bimetallic catalyst that exhibits ensemble effects is the family of palladium--gold alloy catalysts, which have been shown to be active for a number of thermochemical and electrochemical processes, such as catalyzing hydrogen evolution, low temperature carbon monoxide oxidation, hydrogen peroxide production from hydrogen and oxygen gas, vinyl acetate production, hydrocarbon hydrogenation, among other reactions.\cite{Gao2012} Model studies using both single crystals and supported bimetallic palladium--gold nanoparticles have been carried out. For example, Behm and co-workers electrodeposited palladium--gold surface alloys on single crystal gold (111) surfaces and showed via a combination of \emph{in situ} scanning tunneling microscopy, cyclic voltammetry, and \emph{in situ} Fourier transform infrared spectroscopy that carbon monoxide oxidation proceeds over palladium monomers while proton adsorption can only occur at palladium multimers containing two or more palladium atoms.\cite{maroun2001role} Goodman and co-workers found that the presence of non-adjacent but proximal palladium monomers can lead to enhanced reaction kinetics for the acetoxylation of ethylene to vinyl acetate over palladium-gold surface alloys on low index single crystal gold surfaces.\cite{chen2005promotional} Brodsky and co-workers studied supported octahedral core-shell gold-palladium nanoparticles and found that gold has a tendency to segregate to the particle surface upon potential cycling, effectively diluting the palladium surface coverage, leading to an enhancement in the catalytic performance for the ethanol oxidation reaction.\cite{Brodsky2014} The extent of the gold surface segregation was observed to be highly sensitive to pH, electrolyte composition, and voltage range. Detailed electrochemical measurements of palladium-gold nanoalloys by Pizzutilo and co-workers showed that palladium may be selectively dealloyed under fuel cell operating conditions, altering the performance of surface engineered bimetallic catalysts over the course of their lifetime.\cite{Pizzutilo2017} \emph{In situ} X-ray absorption studies have also been carried out to understand the role of environmental effects on the stability of the composition and structure of palladium-gold nanocatalysts. Through extended X-ray absorption fine structure measurements, Okube and co-workers observed a strong voltage-dependence on the surface structure and proposed a mechanism through which proton adsorption at low applied potentials can draw palladium to the surface from sub-surface layers of the gold-palladium particle, generating new palladium ensembles.\cite{okube2014topologically}
To date, a number of theoretical studies have also been conducted aimed at understanding the connection between palladium-gold surfaces and their observed catalytic properties. N{\o}rskov, Behm and co-workers conducted a detailed experimental and theoretical study aimed at clarifying the adsorption of protons on palladium-gold surface alloys on palladium (111) surfaces.\cite{takehiro2014hydrogen} Employing scanning tunneling microscopy, temperature programmed desorption spectroscopy, high resolution electron energy loss spectroscopy, and semi-local density functional theory, they found that proton adsorption is most stable at compact palladium trimer three-fold hollow sites, then at palladium dimer bridge sites, and least stable at palladium monomers. They additionally found that proton adsorption at a palladium dimer--gold hollow site was more stable than at the palladium bridge site. Santos and co-workers studied near surface alloys of palladium on a gold (111) surface and found that a full sub-layer of palladium is more stable than a full monolayer on the gold (111) surface in agreement with experiment.\cite{juarez2015catalytic} They additionally found that d-electrons transfer from gold to palladium while s- and p-electrons transfer from palladium to gold with the net effect of the palladium d-band shifting up in energy towards the Fermi level. Ham and co-workers studied palladium-gold surface alloys on the palladium (111) surface and estimated the population of Pd monomers and dimers on the surface by fitting a cluster expansion Hamiltonian to density-functional results and computed averages via Monte Carlo simulations in the canonical ensemble.\cite{Ham2011} Their work showed that in the absence of adsorption effects, Pd monomers were prominent for palladium surface fractions up to 50\% for a wide range of temperatures.
While much of this work has led to profound insights into the performance and durability of bimetallic catalysts, a prominent limitation has been the difficulty of modeling bimetallic surfaces under realistic electrochemical conditions. Understanding the interplay between catalyst surface structure, performance, and stability in electrolytic environments and under applied voltage is a requirement for advancing the design of high performance electrocatalysts. In an effort to progress towards this goal, we consider a bottom-up quantum--continuum Monte Carlo approach to model the effects of solvation, applied voltage, and finite temperature on the population of palladium multimers in a palladium-gold surface alloy on a gold (111) surface. For brevity, we will refer to the system as Pd-Au/Au(111).
\section{Computational Methods}
We model the Pd-Au/Au(111) surface alloy under applied voltage by first considering the electrochemical equilibria that exist between the bimetallic electrode and the aqueous solution. As shown in Fig.~\ref{fig:PdAu_Pourbaix}, bulk gold and alloys of palladium and gold are anticipated to be thermodynamically stable at voltages between 0 V and 0.6 V vs. the standard hydrogen electrode (SHE) in strongly acidic media.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_1.pdf}
\caption{\small The Pourbaix diagram for a PdAu$_4$ alloy with Pd and Au solution concentrations taken to be $10^{-8}$ M.\cite{Persson2012} The lower and upper red dashed lines denote the onset of hydrogen evolution and oxygen reduction, respectively. }
\label{fig:PdAu_Pourbaix}
\end{figure}
At voltages between 0.6 V and 0.9 V, gold remains stable while palladium may be oxidized to form divalent Pd$^{2+}$ cations. Under high voltage conditions across a wide range of pH values, both palladium and gold may oxidize to form a variety of aqueous ions and palladium may additionally form solid PdO and PdO$_2$. In light of this diversity, we narrow the scope of our modeling efforts to study the surface alloy at a pH of 0 and between 0--0.6 V to focus solely on the effects of the applied voltage on the surface structure. It follows, then, that the solution phase does not serve as a significant source of palladium and gold species.
The bimetallic surface alloy was modeled by computing the energies of 79 unique neutrally-charged Pd-Au surface configurations in contact with a solvent using a recently developed quantum--continuum model.\cite{Weitzner2017,weitzner2017voltage,Keilbart2017} The quantum--continuum calculations were carried out using the planewave density-functional theory (DFT) code {\sc PWscf} that is part of the open-source {\sc Quantum ESPRESSO} software suite.\cite{Giannozzi2009, giannozzi2017advanced} Quantum electronic interactions were modeled using the semi-local Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. Solvent effects were described using the self-consistent continuum solvation (SCCS) model as implemented in the {\sc Environ} module that extends the {\sc PWscf} code to consider the effects of implicit liquid environments.\cite{Andreussi2012,Dupont2013} Periodic boundary artifacts on slab surfaces were additionally corrected for using the generalized electrostatic correction schemes implemented in the {\sc Environ} module.\cite{Dabo2008,Andreussi2014} The atomic cores were modeled using projector augmented wave (PAW) pseudopotentials, and the wavefunction and charge density cutoffs were taken to be 50 Ry and 600 Ry, respectively, after verifying numerical convergence of forces within 5 meV/{\AA} and total energies within 50 meV per cell. The Brillouin zone of each cell was sampled with a $\Gamma$-centered $15\times 15\times 1$ Monkhorst-Pack grid, or a grid of equivalent density for larger surface cells. The electronic occupations were smoothed with 0.005 Ry of Marzari-Vanderbilt cold smearing to aid the numerical convergence of the metallic slabs.
Surfaces were modeled as symmetric slabs containing eight interior layers of pure gold and a symmetric alloy layer on the top and bottom of the slab. Employing a similar approach to that used in Ref.~\onlinecite{Ham2011}, we model the surface alloy by allowing only the outermost layers to have occupational degrees of freedom. The formation enthalpy per site $\Delta H_F$ for one surface of the symmetric slabs can be computed as
\begin{equation}
\Delta H_F(x_\text{Pd}) = \frac{1}{2 N_\text{cell} }\big[ E(N_\text{Pd})
- N_\text{Pd} \mu^\circ_\text{Pd}
- N_\text{Au} \mu^\circ_\text{Au} \big]
\end{equation}
where $x_\text{Pd} = N_\text{Pd} / N_\text{cell}$ is the surface fraction of palladium, $N_\text{cell}$ is the number of surface primitive cells within the configuration, $N_\text{Au}$ is the total number of gold atoms in the slab, $E(N_\text{Pd})$ is the total quantum-continuum energy of a configuration with $N_\text{Pd}$ palladium atoms, and $\mu_\text{Pd}^\circ$ and $\mu_\text{Au}^\circ$ are the cohesive energies of bulk palladium and gold, which we have computed to be $-4.10$ eV/atom and $-3.14$ eV/atom, respectively. We can additionally define a mixing enthalpy for the surface alloy as
\begin{equation}
\Delta H_\text{mix} (x_\text{Pd}) = \Delta H_F(x_\text{Pd}) - x_\text{Pd} \Delta H_F(x_\text{Pd} =1) - (1-x_\text{Pd} ) \Delta H_F(x_\text{Pd} = 0),
\end{equation}
which provides the enthalpy of each configuration relative to the pure gold (111) surface and a gold (111) surface covered with a pseudomorphic monolayer of palladium.
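Both quantities reduce to a few lines of post-processing on the quantum--continuum total energies; the sketch below assumes energies in eV and reuses the cohesive energies quoted above.
\begin{verbatim}
def formation_enthalpy(E_slab, n_pd, n_au, n_cell,
                       mu_pd=-4.10, mu_au=-3.14):
    # Formation enthalpy per surface site of a symmetric slab (eV).
    return (E_slab - n_pd * mu_pd - n_au * mu_au) / (2.0 * n_cell)

def mixing_enthalpy(dHf, x_pd, dHf_pd, dHf_au):
    # Mixing enthalpy relative to the pure Au(111) surface (x = 0)
    # and the pseudomorphic Pd monolayer on Au(111) (x = 1).
    return dHf - x_pd * dHf_pd - (1.0 - x_pd) * dHf_au
\end{verbatim}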
The formation and mixing enthalpies for the considered surfaces are shown below in Fig.~\ref{fig:form_enthalpy} and Fig.~\ref{fig:mix_enthalpy} along with the $T = 0$ K chemical potentials $\mu_\text{Pd}$ defining the equilibria amongst adjacent ground states, which are shown in red along the energy hull.
\begin{figure}[h]
\centering\includegraphics[width=0.9\columnwidth]{Figure_2.pdf}
\caption{\small (a) The formation enthalpies of the sampled Pd-Au/Au(111) surface alloy configurations referenced to bulk gold and bulk palladium. The ground state configurations that lie along the energy hull are shown in red. (b) The $T = 0$ K chemical potentials of the sampled Pd-Au surface alloy configurations computed from the energy hull of the formation enthalpy data.}
\label{fig:form_enthalpy}
\end{figure}
Using the formation enthalpy or mixing enthalpy results in equivalent chemical potential--composition curves, save for a linear shift in the chemical potential as a result of adopting different reference states. The ground state configurations are depicted below in Fig.~\ref{fig:ground_state_configs}, where we observe that both gold and palladium prefer dispersed configurations when they exist as minority components along the surface.
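The ground states and the piecewise-constant chemical potentials in Fig.~\ref{fig:form_enthalpy} and Fig.~\ref{fig:mix_enthalpy} follow from the lower convex hull of the $(x_\text{Pd},\Delta H)$ data; a self-contained sketch using the standard monotone-chain construction is shown below.
\begin{verbatim}
import numpy as np

def lower_hull_and_mu(x, dH):
    # Lower convex hull of (x_Pd, dH) points; the slopes between
    # adjacent ground states are the T = 0 K chemical potentials.
    # (Optionally reduce to the minimum dH per composition first.)
    order = np.argsort(x)
    x, dH = np.asarray(x)[order], np.asarray(dH)[order]
    hull = [0]
    for i in range(1, len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            cross = ((x[i1] - x[i0]) * (dH[i] - dH[i0])
                     - (dH[i1] - dH[i0]) * (x[i] - x[i0]))
            if cross <= 0:     # hull[-1] lies on/above the chord
                hull.pop()
            else:
                break
        hull.append(i)
    mu = np.diff(dH[hull]) / np.diff(x[hull])
    return x[hull], dH[hull], mu
\end{verbatim}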
\begin{figure}[h]
\centering\includegraphics[width=0.9\columnwidth]{Figure_3.pdf}
\caption{\small (a) The mixing enthalpies of the sampled Pd-Au/Au(111) surface alloy configurations referenced to a pristine gold (111) surface and a gold (111) surface covered with a pseudomorphic monolayer of palladium. The ground state configurations that lie along the energy hull are shown in red. (b) The $T = 0$ K chemical potentials of the sampled Pd-Au surface alloy configurations computed from the energy hull of the mixing enthalpy data.}
\label{fig:mix_enthalpy}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[width=0.9\columnwidth]{Figure_4.pdf}
\caption{\small The ground state surface configurations identified in Fig.~\ref{fig:form_enthalpy} and Fig.~\ref{fig:mix_enthalpy}. In each ground state, palladium and gold are observed to mix favorably, as anticipated from their negative mixing enthalpies. For surfaces with 43\% and 57\% palladium surface fractions, alternating rows of fully coordinated gold or palladium and lines of clustered gold or palladium result in the most stable configurations.}
\label{fig:ground_state_configs}
\end{figure}
\section{The Pd-Au surface alloy under applied voltage}
As we have shown in previous studies, surface electrification effects can be considered by adding explicit charges to the supercell and inserting a planar ionic countercharge several {\AA}ngstroms from the surface within the bulk of the continuum dielectric region.\cite{Weitzner2017,weitzner2017voltage,Keilbart2017} We can then define a charge-dependent enthalpy by Taylor expanding the enthalpies obtained for neutral surfaces with respect to charge
\begin{equation}
\Delta H_\text{mix} (x_\text{Pd}, Q) = \Delta H_\text{mix} (x_\text{Pd}, Q=0) + \Phi_0(x_\text{Pd}) Q + \frac{1}{2}\frac{Q^2}{AC_0(x_\text{Pd})},
\label{eq:mix_enth_q}
\end{equation}
where $Q$ is the total charge per site in the cell, $\Phi_0(x_\text{Pd})$ is the configuration-dependent potential of zero charge (PZC) of the alloy surface, $A$ is the surface area of one side of the slab, and $C_0(x_\text{Pd})$ is the differential capacitance of the electrode-solution interface. This capacitance term can be computed fully \emph{ab initio} for a given surface configuration by fitting Eq.~\ref{eq:mix_enth_q} to a set of enthalpies obtained for different charges and for a fixed position of the ionic countercharge.\cite{Weitzner2017,weitzner2017voltage,Keilbart2017} Alternatively, experimental capacitance values may be considered or the capacitance may be taken to be an environmental parameter and adjusted in a sensitivity analysis to approximately model the effects of the surface charge. The latter is similar in spirit to the analytical dipole corrections applied to neutral surfaces in the study of electrocatalysis.\cite{yeh2013density} This is true to the extent that a neutral surface is modeled and the effects of a perturbed interfacial electric field are approximated via an analytical correction to the configurational energies. However, unlike the dipole correction, which involves an expansion in terms of the interfacial electric field, our approach (Eq.~\ref{eq:mix_enth_q}) achieves a similar result indirectly as an expansion in terms of the surface charge. Furthermore, the expansion coefficients are identified to be more natural interfacial quantities such as the potential of zero charge and differential capacitance as opposed to the dipole moment and polarizability of surface species. We anticipate that both methods deliver equivalent accuracies and entail similar amounts of post-processing work; however, a direct comparison of the methods has yet to be conducted.
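As a concrete example of the fully \emph{ab initio} route, the PZC and the differential capacitance can be extracted from a quadratic fit of the charged-cell enthalpies; the sketch below assumes mutually consistent units for the charge, enthalpy, and area.
\begin{verbatim}
import numpy as np

def fit_pzc_and_capacitance(Q, dH, area):
    # Fit dH(Q) = dH(0) + Phi0*Q + Q^2 / (2*A*C0) to the
    # enthalpies of a series of charged cells.
    c2, c1, c0 = np.polyfit(Q, dH, 2)
    phi0 = c1                      # linear term: the PZC
    C0 = 1.0 / (2.0 * area * c2)   # curvature: capacitance
    return phi0, C0
\end{verbatim}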
While obtaining the enthalpy as a function of charge and composition is practical for performing quantum--continuum calculations where the charge is easily controlled, it is desirable to model the surface alloy at fixed voltages since the charge or current density is measured at a fixed potential in experiments. A voltage-dependent enthalpy or \emph{electrochemical enthalpy} may be obtained through the Legendre transform $\mathscr{F}(x_\text{Pd}, \Phi) = \Delta H_\text{mix}(x_\text{Pd}, Q) - \Phi Q$, where the voltage $\Phi$ becomes an independent potential of the system and the charge that develops on the electrode is modeled as $Q = AC_0 (\Phi - \Phi_0(x_\text{Pd}))$. In order to compute this new enthalpy, it is necessary to determine the PZC of each surface configuration, which is nothing other than the equilibrium voltage on the neutral electrode. The PZC is analogous to the work function of the neutral electrode in solution, and can be computed as $\Phi_0 = -e_0 \phi(z=\infty) - E_F$, where $e_0$ is the unsigned elementary charge, $\phi(z=\infty)$ is the electrostatic potential far from the electrode surface in the bulk of the solution, and $E_F$ is the Fermi level of the electrode. In practice, we can determine the configuration-dependent PZC by aligning the converged electrostatic potential at the edge of the supercell to zero and computing the PZC directly as $\Phi_0 = -E_F$. This provides an absolute value for the voltage relative to the bulk of the solution which must subsequently be referenced to a common standard such as the standard or reversible hydrogen electrode. This can be achieved by using Trasatti's estimate for the absolute value of the standard hydrogen electrode $\Phi_\text{SHE} = 4.44$ V, or by aligning the set of PZC values to an experimentally determined PZC so that the computed PZC of the neutral pristine surface matches the experimental one.\cite{trasatti1986absolute} In this work, we consider a non-reconstructed Au(111) surface for which PZC data is scarce. We therefore reference the voltages in these simulations by subtracting $\Phi_\text{SHE}$, as shown in Fig.~\ref{fig:au_pd_pzcs}.
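Note that carrying out this Legendre transform explicitly with the quadratic charge dependence of Eq.~\ref{eq:mix_enth_q} and the charging relation $Q = AC_0(\Phi - \Phi_0(x_\text{Pd}))$ yields the closed form
\begin{equation}
\mathscr{F}(x_\text{Pd}, \Phi) = \Delta H_\text{mix} (x_\text{Pd}, Q=0) - \frac{1}{2} A C_0(x_\text{Pd}) \left( \Phi - \Phi_0(x_\text{Pd}) \right)^2,
\end{equation}
which makes explicit that each surface configuration is stabilized in proportion to its differential capacitance and to the square of the applied overpotential relative to its PZC.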
\begin{figure}[h]
\centering\includegraphics[width=0.80\columnwidth]{Figure_5.pdf}
\caption{\small Potentials of zero charge of the sampled Pd-Au/Au(111) surface alloy configurations. Voltages are reported both on the absolute scale of the quantum--continuum calculations (left) and the standard hydrogen electrode scale (right).}
\label{fig:au_pd_pzcs}
\end{figure}
Here, we observe that the PZC of the gold (111) surface increases by 50 mV after replacing the top surface layer with a full palladium monolayer. An increase in the PZC after metal electrodeposition is not very surprising since the palladium (111) surface has a work function of 5.6 eV compared to a work function of 5.3 eV for the gold (111) surface, and a similar trend is to be expected for the PZCs.\cite{michaelson1977work} It has also been shown previously that thin electrodeposited metal films often exhibit PZCs between that of the substrate surface and a bulk surface of the depositing metal.\cite{el2006potential}
In Fig.~\ref{fig:electrochem_mixing_enthalpies}, we show how the applied voltage affects the mixing enthalpies of the Pd-Au/Au(111) surface alloy.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_6.pdf}
\caption{\small The electrochemical enthalpies at several values of the applied voltage and differential capacitance. At a voltage of 0.6 V, the surface charge tends towards zero as the PZC of the Pd-Au/Au(111) surface varies approximately between 0.50 and 0.60 V. The mixing enthalpies thus approach the limiting case of the neutral surface. At lower voltages, a negative charge develops, leading to a strong perturbation of the ground states. The effect of this surface charging process is most evident when the differential capacitance is increased, since this leads to a concomitant increase in the magnitude of the surface charge. A nonuniform shift in the mixing enthalpy curve occurs, where the configurations with palladium surface fractions near 60\% are stabilized more strongly than more dilute configurations due to the differences in their PZCs.}
\label{fig:electrochem_mixing_enthalpies}
\end{figure}
At low potentials, surface electrification effects become prominent and we observe that the ground state at $x_\text{Pd} = 0.57$ becomes increasingly stabilized with an increasingly negative surface charge. At low voltages, the neutral ground state configurations at $x_\text{Pd} = 0.25$ and $x_\text{Pd} = 0.83$ move away from the energy hull. This indicates two possibilities: that either new two-phase regions in these composition ranges appear under electrochemical conditions, or that new ground state configurations with surface cells larger than those considered to construct the training set may exist. Further study will be required to clarify this observation.
\section{Cluster expansion fitting}
Following the procedure outlined in Ref.~\onlinecite{weitzner2017voltage}, two-dimensional cluster expansions were fitted to electrochemical enthalpies computed at voltages of 0, 0.3 and 0.6 V vs. SHE and for differential capacitance values of 20, 40, and 60 $\mu$F/cm$^2$.\cite{el2002potentials} We used an in-house code that implements the steepest descent approach described in Ref.~\onlinecite{Herder2015} to perform the cluster expansion fitting. We enforced the restriction that all proposed expansions contained the empty, point, and nearest-neighbor pair clusters to ensure that local interactions were adequately described in the expansion. The remaining clusters in the final expansion were included by minimizing a leave-one-out cross-validation (LOOCV) score $\Delta$ for the entire training set using the steepest descent approach. Briefly, the score is computed as
\begin{equation}
\Delta = \left( \frac{1}{k} \sum_k \left( \mathscr{F}(\{ \sigma_i \}) - \hat{\mathscr{F}}(\{ \sigma_i \}) \right)^2 \right)^{\frac{1}{2}},
\end{equation}
where the mean square error of the configurational energy is computed such that each cluster expansion estimate $\hat{\mathscr{F}}(\{ \sigma_i \})$ is obtained with a set of ECIs fitted while excluding the configuration under consideration from the training set. We allowed clusters that contained up to four vertices and maximal diameters of up to nine nearest neighbors to be included in the cluster search space. In panel a of Fig.~\ref{fig:LOOCV_convergence}, we show the convergence of the LOOCV score as a function of the number of iterations of the steepest descent algorithm.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_7.pdf}
\caption{\small Results of the cluster expansion fitting process showing a) the convergence of the LOOCV score for the neutral Pd-Au/Au(111) surface, and b) the predicted cluster expansion enthalpies for the training set using the optimized set of clusters. The cluster expansion provides accurate enthalpy estimates for lower energy configurations with dilute palladium surface fractions.}
\label{fig:LOOCV_convergence}
\end{figure}
Typically, LOOCV scores are on the order of tens of meV/site; however, we obtain converged results on the order of 1 meV/site due to the small magnitude of the surface alloy mixing enthalpy. In panel b of Fig.~\ref{fig:LOOCV_convergence}, we show the cluster expansion estimates for the mixing enthalpies. Overall, the cluster expansion leads to a good fit of the training set; however, higher-energy alloys are predicted less accurately, as are some high palladium content surface alloys. Neither of these poses issues to the present analysis, since we perform simulations at room temperature, where high-energy configurations are infrequently sampled, and we furthermore restrict our analysis to low-palladium content surfaces.
\section{Fixed-voltage canonical Monte Carlo simulations}
The Pd-Au/Au(111) surface alloy was studied at a set of fixed voltages within an extended canonical ensemble $(N_\text{Au}, N_\text{Pd},V,T,\Phi)$ via Metropolis Monte Carlo. The associated Boltzmann probability in this ensemble takes the form
\begin{equation}
\mathcal{P}_i = \frac{1}{\mathcal{Z}} \exp\left[ -\beta N_\text{cell}\Delta \mathscr{F}(\Phi) \right],
\end{equation}
where $\mathcal{Z}$ is the partition function, $\beta = \frac{1}{k_B T}$, $N_\text{cell}$ is the number of primitive surface cells in the system, and $\Delta \mathscr{F}(\Phi) = \Delta H_\text{mix}(\{\sigma_i\}) - \Phi \Delta Q(\{\sigma_i\}, \Phi)$ is the difference in electrochemical enthalpy between subsequently generated states in the simulation. New states are proposed via spin-exchange trial moves, which consist of randomly selecting a pair of opposite spins on the lattice and exchanging them, thereby preserving the overall composition of the system while ergodically exploring the configurational space.\cite{Newman1999} Trajectory data were analyzed for temporal correlations in the monomer, dimer, and trimer coverages, which were found to be fully decorrelated within one Monte Carlo sweep (MCS). Samples were thus collected after each sweep of the lattice, where one sweep consists of performing a number of random Monte Carlo moves equal to the number of sites in the lattice. Finite-size effects were additionally tested for, as shown in Fig.~\ref{fig:finite_size_effects}, and it was found that a cell size of $40\times 40$ primitive cells led to a good balance of precision and computational cost.
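In this ensemble, each proposed spin exchange is accepted according to the standard Metropolis criterion,
\begin{equation}
p_\text{acc} = \min \left\{ 1, \; \exp\left[ -\beta N_\text{cell} \Delta \mathscr{F}(\Phi) \right] \right\},
\end{equation}
where $\Delta \mathscr{F}(\Phi)$ is evaluated between the proposed and current configurations, so that moves that lower the electrochemical enthalpy are always accepted.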
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_8.pdf}
\caption{\small Convergence of monomer, dimer, and trimer distributions for simulation cell sizes of a) $20\times 20$, b) $30\times 30$, and c) $40\times 40$ primitive cells at a fixed capacitance of 60 $\mu$F/cm$^2$ and at a voltage of $\Phi = 0.3$ V/SHE.}
\label{fig:finite_size_effects}
\end{figure}
Simulations were allowed to equilibrate for 100 MCS prior to computing average multimer coverages over 10,000 MCS. This was sufficient to obtain averages for the palladium ensemble coverages converged to within a precision of $10^{-4}$.
\section{Results and discussion}
In order to assess the predictive accuracy of the quantum--continuum model and the sensitivity of the palladium multimer coverage distributions to solvation and surface electrification effects, we make a comparison with coverage measurements performed by Behm and co-workers via \emph{in situ} scanning tunneling microscopy.\cite{maroun2001role} To facilitate the comparison, canonical Monte Carlo simulations were performed for palladium surface fractions of $x_\text{Pd} = 0.07$ and $x_\text{Pd} = 0.15$ at voltages of 0, 0.3, and 0.6 V vs. SHE. We additionally consider differential capacitance values of 20, 40, and 60 $\mu$F/cm$^2$ in accordance with capacitance measurements made by Kolb and co-workers for a palladium monolayer-covered gold (111) surface in a 10 mM NaF electrolyte.\cite{el2002potentials}
In Fig.~\ref{fig:x_pd_07_snapshots}, we show several snapshots of the Pd-Au/Au(111) surface alloy with a composition of $x_\text{Pd} = 0.07$ obtained for simulations run under different electrochemical conditions.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_9.pdf}
\caption{\small Snapshots of the palladium--gold surface alloy for a palladium surface fraction of $x_\text{Pd} = 0.07$ for different voltages and differential capacitances. The surface palladium atoms are shown in blue. Monomer clustering is evident at 0 V, while dimer and trimer formation can also be clearly seen at 0.3 V. At higher potentials, palladium adopts a more dispersed state along the surface.}
\label{fig:x_pd_07_snapshots}
\end{figure}
We observe that palladium monomers are the dominant type of multimer for all cases, and that systems with lower degrees of surface electrification achieved with either higher voltage or lower differential capacitance tend to adopt more dispersed configurations. For surfaces with higher degrees of surface electrification achieved via higher differential capacitance values or with lower voltages, we find that palladium tends to cluster along the surface. Interestingly, we find two particular cases of clustering where higher order dimer and trimer multimers appear to be stabilized at intermediate voltages, while at low voltages palladium surface atoms cluster to form locally ordered regions with palladium monomers situated at second nearest neighbor positions. In Fig.~\ref{fig:ave_multimer_007}, we make a quantitative comparison of the Monte Carlo multimer coverage estimates with the experimental results reported in Ref.~\onlinecite{maroun2001role}.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_10.pdf}
\caption{\small Average palladium ensemble coverage for a palladium surface fraction of $x_\text{Pd} = 0.07$ under applied voltage for differential capacitance values of a) 20 $\mu$F/cm$^2$, b) 30 $\mu$F/cm$^2$, c) 60 $\mu$F/cm$^2$. Error bars for the Monte Carlo data are the standard deviation of each coverage distribution. Experimental data and random alloy data were obtained from Ref.~\onlinecite{maroun2001role}.}
\label{fig:ave_multimer_007}
\end{figure}
For each value of the differential capacitance considered, we obtain close agreement with the experimentally measured multimer coverages. We observe the general trend that the monomer coverage is highest at low voltages, decreases at intermediate voltages with an increased stabilization of dimers, and then increases again at higher voltages.
Similar behavior is observed for the Pd-Au/Au(111) surface alloy with a palladium surface fraction of $x_\text{Pd} = 0.15$. We show in Fig.~\ref{fig:x_pd_15_snapshots} several snapshots of the surface alloy simulated under the same set of electrochemical conditions as the dilute surface considered previously.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_11.pdf}
\caption{\small Snapshots of the palladium--gold surface alloy for a palladium surface fraction of $x_\text{Pd} = 0.15$ for different voltages and differential capacitances. The surface palladium atoms are shown in blue. Monomer clustering is evident at 0 V and 0.3 V, while dimer and trimer formation can also be clearly seen at 0.3 V. Similar to the more dilute case, palladium adopts a more dispersed state along the surface at higher potentials.}
\label{fig:x_pd_15_snapshots}
\end{figure}
Like the dilute composition, we find that palladium monomers tend to be the dominant multimer under most of the considered electrochemical conditions except for large values of the differential capacitance at intermediate voltages. In this case we see a pronounced stabilization of dimers and trimers that appear to be uniformly distributed over the surface. In addition to this, we observe that palladium tends to exhibit the same type of ordering identified in the dilute case at low potentials, where palladium monomers are locally clustered sitting at second nearest neighbor positions from one another. In Fig.~\ref{fig:ave_multimer_015}, we compare the Monte Carlo multimer coverage estimates to the \emph{in situ} scanning tunneling microscopy results reported by Behm and co-workers.
\begin{figure}[h]
\centering\includegraphics[width=1\columnwidth]{Figure_12.pdf}
\caption{\small Average palladium ensemble coverage for a palladium surface fraction of $x_\text{Pd} = 0.15$ under applied voltage for differential capacitance values of a) 20 $\mu$F/cm$^2$, b) 30 $\mu$F/cm$^2$, c) 60 $\mu$F/cm$^2$. Error bars for the Monte Carlo data are the standard deviation of each coverage distribution. Experimental data and random alloy data were obtained from Ref.~\onlinecite{maroun2001role}.}
\label{fig:ave_multimer_015}
\end{figure}
We again find our results to be in good agreement with experiment, however in this case we observe a stronger response to the applied voltage and differential capacitance as compared to the dilute surface alloy. For almost all sets of electrochemical conditions, we predict a slightly higher monomer coverage for surface alloys simulated with low differential capacitance values. As the differential capacitance is increased, we find an enhanced stabilization of monomers at low voltages and an enhanced stabilization of dimers and trimers at intermediate voltages.
It is worthwhile to note that for both of the surface alloy compositions considered in this analysis, the closest agreement with experiment was found for surfaces considered at $\Phi = 0.6$ V vs. SHE, close to the potential at which the surface alloys were electrodeposited.\cite{maroun2001role} While this result is promising, it is important to point out that the adsorption of both protons and anions such as sulfate or bisulfate is known to occur and may play an important role in determining the composition and therefore the distribution of multimers along the surface.\cite{okube2014topologically, maroun2001role} This is especially true for palladium--gold nanoparticles where surface segregation effects are known to occur; however, the influence of these co-adsorbates is less clear for model surface alloys on single crystal surfaces where the active components are restricted to the topmost surface layer.\cite{Brodsky2014} It is additionally promising to see that accounting for the capacitive nature of the interface can lead to a measurable change in the equilibrium distribution of palladium multimers along the surface, indicating that the type and distribution of active sites along the surface exhibit a voltage dependence that is independent of co-adsorption effects.
\section{Summary}
In this work, a quantum--continuum model was applied to study the effects of solvation and surface electrification on the equilibrium distribution of palladium multimers in a palladium--gold surface alloy on the gold (111) surface. Electrochemical enthalpies obtained with the quantum--continuum model were used to fit two-dimensional cluster expansions of the surface alloy for different sets of voltages and differential capacitances, defining several different electrochemical environments. Metropolis Monte Carlo simulations were performed in the canonical ensemble for fixed voltages using non-local spin-exchange moves. Close agreement with experimentally measured palladium multimer coverages was found for each case considered. We found that at voltages near 0 V vs. SHE, palladium monomers are predicted to be stable and tend to adopt locally ordered structures with neighboring palladium atoms occupying second-nearest neighbor positions. At voltages near 0.3 V vs. SHE, we found that palladium dimers and trimers are stable and homogeneously distributed along the surface when the differential capacitance approaches 60 $\mu$F/cm$^2$, but adopt similar low-voltage configurations for lower differential capacitances. At voltages near 0.6 V vs. SHE, palladium is observed to exist primarily as monomers along the surface. These results suggest that applied voltages can provide a driving force for the ordering or clustering of catalytically active multimers within surface alloys, altering the distribution and variety of active sites along the catalyst surface that are available for electrocatalysis under different electrochemical conditions. This work provides a new perspective and direction for modeling the durability of electrocatalytic alloys in electrochemical environments.
\begin{acknowledgements}
This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, CPIMS Program, under Award \# DE-SC0018646. Computations for this research were performed on the Pennsylvania State University's Institute for CyberScience Advanced CyberInfrastructure (ICS-ACI).
\end{acknowledgements}
|
1,108,101,564,323 | arxiv | \section{Introduction}
\SetEQ
\vspace{1ex}\noindent
{\bf Motivation.} In Nonlinear Optimization, it seems to
be a natural idea to increase the performance of numerical
methods by employing high-order oracles. However, the main
obstacle to this approach is the prohibitive
complexity of the corresponding Taylor approximations
formed by the high-order multidimensional polynomials,
which are difficult to store, handle, and minimize. If we
go just one step above the commonly used quadratic
approximation, we get a multidimensional polynomial of
degree three which is never convex. Consequently, its
usefulness for optimization methods is questionable.
However, recently in \cite{nesterov2019implementable}
it was shown that the Taylor polynomials of
{\em convex functions} have a very interesting structure.
It appears that their augmentation by a power of the Euclidean
norm with a reasonably big coefficient gives us a global
upper {\em convex} model of the objective function,
which keeps all advantages of the local high-order
approximation.
One of the classical and well-known results in Nonlinear
Optimization is related to the local quadratic convergence
of Newton's
method~\cite{kantorovich1948functional,nesterov2018lectures}.
Later on, it was generalized to the case of
\textit{composite} optimization
problems~\cite{lee2014proximal}, where the objective is
represented as a sum of two convex components: smooth, and
possibly nonsmooth but simple. Local superlinear convergence
of the Incremental Newton method for
finite-sum minimization problems
was established in~\cite{rodomanov2016superlinearly}.
The study of high-order numerical methods for solving
nonlinear equations dates back to the work
of Chebyshev in 1838,
where the scalar methods of order three
and four were proposed~\cite{chebyshev1951sobranie}.
The methods of arbitrary order for solving
nonlinear equations were studied in~\cite{evtushenko2014methods}.
A big step in the second-order optimization theory was
made since~\cite{nesterov2006cubic}, where Cubic
regularization of the Newton method with its global
complexity estimates was proposed. Additionally, the local
superlinear convergence was justified.
See also~\cite{cartis2011adaptive1} for the local analysis
of the Adaptive cubic regularization methods.
Our paper aims to study the local convergence of
high-order methods, generalizing corresponding results
from~\cite{nesterov2006cubic} in several ways. We
establish local superlinear convergence of Tensor
Method~\cite{nesterov2019implementable} of degree $p \geq
2$, in the case when the objective is composite, and its
smooth part is uniformly convex of arbitrary degree $q$
from the interval $2 \leq q < p + 1$. For strongly convex
functions ($q=2$), this gives the local convergence of
degree $p$.
\vspace{1ex}\noindent
{\bf Contents.}
We formulate our problem of interest and define
a step of the Regularized Composite Tensor Method
in Sect.~\ref{sc-MIn}.
Then, we declare some of its properties, which are
required for our analysis.
In Sect.~\ref{sc-LocF}, we prove local superlinear
convergence of the Tensor Method in function value, and in
the norm of minimal subgradient, under the assumption of
uniform convexity of the objective.
In Sect.~\ref{sc-GlobG}, we discuss global behavior of
the method and justify sublinear and linear global rates
of convergence for convex and uniformly convex cases,
respectively.
One application of our developments is provided in
Sect.~\ref{sc-Prox}. We show how local convergence can
be applied for computing an inexact step in proximal
methods. A global sublinear rate of convergence for the
resulting scheme is also given.
\vspace{1ex}\noindent
{\bf Notations and generalities.} In what follows, we
denote by $\E$ a finite-dimensional real vector space, and
by $\E^*$ its dual space composed of linear functions on
$\E$. For such a function $s \in \E^*$, we denote by $\la
s, x \ra$ its value at $x \in \E$. Using a self-adjoint
positive-definite operator $B: \E \to \E^*$ (notation $B =
B^* \succ 0$), we can endow these spaces with mutually
conjugate Euclidean norms:
$$
\ba{rcl}
\| x \| & = & \la B x, x \ra^{1/2}, \quad x \in \E, \quad
\| g \|_* \; = \; \la g, B^{-1} g \ra^{1/2}, \quad g \in
\E^*.
\ea
$$
For a smooth function $f: \dom f \to \R$ with convex and
open domain $\dom f \subseteq \E$, denote by $\nabla
f(x)$ its gradient, and by $\nabla^2 f(x)$ its Hessian
evaluated at point $x \in \dom f \subseteq \E$. Note that
$$
\ba{rcl}
\nabla f(x) & \in & \E^*, \quad \nabla^2 f(x) h \; \in \;
\E^*, \quad x \in \dom f, \; h \in \E.
\ea
$$
For non-differentiable convex function $f(\cdot)$, we
denote by $\partial f(x) \subset \E^*$ its subdifferential
at the point $x \in \dom f$.
In what follows, we often work with directional
derivatives. For $p \geq 1$, denote by
$$
\ba{c}
D^p f(x)[h_1, \dots, h_p]
\ea
$$
the directional derivative of function $f$ at $x$ along
directions $h_i \in \E$, $i = 1, \dots, p$. If all
directions $h_1, \dots, h_p$ are the same, we apply a
simpler notation
$$
\ba{c}
D^p f(x)[h]^p, \quad h \in \E.
\ea
$$
Note that $D^p f(x)[ \cdot]$ is a {\em symmetric
$p$-linear form}. Its {\em norm} is defined in the
standard way:
\beq\label{eq-DNorm1}
\ba{rcl}
\| D^pf(x) \| & = & \max\limits_{h_1, \dots, h_p \in \E}
\left\{ D^p f(x)[h_1, \dots, h_p ]: \; \| h_i \| \leq 1,
\, i = 1,
\dots, p \right\}\\
\\
& = & \max\limits_{h \in \E} \left\{ \Big| D^p
f(x)[h]^p\Big|: \; \| h \| \leq 1 \right\}
\ea
\eeq
(for the last equation see, for example, Appendix 1 in
\cite{nesterov1994interior}). Similarly, we define
\beq\label{eq-DNorm2}
\ba{rcl}
\| D^pf(x) - D^pf(y) \| & = & \max\limits_{h \in \E}
\left\{ \Big| D^p f(x)[h]^p - D^pf(y)[h]^p\Big|: \; \| h
\| \leq 1 \right\}.
\ea
\eeq
In particular, for any $x \in \dom f$ and $h_1, h_2 \in
\E$, we have
$$
\ba{rcl}
Df(x)[h_1] & = & \la \nabla f(x), h_1 \ra, \quad
D^2f(x)[h_1, h_2] \; = \; \la \nabla^2 f(x) h_1, h_2 \ra.
\ea
$$
Thus, for the Hessian, our definition corresponds to a
{\em spectral norm} of the self-adjoint linear operator
(maximal module of all eigenvalues computed with respect
to $B \succ 0$).
Finally, the Taylor approximation of function $f(\cdot)$
at $x \in \dom f$ is defined as follows:
$$
\ba{rcl}
f(x+h) & = & \Omega_p(f, x; x + h) + o(\|h\|^p), \quad x+h \in
\dom
f,\\
\\
\Omega_p(f,x;y) & \Def & f(x) + \sum\limits_{k=1}^p {1
\over k!} D^k f(x)[y-x]^k, \quad y \in \E.
\ea
$$
Consequently, for all $y \in \E$ we have
\beq\label{eq-PGrad}
\ba{rcl}
\nabla \Omega_p(f,x;y) & = & \sum\limits_{k=1}^p {1 \over
(k-1)!} D^k f(x)[y-x]^{k-1},
\ea
\eeq
\beq\label{eq-PHess}
\ba{rcl}
\nabla^2 \Omega_p(f,x;y) & = & \sum\limits_{k=2}^p {1
\over (k-2)!} D^k f(x)[y-x]^{k-2}.
\ea
\eeq
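For instance, in the familiar case $p = 2$, these expressions
reduce to
$$
\ba{rcl}
\nabla \Omega_2(f,x;y) & = & \nabla f(x) + \nabla^2 f(x) (y - x),
\quad \nabla^2 \Omega_2(f,x;y) \; = \; \nabla^2 f(x),
\ea
$$
which are the ingredients of the standard Newton step.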
\section{Main inequalities}\label{sc-MIn}
\SetEQ
In this paper, we consider the following {\em composite}
convex minimization problem
\beq\label{prob-Main}
\min\limits_{x \in \dom h} \Big\{ F(x) = f(x) + h(x)
\Big\},
\eeq
where $h: \E \to \R \cup \{+\infty\}$ is a {\em simple} proper
closed convex function and $f \in C^{p,p}(\dom h)$ for a
certain $p \geq 2$. In other words, we assume that the
$p$th derivative of function $f$ is Lipschitz continuous:
\beq\label{eq-Lip}
\ba{rcl}
\| D^p f(x) - D^p f(y) \| & \leq & L_p \| x - y \|,
\quad x, y \in \dom h.
\ea
\eeq
Assuming that $L_{p} < +\infty$, by standard
integration arguments we can bound the residual between
function value and its Taylor approximation:
\beq\label{eq-BoundF}
\ba{rcl}
| f(y) - \Omega_p(f,x;y) | & \leq & {L_{p} \over (p+1)!}
\| y - x \|^{p+1}, \quad x, y \in \dom h.
\ea
\eeq
Applying the same reasoning to functions $\la \nabla
f(\cdot), h \ra$ and $\la \nabla^2 f(\cdot) h, h \ra$ with
direction $h \in \E$ being fixed, we get the following
guarantees:
\beq\label{eq-BoundG}
\ba{rcl}
\| \nabla f(y) - \nabla \Omega_p(f,x;y) \|_* & \leq & {L_p
\over p!} \| y - x \|^{p},
\ea
\eeq
\beq\label{eq-BoundH}
\ba{rcl}
\| \nabla^2 f(y) - \nabla^2 \Omega_p(f,x;y) \| & \leq &
{L_p \over (p-1)!} \| y - x \|^{p-1},
\ea
\eeq
which are valid for all $x, y \in \dom h$.
Let us define now one step of the {\em Regularized
Composite Tensor Method} (RCTM) of degree $p \geq 2$:
\beq\label{def-RCTS}
\ba{rcl}
T & \equiv & T_H(x) \; \Def \; \arg\min\limits_{y \in \E}
\left\{ \Omega_p(f,x;y) + {H \over (p+1)!} \| y - x
\|^{p+1} + h(y) \right\}.
\ea
\eeq
It can be shown that for
\beq\label{eq-H}
\ba{c}
\mbox{\fbox{\rule[-2mm]{0cm}{6mm}$H \; \geq \; p L_p$}}
\ea
\eeq
the auxiliary optimization problem in (\ref{def-RCTS}) is
{\em convex} (see Theorem 1 in \cite{nesterov2019implementable}).
This condition
is crucial for implementability of our methods and we
always assume it to be satisfied.
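For example, for $p = 2$ and $H \geq 2 L_2$, the rule
(\ref{def-RCTS}) becomes the composite version of the cubically
regularized Newton step~\cite{nesterov2006cubic}:
$$
\ba{rcl}
T_H(x) & = & \arg\min\limits_{y \in \E} \left\{ f(x) + \la
\nabla f(x), y - x \ra + {1 \over 2} \la \nabla^2 f(x)(y - x),
y - x \ra + {H \over 6} \| y - x \|^3 + h(y) \right\}.
\ea
$$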
Let us write down the first-order optimality condition for
the auxiliary optimization problem in (\ref{def-RCTS}):
\beq\label{eq-OptC}
\ba{rcl}
\la \nabla \Omega_p(f,x;T) + {H \over p!} \| T - x
\|^{p-1}B(T-x), y - T \ra + h(y) & \geq & h(T),
\ea
\eeq
for all $y \in \dom h$.
In other words, for vector
\beq\label{def-HP}
\ba{rcl}
h'(T) & \Def & - \left( \nabla \Omega_p(f,x;T) + {H \over
p!} \| T - x \|^{p-1}B(T-x) \right)
\ea
\eeq
we have $h'(T) \stackrel{(\ref{eq-OptC})}{\in} \partial
h(T)$. This fact explains our notation
\beq\label{def-FP}
\ba{rcl}
F'(T) & \Def & \nabla f(T) + h'(T) \; \in \partial F(T).
\ea
\eeq
Let us present some properties of the point $T = T_H(x)$.
First of all, we need some bounds for the norm of vector
$F'(T)$. Note that
\beq\label{eq-N3}
\ba{rcl}
\Big\| F'(T) + {H \over p!} \| T - x \|^{p-1}B(T-x)
\Big\|_* & \refEQ{def-HP} & \Big\| \nabla f(T) - \nabla
\Omega_p(f,x;T) \Big\|_*\\
\\
& \refLE{eq-BoundG} & {L_p \over p!} \| T - x \|^p.
\ea
\eeq
Consequently,
\beq\label{eq-NewG}
\ba{rcl}
\| F'(T) \|_* & \leq & {L_p+H \over p!} \| T - x \|^p.
\ea
\eeq
Secondly, we use the following lemma.
\BL\label{lm-DecF}
Let $\beta > 1$ and $H = \beta L_p$. Then
\beq\label{eq-DecFB}
\ba{rcl}
\la F'(T), x - T \ra & \geq & \left( {p! \over (p+1)L_p}
\right)^{1 \over p} \cdot \| F'(T) \|_*^{p+1 \over p}
\cdot {(\beta^2 - 1)^{p-1 \over 2p} \over \beta} \cdot {p
\over (p^2-1)^{p-1 \over 2p}}.
\ea
\eeq
In particular, if $\beta = p$, then
\beq\label{eq-DecF}
\ba{rcl}
\la F'(T), x - T \ra & \geq & \left( {p! \over (p+1)L_p}
\right)^{1 \over p} \cdot \| F'(T) \|_*^{p+1 \over p}.
\ea
\eeq
\EL
\proof
Denote $r = \| T - x \|$, $h = {H \over p!}$, and $l =
{L_p \over p!}$. Then inequality (\ref{eq-N3}) can be
written as follows:
$$
\ba{rcl}
\| F'(T) + h r^{p-1} B(T-x) \|^2_* & \leq & l^2 r^{2p}.
\ea
$$
This means that
\beq\label{eq-N4}
\ba{rcl}
\la F'(T), x - T \ra & \geq & {1 \over 2 h r^{p-1}} \|
F'(T) \|_*^2 + {r^{2p} (h^2 - l^2) \over 2h r^{p-1}}.
\ea
\eeq
Denote
$$
\ba{rcl}
a & = & {1 \over 2h} \| F'(T) \|_*^2, \quad b \; = \; {h^2
- l^2 \over 2h}, \quad \tau \; = \; r^{p-1}, \quad \alpha
\; = \; {p+1 \over p-1}.
\ea
$$
Then inequality (\ref{eq-N4}) can be rewritten as follows:
$$
\ba{rcl}
\la F'(T) , x - T \ra & \geq & {a \over \tau} + b
\tau^{\alpha} \; \geq \; \min\limits_{t > 0} \left\{ {a
\over t} + b t^{\alpha} \right\} \; = \; (1+\alpha)
\left({a \over \alpha} \right)^{\alpha \over 1 + \alpha}
b^{1 \over 1 + \alpha}.
\ea
$$
Taking into account that $1+\alpha = {2p \over p-1}$ and
${\alpha \over 1 + \alpha} = {p + 1 \over 2p}$, and using the
actual meaning of $a$, $b$, and $\alpha$, we get
$$
\ba{rcl}
\la F'(T), x - T \ra & \geq & {2 p \over p-1} \cdot { \|
F'(T) \|_*^{p+1 \over p} \over (2h)^{p+1 \over 2p}} \cdot
{(p-1)^{p+1 \over 2p} \over (p+1)^{p+1 \over 2p}} \cdot
{(h^2 - l^2)^{p-1 \over 2p} \over (2h)^{p-1 \over 2p}}\\
\\
& = & \| F'(T) \|_*^{p+1 \over p} \cdot {(h^2 - l^2)^{p-1
\over 2p} \over h} \cdot {p \over (p+1)^{p+1 \over 2p}
(p-1)^{p-1 \over 2p}}\\
\\
& = & \| F'(T) \|_*^{p+1 \over p} \cdot {(h^2 - l^2)^{p-1
\over 2p} \over h} \cdot {p \over (p^2-1)^{p-1 \over 2p}
(p+1)^{1 \over p}}.
\ea
$$
It remains to note that
$$
\ba{rcl}
{(h^2 - l^2)^{p-1 \over 2p} \over h} & = & {(H^2 -
L_p^2)^{p-1 \over 2p} \over H} \cdot (p!)^{1 \over p} \; =
\; {(\beta^2 - 1)^{p-1 \over 2p} \over \beta} \cdot
\left({p! \over L_p} \right)^{1 \over p}.
\ea
$$
\qed
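For example, for $p = 2$ and the choice $\beta = p = 2$ (that
is, $H = 2 L_2$), inequality (\ref{eq-DecF}) becomes
$$
\ba{rcl}
\la F'(T), x - T \ra & \geq & \left( {2 \over 3 L_2}
\right)^{1 \over 2} \| F'(T) \|_*^{3 \over 2}.
\ea
$$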
\section{Local convergence}\label{sc-LocF}
\SetEQ
The main goal of this paper consists in analyzing the
local behavior of the {\em Regularized Composite Tensor
Method} (RCTM):
\beq\label{met-RCTM}
\ba{rcl}
x_0 \; \in \; \dom h, \quad x_{k+1} & = & T_H(x_k), \quad
k \geq 0,
\ea
\eeq
as applied to the problem (\ref{prob-Main}). In order to
prove local superlinear convergence of this scheme, we
need one more assumption.
\begin{assumption}\label{assumption-Uni}
The objective in problem
(\ref{prob-Main}) is uniformly convex of degree $q \geq 2$.
Thus, for all $x, y \in \dom h$ and for all $G_x \in \partial F(x), G_y \in \partial F(y)$,
it holds:
\beq\label{eq-UniC}
\ba{rcl}
\la G_x - G_y, x - y \ra & \geq & \sigma_q
\| x - y \|^q,
\ea
\eeq
for certain $\sigma_q > 0$.
\end{assumption}
It is well known that this assumption guarantees the
uniform convexity of the objective function
(see, for example, Lemma 4.2.1 in
\cite{nesterov2018lectures}):
\beq\label{eq-UniF}
\ba{rcl}
F(y) & \geq & F(x) + \la G_x, y - x \ra + {\sigma_q \over
q} \| y - x \|^q, \quad y \in \dom h,
\ea
\eeq
where $G_x$ is an arbitrary subgradient from $\partial
F(x)$. Therefore,
\beq\label{eq-DecFG}
\ba{rcl}
F^* & = & \min\limits_{y \in \dom h} F(y) \; \geq \;
\min\limits_{y \in \E} \left\{ F(x) + \la G_x, y
- x \ra + {\sigma_q \over q} \| y - x \|^q \right\}\\
\\
& = & F(x) - {q-1 \over q} \left({1 \over \sigma_q}
\right)^{1 \over q-1} \| G_x \|_*^{q \over q-1}.
\ea
\eeq
This simple inequality gives us the following local
convergence rate for RCTM.
\BT\label{th-LocF}
For any $k \geq 0$ we have
\beq\label{eq-RateF}
\ba{rcl}
F(x_{k+1}) - F^* & \leq & (q-1) q^{p-q+1 \over q-1}
\bigl({1 \over \sigma_q}\bigr)^{p+1 \over q-1} \left({L_p + H
\over p!} \right)^{q \over q - 1} \big[F(x_k) - F^* \big]^{p
\over q-1}.
\ea
\eeq
\ET
\proof
Indeed, for any $k \geq 0$ we have
$$
\ba{rcl}
F(x_k) - F^* & \geq & F(x_k) - F(x_{k+1}) \\
\\
& \refGE{eq-UniF} & \la
F'(x_{k+1}), x_k - x_{k+1} \ra + {\sigma_q \over q} \| x_k - x_{k+1} \|^q\\
\\
& \refGE{eq-DecFB} & {\sigma_q \over q} \| x_k - x_{k+1}
\|^q \; \refGE{eq-NewG} \; {\sigma_q \over q} \left( {p!
\over L_p+H} \| F'(x_{k+1}) \|_* \right)^{q
\over p}\\
\\
& \refGE{eq-DecFG} & {\sigma_q \over q} \left( {p! \over
L_p+H}\right)^{q \over p} \left( {q \, \sigma_q^{1 \over
q-1} \over q-1} (F(x_{k+1})-F^*)
\right)^{q -1\over p}.
\ea
$$
And this is exactly inequality (\ref{eq-RateF}).
\qed
\BC\label{cor-LocF}
If $p > q-1$, then method (\ref{met-RCTM}) has local
superlinear rate of convergence for problem
(\ref{prob-Main}).
\EC
\proof
Indeed, in this case ${p \over q-1} > 1$.
\qed
For example, if $q = 2$ (strongly convex function) and
$p=2$ (Cubic Regularization of the Newton Method), then
the rate of convergence is quadratic. If $q=2$, and $p =
3$, then the local rate of convergence is cubic, etc.
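To be concrete, substituting $q = 2$, $p = 2$, and $H = 2 L_2$
into (\ref{eq-RateF}) gives the explicit quadratic recursion
$$
\ba{rcl}
F(x_{k+1}) - F^* & \leq & {9 L_2^2 \over 2 \sigma_2^3} \Big[
F(x_k) - F^* \Big]^2,
\ea
$$
which agrees with the classical local analysis of Cubic
regularization of the Newton method.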
Let us study now the local convergence of the method
(\ref{met-RCTM}) in terms of the norm of gradient. For any
$x \in \dom h$ denote
\beq\label{def-GNorm}
\ba{rcl}
\eta(x) & \Def & \min\limits_{g \in \partial h(x)} \| \nabla
f(x) + g \|_*.
\ea
\eeq
If $\partial h(x) = \emptyset$, we set $\eta(x) =
+\infty$.
\BT\label{th-RateG}
For any $k \geq 0$ we have
\beq\label{eq-RateG}
\ba{rcl}
\eta(x_{k+1})
& \, \leq \, &
\|F'(x_{k + 1}) \|_{*}
\; \leq \;
{L_p + H \over p!} \left[ {1 \over
\sigma_q} \, \eta(x_k) \right]^{p \over q-1}.
\ea
\eeq
\ET
\proof
Indeed, in view of inequality \eqref{eq-UniC}, we have
$$
\ba{rcl}
\la \nabla f(x_k) + g_k, x_{k} - x_{k + 1} \ra
& \geq &
\la F'(x_{k + 1}), x_k - x_{k + 1} \ra + \sigma_q \|x_k - x_{k + 1}\|^q \\
\\
& \refGE{eq-DecFB} & \sigma_q \|x_k - x_{k + 1}\|^{q},
\ea
$$
where $g_k$ is an arbitrary vector from $\partial h(x_k)$.
Therefore, we conclude that
$$
\ba{rcl}
\eta(x_k) & \geq & \sigma_q \| x_k -
x_{k+1} \|^{q-1}.
\ea
$$
It remains to use inequality
(\ref{eq-NewG}).
\qed
As we can see, the condition for superlinear convergence
of the method (\ref{met-RCTM}) in terms of the norm of the
gradient is the same as in Corollary \ref{cor-LocF}: we
need to have ${p \over q-1} > 1$, that is $p > q-1$.
Moreover, the local rate of convergence has the same order
as that for the residual of the function value.
According to Theorem~\ref{th-LocF}, the region of
superlinear convergence of RCTM in terms of the function
value is as follows:
\beq\label{eq-RegF}
\ba{rcl}
\Q & = & \left\{ x \in \dom h: \; F(x) - F^* \; \leq \;
{1 \over q} \cdot \biggl( { \sigma_q^{p + 1} \over (q -
1)^{q - 1} } \cdot \Bigl( { p! \over L_p + H } \Bigr)^{q}
\biggr)^{1 \over p - q + 1} \right\}.
\ea
\eeq
Alternatively, by Theorem~\ref{th-RateG}, in terms of the
norm of minimal subgradient~(\ref{def-GNorm}), the region
of superlinear convergence looks as follows:
\beq\label{eq-RegG}
\ba{rcl}
\G & = & \left\{ x \in \dom h: \; \eta(x) \; \leq \;
\biggl( \sigma_q^{p} \cdot \Bigl( { p! \over L_p + H }
\Bigr)^{q - 1} \biggr)^{1 \over p - q + 1} \right\}.
\ea
\eeq
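For instance, for $q = 2$, $p = 2$, and $H = 2 L_2$, a direct
substitution into (\ref{eq-RegF}) and (\ref{eq-RegG}) gives
$$
\ba{rcl}
\Q & = & \left\{ x \in \dom h: \; F(x) - F^* \; \leq \; {2
\sigma_2^3 \over 9 L_2^2} \right\}, \quad \G \; = \; \left\{ x
\in \dom h: \; \eta(x) \; \leq \; {2 \sigma_2^2 \over 3 L_2}
\right\}.
\ea
$$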
Note that these sets can be very different. Indeed, set
$\Q$ is a closed and convex neighborhood of the point
$x^*$. At the same time, the structure of the set $\G$ can
be very complex since in general the function $\eta(x)$ is
discontinuous. Let us look at a simple example where $h(x) =
\mbox{Ind}_Q(x)$, the indicator function of a closed
convex set $Q$.
\BE
Consider the following optimization problem:
\beq\label{prob-Eta}
\min\limits_{x \in \R^2} \left\{ f(x) : \; \| x \|^2
\Def (x^{(1)})^2 + (x^{(2)})^2 \leq 1 \right\},
\eeq
with
$$
\ba{rcl}
f(x) & = &
\frac{\sigma_2}{2}\|x - \bar{x}\|^2
+
\frac{2 \sigma_3}{3}\|x - \bar{x}\|^3,
\ea
$$
for some fixed $\sigma_2, \sigma_3 > 0$
and $\bar{x} = (0, -2) \in \R^2$.
We have
$$
\ba{rcl}
\nabla f(x) = r(x) \cdot ( x^{(1)}, x^{(2)} + 2),
\ea
$$
where $r: \R^2 \to \R$ is
$$
\ba{rcl}
r(x) = \sigma_2 + 2\sigma_3 \|x - \bar{x}\|.
\ea
$$
Note that $f$ is uniformly convex
of degree $q = 2$ with constant $\sigma_2$,
and for $q = 3$ with constant $\sigma_3$
(see Lemma~4.2.3 in~\cite{nesterov2018lectures}).
Moreover, we have for any $\nu \in [0, 1]$:
$$
\ba{rcl}
\la \nabla f(x) - \nabla f(y), x - y \ra
& \geq & \sigma_2 \|x - y\|^2 + \sigma_3 \|x - y\|^3 \\
\\
& \geq & \min\limits_{t \geq 0}
\Bigl\{ \frac{\sigma_2}{t^{\nu}} + \sigma_3 t^{1 - \nu}
\Bigr\} \cdot \|x - y\|^{2 + \nu} \\
\\
& \geq & \sigma_2^{1 - \nu} \sigma_3^{\nu} \cdot \|x - y\|^{2 + \nu}.
\ea
$$
Hence, this function is uniformly convex of any
degree $q \in [2, 3]$. At the same time,
the Hessian of $f$ is Lipschitz continuous
with constant $L_2 = 4 \sigma_3$
(see Lemma~4.2.4 in~\cite{nesterov2018lectures}).
Clearly,
in this problem $x^*=(0,-1)$,
and it can be written in the composite form (\ref{prob-Main}) with
$$
\ba{rcl}
h(x) & = & \left\{ \ba{rl} +
\infty, & \mbox{if $\| x \| > 1$,} \\ 0,
&\mbox{otherwise.} \ea \right.
\ea
$$
Note that for $x \in \dom h \equiv \{ x: \; \| x \| \leq 1\}$, we have
$$
\ba{rcl}
\partial h(x) \; = \; \left\{ \ba{cl} 0, & \mbox{if $\|
x \| < 1$,} \\ \{ \gamma x, \, \gamma \geq 0 \}, &\mbox{if
$\| x \| = 1$.}
\ea \right.
\ea
$$
Therefore, if $\| x \| < 1$, then $\eta(x) = \| \nabla
f(x) \| \geq \sigma_2$. If $\| x \| = 1$, then
$$
\ba{rcl}
\eta^2(x) & \refEQ{def-GNorm} &
\min\limits_{\gamma \geq
0}
\Bigl\{
\bigl[ (r(x) + \gamma) x^{(1)} \bigr]^2
+
\bigl[ (r(x) + \gamma) x^{(2)} + 2 r(x) \bigr]^2
\Bigr\} \\
\\
& = &
\min\limits_{\gamma \geq
0}
\Bigl\{
(r(x) + \gamma)^2 + 4r(x) (r(x) + \gamma) x^{(2)}
+ 4 r^2(x)
\Bigr\} \\
\\
& = &
\left\{
\ba{cl}
4r^2(x) (1 - (x^{(2)})^2), & \mbox{if $x^{(2)} \leq -\frac{1}{2}$,} \\
r^2(x) (5 + 4 x^{(2)}), & \mbox{otherwise.}
\ea \right.
\ea
$$
Thus, in any neighbourhood
of $x^*$, $\eta(x)$ vanishes only along the boundary of
the feasible set.
\qed
\EE
So, the question arises how the Tensor Method
(\ref{met-RCTM}) could reach the region $\G$. The answer
follows from the inequalities derived in Section
\ref{sc-MIn}. Indeed,
$$
\ba{rcl}
\| F'(x_{k+1}) \|_* & \refLE{eq-NewG} & {L_p + H \over p!}
\| x_k - x_{k+1} \|^p,
\ea
$$
and
$$
\ba{rcl}
F(x_k) - F(x_{k+1}) & \geq & \la F'(x_{k+1}), x_k -
x_{k+1} \ra \\
\\
& \refGE{eq-DecF} &
\left( {p! \over (p+1)L_p} \right)^{1 \over p} \cdot \|
F'(x_{k+1}) \|^{p+1 \over p}_*.
\ea
$$
Thus, at some moment the norm $\| F'(x_k) \|_*$ will be
small enough to enter $\G$.
\section{Global complexity bounds}\label{sc-GlobG}
\SetEQ
Let us briefly discuss the global complexity bounds of the
method~(\ref{met-RCTM}), namely the number of iterations
required to come from an arbitrary initial point $x_0
\in \dom h$ to the region~$\Q$. First, note that for every
step $T = T_H(x)$ of the method with parameter $H \geq p
L_p$, we have
$$
\ba{rcl}
F(T) & \refLE{eq-BoundF} &
\Omega_p(f,x;T) + \frac{H}{(p + 1)!}\|T - x\|^{p + 1} + h(T) \\
\\
& \refEQ{def-RCTS} & \min\limits_{y \in \E} \Bigl\{
\Omega_p(f,x;y) + \frac{H}{(p + 1)!}\|y - x\|^{p + 1} +
h(y)
\Bigr\} \\
\\
& \refLE{eq-BoundF} &
\min\limits_{y \in \E} \Bigl\{
F(y) + \frac{H + L_p}{(p + 1)!} \|y - x\|^{p + 1}
\Bigr\}.
\ea
$$
Therefore,
\beq \label{eq-GlobF}
\ba{rcl}
F(T(x)) - F^{*} & \leq &
\frac{H + L_p}{(p + 1)!}\|x - x^{*}\|^{p + 1}, \quad x \in \dom h,
\ea
\eeq
with $x^{*} \Def \arg\min\limits_{y \in \E} F(y)$, which
exists by our assumption. Denote by $D$ the maximal radius
of the initial level set of the objective, which we assume
to be finite:
$$
\ba{rcl}
D \;\; \Def \; \sup\limits_{x \in \dom h} \Bigl\{ \|x - x^{*}\|
:\; F(x) \leq F(x_0) \Bigr\}
\; & < & \; +\infty.
\ea
$$
Then, by monotonicity of the method~(\ref{met-RCTM}) and
by convexity we conclude
\beq\label{eq-ResG}
{1 \over D}\Bigl( F(x_{k + 1}) - F^* \Bigr)
\; \leq \; {1 \over D}\la F'(x_{k + 1}),
x_{k + 1} - x^{*} \ra \; \leq \; \|F'(x_{k + 1})\|_{*}.
\eeq
In the general convex case, we can prove the global
sublinear rate of convergence of the Tensor Method of the
order $O({1 / k^p})$~\cite{nesterov2019implementable}. For
completeness of presentation, let us prove an extension of
this result onto the composite case.
\BT \label{th-SublR}
For the method~(\ref{met-RCTM}) with $H = pL_p$ we have
\beq\label{eq-SublR}
\ba{rcl}
F(x_{k}) - F^{*} & \leq & { (p + 1) (2p)^p \over p! }
\cdot {L_p D^{p + 1} \over (k - 1)^p}, \qquad k \geq 2.
\ea
\eeq
\ET
\proof Indeed, in view of~(\ref{eq-DecF}) and~(\ref{eq-ResG}),
we have for every $k \geq 0$
$$
\ba{rcl}
F(x_{k}) - F(x_{k + 1}) & \geq & \la F'(x_{k + 1}),
x_k - x_{k + 1} \ra\\
\\
& \refGE{eq-DecF} & \left( {p! \over (p+1)L_p}
\right)^{1 \over p} \cdot \| F'(x_{k + 1}) \|_*^{p+1 \over p} \\
\\
& \refGE{eq-ResG} & \left( {p! \over (p+1)L_p D^{p + 1} }
\right)^{1 \over p}
\cdot \Bigl( F(x_{k + 1}) - F^* \Bigr)^{ p + 1 \over p }.
\ea
$$
Denoting $\delta_k = F(x_k) - F^*$ and
$C = \left( {p! \over (p+1) L_p D^{p + 1} }\right)^{1 \over p}$,
we obtain the following recurrence:
\beq\label{eq-Recurr}
\ba{rcl}
\delta_{k} - \delta_{k + 1} & \geq &
C \delta_{k + 1}^{p + 1 \over p}, \qquad k \geq 0,
\ea
\eeq
or for $\mu_k = C^p \delta_k \refLE{eq-GlobF} 1$, as follows:
$$
\ba{rcl}
\mu_{k} - \mu_{k + 1} & \geq & \mu_{k + 1}^{p + 1 \over p},
\qquad k \geq 0.
\ea
$$
Then, Lemma~1.1 from~\cite{grapiglia2017regularized}
provides us with the following guarantee:
$$
\ba{rcl}
\mu_{k} & \leq &
\Bigl(
\frac{p(1 + \mu_1^{1 / p})}{k - 1}
\Bigr)^p
\; \leq \;
\Bigl( \frac{2p}{k - 1} \Bigr)^p, \quad k \geq 2.
\ea
$$
Therefore,
$$
\ba{rcl}
\delta_k & = & {\mu_{k} \over C^p} \; \leq \;
\left({ 2p \over C (k - 1) }\right)^p \; = \;
{ (p + 1) (2p)^p \over p! } \cdot {L_p D^{p + 1} \over (k - 1)^p},
\qquad k \geq 2.
\ea
$$
\qed
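In particular, for $p = 2$ the bound (\ref{eq-SublR}) reads
$$
\ba{rcl}
F(x_k) - F^{*} & \leq & {24 \, L_2 D^3 \over (k - 1)^2}, \qquad
k \geq 2,
\ea
$$
which recovers the known global rate of Cubic regularization of
the Newton method~\cite{nesterov2006cubic}.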
For a given degree $q \geq 2$ of uniform convexity with
$\sigma_q > 0$, and for RCTM of order $p \geq q - 1$, let
us denote by~$\omega_{p, q}$ the following
\textit{condition number}:
$$
\ba{rcl}
\omega_{p, q} & \Def &
\frac{p + 1}{p!} \cdot
\Bigl( \frac{q - 1}{q} \Bigr)^{q - 1}
\cdot \frac{L_p D^{p - q + 1}}{\sigma_q}.
\ea
$$
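For example, for the cubically regularized Newton method ($p =
q = 2$), this quantity reduces to $\omega_{2,2} = {3 L_2 D
\over 4 \sigma_2}$, a natural analogue of the usual condition
number scaled by the size of the initial level set.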
\BC\label{cor-Subl}
In order to reach the region $\Q$, it is enough to perform
\beq \label{eq-Total1}
\Biggl\lceil
2p \cdot
\biggl(
{ q^{q} \over (q - 1)^{q - 1} }
\cdot
\omega_{p, q}^{\frac{p + 1}{p}}
\biggr)^{1 \over p - q + 1}
\Biggr\rceil + 2
\eeq
iterations of the method.
\EC
\proof
Combine the rate~(\ref{eq-SublR}) with the threshold in the definition~(\ref{eq-RegF}) and solve for $k$.
\qed
We can improve this estimate, knowing that the objective
is globally uniformly convex~(\ref{eq-UniC}). In this case, a
linear rate of convergence arises at the first stage, until
entering the region~$\Q$.
\BT Let $\sigma_q > 0$ with $q \leq p + 1$.
Then for the method~(\ref{met-RCTM}) with $H = pL_p$, we
have
\beq\label{eq-LinR}
\ba{rcl}
F(x_{k}) - F^{*} & \leq & \exp\left( -{k \over 1 +
\omega^{1/p}_{p, q}}
\right)
\cdot \bigl( F(x_{0}) - F^*\bigr), \qquad k \geq 1.
\ea
\eeq
Therefore, for a given $\varepsilon > 0$ to achieve
$F(x_K) - F^{*} \leq \varepsilon$, it is enough to set
\beq \label{eq-Total2}
\ba{rcl}
K & = & \left\lceil (1+\omega^{1/p}_{p,q}) \cdot
\log{\frac{F(x_0) - F^{*}}{\varepsilon}} \right\rceil + 1.
\ea
\eeq
\ET
\proof
Indeed, for every $k \geq 0$
$$
\ba{rcl}
F(x_{k}) - F(x_{k + 1}) & \geq & \la F'(x_{k + 1}),
x_k - x_{k + 1} \ra\\
\\
& \refGE{eq-DecF} & \left( {p! \over (p+1)L_p}
\right)^{1 \over p} \cdot \| F'(x_{k + 1}) \|_*^{p+1 \over p} \\
\\
& = & \left( {p! \over (p+1)L_p} \right)^{1 \over p}
\cdot \| F'(x_{k + 1}) \|_*^{p - q + 1 \over p}
\cdot \| F'(x_{k + 1}) \|_*^{q \over p} \\
\\
& \stackrel{(\ref{eq-ResG}),(\ref{eq-DecFG})}{\geq} &
\left( {p! \over p + 1} \cdot
{ \sigma_q \over L_p D^{p - q + 1}} \right)^{1 \over p}
\cdot \left({ q \over q - 1 }\right)^{q - 1 \over p}
\cdot \Bigl( F(x_{k + 1}) - F^* \Bigr) \\
\\
& = &
\left( \frac{1}{\omega_{p, q}} \right)^{1 \over p}
\cdot \Bigl( F(x_{k + 1}) - F^* \Bigr).
\ea
$$
Denoting $\delta_k = F(x_k) - F^{*}$, we obtain
$$
\ba{rcl}
\delta_{k + 1} & \leq & {\omega^{1/p}_{p,q} \over 1 +
\omega^{1/p}_{p,q}} \cdot \delta_k \; \leq \; \exp \left(
- {1 \over 1 + \omega^{1/p}_{p,q}} \right) \cdot \delta_k,
\qquad k \geq 1.
\ea
$$
\qed
We see that, for RCTM with $p \geq 2$ minimizing the
uniformly convex objective of degree $q \leq p + 1$, the
condition number $\omega^{1/p}_{p, q}$ is the main factor
in the global complexity estimates~(\ref{eq-Total1})
and~(\ref{eq-Total2}). Since in general this number may be
arbitrarily big, the complexity estimate $\tilde{O}(\omega_{p,
q}^{1 / p})$ in (\ref{eq-Total2}) is much better than the
estimate $O(\omega_{p, q}^{(p + 1) / (p(p - q + 1))})$ in
(\ref{eq-Total1}) because of the relation ${ p + 1 \over p - q
+ 1} \geq 1$.
These global bounds can be improved by using the
\textit{universal}~\cite{doikov2019minimizing,grapiglia2019tensor}
and the
\textit{accelerated}~\cite{nesterov2008accelerating,grapiglia2019accelerated,grapiglia2019tensor,gasnikov2019optimal,song2019towards}
high-order schemes.
High-order tensor methods for minimizing
the gradient norm were developed in
\cite{dvurechensky2019near}.
These methods achieve near-optimal global convergence rates,
and can be used for reaching the region~$\G$~\eqref{eq-RegG}.
Note that for composite minimization problems,
some modification of these methods is required,
which ensures minimization of the \textit{subgradient} norm.
Finally, let us mention some recent results~\cite{nesterov2020superfast,kamzolov2020near},
where it was shown that
a proper implementation of the third-order schemes by
a second-order oracle may lead to a significant acceleration of the methods.
However, the relation of these techniques to the local convergence
needs further investigation.
\section{Application to proximal methods}
\label{sc-Prox}
\SetEQ
Let us discuss now a general approach, which uses the
local convergence of the methods for justifying the global
performance of proximal iterations.
The proximal method~\cite{rockafellar1976monotone} is
one of the classical methods in
theoretical optimization.
Every step of the method for solving problem~(\ref{prob-Main})
is a minimization of the regularized objective:
\beq \label{Prox-Subprob}
\ba{rcl}
x_{k + 1} & = & \arg\min\limits_{x \in \E}
\Bigl\{
a_{k + 1} F(x) + \frac{1}{2}\|x - x_k\|^2
\Bigr\}, \qquad k \geq 0,
\ea
\eeq
where $\{ a_k \}_{k \geq 1}$ is a sequence of positive
coefficients, related to the iteration counter.
Of course, in general, we can hope only to solve
subproblem~(\ref{Prox-Subprob}) inexactly. The questions
of practical implementations and possible
generalizations of the proximal method, are still in the
area of intensive research (see, for example
\cite{guler1991convergence,solodov2001unified,schmidt2011convergence,salzo2012inexact}).
One simple observation on the
subproblem~(\ref{Prox-Subprob}) is that it is $1$-strongly
convex. Therefore, if we were able to pick an initial
point from the region of superlinear
convergence~(\ref{eq-RegF}) or~(\ref{eq-RegG}), we could
minimize it very quickly by RCTM of degree $p \geq 2$ up
to arbitrary accuracy. In this section, we are going to
investigate this approach. For the resulting scheme, we
will prove the global rate of convergence of the order
$\tilde{O}(1 / k^{p + 1 \over 2})$.
Denote by $\Phi_{k + 1}$ the regularized objective
from~(\ref{Prox-Subprob}):
$$
\ba{rcl}
\Phi_{k + 1}(x) & \Def &
a_{k + 1} F(x) + \frac{1}{2}\|x - x_k\|^2
\; = \;
a_{k + 1} f(x) + \frac{1}{2}\|x - x_k\|^2
+ a_{k + 1} h(x).
\ea
$$
We fix a sequence of accuracies $\{\delta_k\}_{k \geq 1}$
and relax the assumption on exact minimization
in~(\ref{Prox-Subprob}). Now, at every step we need to
find a point $x_{k + 1}$ and corresponding subgradient
vector $g_{k + 1} \in \partial \Phi_{k + 1}(x_{k + 1})$
with bounded norm:
\beq \label{RelaxedMin}
\ba{rcl}
\|g_{k + 1}\|_{*} & \leq & \delta_{k + 1}.
\ea
\eeq
Denote
$$
\ba{rcl}
F'(x_{k + 1}) & \Def &
\frac{1}{a_{k + 1}}( g_{k + 1} - B(x_{k + 1} - x_k))
\; \in \; \partial F(x_{k + 1}).
\ea
$$
The following global convergence result holds
for the general proximal method with inexact
minimization criterion~(\ref{RelaxedMin}).
\BT \label{th-InexProx}
Assume that there exists a minimizer $x^{*} \in \dom h$ of
the problem~(\ref{prob-Main}). Then, for any $k \geq 1$,
we have
\beq \label{eq-InexProx}
\ba{rcl}
\sum\limits_{i = 1}^k a_i(F(x_i) - F^{*})
+ \frac{1}{2}\sum\limits_{i = 1}^k a_i^2 \|F'(x_i)\|_{*}^2
+ \frac{1}{2}\|x_k - x^{*}\|^2
& \leq & R_k(\delta),
\ea
\eeq
where
$$
\ba{rcl}
R_k(\delta) & \Def &
\frac{1}{2}\left(
\|x_0 - x^{*}\| + \sum\limits_{i = 1}^k \delta_i
\right)^2.
\ea
$$
\ET
\proof
First, let us prove that for all $k \geq 0$ and for every
$x \in \dom h$, we have
\beq \label{InductionCondition}
\ba{rcl}
\frac{1}{2}\|x_0 - x\|^2
+ \sum\limits_{i = 1}^k a_i F(x)
& \geq &
\frac{1}{2}\|x_k - x\|^2 + C_k(x),
\ea
\eeq
where
$$
\ba{rcl}
C_k(x) & \Def &
\sum\limits_{i = 1}^k \left( a_i F(x_i) +
\frac{a_i^2}{2} \|F'(x_i)\|_{*}^2 + \la g_i, x - x_{i - 1}
\ra - \frac{\delta_i^2}{2} \right).
\ea
$$
This is obviously true for $k = 0$. Let it hold for some
$k \geq 0$. Consider the step number $k + 1$ of the
inexact proximal method.
By condition~(\ref{RelaxedMin}), we have
$$
\ba{rcl}
\| a_{k + 1} F'(x_{k + 1}) + B(x_{k + 1} - x_k) \|_{*}^2 &
\leq & \delta_{k + 1}^2.
\ea
$$
Equivalently,
\beq \label{ProxOneStep}
\ba{cl}
& \la a_{k + 1} F'(x_{k + 1}), x_k - x_{k + 1} \ra \\
\\
& \; \geq \;
\frac{a_{k + 1}^2}{2}\|F'(x_{k + 1})\|_{*}^2
+ \frac{1}{2}\|x_{k + 1} - x_k\|^2
- \frac{\delta_{k + 1}^2}{2}.
\ea
\eeq
Therefore, using the inductive assumption and strong
convexity of $\Phi_{k + 1}(\cdot)$, we conclude
$$
\ba{rl}
& \frac{1}{2}\|x_0 - x\|^2 + \sum\limits_{i = 1}^{k + 1}
a_i F(x) \; = \; \frac{1}{2}\|x_0 - x\|^2 + \sum\limits_{i
= 1}^k a_i F(x) + a_{k + 1} F(x)
\\
\\
& \; \refGE{InductionCondition} \;
\Phi_{k + 1}(x) + C_k(x) \\
\\
& \;\;\, \geq \;\;\,
\Phi_{k + 1}(x_{k + 1}) + \la g_{k + 1}, x - x_{k + 1} \ra
+ \frac{1}{2}\|x_{k + 1} - x\|^2 + C_k(x) \\
\\
& \;\;\, = \;\;\, a_{k + 1} F(x_{k + 1}) + \frac{1}{2}\|x_{k + 1} -
x_k\|^2
+ \la g_{k + 1}, x_k - x_{k + 1} \ra \\
\\
& \;\;\qquad + \;\;\, \la g_{k + 1}, x - x_k \ra
+ \frac{1}{2}\|x_{k + 1} - x\|^2 + C_k(x) \\
\\
& \;\;\, = \;\;\,
a_{k + 1} F(x_{k + 1}) + \la a_{k + 1} F'(x_{k + 1}),
x_k - x_{k + 1} \ra
- \frac{1}{2}\|x_{k + 1} - x_k\|^2 \\
\\
& \;\;\qquad + \;\;\, \la g_{k + 1}, x - x_k \ra
+ \frac{1}{2}\|x_{k + 1} - x\|^2 + C_k(x) \\
\\
& \; \refGE{ProxOneStep} \; a_{k + 1} F(x_{k + 1}) + \frac{a_{k
+ 1}^2}{2}\|F'(x_{k + 1})\|_{*}^2
- \frac{\delta_{k + 1}^2}{2} \\
\\
& \;\;\qquad + \;\;\,
\la g_{k + 1}, x - x_k \ra + \frac{1}{2}\|x_{k + 1} - x\|^2 + C_k(x) \\
\\
& \;\;\, = \;\;\,
\frac{1}{2}\|x_{k + 1} - x\|^2 + C_{k + 1}(x).
\ea
$$
Thus, inequality~(\ref{InductionCondition}) is valid for
all $k \geq 0$.
Now, plugging $x \equiv x^{*}$ into~(\ref{InductionCondition}),
we have
\beq \label{RecurProx}
\ba{cl}
& \sum\limits_{i = 1}^k a_i (F(x_i) - F^{*})
+ \frac{1}{2}\sum\limits_{i = 1}^k a_i^2 \|F'(x_i)\|_{*}^2
+ \frac{1}{2}\|x_k - x^{*}\|^2 \\
\\
& \;\; \, \leq \;\;\,
\frac{1}{2}\|x_0 - x^{*}\|^2
+ \frac{1}{2}\sum\limits_{i = 1}^k \delta_i^2
+ \sum\limits_{i = 1}^k \la g_i, x_{i - 1} - x^{*} \ra \\
\\
& \; \refLE{RelaxedMin} \;
\frac{1}{2}\|x_0 - x^{*}\|^2
+ \frac{1}{2}\sum\limits_{i = 1}^k \delta_i^2
+ \sum\limits_{i = 1}^k \delta_i \|x_{i - 1} - x^{*} \|
\quad \Def \quad \alpha_k.
\ea
\eeq
In order to finish the proof, it is enough to show that
$\alpha_k \leq R_k(\delta)$.
Indeed,
$$
\ba{rcl}
\alpha_{k + 1} & = &
\alpha_k + \frac{1}{2} \delta_{k + 1}^2
+ \delta_{k + 1} \|x_k - x^{*}\| \\
\\
& \refLE{RecurProx} &
\alpha_k + \frac{1}{2}\delta_{k + 1}^2
+ \delta_{k + 1} \sqrt{2 \alpha_k} \\
\\
& = &
\left( \sqrt{\alpha_k}
+ \frac{1}{\sqrt{2}}\delta_{k + 1} \right)^2.
\ea
$$
Therefore,
$$
\ba{rcl}
\sqrt{\alpha_k} & \leq &
\sqrt{\alpha_{k - 1}} + \frac{1}{\sqrt{2}}\delta_{k}
\; \leq \; \dots \; \leq \;
\sqrt{\alpha_0} + \frac{1}{\sqrt{2}}\sum\limits_{i = 1}^k \delta_i \\
\\
& = &
\frac{1}{\sqrt{2}}\left( \|x_0 - x^{*}\|
+ \sum\limits_{i = 1}^k \delta_i \right)
\; = \; \sqrt{R_k(\delta)}.
\ea
$$
\qed
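In particular, in the exact case $\delta_i \equiv 0$, we have
$R_k(\delta) = \frac{1}{2}\|x_0 - x^{*}\|^2$, and
inequality~(\ref{eq-InexProx}) yields the familiar estimate of
the proximal-point method:
$$
\ba{rcl}
\sum\limits_{i = 1}^k a_i \big( F(x_i) - F^{*} \big) & \leq &
\frac{1}{2} \| x_0 - x^{*} \|^2.
\ea
$$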
Now, we are ready to use the result on the local
superlinear convergence of RCTM in the norm of subgradient
(Theorem~\ref{th-RateG}), in order to minimize $\Phi_{k +
1}(\cdot)$ at every step of the inexact proximal method.
Note that
$$
\ba{rcl}
\partial \Phi_{k + 1}(x) & = & a_{k + 1} \partial F(x) + B(x - x_k),
\ea
$$
and it is natural to start the minimization process from the
previous point $x_k$, for which $\partial \Phi_{k +
1}(x_k) = a_{k + 1} \partial F(x_k)$. Let us also notice
that the Lipschitz constant of the $p$th derivative ($p
\geq 2$) of the smooth part of~$\Phi_{k + 1}$ is $a_{k +
1} L_p$.
Using our previous notation, one step of RCTM can be
written as follows:
$$
\ba{cl}
&T_H(\Phi_{k + 1}, z) \\
\\
& \;\;\, \Def \;\;\,
\arg\min\limits_{y \in \E}
\Bigl\{
a_{k + 1} \Omega_{p}(f, z; y) + \frac{H}{(p + 1)!}\|y - z\|^{p + 1}
+ a_{k + 1}h(y)
+ \frac{1}{2}\|y - x_k\|^2
\Bigr\},
\ea
$$
where $H = a_{k + 1}pL_p$. Then, a sufficient condition
for $z = x_k$ to be in the region of superlinear
convergence~\eqref{eq-RegG} is
$$
\ba{rcl}
a_{k + 1} \| F'(x_k) \|_*
& \leq &
\left(
p! \over a_{k + 1} (p + 1) L_p
\right)^{1 \over p - 1},
\ea
$$
or, equivalently
$$
\ba{rcl}
a_{k + 1} & \leq &
\left({1 \over \|F'(x_k)\|_{*} }\right)^{p - 1 \over p}
\left({ p! \over (p + 1) L_p }\right)^{1 \over p}.
\ea
$$
To be sure that $x_k$ is strictly inside the region, we
can pick:
\beq \label{ak-Choice}
\boxed{
\ba{rcl}
a_{k + 1} & = &
\left({1 \over 2 \| F'(x_k)\|_{*}} \right)^{p - 1 \over p}
\left(
p! \over (p + 1) L_p
\right)^{1 \over p}
\ea
}
\eeq
Note that this rule requires fixing an initial
subgradient $F'(x_0) \in \partial F(x_0)$, in order to
choose $a_1$.
Finally, we apply the following steps:
\beq\label{met-RCTM-2}
\ba{rcl}
z_0 \; = \; x_k, \quad z_{t+1} & = & T_{H}(\Phi_{k + 1}, z_t), \quad
t \geq 0.
\ea
\eeq
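Thus, one iteration $k \geq 0$ of the resulting two-level
scheme can be summarized as follows: (i) given $x_k$ and
$F'(x_k) \in \partial F(x_k)$, choose the coefficient
$a_{k + 1}$ by the rule~(\ref{ak-Choice}); (ii) run the inner
process~(\ref{met-RCTM-2}) from $z_0 = x_k$ until
$\| \Phi'_{k + 1}(z_t) \|_{*} \leq \delta_{k + 1}$; (iii) set
$x_{k + 1} = z_t$, $g_{k + 1} = \Phi'_{k + 1}(z_t)$, and
compute the new subgradient $F'(x_{k + 1}) =
\frac{1}{a_{k + 1}}(g_{k + 1} - B(x_{k + 1} - x_k))$.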
We can estimate the required number of these iterations as follows.
\BL
At every iteration $k \geq 0$ of the inexact proximal
method, in order to achieve $\| \Phi'_{k + 1}(z_t) \|_{*}
\leq \delta_{k + 1}$, it is enough to perform
\beq \label{eq-LogLog}
\ba{rcl}
t_k &=&
\biggl\lceil \frac{1}{\log_2 p} \cdot \log_2 \log_2
\left(
\frac{2 D_k(\delta) }{ \delta_{k + 1}}
\right) \biggr\rceil
\ea
\eeq
steps of RCTM~\eqref{met-RCTM-2},
where
$$
\ba{rcl}
D_k(\delta) & \Def &
\max \biggl\{
\|x_0 - x^{*}\| + \sum\limits_{i = 1}^k \delta_i,
\Bigl( \frac{p! \|F'(x_0)\|_{*} }{(p + 1)L_p2^{p - 1}} \Bigr)^{1 \over p}
\biggr\}
\ea
$$
\EL
\proof
According to~\eqref{eq-RateG}, one step of RCTM~\eqref{met-RCTM-2}
provides us with the following guarantee
in terms of the subgradients of our objective $\Phi_{k + 1}(\cdot)$:
\beq \label{step-RCTM-2}
\ba{rcl}
\| \Phi'_{k + 1}(z_t) \|_{*}
& \leq &
\frac{a_{k + 1} (p + 1) L_p}{p!} \| \Phi'_{k + 1}(z_{t - 1}) \|_{*}^p,
\ea
\eeq
where we used in~\eqref{eq-RateG} the values $q = 2$, $\sigma_q = 1$,
$a_{k + 1} L_p$ for the Lipschitz constant of the $p$th derivative of the smooth part of $\Phi_{k + 1}$,
and $H = a_{k + 1}pL_p$.
Denote
$\beta \equiv \left( { a_{k + 1}(p + 1)L_p \over p! } \right)^{1 \over p - 1}
\refEQ{ak-Choice}
\left( { (p + 1) L_p \over 2 \cdot p! \cdot \|F'(x_k)\|_* } \right)^{1 \over p}$.
Then, from~\eqref{step-RCTM-2} we have
\beq \label{Sublin-Conv}
\ba{rcl}
\beta \| \Phi'_{k + 1}(z_t) \|_{*} & \leq &
\bigl(\beta \| \Phi'_{k + 1}(z_{t - 1}) \|_{*}\bigr)^{p} \\
\\
& \leq & \dots \;\; \leq \;\;
\bigl(\beta \| \Phi'_{k + 1}(z_0) \|_{*}\bigr)^{p^t} \\
\\
& = &
(\beta a_{k + 1}\|F'(x_k)\|_{*})^{p^t} \\
\\
& = &
\left(
a_{k + 1}^{p \over p - 1}
\left({ (p + 1) L_p \over p! }\right)^{1 \over p - 1}
\|F'(x_k)\|_{*}
\right)^{p^t} \\
\\
& \refEQ{ak-Choice} & \left({1 \over 2}\right)^{p^t}.
\ea
\eeq
Therefore, for
\beq \label{eq-LogLog-2}
\ba{rcl}
t & \geq &
\log_p \log_2 \left( \frac{1}{\beta \delta_{k + 1}} \right)
\; = \;
\frac{1}{\log_2 p} \cdot \log_2 \log_2
\left(
\frac{1}{ \delta_{k + 1}}
\left( {
2 \cdot p! \cdot \| F'(x_k) \|_* \over (p + 1) L_p
} \right)^{1 \over p} \right),
\ea
\eeq
it holds $\| \Phi'_{k + 1}(z_t)\|_{*} \leq \delta_{k +
1}$. To finish the proof, let us estimate $\| F'(x_k)
\|_{*}$ from above. We have
\beq \label{eq-GBound}
\ba{rcl}
2^{3p - 2 \over p} \left( \frac{(p + 1)L_p}{p!} \right)^{2 \over p} R_k(\delta)
& \refGE{eq-InexProx} &
2^{2(p - 1) \over p} \left( \frac{(p + 1)L_p}{p!} \right)^{2 \over p}
\sum\limits_{i = 1}^k a_i^2 \|F'(x_i)\|_{*}^2 \\
\\
& \refEQ{ak-Choice} &
\sum\limits_{i = 1}^k \|F'(x_{i - 1})\|_{*}^{2(1 - p) \over p} \|F'(x_i)\|_{*}^2.
\ea
\eeq
Thus, for every $1 \leq i \leq k$ it holds
\beq \label{eq-GBound2}
\ba{rcl}
\|F'(x_i)\|_{*} & \refLE{eq-GBound} & \| F'(x_{i - 1})\|_{*}^{\rho}
\cdot \mathcal{D},
\ea
\eeq
with
$\mathcal{D} \equiv R_k^{1/2}(\delta)
\left( \frac{(p + 1) L_p}{p!} \right)^{1 \over p} 2^{3p - 2 \over 2p}$,
and $\rho \equiv \frac{p - 1}{p}$.
Therefore,
$$
\ba{rcl}
\|F'(x_k)\|_{*} & \refLE{eq-GBound2} & \|F'(x_0)\|_{*}^{\rho^k}
\cdot \mathcal{D}^{1 + \rho + \rho^2 + \dots + \rho^{k - 1}} \\
\\
& = & \|F'(x_0)\|_{*} \cdot
\Bigl( \|F'(x_0)\|_{*}^{\rho^k - 1} \cdot \mathcal{D}^{\frac{\; \;1 - \rho^k}{1 - \rho}} \Bigr) \\
\\
& = & \|F'(x_0)\|_{*} \cdot \left(
\frac{\mathcal{D}^{p}}{\|F'(x_0)\|_{*}} \right)^{1 - \rho^k}
\; \leq \;
\| F'(x_0) \|_{*} \cdot \max \bigl\{ \frac{\mathcal{D}^p}{\|F'(x_0)\|_{*}}, 1 \bigr\} \\
\\
& = & \max \biggl\{
\frac{(p + 1) L_p 2^{p - 1}}{p!}
\Bigl( \|x_0 - x^{*}\| + \sum\limits_{i = 1}^k \delta_i
\Bigr)^p, \; \|F'(x_0)\|_{*} \biggr\}.
\ea
$$
Substitution of this bound into~\eqref{eq-LogLog-2}
gives~\eqref{eq-LogLog}.
\qed
Let us prove now the rate of convergence for the outer
iterations. This is a direct consequence of
Theorem~\ref{th-InexProx} and the choice~\eqref{ak-Choice}
of the coefficients $\{ a_{k} \}_{k \geq 1}$.
\BL Let for a given $\varepsilon > 0$,
\beq \label{eq-epsLBound}
\ba{rcl}
F(x_k) - F^{*} & \geq & \varepsilon, \qquad 1 \leq k \leq K.
\ea
\eeq
Then for every $1 \leq k \leq K$, we have
\beq \label{eq-cProx}
\ba{rcl}
F(\bar{x}_k) - F^{*} & \leq &
\frac{L_p \left(
\|x_0 - x^{*} \| + \sum_{i = 1}^k \delta_i
\right)^{p + 1}
}{k^{p + 1 \over 2}}
\frac{(p + 1) 2^{p - 2} V_k(\varepsilon) }{ p!},
\ea
\eeq
where
$\bar{x}_k \Def \frac{\sum_{i = 1}^k a_i x_i}{\sum_{i = 1}^k a_i}$, and
$V_k(\varepsilon) \Def \left( \frac{\|F'(x_0)\|_{*} \cdot ( \|x_0 - x^{*}\|
+ \sum_{i = 1}^k \delta_i )}{\varepsilon}
\right)^{p - 1 \over k}$.
\EL
\proof
Using the inequality between the arithmetic and geometric
means, we obtain
\beq \label{ak-rate}
\ba{rcl}
R_{k}(\delta)
& \refGE{eq-InexProx} &
\frac{1}{2}\sum\limits_{i = 1}^k a_i^2 \|F'(x_i)\|_*^2
\; \refEQ{ak-Choice} \;
\frac{1}{8}
\left( \frac{p!}{(p + 1)L_p}
\right)^{2 \over p - 1}
\sum\limits_{i = 1}^k
\frac{a_i^2}{a_{i + 1}^{2p \over p - 1}} \\
\\
& \geq &
\frac{k}{8}
\left( \frac{p!}{(p + 1)L_p}
\right)^{2 \over p - 1}
\left(
\prod\limits_{i = 1}^k
\frac{a_i^2}{a_{i + 1}^{2p \over p - 1}}
\right)^{1 \over k} \\
\\
& = &
\frac{k}{8}
\left( \frac{p!}{(p + 1)L_p}
\right)^{2 \over p - 1}
\left(
\frac{a_1}{a_{k + 1}}
\right)^{2p \over (p - 1)k}
\left( \prod\limits_{i = 1}^k a_i \right)^{-2 \over (p - 1)k} \\
\\
& \geq &
\frac{k^{p + 1 \over p - 1}}{8}
\left( \frac{p!}{(p + 1)L_p}
\right)^{2 \over p - 1}
\left(
\frac{a_1}{a_{k + 1}}
\right)^{2p \over (p - 1)k}
\left( \sum\limits_{i = 1}^k a_i \right)^{-2 \over p - 1}.
\ea
\eeq
Therefore,
$$
\ba{rcl}
F(\bar{x}_k) - F^{*}
& \leq &
\frac{1}{\sum\limits_{i = 1}^k a_i}
\sum\limits_{i = 1}^k a_i (F(x_i) - F^{*})
\; \refLE{eq-InexProx} \;
\frac{R_k(\delta)}{\sum\limits_{i = 1}^k a_i} \\
\\
& \refLE{ak-rate} &
\frac{ R_k(\delta)^{p + 1 \over 2} }{k^{p + 1 \over 2}}
\frac{(p + 1) L_p}{p!}
\left( \frac{a_{k + 1}}{a_1} \right)^{p \over k}
8^{p - 1 \over 2} \\
\\
& = &
\frac{L_p \left(
\|x_0 - x^{*} \| + \sum_{i = 1}^k \delta_i
\right)^{p + 1}
}{k^{p + 1 \over 2}}
\frac{(p + 1) 2^{p - 2} }{ p!}
\left( \frac{\|F'(x_0)\|_{*}}{\|F'(x_k)\|_{*}} \right)^{p - 1 \over k},
\ea
$$
where the first inequality holds by convexity.
At the same time, we have
$$
\ba{rcl}
\|F'(x_k)\|_{*} & \geq & \frac{\la F'(x_k), x_k - x^{*} \ra}{\|x_k - x^{*}\|}
\; \geq \; \frac{F(x_k) - F^{*}}{\|x_k - x^{*}\|} \\
\\
& \refGE{eq-epsLBound} & \frac{\varepsilon}{\|x_k - x^{*}\|}
\; \refGE{eq-InexProx} \; \frac{\varepsilon}{\|x_0 - x^{*}\| + \sum_{i = 1}^k \delta_i }.
\ea
$$
Thus, $\left( \frac{\|F'(x_0)\|_{*}}{\|F'(x_k)\|_{*}} \right)^{p - 1 \over k} \leq V_k(\varepsilon)$
and we obtain~\eqref{eq-cProx}.
\qed
\BR
Note that
$\bigl(\frac{1}{\varepsilon}\bigr)^{p - 1 \over k}
= \exp\bigl( {p - 1 \over k} \ln {1 \over \varepsilon} \bigr)$.
Therefore after
$k = O\left( \ln {1 \over \varepsilon}\right)$ iterations, the factor $V_k(\varepsilon)$
is bounded by an absolute constant.
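For instance (our own numerical illustration), if the base
$\|F'(x_0)\|_{*} \bigl( \|x_0 - x^{*}\| + \sum_{i = 1}^k \delta_i \bigr) / \varepsilon$
equals $10^6$ and $p = 3$, then
$V_k(\varepsilon) = 10^{12 / k} \leq 2$ for all $k \geq 40$.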
\ER
Since the local convergence of RCTM is very
fast~\eqref{eq-LogLog}, we can choose the inner
accuracies~$\{ \delta_i \}_{i \geq 1}$ small enough that
the right-hand side of~\eqref{eq-cProx} is of the
order $\tilde{O}(1 / k^{p + 1 \over 2})$. Let us present a
precise statement.
\BT
Let $\delta_k \equiv \frac{c}{k^s}$ for fixed absolute
constants $c > 0$ and $s > 1$. Let for a given
$\varepsilon > 0$, we have
$$
\ba{rcl}
F(x_k) - F^{*} & \geq & \varepsilon, \qquad
1 \leq k \leq K.
\ea
$$
Then, for every $k$ such that $\ln \frac{\|F'(x_0)\|_{*}
R}{ \varepsilon} \leq k \leq K$, we get
\beq \label{eq-PrConv}
\ba{rcl}
F(\bar{x}_k) - F^{*} & \leq &
\frac{L_p R^{p + 1}}{k^{p + 1 \over 2}} \frac{(p + 1) 2^{p - 2} \exp(p - 1)}{p!},
\ea
\eeq
where
$$
\ba{rcl}
R & \Def & \|x_0 - x^{*}\| + \frac{cs}{s - 1}.
\ea
$$
The total number of oracle calls $N_k$ during the first
$k$ iterations is bounded as follows:
$$
\ba{rcl}
N_k & \leq & k \cdot \Bigl( 1 + \frac{1}{\log_2 p} \log_2 \log_2 \frac{2D k^s }{c} \Bigr),
\ea
$$
where
$$
\ba{rcl}
D & \Def & \max \biggl\{
R, \,
\Bigl( \frac{p! \|F'(x_0)\|_{*} }{(p + 1)L_p2^{p - 1}} \Bigr)^{1 \over p}
\biggr\}.
\ea
$$
\ET
\proof
Indeed,
$$
\ba{rcl}
\sum\limits_{i = 1}^k \delta_i & = &
c\Bigl(1 + \sum\limits_{i = 2}^k \frac{1}{i^s} \Bigr)
\; \; \leq \; \;
c\Bigl(1 + \int\limits_1^k \frac{dx}{x^{s}} \Bigr)
\; \; = \; \;
c\Bigl(1 - \frac{1}{s - 1} \int\limits_1^k dx^{-(s - 1)} \Bigr) \\
\\
& = &
c\Bigl(1 - \frac{k^{-(s - 1)}}{s - 1} + \frac{1}{s - 1} \Bigr)
\; \; \leq \; \; \frac{cs}{s - 1}.
\ea
$$
Thus, we obtain~\eqref{eq-PrConv} directly from
the bound~\eqref{eq-cProx}, and by the fact that
$$
\ba{rcl}
V_k(\varepsilon) & \equiv &
\Bigl( \frac{\| F'(x_0) \|_{*} R}{\varepsilon} \Bigr)^{\frac{p - 1}{k}}
\; = \;
\exp\Bigl( \frac{p - 1}{k} \log \frac{\| F'(x_0) \|_{*} R}{\varepsilon} \Bigr) \\
\\
& \leq & \exp(p - 1),
\ea
$$
when $k \geq \ln \frac{\| F'(x_0) \|_{*} R }{ \varepsilon} $.
Finally,
$$
\ba{rcl}
N_k & \refLE{eq-LogLog} &
\sum\limits_{i = 1}^k \left\lceil \frac{1}{\log_2 p}
\log_2 \log_2 \frac{2 D }{\delta_i} \right\rceil
\; \leq \;
k + \frac{1}{\log_2 p} \sum\limits_{i = 1}^k \log_2 \log_2 \frac{2Di^s}{c} \\
\\
& \leq & k + \frac{1}{\log_2 p} \sum\limits_{i = 1}^k \log_2 \log_2 \frac{2Dk^s}{c}
\; = \;
k \cdot \Bigl(1 + \frac{1}{\log_2 p} \log_2 \log_2 \frac{2Dk^s}{c} \Bigr).
\ea
$$
\qed
Note that we were able to justify the global performance
of the scheme, using only the local convergence results
for the inner method. It is interesting to compare our
approach with the recent results on the path-following
second-order methods \cite{dvurechensky2018global}.
We can drop the logarithmic components in the complexity
bounds by using the \textit{hybrid proximal methods}
(see~\cite{monteiro2010complexity}
and~\cite{marques2019iteration}), where at each iteration
only one step of RCTM is performed. The resulting rate of
convergence there is $O(1 / k^{p + 1 \over 2})$, without
any extra logarithmic factors. However, this rate is worse
than the rate $O(1 / k^p)$ provided by
Theorem~\ref{th-SublR} for the primal iterations of
RCTM~\eqref{met-RCTM}.
\section*{Acknowledgements}
We are very thankful to anonymous referees
for valuable comments that improved the initial version of this paper.
\section{Introduction \label{intro}}
The problem studied in this note is motivated by a special feature
of the ordering process of a fashion discounter with many branches:
For each product that hits the shelves, the internal stock-turnover
has to distribute around 10\,000 pieces among the around 1\,000
branches, correctly assorted by size. This would mean 10\,000 picks
with high error probability in the central-warehouse (in our case in
the high-wage country Germany). In order to reduce the handling costs
and the error proneness in the central warehouse, all products are
ordered in multiples of so-called \emph{lot-types} from the suppliers
who in general are located in extremely low-wage countries.
A lot-type specifies a number of pieces of a product for each
available size, e.g., (2,2,2,2,2) if the sizes are (S, M, L, XL, XXL)
means two pieces of each size. A \emph{lot} of a certain lot-type is
a foiled pre-pack that contains as many pieces of each size as
specified in its lot-type. The number of different lot-types is
bounded by the supplier.
So we face an approximation problem: which (integral) multiples of
which (integral) lot-types should be supplied to a branch in order to
meet a (fractional) mean demand as closely as possible? We call this
specific demand approximation problem the \emph{lot-type design
problem (LDP)}. A detailed version of this work appeared
in~\cite{Gaul+Kurz+Rambau:LotTypeDesignProblemOMS:2009}, where also
references to related work can be found.
\section{The lot-type design problem\label{sec_lottype}}
Formally, the problem can be stated as follows: Consider a fashion
discounter with branches $b \in \mathcal{B}$ who wants to place an
order for a certain product that can be obtained in sizes $s \in
\mathcal{S}$ and that can be pre-packed in lot-types $l \in
\mathcal{L}$. Each lot-type is a vector $(l_s)_{s \in \mathcal{S}}$
specifying the number of pieces of each size contained in the
pre-pack. Only $\ensuremath{k}$ different lot-types from $\mathcal{L}$ are
allowed in this order, and each branch receives only lots of a single
lot-type. We are given lower and upper bounds $\underline{I},
\overline{I}$ for the total supply of this product. Moreover, we
assume that the branch- and size-dependent mean demand $\ensuremath{d}_{b,
s}$ for the corresponding type of product is known to us.
The original goal is to find a set of at most $\ensuremath{k}$ lot-types, an
order volume for each of these chosen lot-types, and a distribution of
lots to branches such that the revenue is maximized. In order to
separate the order process from the sales process (which involves
mark-downs, promotions, etc.), we restrict ourselves in this paper to
the minimization of the distance between supply and mean demand
defined by a vector norm.
The \emph{Lot-Type Design Problem (LDP)} is the following
optimization problem:
\begin{center}
\begin{tabular}[t]{rp{0.8\linewidth}}
\emph{Instance:} &
We are given
\begin{itemize}
\item a set of branches $b \in \mathcal{B}$
\item a set of sizes $s \in \mathcal{S}$
\item a mean demand table $\ensuremath{d}_{b, s}$, $b \in \mathcal{B}$, $s
\in \mathcal{S}$
\item a norm $\lVert\cdot\rVert$ on~$\R^{\mathcal{B}
\times \mathcal{S}}$
\item a set $\mathcal{L}$ of feasible lot types $(l_s)_{s \in
\mathcal{S}} \in \N_0^{\mathcal{S}}$
\item a maximal number $M \in \N$ of possible
multiplicities
\item a maximal number $\ensuremath{k} \in \N$ of lot types to
use
\item lower and upper bounds $\underline{I}$, $\overline{I}$ for
the total supply
\end{itemize}\\
\emph{Task:} & For each branch~$b \in \mathcal{B}$ choose a lot
type $l(b) \in \mathcal{L}$ and a number $m(b) \in \N$,
$1 \le m(b) \le M$ of
lots to order for~$b$ such that
\begin{itemize}
\item the total number of ordered lot types is at most~$\ensuremath{k}$
\item the total number of ordered pieces is in $[\underline{I},
\overline{I}]$\newline(the \emph{total capacity constraint})
\item the distance of the order from the demand measured by
$\lVert\cdot\rVert$ is minimal
\end{itemize}
\end{tabular}
\end{center}
The LDP can be formulated as an Integer Linear Program if we restrict
ourselves to the $L^1$-norm for measuring the distance between supply
and demand. This norm is quite robust against outliers in the demand
estimation.
We use binary variables $x_{b,l,m}$, which are equal to $1$ if and
only if lot-type $l$ is delivered with multiplicity $m$ to branch $b$,
and binary variables $y_l$, which are $1$ if and only if at least one
branch in $\mathcal{B}$ is supplied with lot-type~$l$. The
\emph{deviation} of the demand from the supply if branch~$b$ is
supplied by $m$ lots of lot-type~$l$ is given by $\ensuremath{c}_{b,l,m}
:= \sum_{s \in \mathcal{S}} \lvert \ensuremath{d}_{b, s} - m \cdot l_s \rvert$.
The following integer linear program models the LDP with $L^1$-norm.
\begin{align}
\label{OrderModel_Target}
\min && \sum_{b\in\mathcal{B}}\sum_{l\in\mathcal{L}}\sum_{m=1}^M \ensuremath{c}_{b,l,m}\cdot x_{b,l,m}\\
\label{OrderModel_EveryBranchOneLottype}
s.t.
&&
\sum_{l\in\mathcal{L}}\sum_{m=1}^M x_{b,l,m} &= 1 && \forall b\in\mathcal{B}\\
\label{OrderModel_UsedLottypes}
&&
\sum_{l\in\mathcal{L}} y_l & \le \ensuremath{k}\\
\label{OrderModel_Binding}
&&
\sum_{m=1}^M x_{b,l,m} & \le y_l && \forall b\in\mathcal{B}, \forall
l\in\mathcal{L}\\
&&
\label{OrderModel_Cardinality}
\underline{I} \le \sum_{b\in\mathcal{B}}\sum_{l\in\mathcal{L}}\sum_{m=1}^M
\sum_{s \in \mathcal{S}}
m \cdot l_s \cdot x_{b,l,m} &\le \overline{I}\\
&&
x_{b,l,m} & \in\{0,1\} && \forall b\in\mathcal{B}, \forall
l\in\mathcal{L}, \forall m = 1,\dots,M\\
&&
y_l & \in\{0,1\} && \forall l\in\mathcal{L}
\end{align}
The objective function \eqref{OrderModel_Target} computes the $L^1$-distance
of the supply specified by $x$ from the demand. Condition
\eqref{OrderModel_EveryBranchOneLottype} enforces that each branch is
assigned a unique lot-type and a unique multiplicity. Condition
\eqref{OrderModel_UsedLottypes} models that at most $\ensuremath{k}$ different
lot-types can be chosen. Condition \eqref{OrderModel_Binding} forces
the selection of a lot-type whenever it is assigned to some branch
with some multiplicity. Finally, Condition
\eqref{OrderModel_Cardinality} ensures that the total number of pieces
is in the desired interval -- the total capacity constraint.
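To illustrate how this model can be set up in practice, we give a
minimal sketch using the open-source \texttt{PuLP} modeler (our
illustration; the computations reported below were carried out with
\texttt{CPLEX 11}):
\begin{verbatim}
import pulp

def solve_ldp(B, S, L, M, d, k_max, I_lo, I_hi):
    # B: branches; S: sizes; L: lot-types, each a tuple (l_s);
    # d[b][s]: mean demand of size s at branch b.
    # Deviation costs c_{b,l,m} = sum_s |d_{b,s} - m * l_s|.
    c = {(b, l, m): sum(abs(d[b][s] - m * l[i])
                        for i, s in enumerate(S))
         for b in B for l in L for m in range(1, M + 1)}

    prob = pulp.LpProblem("lot_type_design", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", list(c), cat="Binary")
    y = pulp.LpVariable.dicts("y", L, cat="Binary")

    prob += pulp.lpSum(c[key] * x[key] for key in c)  # L1 deviation
    for b in B:  # unique lot-type and multiplicity per branch
        prob += pulp.lpSum(x[b, l, m] for l in L
                           for m in range(1, M + 1)) == 1
    prob += pulp.lpSum(y[l] for l in L) <= k_max  # at most k types
    for b in B:  # a lot-type must be selected before assignment
        for l in L:
            prob += pulp.lpSum(x[b, l, m]
                               for m in range(1, M + 1)) <= y[l]
    total = pulp.lpSum(m * sum(l) * x[b, l, m]  # capacity window
                       for b in B for l in L
                       for m in range(1, M + 1))
    prob += total >= I_lo
    prob += total <= I_hi

    prob.solve()
    return prob
\end{verbatim}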
Our ILP formulation can be used to solve all real world instances of
our business partner in at most 30~minutes by using a standard ILP
solver like \texttt{CPLEX 11}. Interestingly, the model seems quite
tight -- most of the time is spent in solving the root LP.
Although 30~minutes may mean a feasible computation time for an
offline-optimization in many contexts, this is not fast enough for our
real world application. The buyers of our retailer need a software
tool which can produce a near optimal order recommendation in real
time on a standard laptop. For this reason, we present a fast
anytime-heuristic, which has only a small gap compared to the optimal
solution on a test set of real world data of our business partner.
We briefly sketch the idea of the heuristic \emph{Score-Fix-Adjust
(SFA)}: It
\begin{enumerate}
\item sorts all lot-types according to certain scores, coming from a
count for how many branches the lot-type fits best, second best,
\ldots (Score);
\item fixes $\ensuremath{k}$-subsets of lot-types in the order of decreasing
score sums (Fix);
\item greedily adjusts the multiplicities so as to achieve feasibility
w.r.t.\ the total capacity
constraint (Adjust).
\end{enumerate}
Details can be found in \cite{Gaul+Kurz+Rambau:LotTypeDesignProblemOMS:2009}.
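For concreteness, one plausible reading of SFA is sketched below
(our illustration; the scoring rule, the enumeration order of the
subsets, and the adjustment step are simplified with respect to the
cited reference):
\begin{verbatim}
from itertools import combinations, islice

def sfa(B, S, L, M, d, k_max, I_lo, I_hi, n_subsets=100):
    def dev(b, l, m):  # L1 deviation of m lots of type l at b
        return sum(abs(d[b][s] - m * l[i]) for i, s in enumerate(S))

    # Score: here simply the summed best-fit deviation per lot-type.
    score = {l: sum(min(dev(b, l, m) for m in range(1, M + 1))
                    for b in B) for l in L}
    ranked = sorted(L, key=score.get)

    best = None
    # Fix: scan k-subsets of the best-scored lot-types.
    for sub in islice(combinations(ranked, k_max), n_subsets):
        assign = {b: min(((l, m) for l in sub
                          for m in range(1, M + 1)),
                         key=lambda lm: dev(b, *lm)) for b in B}

        def total():
            return sum(m * sum(l) for l, m in assign.values())

        # Adjust: greedily shift single multiplicities until the
        # total piece count lands in [I_lo, I_hi].
        for _ in range(10 * len(B)):  # guard against cycling
            if I_lo <= total() <= I_hi:
                break
            step = 1 if total() < I_lo else -1
            cand = [(dev(b, l, m + step) - dev(b, l, m), b)
                    for b, (l, m) in assign.items()
                    if 1 <= m + step <= M]
            if not cand:
                break
            b = min(cand, key=lambda t: t[0])[1]
            l, m = assign[b]
            assign[b] = (l, m + step)

        if I_lo <= total() <= I_hi:
            cost = sum(dev(b, l, m) for b, (l, m) in assign.items())
            if best is None or cost < best[0]:
                best = (cost, sub, dict(assign))
    return best
\end{verbatim}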
Since in the case $\ensuremath{k} = 1$ we can very often loop over all
feasible lot-types, it is interesting that in this case SFA always
yields an optimal solution (for any norm).
\begin{lemma}
For $\ensuremath{k} = 1$ and costs $c_{b,l,m}=\Vert d_{b,\cdot}-m\cdot l\Vert$
for an arbitrary norm
$\left\Vert\cdot\right\Vert$, our heuristic SFA produces an optimal
solution whenever all lot-types $l\in\mathcal{L}$ are checked.
\end{lemma}
In order to substantiate the usefulness of our heuristic, we have
compared the quality of the solutions given by this heuristic after
one second of computation time (on a standard laptop:
Intel$^{\textregistered}$ Core$\texttrademark\ $2 CPU with 2~GHz and
1~GB RAM) with respect to the solution given by \texttt{CPLEX 11}
(after solving to optimality).
Our business partner has provided us with historic sales information
for nine different commodity groups, each ranging over a sales period
of at least one-and-a-half years. From this we estimated mean demands
via aggregating over products in a commodity group. By normalizing
the lengths of the products' sales periods to the point in time when
half of the product was sold out, we were able to mod out the effects
of any product's individual success or failure. Prior to each test
calculation, the resulting demands were scaled so that the total mean
demand was in the center of the total capacity interval given by the
management for a new order of a product in that commodity group.
For each commodity group we have performed a test calculation for
$\ensuremath{k}\in\{2,3,4,5\}$ distributing some amount of items to almost
all branches. The crucial parameters are given in Table
\ref{table_parameters}, the results are presented in
Table~\ref{table_gap}.
\begin{table}[htp]\footnotesize\sffamily
\begin{center}\renewcommand{\arraystretch}{1.41}
\begin{tabular}{r@{\hspace*{1cm}}c@{\hspace*{1cm}}c@{\hspace*{1cm}}c@{\hspace*{1cm}}c@{\hspace*{1cm}}c}
\hline
Commodity group & $|\mathcal{B}|$ & $|\mathcal{S}|$ & $\left[\underline{I},\overline{I}\right]$
& $|\mathcal{L}|$ & $M$\\
\hline
1 & 1119 & 5 & [10\,630, 11\,749] & 243 & 10 \\
2 & 1091 & 5 & [10\,000, 12\,000] & 243 & 10 \\
3 & 1030 & 5 & [9\,785, 10\,815] & 243 & 10 \\
4 & 1119 & 5 & [10\,573, 11\,686] & 243 & 9 \\
5 & 1175 & 5 & [16\,744, 18\,506] & 243 & 15 \\
6 & 1030 & 5 & [11\,000, 13\,000] & 243 & 9 \\
7 & 1098 & 5 & [15\,646, 17\,293] & 243 & 9 \\
8 & 989 & 5 & [11\,274, 12\,461] & 243 & 9 \\
9 & 808 & 5 & [9\,211, 10\,181] & 243 & 10 \\
\hline
\end{tabular}
\caption{Parameters for the test calculations.}
\label{table_parameters}
\end{center}
\end{table}
\begin{table}[htp]\footnotesize\sffamily
\begin{center}\renewcommand{\arraystretch}{1.5}
\begin{tabular}{r@{\hspace*{1cm}}c@{\hspace*{1cm}}c@{\hspace*{1cm}}c@{\hspace*{1cm}}c}
\hline
Commodity group & $\ensuremath{k}=2$ & $\ensuremath{k}=3$ & $\ensuremath{k}=4$ & $\ensuremath{k}=5$ \\
\hline
1 & 2.114\,\% & 1.226\,\% & 2.028\,\% & 1.983\,\% \\
2 & 0.063\,\% & 0.052\,\% & 0.006\,\% & 0.741\,\% \\
3 & 0.054\,\% & 0.094\,\% & 0.160\,\% & 0.170\,\% \\
4 & 0.019\,\% & 0.007\,\% & 0.024\,\% & 0.038\,\% \\
5 & 0.015\,\% & 0.017\,\% & 0.018\,\% & 0.019\,\% \\
6 & 0.018\,\% & 0.022\,\% & 0.024\,\% & 0.022\,\% \\
7 & 0.013\,\% & 0.013\,\% & 0.014\,\% & 0.014\,\% \\
8 & 0.016\,\% & 0.017\,\% & 0.018\,\% & 0.019\,\% \\
9 & 0.011\,\% & 0.939\,\% & 0.817\,\% & 0.955\,\% \\
\hline
\end{tabular}
\caption{Optimality gap in the $\Vert\cdot\Vert_1$-norm for our heuristic on nine commodity groups and
different values for the maximum number $\ensuremath{k}$ of used lot-types.}
\label{table_gap}
\end{center}
\end{table}
We can see that -- given the uncertainty in the data -- the
performance of SFA is more than satisfactory.
\section{Conclusions}
We identified the lot-type design problem in the supply chain
management of a fashion discounter. It can be modeled as an ILP, and
real-world instances can be solved by commercial off-the-shelf software
like CPLEX in half an hour whenever the number of lot-types is not too large.
Our SFA heuristic finds solutions with a gap of mostly under 1\,\% in
a second, also for instances with a large number of lot-types. Given
the volatility of the demand estimation, these gaps are certainly
tolerable.
Meanwhile, the model and SFA have been put to operation by our
business partner with significant positive monetary impact.
\section{The global evolution problem for the Einstein equations}
\paragraph*{\bf Main objective.}
We consider the initial value problem when initial data are prescribed on a spacelike hypersurface and we tackle two major challenges:
\bei
\item The presence of {\sl propagating gravitational waves}. These may be impulsive waves in the sense that the spacetime Ricci curvature is a bounded measure (in the presence of matter), while the Weyl curvature may be even less regular. Such waves move at the speed of light and may propagate oscillation and concentration phenomena throughout the spacetime, as pointed out by the authors in~\cite{LeFlochLeFloch-1}.
\item The presence of {\sl shock waves} ---which generically arise and propagate in a compressible fluid, even when the initial data are regular. This is a classical phenomenon in continuum physics due to the nonlinear nature of the Euler equations. Such waves cannot be avoided and a global Cauchy development to the Einstein equations {\sl beyond the formation of shocks} must be sought~\cite{LeFlochRendall-2011}.
\eei
\noindent Since we are interested in solutions that may be ``far'' from Minkowski space, it is natural to study this problem first under a symmetry assumption.
Dealing with spacetimes containing gravitational waves and shock waves requires a new methodology of mathematical analysis.
Hence, we study here the global dynamics of matter fields evolving under their own gravitational field, and solve the {\sl global evolution problem} for self-gravitating compressible fluids under the assumption of $\Tbb^2$ symmetry, while also providing a significant contribution to the {\sl nonlinear stability and instability} of vacuum spacetimes. This problem was left open after Christodoulou's breakthrough work in the 90's on the global evolution problem for scalar fields in spherical symmetry. We emphasize that propagating gravitational waves are suppressed in spherical symmetry.
We define here a notion of weak solutions to the Einstein equations and investigate their global geometric properties.
Our method is based on weak convergence techniques involving energy functionals and compensated compactness properties inspired by Tartar's method \cite{Tartar1,Tartar2}. While bounded variation functions were needed in dealing with spherically symmetric spacetimes, a more involved functional framework is required in $\Tbb^2$ symmetry. We summarize our results and method in this Note.
\paragraph*{\bf Einstein-matter spacetimes.}
The global study of matter spacetimes with symmetry was initiated by Rendall~\cite{Rendall-book} and Andreasson~\cite{Andreasson-1999}
for matter governed by Vlasov's kinetic equation. In this setup, since kinetic matter does not generate shock waves, the existence of global spacetime foliations can be established by arguments similar to those developed for vacuum spacetimes. However, it is significantly more challenging to understand their global geometry, since they can exhibit very different properties in comparison to their vacuum counterparts.
On the other hand, despite the importance of the problem
---for instance in astrophysical or cosmological contexts---
of the evolution of self-gravitating {\sl compressible perfect fluids,} the existing (physics only) literature provides
the construction of special classes of solutions (static fluids) and formal asymptotic analysis only. No rigorous mathematical analysis was available on this problem until recent years. Moreover, in the mathematical literature only partial results on self-gravitating compressible fluids were available until now, even when attention is restricted to Gowdy symmetry; see \cite{LeFlochRendall-2011}.
Hence, our results are new even in the case of Gowdy symmetry and new also even for vacuum spacetimes.
\paragraph*{\bf Vacuum spacetimes.}
In the past thirty years, significant progress has been made on the initial value problem for the {\sl vacuum} Einstein equations provided the spacetime metric is {\sl sufficiently regular.}
The typical symmetry assumption of interest is $\Tbb^2$ symmetry on $\Tbb^3$, that is,
the existence of two commuting and spacelike Killing fields acting on a spacetime with $\Tbb^3$ spatial topology;
cf.~Rendall \cite{Rendall-book}. Such spacetimes admit a global foliation by spacelike hypersurfaces in the areal gauge, that is, such that the area of the orbit of symmetry is a constant on each hypersurface and, furthermore, the corresponding time-function covers the whole range $(0, + \infty)$, describing the evolution originating in a Big Bang up to a forever dispersion toward the future. The literature on vacuum spacetimes is vast and we refer the reader to \cite{LeFlochSmulevici-2015} for a review and for the theory of future Cauchy developments for $\Tbb^2$ symmetric vacuum spacetimes. We emphasize that many very challenging questions still remain open concerning the global geometric behavior of $\Tbb^2$ symmetric vacuum spacetimes: geodesic completeness, curvature blow-up, Penrose conjecture, etc. It is only for the restricted class of Gowdy spacetimes that the global behavior toward the cosmological singularity is now well understood. For instance, we refer to Smulevici \cite{Smulevici-2009} (and the references cited therein) for the strong cosmic censorship in $\Tbb^2$ symmetric spacetimes in the presence of a cosmological constant.
\section{Weak formulation of Einstein's field equations}
\paragraph*{\bf Einstein's field equations and the Euler equations.}
A spacetime is a $(3+1)$-dimensional Lorentzian manifold $(\Mcal, g)$ satisfying the Einstein equations of general relativity:
\bel{eq:44}
G = 8 \pi T,
\ee
which relate, on one hand, the curved geometry of the spacetimes as described by the Einstein tensor $G \coloneqq \operatorname{Ric} - (R/2) g$ and, on the other hand, the matter content of this spacetime represented by the stress-energy tensor~$T$.
We recall that $\operatorname{Ric}$ is the Ricci curvature tensor associated with the metric $g$ and, by convention, all Greek indices take the values $0, \ldots, 3$.
A perfect compressible fluid is governed by the stress-energy tensor
\bel{eq:45}
T = T(\mu,u) = (\mu + p(\mu)) \, g(u, \cdot) \otimes g(u, \cdot) + p(\mu) \, g,
\ee
where $\mu \geq 0$ denotes the mass-energy density of the fluid and $u$
its future-oriented, time-like velocity field normalized to be unit, that is, $g(u,u) = -1$.
The Einstein equations imply (thanks to the second contracted Bianchi identities associated with the metric $g$) the Euler equations
\bel{eq:46}
\Div_g T(\mu,u) = 0,
\ee
in which $\Div_g$ denotes the divergence operator based on the Levi-Civita connection of~$g$. The perfect fluid with stress-energy tensor~\eqref{eq:45} is governed by a general equation of state $p = p(\mu)$, satisfying the hyperbolicity conditions
\bel{hyperbolic-eos}
p'(\mu) > 0, \qquad 0 < p(\mu) \leq \mu \quad (\text{for all } \mu > 0),
\qquad p(0) =0.
\ee
The fluid is described by its mass-energy density function $\mu \geq 0$ and its (unit time-like, future-pointing) velocity vector field $u$. The solutions of \eqref{eq:46}, in general, become discontinuous in finite time even when the initial data are smooth, so that weak solutions must be sought.
\paragraph*{\bf Spacetimes with weak regularity.}
Due to the possible formation of shock waves in the fluid variables, the curvature of such spacetimes is defined in the sense of distributions only. To this end, a central issue we investigate~\cite{LeFlochLeFloch-1,LeFlochLeFloch-4} is to find a weak regularity class for the metric and matter variables, under which the global evolution problem for the Einstein equations can be formulated and mathematically solved.
The first concept of generalized solutions to the Einstein equations was proposed by Christodoulou in a series of papers on spherically symmetric self-gravitating scalar fields. In a 1986 paper and subsequent papers, he investigated solutions to the Einstein-scalar field system in Bondi coordinates, and introduced a class of generalized solutions which are $C^2$ regular {\sl except possibly on the axis} of spherical symmetry. In 1992, Christodoulou introduced a broader class of solutions whose metric coefficients have bounded variation, and established Penrose's weak cosmic censorship conjecture for scalar fields in spherical symmetry.
In 2007, LeFloch and Mardare \cite{LeFlochMardare-2007} and, more recently, Lott \cite{Lott-2016} gave
a general definition of the Ricci curvature understood as distributions when the first-order derivatives of the
metric coefficients are square-integrable.
We follow \cite{LeFlochRendall-2011} (compressible fluids) and \cite{LeFlochSmulevici-2015} (vacuum spacetimes) and work at the level of weak regularity advocated in~\cite{LeFlochMardare-2007,Lott-2016}. More precisely, our proposal, first made in \cite{LeFlochLeFloch-1} with the more restrictive Gowdy symmetry assumption, is to work within a class of {\sl weak solutions with finite total energy,}
as we call them. We prove here that the Einstein equations can be solved for arbitrarily ``large'' initial data in such a class of solutions even in presence of a compressible fluid and beyond the formation of shocks.
\paragraph*{\bf The initial value problem.}
The Einstein equations, together with the Euler equations, can be expressed as a locally-well posed evolution system of partial differential equations of hyperbolic type, provided the pressure $p = p(\mu)$ obeys~\eqref{hyperbolic-eos}.
For the notion of maximal hyperbolic Cauchy development associated with a given initial data set, we refer to Choquet-Bruhat's textbook~\cite{Choquet-book} and the references therein. For a review of the Cauchy problem in general relativity, see Andersson \cite{Andersson-2004}.
An initial data set for the Einstein-Euler equations \eqref{eq:44}--\eqref{eq:46} consists of a Riemannian manifold $(\Mcal_0, g_0)$
together with a symmetric two-tensor~$k_0$ defined on~$\Mcal_0$, as well as a mass-energy field~$\mu_0$
and a vector field~$v_0$ defined on~$\Mcal_0$. These data must satisfy certain constraints, called Einstein's Hamiltonian and momentum constraints \cite{Choquet-book} which we tacitly assume throughout this Note.
Solving the initial value problem for the Einstein equations (from suitably chosen initial data) consists of finding
a Lorentzian manifold $(\Mcal, g)$ together with a scalar field $\mu \colon \Mcal \to [0, + \infty)$ and a vector field
$u$ defined on $\Mcal$, so that the Einstein equations, and therefore the Euler equations, are satisfied in a suitably weak sense while the initial data set is assumed.
\paragraph*{\bf The $\Tbb^2$ areal foliation.}
The global evolution problem is currently intractable by mathematical methods of analysis, and it is natural to study the global problem first under certain assumptions of symmetry.
Throughout, we assume $\Tbb^2$ symmetry with $\Tbb^3$ spatial topology, and we foliate the spacetime under consideration $(\Mcal, g)$ by spacelike hypersurfaces of constant areal time, denoted by~$t$. Namely, a global time coordinate $t\colon\Mcal \to I \subset \RR \setminus \{0\}$ is introduced that coincides (up to a sign) with the area of $\Tbb^2$-orbits of symmetry.
Here, $I$ denotes an interval that does not contain~$0$, of the form
$I = [t_0, t_*) \subset (0, + \infty)$ or $I= [t_0, t_*) \subset (-\infty, 0)$.
We also find it convenient sometimes to state definitions and results on a compact interval $J=[t_0, t_1]$ (not containing the origin). Einstein's constraint equations together with the positive energy condition (enjoyed by our matter model) imply that the gradient $\nabla\abs{t}$ of the area function is a timelike vector field (cf.~\cite{Rendall-book} and the references cited therein). We choose the sign of~$t$ such that $\nabla t$ is future-oriented, thus positive~$t$ and negative~$t$ correspond to future-expanding and future-contracting spacetimes, respectively.
\paragraph*{\bf Weakly regular, $\Tbb^2$ symmetric initial data sets and Cauchy developments.}
In this context, an initial data set denoted by $(g_0, k_0, \mu_0, v_0)$ consists of data that are defined on the torus $\Tbb^3$, are invariant under a $\Tbb^2$-action and, for simplicity, have constant $\Tbb^2$ area. Likewise, a solution $(g,\mu,u)$ to the Einstein equation is a Lorentzian metric~$g$, a scalar field $\mu\geq 0$ and a unit vector field~$u$, invariant under $\Tbb^2$~symmetry.
Here, $\mu_0 \geq 0$ may vanish and $\mu_0 v_0$ is a spacelike vector tangent to the initial hypersurface, representing the (timelike) matter momentum vector $\mu u$ suitably projected on this hypersurface. When the density vanishes, the value of the velocity vector~$u$ is irrelevant, and similarly for~$v_0$.
We can view the solution to the Einstein equation as a $\Tbb^2$ symmetric flow on the torus $\Tbb^3$ consisting of a Riemannian $3$-metric $g(t)$, a two-tensor $k(t)$ (representing the second fundamental form of each spatial slice),
a lapse function $N(t)= \Omega^{-1}(t)$, a scalar field $\mu(t) \geq 0$, and a vector field $v(t)$ (representing the projection of the physical velocity field).
In short, $k(t)$~as well as the first-order time and space derivatives of~$g(t)$ are square-integrable on spacelike slices, while the mass-energy density~$\mu(t)$ is integrable and the momentum per particle parallel to the $\Tbb^2$~orbits of symmetry, as well as the lapse function, obey sup-norm bounds.
At this level of regularity, the Einstein curvature is defined in the sense of distributions, only.
\section{Main results}
\paragraph*{\bf Global geometry of self-gravitating matter.}
For clarity in the presentation, we state a simplified version of our results and refer to \cite{LeFlochLeFloch-4} for more general statements. Our results may apply to fluids governed by pressure laws satisfying the natural hyperbolicity condition~\eqref{hyperbolic-eos} but, for simplicity, we assume that the flow is isothermal, in the sense that its pressure $p(\mu) = k^2 \mu$ depends linearly upon its mass-energy density, where $k \in (0,1)$ represents the speed of sound and does not exceed the speed of light (normalized to unit).
Since the Euler equations are not reversible in time,
suitable entropy inequalities are also imposed on the fluid variables, and it is natural to distinguish between two classes of initial data sets, that is, future-expanding and future-contracting spacetimes, respectively.
\
\begin{theorem}[The global future evolution problem for $\Tbb^2$ areal flows]
\label{thm:1.1}
Consider the initial value problem for the Einstein-Euler equations \eqref{eq:44}--\eqref{eq:46}
for vacuum spacetimes or isothermal fluids,
when the initial data set
$(g_0, k_0, \mu_0, v_0)$ is assumed to enjoy $\Tbb^2$ symmetry on $3$-torus topology $\Tbb^3$
and, moreover, enjoys weak regularity with finite total energy in the sense of Definition~\ref{weakdefinitionT2}
and has a constant area $\abs{t_0}$ (the sign of $t_0\neq 0$ being given below) of either future-contracting or future-expanding type. Suppose that a suitably rescaled component of the fluid momentum parallel to the orbit of symmetry is initially bounded.
Then, the future Cauchy development of this initial data set is a weak solution $(\Mcal,g,\mu,u)$
of the Einstein-Euler equations with finite total energy.
This solution can be seen as a $\Tbb^2$ areal flow on $\Tbb^3$,
endowed with a foliation by the time function $t\colon\Mcal \simeq I \times \Tbb^3 \to I=[t_0, t_*) \subset \RR$ whose leaves each have topology~$\Tbb^3$ and enjoy $\Tbb^2$~symmetry with orbits of area~$\abs{t}$,
such that:
\bei
\item {\bf Future-expanding regime.} One has $t_0>0$ and $t_* = \infty$ and the spacetime foliation extends until
the volume of the $\Tbb^3$ slices and the area of the $\Tbb^2$ orbits reaches infinity.
\item {\bf Future-contracting regime.} One has $t_0<0$ and the spacetime foliation extends
until an areal time $t_* \in (t_0, 0]$ such that the volume of $\Tbb^3$~slices tends to zero at $t_*$.
\bei
\item Either $t_*<0$, in which case the length of the $\Tbb^3/\Tbb^2$ quotient reaches zero on the future boundary.
\item Or $t_* = 0$, that is, the area of the $\Tbb^2$ orbits approaches zero on the future boundary.
\eei
Furthermore, when the spacetime is vacuum or enjoys Gowdy symmetry, the second case $t_*=0$ generically holds true.
\eei
\end{theorem}
\paragraph*{\bf A nonlinear stability and instability theory for the Einstein equations.}
Our second main result concerns sequences of spacetimes. The statement below is relevant (and new) even for vacuum spacetimes and generalizes the study of vacuum Gowdy spacetimes in Le~Floch and LeFloch \cite{LeFlochLeFloch-1}. The notions of relaxed areal flow with finite energy and corrector stress tensor are defined precisely below.
\
\begin{theorem}[The global stability problem for $\Tbb^2$ areal flows]
\label{thm:stability}
Consider a sequence of $\Tbb^2$ symmetric initial data sets $(g_0^n, k_0^n, \mu_0^n, v_0^n)$ (for $n=1, 2, \ldots$) defined on $\Tbb^3$, with the same initial areal time~$t_0$,
and assume their natural norm (cf.~Definition~\ref{weakdefinitionT2}) is uniformly bounded with respect to~$n$, while their volume is uniformly bounded below.
By Theorem~\ref{thm:1.1} the corresponding areal flows $(g^n,\mu^n,u^n)$ are defined on time intervals $[t_0,t_*^n)$.
Up to extracting a subsequence, the maximal time of existence~$t_*^n$ has a limit~$t_*^\infty>t_0$ and the areal flows converge weakly (in a natural norm) to a limit $(g^\infty, \mu^\infty, u^\infty)$ on $[t_0,t_*^\infty)$.
\bei
\item {\bf Nonlinear stability for well-prepared initial data sets.}
If the initial data set $(g_0^n, k_0^n, \mu_0^n, v_0^n)$ converges strongly in the natural norm, the limit $(g^\infty, \mu^\infty, u^\infty)$ satisfies the Einstein-Euler equations \eqref{eq:44}--\eqref{eq:46}.
\item {\bf Nonlinear instability for general initial data sets.}
The limit $(g^\infty, \mu^\infty, u^\infty)$ is a {\sl relaxed areal flow with finite energy} (cf.~Definition~\ref{weakdefinitionT2-deux}), in the sense that it
satisfies an extension of the Einstein-Euler system, obtained by adding to $T_{\alpha\beta}$ a symmetric traceless {\sl corrector stress tensor}~$\tau_{\alpha\beta}$ which is orthogonal to the $\Tbb^2$~orbits of symmetry, is divergence-free and is a bounded measure.
\eei
\end{theorem}
\
This theorem exhibits a phenomenon in Einstein's field equations (which occurs even in vacuum spacetimes), that is, the appearance of spurious matter terms associated with the propagating gravitational degrees of freedom.
Christodoulou discovered that singularities in spherically symmetric spacetimes arise at the center of symmetry and do not propagate. On the other hand, in $\Tbb^2$ symmetry, singularities do propagate and require an evolution system of their own.
Our theorem is a realization of Einstein's intuition that matter could arise as {\sl singularities of the gravitational field.} We refer the reader to the historical discussion in Kiessling and Tahvildar-Zadeh \cite{KiesslingTahvildar} and the references therein.
\section{Methodology}
\paragraph*{\bf Admissible coordinates.}
In the so-called areal gauge, any $\Tbb^2$ symmetric spacetime metric $g$ on the torus $\Tbb^3$ can be put in the form
\bel{metric:areal}
g = \Omega^2 (- dt^2 + a^{-4} dx^2)
+ \abs{t} e^P \bigl( dy + Q\,dz + (G+QH) dx\bigr)^2
+ \abs{t} e^{-P} (dz + H\,dx)^2,
\ee
where the metric coefficients $P,Q,\Omega, a, G, H$ with $\Omega,a>0$ are functions of $t \in I$ and $x \in \Tbb^1 \simeq [0,1]$, only, while the remaining ``transverse'' variables $y,z$ describe $\Tbb^2 \simeq [0,1]^2$.
The metric induced on a $\Tbb^2$-orbit parametrized by $(y,z)$ is
$\abs{t} e^P(dy + Q \, dz)^2 + \abs{t} e^{-P} dz^2
= \abs{t} e^P\,|dy + (Q+ie^{-P}) dz|^2$, which has area~$\abs{t}$ and modular parameter $Q+ie^{-P}$. Our general analysis can be specialized to two cases of interest:
\bei
\item {\sl Gowdy-symmetric spacetimes} are characterized by the condition $G=H=0$ (everywhere). Geometrically, this is equivalent to the vanishing of the so-called ``twist variables'' ($K_2, K_3$ introduced below).
\item {\sl Vacuum $\Tbb^2$ symmetric spacetimes} are defined by taking $\mu = 0$ (everywhere), in which case the Euler equations are automatically satisfied for any arbitrary (and irrelevant) velocity field~$u$.
\eei
In order to express the Einstein and Euler equations for the metric~\eqref{metric:areal}, some attention is required to arrive at a tractable system in which the nonlinearities can be analyzed.
We introduce the orthonormal moving frame
\bel{eq:ourframe}
\aligned
& e_0 \coloneqq \Omega^{-1} \del_t, \
&& e_1 \coloneqq \Omega^{-1} a^2 (\del_x - G\del_y - H\del_z), \
&& e_2 \coloneqq | t |^{-1/2} e^{-P/2} \del_y, \
&& e_3 \coloneqq | t |^{-1/2} e^{P/2} (\del_z - Q \del_y),
\endaligned
\ee
normalized by $g(e_m,e_n)=\eta_{mn}$ with $\eta=\operatorname{diag}(-1,1,1,1)$.
The spin connection components are then essentially ($a/2\Omega$ times) the variables
\be
\Pbb \coloneqq (P_0, P_1),
\quad
\Qbb \coloneqq (Q_0, Q_1),
\quad
\Kbb \coloneqq (K_2, K_3),
\ee
where
\bel{eq:definevar}
\aligned
P_0 & \coloneqq a^{-1} P_t,
\qquad
& Q_0 & \coloneqq a^{-1} e^P Q_t,
\qquad
& K_2 & \coloneqq \Omega^{-1} a\, | t |^{1/2} e^{P/2} (G_t+QH_t),
\\
P_1 & \coloneqq a\, P_x,
\qquad
& Q_1 & \coloneqq a\, e^P Q_x,
\qquad
& K_3 & \coloneqq \Omega^{-1} a\, | t |^{1/2} e^{-P/2} H_t.
\endaligned
\ee
These are our main metric variables. On the other hand, we parametrize the matter content by a momentum vector field $\Jbb=\Omega a^{-1} (2(1+k^2)\mu)^{1/2}u$, where $k$ denotes the sound speed. We denote its components along the frame~$(e_m)$ by $\Jbb=(\Jperp, \Jpar) = ( J_0, J_1, J_2, J_3)$, so that $\Jperp$ is orthogonal to the orbits of $\Tbb^2$ symmetry while $\Jpar$ is parallel to them. We also define
$\Jhatpar\coloneqq\abs{a^2\Jbb\cdot\Jbb}^{-q/2}a\Jpar\sim\abs{\Omega^2 \mu}^{(1-q)/2}u^\parallel$, which physically measures the parallel momentum per particle with $q=(1-k^2)/(1+k^2)$.
\paragraph*{\bf The notion of $\Tbb^2$ areal flows.}
We rely here on the notation $\Pbb, \Qbb, \Jperp$ and $\Kbb, \Jhatpar$ above.
\
\begin{definition}
\label{weakdefinitionT2}
Consider the Einstein-Euler equations with $\Tbb^2$~symmetry on $\Tbb^3$~spacelike slices when the metric is expressed in areal gauge with $t \in I$ describing a compact interval (not containing $0$). Then, a flow of Lebesgue measurable functions
$t \in I \mapsto (\Pbb, \Qbb, \Jperp, \Kbb, \Jpar)(t)$ together with two real-valued functions $a, \Omega$, defined over a compact spacetime domain $I \times \Tbb^1$
is called a {\bf $\Tbb^2$ areal flow on $\Tbb^3$} (or simply a weak solution) provided:
(1) The following admissibility inequalities hold:
\bel{eq:admis}
J_0 \geq 0,
\qquad
J_0^2 \geq J_1^2 + J_2^2 + J_3^2,
\qquad
a >0,
\qquad
\Omega > 0.
\ee
(2) The following integrability and boundedness conditions hold:
\bel{eq:regul}
\aligned
\Pbb, \Qbb, \Jperp, a^{-1} & \in L^\infty(I, L^2(\Tbb^1)),
\qquad
\Kbb, \Jhatpar, a, \log \Omega \in L^\infty(I \times \Tbb^1).
\endaligned
\ee
(3) The Einstein equations hold in the sense of distributions.
\end{definition}
\
More generally, for a flow defined on a half-open interval $I=[t_0, t_1)$ we require that the integrability and boundedness conditions hold on every compact subinterval. We validate the above definition by establishing the following statements.
\
\begin{proposition}
\label{weakdefinitionT2-2-propo}
Under the conditions in Definition~\ref{weakdefinitionT2}, the following properties hold.
\bei
\item Each Einstein equation $G_{mn}=8\pi T_{mn}$, for frame indices $m,n=0,1,2,3$, admits a rescaling by powers of $(\Omega, a)$ that is meaningful in the sense of distributions.
\item The Euler equations are meaningful in the sense of distributions and are satisfied as a consequence of the Einstein equations.
\item The constraint equations enjoy the propagation property: if they hold on a hypersurface of constant time, then they hold within the time interval $I$.
\eei
\end{proposition}
\paragraph*{\bf The notion of relaxed areal flows.}
The previous notion of areal flow is now generalized in order to accommodate {\sl limits of sequences} of solutions to the Einstein equations. All of Einstein's equations hold for these limits, except for the evolution and constraint equations involving the conformal factor~$\Omega$, in which additional contributions arise that are {\sl bounded measures,}
in contrast to the components of the standard matter tensor which are {\sl integrable functions.}
As we discover, a spurious matter tensor arises which originates in possible oscillations of the geometry itself. The modified Einstein equations are obtained by adding to the energy-momentum tensor
a {\bf corrector stress tensor}, as we propose to call it, whose components are bounded measures.
This tensor $\tau=(\tau_{mn})$ is symmetric, traceless, orthogonal to $\Tbb^2$~orbits (in the sense that $\tau_{m2}=\tau_{m3}=0$), satisfies also the positivity condition in
\bel{eq-posi}
\tau_{00} = \tau_{11}, \quad
\tau_{01} = \tau_{10},
\qquad
|\tau_{01} | \leq \tau_{00},
\ee
and is {\sl divergence free}
\bel{eq:Twave}
\begin{aligned}
\dive_{a} \big( t \, a \, \tau_{0 \bullet} \big)
=
\dive_{a} \big( t \, a \, \tau_{1 \bullet} \big)
& = 0 .
\end{aligned}
\ee
Here, for a vector field $X = (X_0, X_1)$ we have introduced the notation
$\divt(X_\bullet) \coloneqq - ( a^{-1} X_0 )_t + ( a\, X_1 )_x$,
and we check that all the terms involving the measures~$\tau_{mn}$ are products of continuous functions with measures or are more regular.
We refer to the system consisting of Einstein equations with the matter term augmented with the
tensor $\tau_{mn}$ as the {\bf relaxed Einstein-Euler system.}
\
\begin{definition}
\label{weakdefinitionT2-deux}
Consider the relaxed Einstein-Euler system with $\Tbb^2$~symmetry on $\Tbb^3$~spacelike slices when the metric is expressed in areal gauge, with $t \in I$ describing a compact interval (not containing $0$). Then, a flow of Lebesgue measurable functions
$t \in I \mapsto (\Pbb, \Qbb, \Jperp, \Kbb, \Jpar)(t)$
together with two real-valued functions $a, \Omega$ and a corrector stress tensor $\tau_{mn}$, defined over a compact spacetime domain $I \times \Tbb^1$
is called a {\bf relaxed $\Tbb^2$ areal flow on $\Tbb^3$}, provided
(1) The admissibility inequalities \eqref{eq:admis} hold.
(2) The integrability and boundedness conditions \eqref{eq:regul} hold.
(3) The corrector stress tensor $\tau$ is a bounded measure and is symmetric, traceless, orthogonal to $\Tbb^2$~orbits, satisfies the positivity condition~\eqref{eq-posi} and is divergence free.
(4) The relaxed version of the Einstein equations holds in the sense of distributions.
\end{definition}
\paragraph*{\bf Weak convergence of null forms.}
By formulating Einstein's field equations as a system coupling nonlinear hyperbolic equations and generalized wave-map equations and relying on a divergence-curl structure that we uncover, we develop arguments of compensated compactness for weak solutions to the Einstein equations as well as for the Euler equations. The relevant structure is found in the following model (presented here in arbitrary spatial dimension ${N \geq 1}$): $\Box \phi_a = Q_a(\del\phi, \del \phi)$ in which $\Box$ is the wave operator in $\RR^{N+1}$ and $\phi = (\phi_a)_{a=1, \ldots, A}$ is the unknown, while each quadratic term
$Q_a(\del\phi, \del \phi)$ is a linear combination of the null forms $Q_{0}(\del \phi_a, \del \phi_b) = - \del_t \phi_a \del_t \phi_b + \sum_j \del_j \phi_a \del_j \phi_b$
and $Q_{jk}(\del \phi_a, \del \phi_b) = - \del_j \phi_a \del_k \phi_b + \del_k \phi_a \del_j \phi_b$
(with $1 \leq j,k \leq N$).
If $\phi^n$ is a sequence of solutions to $\Box \phi^n = Q(\del\phi^n, \del \phi^n)$
and this sequence is uniformly bounded in the $H^1$ norm and
the sequence of right-hand sides $Q(\del\phi^n, \del \phi^n)$ is $H^{-1}$ compact,
then its weak limit $\phi^\infty = \lim \phi^n$ is also a weak solution: $\Box \phi^\infty = Q(\del\phi^\infty, \del \phi^\infty)$.
Namely, under these conditions the div-curl lemma applies and shows that the null forms $Q(\del\phi^n, \del \phi^n)$ are stable under weak convergence, that is, converge to $Q(\del\phi^\infty, \del \phi^\infty)$.
In our setup, we prove that the sequence $Q(\del\phi^n, \del \phi^n)$ is bounded in the $L^2$ norm and therefore is $ H^{-1}$ compact. We refer to \cite{LeFlochLeFloch-1} for further details.
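For the reader's convenience, we record the standard computation behind this claim for the form $Q_0$ (a sketch in our notation). Setting $X^n \coloneqq (-\del_t \phi_a^n, \nabla \phi_a^n)$ and $Y^n \coloneqq (\del_t \phi_b^n, \nabla \phi_b^n)$, one has
$$
\operatorname{div}_{t,x} X^n = - \del_{tt} \phi_a^n + \Delta \phi_a^n,
\qquad
\operatorname{curl}_{t,x} Y^n = 0,
$$
the first quantity being (up to a sign convention) $\Box \phi_a^n = Q_a(\del\phi^n, \del \phi^n)$, hence bounded in $L^2$, while the second one vanishes since $Y^n$ is a spacetime gradient. On the other hand,
$$
X^n \cdot Y^n = - \del_t \phi_a^n \, \del_t \phi_b^n + \nabla \phi_a^n \cdot \nabla \phi_b^n = Q_0(\del \phi_a^n, \del \phi_b^n),
$$
so that the div-curl lemma applies to the pair $(X^n, Y^n)$ and yields the weak continuity of $Q_0$; the forms $Q_{jk}$ are handled by an analogous choice of divergence- and curl-controlled fields.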
\paragraph*{\bf Acknowledgments.}
Both authors gratefully acknowledge financial support from the Simons Center for Geometry and Physics, Stony Brook University, at which some of this research was performed. This paper was completed in the Fall 2019 when the second author (PLF) was visiting the Institut Mittag-Leffler for the Semester Program ``General Relativity, Geometry and Analysis: beyond the first 100 years after Einstein''.
\vspace*{-.15cm}
\bibliographystyle{plain}
\section{Introduction}
Black phosphorus (BP) is a layered material similar to graphite with
the atomic layers coupled by van der Waals interactions
\cite{YBZhang,Ye,StevenP,FengnianXia}. Few-layer
\cite{YBZhang,Ye,StevenP,FengnianXia,Buscema} and monolayer
\cite{Ye,Andres,Lu,xmwang} BP (termed as phosphorene) have been
fabricated experimentally, attracted intensive attentions due to
their unique electronic properties and potential applications in
nanoelectronics \cite{Xiling,Gomez,YBZhangPL}. Unlike graphene, BP
is a semiconductor possessing a direct band gap ranging from 0.3 eV
to 1.8 eV depending on the thicknesses of BP samples
\cite{xmwang,YBZhangPL}. The field-effect-transistor (FET) based on
phosphorene is found to have an on/off ratio of 10$^{3}$ and a
carrier mobility of 800 cm$^{2}$/V$\cdot $s \cite{Sherman}. Sizable
band gap and relatively high mobility in phosphorene bridge the gap
between graphene and transition metal dichalcogenides (TMDs), which
are important for electronics and optoelectronics
\cite{Xiling,Gomez,YBZhangPL}. Inside phosphorene, phosphorus atoms
are covalently bonded with three adjacent atoms to form a puckered
honeycomb structure due to the $sp^{3}$ hybridization \cite{Rodin}.
Arising from its low-symmetry and highly anisotropic structure, BP
exhibits strongly anisotropic electrical
\cite{Ye,xyzhou,xyzhouoptic,rzhang,xyzhougfactor}, optical
\cite{Tony2,Tran,YBZhangPL,xmwang} and transport \cite{Zhenhua}
properties.
The band structure of 2D phosphorene can be well described by a four
band tight-binding (TB) model \cite{Rudenko,Rudenkogap}. Tailoring
it into 1D nanoribbons offer us a way to tune its electronic and
optical properties due to the quantum confinement and unique edge
effects \cite{Tran,Carvalhopnr,hanxy,ezawa,Taghizadeh,Zahra}. The
band structure of phosphorene nanoribbons (PNRs) depends on the edge
configurations \cite{Tran,Carvalhopnr,hanxy,ezawa,Taghizadeh,Zahra}.
The armchair-edged PNRs (APNRs) are semiconductors with direct band
gap sensitively depending on the ribbon width with scaling law of
$1/N^2$ \cite{Tran,Taghizadeh,Zahra}, while the bare zigzag-edged
PNRs (ZPNRs) are metallic regardless of their ribbon width due to
the quasi-flat edge states \cite{Carvalhopnr,ezawa}. In bare ZPNRs,
the edge states are entirely detached from the bulk bands and
localized at the boundaries. These edge states result in a
relatively large density of states near the Fermi energy
\cite{ezawa,Longlong Zhang}. Further, the band structure of ZPNRs
can be effectively modified by tensile strain \cite{hanxy} or
electric field \cite{hanxy,ezawa,HGuo}. Very recently,
few-layer ZPNRs have been successfully synthesized in experiments
\cite{Paulmd,NakanishiAyumi}. To date, various interesting
properties have been predicted for ZPNRs, including those related to
transverse electric field controlled FET \cite{ezawa}, room
temperature magnetism and half metal phase \cite{wucj,yujia,Reny},
strain induced topological phase transition \cite{Sisakhtet}, and
symmetry dependent response to perpendicular electric fields
\cite{Zhoubl}, etc.
On the other hand, although there are already many research works on
2D phosphorene and its 1D ribbons, the analytical calculation on the
band structure of ZPNR is still lacking. Most of the previous works
on this issue are based on the first-principles calculation
\cite{Tran,Carvalhopnr,hanxy,HGuo} or numerical diagonalization
utilizing the TB model \cite{ezawa,Taghizadeh}. Moreover, little
attention has been paid to the optical properties of ZPNRs
\cite{Zahra,Sima}, and in particular the optical transition selection
rules in relation to the lattice symmetry and wavefunction parity are
not fully understood. Optical spectrum measurements are a fundamental
approach to detect and understand the crystal band structure, and
have been successfully performed for 2D phosphorene
\cite{YBZhangPL}. To this end, in this work we theoretically
investigate the optical properties of ZPNRs based on the TB model
and the Kubo formula. By solving the discrete Schr\"{o}dinger equation
analytically, we obtain the electronic structures of ZPNRs and
classify their eigenstates according to the crystal symmetry. We
then obtain the optical transition selection rules of ZPNRs directly
based on the symmetry analysis and the analytical expressions of the
optical transition matrix elements. When the incident light is
polarized along the ribbons (see Fig. 1), interestingly, we find
that the optical selection rules change significantly for a $N$-ZPNR
with even- or odd-$N$. In particular, for even-$N$ ZPNRs the
electronic wavefunction parity of the $n$th subband in the
conduction (valence) band is $(-1)^{n}[(-1)^{(n+1)}]$ due to the
$C_{2x}$ symmetry, and therefore their inter- (intra-) band
selection rule is $\Delta n$=$n-n'$=odd (even). For odd-$N$ ZPNRs
without $C_{2x}$ symmetry, in contrast, the optical transitions are
all possible among subbands. Further, the edge states of both even-
and odd-$N$ ZPNRs play an important role in the optical absorption.
Moreover, impurities or external electric field can break the
$C_{2x}$ symmetry of even-$N$ ZPNRs, which consequently enhances the
optical absorption.
The paper is organized as follows. Sec. II mainly presents
the analytical results. We first recall the numerical diagonalization
procedure to obtain the band structure of the system, followed by
detailed analytical calculations of the band structure with
particular attention to accurately capturing the edge bands. Then
the wavefunctions, the joint density of states and the optical
conductivity for ZPNRs are expressed. In Sec. III, we present some
numerical examples and discussions on the band structure and optical
absorptions of the ZPNRs. Finally, we summarize our results in Sec.
IV.
\begin{figure}
\includegraphics[width=0.48\textwidth, bb=0 44 827 590]{Fig1.eps}
\caption{Top view of an even-$N$ ZPNR, where the red (blue) spheres
represent phosphorus atoms in the upper (lower) sub-layer with
primitive vectors $|\bm{a}_1|=3.32$ \r{A} and $|\bm{a}_2|=4.38$
\r{A} of 2D phosphorene. The bond length between two adjacent atoms
is $a$=2.207 \r{A} with bond angle $\theta$=96.79$^{\circ}$. The
(black) dashed-line rectangles are the supercells adopted here for the
TB diagonalization. }
\end{figure}
\section{Electronic structure and optical properties}
\subsection{Numerical diagonalization on Hamiltonian}
The puckered honeycomb structure of phosphorene is shown in
Fig. 1, where the red and blue dots represent phosphorus atoms
in different sub-layers. There are four atoms in one unit cell with
the primitive vectors $|\bm{a}_1|=3.32$ \r{A} and $|\bm{a}_2|=4.38$
\r{A}. The bond length between two adjacent atoms is $a$=2.207 \r{A}
with bond angle $\theta$=96.79$^{\circ}$ \cite{Rudenko}. Tailoring
phosphorene into 1D nanoribbons along the zigzag direction leads
to ZPNRs. The length of the bond connecting different sub-layers is
2.207 $\r{A}$ and the layer spacing is $l=2.14$ \r{A}. The integers
1, 2, $\cdots$ $N$ describe the number of the zigzag atomic chains
of a ZPNR along its transversal direction. In the TB framework
\cite{Rudenko,Rudenkogap}, the Hamiltonian of phosphorene in the
presence of in-plane transverse and out-of-plane vertical electric
fields as well as impurities can be generically written as
\begin{equation}
H=\sum\limits_{<i,j>}t_{ij}c_{i}^{\dag
}c_{j}+\sum_{i}(\frac{1}{2}eE_{v}l\mu_{i}+eE_{t}y_i+U_i)c_{i}^{\dag }c_{i},
\end{equation}
where the summation $\langle i,j\rangle$ runs over all neighboring
atomic sites with hopping integrals $t_{ij}$, and $c^{\dag}_{i}$
($c_j$) is the creation (annihilation) operator for atom site
$i$($j$). It has been shown that five hopping parameters (see Fig.
1) are enough to describe the electronic band structure of
phosphorene \cite{Rudenko} with hopping energies $t_1$=$-$1.22 eV,
$t_2$=3.665 eV, $t_3$=$-$0.205 eV, $t_4$=$-$0.105 eV, and
$t_5$=$-$0.055 eV. A uniform vertical electric field $E_v$ will
result in a staggered potential $elE_v$ between the upper
($\mu_i$=1) and lower ($\mu_i=-1$) sublayers due to the puckered
structure \cite{Zhoubl, R.Ma}. Applying a transverse electric field
$E_t$ will shift the on-site energy by $eE_{t}y_i$ with $y_i$ the
atomic coordinate in the $y$-direction, and $U_i$ is the impurity
potential.
For a $N$-ZPNR with the number of zigzag chains $N$ across the
width, by applying Bloch's theorem the TB Hamiltonian in the
momentum space is \cite{Datta}
\begin{equation}
H=H_{00}+H_{01} e^{ik_x a_1}+H_{01}^{\dag}e^{-ik_x a_1},
\end{equation}
where the Hamiltonian $H_{00}$ ($H_{01}$) describes the intra- (inter-)
supercell [see the (black) dashed-line rectangles in Fig. 1]
interactions, $k_x$ is the wavevector along the $x$-direction. In
our calculation, we accordingly choose the basis ordered as
($|1A\rangle,|1B\rangle,|2A\rangle,|2B\rangle,\cdots |mA\rangle,
|mB\rangle,\cdots |NA\rangle,|NB\rangle)^T$ to write down $H_{00}$
and $H_{01}$ in the form of $2N\times2N$ matrices for the adopted
supercell. Then, we can obtain the energy spectrum $E_{n,k_x}$ and the
corresponding wavefunction $|n,k_x\rangle$ for the system by
numerical diagonalization. In real space, the wavefunction can be
formally expressed as
\begin{equation}
\psi_{n,k_x}(\bm{r})=\sum_{m=1}^{N}\sum_{i=A,B}\frac{e^{ik_xx_{m,i}}}{\sqrt{L_x}}
\frac{c_{m,i}}{\sqrt{2N\pi}\alpha}e^{-\frac{(\bm{r}-\bm{R}_{m,i})^2}{\alpha}},
\end{equation}
where $\bm{r}$=$(x,y)$ is the electron coordinate,
$\bm{R}_{m,i}$=$(x_{m,i} ,y_{m,i})$ is the atomic position vector,
$\{c_{m,i}\}^T$= $[c_{1A},c_{1B},c_{2A},c_{2B},\cdots
c_{NA},c_{NB}]^T$ is the eigenvector of the Hamiltonian matrix in
Eq. (2) with the transpose operator $T$, and $\alpha$ is a Gauss
broadening parameter. To date, the band structure of ZPNRs is well
understood by the first-principles calculations
\cite{Tran,Carvalhopnr,hanxy,HGuo} or numerical TB calculations
\cite{ezawa,Taghizadeh}. For comparison here, the band structure of
our numerical diagonalization for a 10-ZPNR is shown by the (black)
solid lines in Fig. 2(a), which is in good agreement with the
existing results \cite{ezawa,Taghizadeh}. We note that there is a
little difference compared with that of the first-principles
calculation \cite{Tran,Carvalhopnr,hanxy,HGuo} due to the relaxation
of the edge atoms. Considering the limitations of the
first-principles calculations, the TB model can be applied to study
ZPNRs with large widths. More importantly, we give the analytical
solutions for the electronic states and optical transitions in ZPNRs
of arbitrary width. In comparison with the previous
numerical calculations \cite{Tran,
ezawa,Taghizadeh,Carvalhopnr,hanxy,HGuo}, the analytical results are
more convenient for further understanding the electronic properties
of ZPNRs, e.g., identifying the subband symmetry and
calculating the optical absorption. We present the
analytical calculation of the band structure of ZPNRs in the next
subsection.
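As a concrete illustration of this numerical procedure, the band
structure can be obtained with a few lines of Python (a minimal
sketch, not the code used in this work; \texttt{H00} and \texttt{H01}
are assumed to be built from the five hoppings in the basis ordering
given above):
\begin{verbatim}
# Sketch: band structure from Eq. (2), assuming H00 (Hermitian)
# and H01 are the 2N x 2N supercell blocks built from t1..t5.
import numpy as np

A1 = 3.32  # lattice constant along x (angstrom)

def bands(H00, H01, kxs):
    E = np.empty((H00.shape[0], len(kxs)))
    for j, kx in enumerate(kxs):
        Hk = (H00 + H01 * np.exp(1j * kx * A1)
              + H01.conj().T * np.exp(-1j * kx * A1))
        E[:, j] = np.linalg.eigvalsh(Hk)  # sorted real eigenvalues
    return E
\end{verbatim}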
\subsection{Analytical calculation on electronic structure}
In this subsection, we first outline a scheme to obtain the
analytical energy spectrum of ZPNRs by solving the TB model
directly. Within the TB approximation, the discrete
Schr\"{o}dinger equation for a $N$-ZPNR is
\begin{equation}
\begin{aligned}
E\phi_{A}(m) & =t_{1}g_{k}\phi_{B}(m)+t_{2}\phi_{B}(m-1)+t_{3}g_{k}\phi_{B}(m-2)\\
&+t_{4}g_{k}[\phi_{A}(m-1)+\phi_{A}(m+1)]+t_{5}\phi_{B}(m+1),\\
E\phi_{B}(m) & =t_{1}g_{k}\phi_{A}(m)+t_{2}\phi_{A}(m+1)+t_{3}g_{k}\phi_{A}(m+2)\\
&+t_{4}g_{k}[\phi_{B}(m-1)+\phi_{B}(m+1)]+t_{5}\phi_{A}(m-1).%
\end{aligned}
\end{equation}
where $g_{k}$=$2\text{cos}(k_{x}a_{1}/2)$, $\phi_{A/B}(m)$ is the
wavefunction of the $m$th A/B atom, and the site index $m=0, 1, 2,
\cdots N+1$. Since the $0$B and $(N+1)$A sites are missing,
we naturally have the hard-wall boundary condition for ZPNRs as
\begin{equation}
\begin{aligned}
\phi_{B}(0)=\phi_{A}(N+1)=0.
\end{aligned}
\end{equation}
According to the Bloch theorem, the generic solutions for
$\phi_{A}(m)$ and $\phi_{B}(m)$ can be written as
\begin{equation}
\begin{aligned}
\phi_{A}(m)=Ae^{ipm}+Be^{-ipm},\;\; \phi_{B}(m)=Ce^{ipm}+De^{-ipm},
\end{aligned}
\end{equation}
where $A$, $B$, $C$ and $D$ are arbitrary coefficients and $p$ is the
wavenumber in the transverse direction, which is determined by the
Schr\"{o}dinger equation combined with the boundary condition.
Substituting Eq. (6) into (5), the wavefunction $\phi_{A/B}(m)$ can
be simplified as
\begin{equation}
\begin{aligned}
\phi_{A}(m) & =A(e^{ipm}-z^{2}e^{-ipm})=A\varphi_{A}(p,m),\\
\phi_{B}(m) & =C(e^{ipm}-e^{-ipm})=C\varphi_{B}(p,m),
\end{aligned}
\end{equation}
where $z=e^{ip(N+1)}$. Meanwhile, substituting Eq. (7) into (4), we
obtain a matrix equation
\begin{equation}
M\left(
\begin{array}
[c]{c}%
A\\
C
\end{array}
\right)=0,
\end{equation}
where M is a 2$\times$2 matrix with elements
\begin{equation*}
\begin{aligned}
M_{11}=&E\varphi_{A}(p,m)-g_{k}t_{4}[\varphi_{A}(p,m-1)+\varphi_{A}(p,m+1)],\\
M_{12}=&-[t_{1}g_{k}\varphi_{B}(p,m)+t_{2}\varphi_{B}(p,m-1)\\
&+t_{5}\varphi_{B}(p,m+1)+t_{3}g_{k}\varphi_{B}(p,m-2)],\\
M_{21}=&-[t_{1}g_{k}\varphi_{A}(p,m)+t_{2}\varphi_{A}(p,m+1)\\
&+t_{5}\varphi_{A}(p,m-1)+t_{3}g_{k}\varphi_{A}(p,m+2)],\\
M_{22}=&E\varphi_{B}(p,m)-g_{k}t_{4}[\varphi_{B}(p,m-1)+\varphi_{B}(p,m+1)].
\end{aligned}
\end{equation*}
The condition for nontrivial solutions of $A$ and $C$ in Eq. (8),
namely $[A, C]^T\neq0$, is det($M$)=0. However, it is worth noting
that the solutions $p=0$ and $\pm\pi$ should be excluded as
unphysical because these values of $p$ yield
$\phi_{A/B}(m)$=0 [see Eq. (7)] for arbitrary $m$. In other words,
electrons are absent from the system in these cases. Therefore,
we should find solutions that satisfy
det($M$)=0 for arbitrary $m$ except $p=0$, $\pm\pi$. After some
arithmetic, we find that the equation det($M$)=0 yields the
following equation
\begin{equation}
ve^{i2pm}+we^{-i2pm}+\xi=0,
\end{equation}
where $v$, $w$ and $\xi$ are functions of $E$, $k_x$ and $p$.
Generally, Eq. (9) should be valid for arbitrary $m$. Thus, the two
coefficients ($v$ and $w$) of $e^{\pm i2pm}$ and the constant term
$\xi$ should be zero. We then obtain the energy spectrum for ZPNR as
\begin{equation}
\begin{aligned}
E=2g_{k}t_{4}\cos(p)\pm\left\vert t_{1}g_{k}+t_{2}e^{ip}+t_{5}e^{-ip}%
+t_{3}g_{k}e^{2ip}\right\vert,
\end{aligned}
\end{equation}
where $\pm$ represent the conduction and valence bands,
respectively.
On the other hand, from $\xi$=0, we find a transcendental equation
for the transverse wavevector $p$ which can be determined by
\begin{equation}
\begin{aligned}
F(p,N,k)=&t_{1}g_{k}\sin[p(N+1)]+t_{2}\sin(pN)\\
&+t_{3}g_{k}\sin[p(N-1)]+t_{5}\sin[p(N+2)]=0.
\end{aligned}
\end{equation}
This equation implies that the transverse wavenumber $p=p(k_{x},N)$
depends not only on the ribbon width $N$ but also on the
longitudinal wavenumber $k_{x}$. Obviously, we have
$F(p,N,k)=-F(-p,N,k)$, which means that Eq. (11) defines the same
subbands for $p\in(-\pi,0)$ and $p\in(0,\pi)$. Hence, we can simply
find the solutions of $p$ from Eq. (11) in the latter interval. If
$t_1$=$t_2$ and $t_3=t_5=0$, Eq. (11) reduces to the transcendental
equation for a zigzag-edged graphene nanoribbon (ZGNR) case
\cite{Katsunori,Saroka}. Similar to that in a $N$-ZGNR, there are
only $N$-$1$ nonequivalent solutions of Eq. (11) for $p\in(0,\pi)$,
which define $2N$-$2$ subbands, namely the bulk states of a ZPNR.
Notably, the two edge states are naturally missing from the scheme
here since their transverse wavevectors are purely imaginary rather
than of the real form assumed in Eq. (6). Fortunately, we can restore
them by setting $p=i\beta$ and repeating the procedure above to
obtain the eigenenergy and the corresponding transcendental
equation. In this case, the eigenenergy in Eq. (10) can be rewritten as
\begin{equation}
\begin{aligned}
E&=g_{k}t_{4}(e^{\beta}+e^{-\beta})\pm\sqrt{f(\beta)f(-\beta)},
\end{aligned}
\end{equation}
where
$f(\beta)=t_{1}g_{k}+t_{2}e^{\beta}+t_{3}g_{k}e^{2\beta}+t_{5}e^{-\beta}$,
with the corresponding transcendental equation expressed by
\begin{equation}
\begin{aligned}
G(\beta,N,k)&=t_{1}g_{k}\sinh[\beta (N+1)]+t_{2}\sinh(\beta N)\\
&+t_{3}g_{k}\sinh[\beta (N-1)]+t_{5}\sinh[\beta(N+2)]=0,
\end{aligned}
\end{equation}
where $\sinh(x)$ is the hyperbolic sine function. Obviously, we have
$G(\beta,N,k)$=$-G(-\beta,N,k)$, which means we only need to find
the solutions of $\beta$ for the $\beta>0$ case.
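For illustration, the roots of Eq. (11) can be located numerically by
bracketing the sign changes of $F$ on a fine grid and refining with
Brent's method; the analogous procedure applies to Eq. (13) for $G$
with $\beta>0$. A minimal sketch (the grid density is an arbitrary
choice, not a derived bound):
\begin{verbatim}
# Sketch: the N-1 bulk wavenumbers p in (0, pi) from Eq. (11).
import numpy as np
from scipy.optimize import brentq

T1, T2, T3, T5 = -1.22, 3.665, -0.205, -0.055  # eV
A1 = 3.32

def F(p, N, kx):
    gk = 2.0 * np.cos(kx * A1 / 2.0)
    return (T1 * gk * np.sin(p * (N + 1)) + T2 * np.sin(p * N)
            + T3 * gk * np.sin(p * (N - 1)) + T5 * np.sin(p * (N + 2)))

def bulk_p(N, kx, ngrid=20001):
    p = np.linspace(1e-6, np.pi - 1e-6, ngrid)
    v = F(p, N, kx)
    brackets = np.where(v[:-1] * v[1:] < 0)[0]
    return [brentq(F, p[i], p[i + 1], args=(N, kx)) for i in brackets]
\end{verbatim}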
Therefore, according to Eq. (10), we can obtain the band structure
of ZPNRs by this analytical approach. We present an example of bulk
band structure in Fig. 2(a), where the (red) dashed lines describe
the analytical result for the 10-ZPNR, which exactly matches our
previous numerical one [see the (black) solid lines] obtained
starting from Eq. (2). In addition, the edge bands of 10-, 15- and
30-ZPNRs are also shown in Figs. 2(b-d), where the (black) solid and
(blue) dash-dotted lines represent the numerical and analytical
results, respectively. Unfortunately, we can see that the analytical
results for the edge states given by Eq. (12) are not consistent with
the numerical ones. This discrepancy was also revealed in a recent
work \cite{M. Aminic}. We believe that the discrepancy mainly
originates from the hopping links $t_3$, $t_4$ and $t_5$, whose
hopping distances extend beyond a single zigzag chain (see Fig. 1). This
makes the discrete Schr\"{o}dinger equation (4) invalid for the edge
atoms, namely $m$ equal to 1 or $N$. To resolve this problem, one
option is to choose four atoms to write down Eq. (4) and double the
number of boundary conditions in Eq. (5). But this method unavoidably
enlarges the dimension of the matrix $M$ and makes the problem
quite complicated and difficult to solve.
Here, we propose an efficient way to eliminate this
discrepancy by simply adding a correction term. Generally, the two
edge states can be described by a 2$\times$2 matrix Hamiltonian as
\begin{equation}
H_{edge}=\left(
\begin{array}
[c]{cc}%
h_0 & h_{c}\\
h_{c}^{\ast} & h_0
\end{array}
\right),
\end{equation}
where $h_0$ describes the two degenerate edge states when the ribbon
width $N$ is large, and $h_c$ describes the coupling between the
edge states for small $N$. Based on this argument, according to Eq.
(12), we have $h_0=2t_{4}g_{k}\cosh(\beta)$ and
$h_c=\sqrt{f(\beta)f(-\beta)}$. The band structure of the edge
states in 10-, 15- and 30-ZPNR given by Eq. (12) are presented by
the (blue) dash-dotted lines in Figs. 2(b-d), respectively. From
these figures, we find that $h_c$ is finite for a narrow ribbon, as
shown in Fig. 2(b) for the 10-ZPNR, but it vanishes for wider ones
[e.g., Fig. 2(d) for the 30-ZPNR]. This means that $h_c$ is suitable to
describe the coupling between the edge states. However, there is an
observable discrepancy between the analytical results [Eq. (12)]
shown by the (blue) dash-dotted lines and the numerical ones [the
(black) solid lines] in Figs. 2(b-d). This implies that $h_0$ is
insufficient to describe the edge states and needs a correction in
order to describe the edge bands accurately. Naturally, we can assume that
the correction term is a superposition of the energy terms caused by
the hopping links ($t_3$, $t_4$, $t_5$) beyond one zigzag chain, which
is expressed as
\begin{equation}
\begin{aligned}
h'=&\sum_{s=+,-}(b_3^st_3g_ke^{s2\beta}+b_4^st_4g_ke^{s\beta}+b_5^st_5e^{s\beta}),
\end{aligned}
\end{equation}
where the coefficients $b_3^+=0.01783$, $b_3^-=0.5739$,
$b_4^{\pm}=-1.419$, $b_5^+=-0.1345$ and $b_5^-=8.735$. Then the
energy of edge states can be written as
\begin{equation}
\begin{aligned}
E=h_0+h'\pm h_c.
\end{aligned}
\end{equation}
The analytical edge states expressed by Eq. (16) are also depicted
by the (red) dashed lines in Figs. 2(b-d). Comparing them with the
numerical data [the (black) solid lines], we find that they are in
excellent agreement with each other regardless of the ribbon width,
which indicates that our method is valid and reliable.
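A minimal sketch of the corrected edge bands, Eq. (16), reads as
follows (with $\beta$, obtained from Eq. (13), supplied by the caller,
and using the fitted coefficients quoted above):
\begin{verbatim}
# Sketch: edge-band energies E = h0 + h' +/- hc of Eq. (16).
import numpy as np

T1, T2, T3, T4, T5 = -1.22, 3.665, -0.205, -0.105, -0.055  # eV
A1 = 3.32
B3P, B3M, B4, B5P, B5M = 0.01783, 0.5739, -1.419, -0.1345, 8.735

def f(beta, gk):
    return T1*gk + T2*np.exp(beta) + T3*gk*np.exp(2*beta) + T5*np.exp(-beta)

def edge_bands(beta, kx):
    gk = 2.0 * np.cos(kx * A1 / 2.0)
    h0 = 2.0 * T4 * gk * np.cosh(beta)
    hc = np.sqrt(f(beta, gk) * f(-beta, gk))  # real for edge states
    hp = (B3P*T3*gk*np.exp(2*beta) + B3M*T3*gk*np.exp(-2*beta)
          + B4*T4*gk*(np.exp(beta) + np.exp(-beta))
          + B5P*T5*np.exp(beta) + B5M*T5*np.exp(-beta))
    return h0 + hp - hc, h0 + hp + hc
\end{verbatim}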
\begin{figure}
\includegraphics[width=0.43\textwidth,bb=75 6 697 525]{Fig2.eps}
\caption{(a) The band structure of a bare 10-ZPNR, where the
(red/black) dashed/solid lines represent the analytical/numerical
results. The scale-enlarged edge bands for (b) 10-ZPNR, (c) 15-ZPNR
and (d) 30-ZPNR with the comparison of analytical and numerical
results, where the [red (blue)] dashed (dash-dotted) lines represent
the analytical result for the edge bands with (without) the
correction term $h'$. }
\end{figure}
On the other hand, for the wavefunction (3), owing to the
translational invariance along the $x$-direction we can rewrite it
in another generic form
\begin{eqnarray}
\psi_{n,k_x}(r)&=&\sum_{m=1}^{N}\left(
\begin{array}
[c]{cc}%
C_A\varphi_{A}(p,m)e^{ik_{x}x_{m,A}} \\
C_B\varphi_{B}(p,m)e^{ik_{x}x_{m,B}}
\end{array}
\right),
\end{eqnarray}
where $C_{A}$ and $C_{B}$ are normalization coefficients, $N$ is the
number of A and B atoms in a super-cell of a ZPNR, and
$x_{m,A/B}$ is the $x$-coordinate of the $m$th $A/B$ atom. Notably,
Eq. (17) is the wavefunction for the bulk states of ZPNRs. For edge
states, we should replace the transversal wavevector $p$ by
$i\beta$.
As for the wavefunction of even-$N$ ZPNRs, we can obtain the
relation \textbf{$C$=$\mp Az$} in Eq. (7) from the parity of a ZPNR,
namely $\phi_A(N+1-m)$=$\pm \phi_B(m)$ \cite{Katsunori}. The reason
is that the wavefunction of even-$N$ ZPNRs is either symmetric or
antisymmetric, similar to that in ZGNRs \cite{Mahdi
Moradinasab}. Specifically, combined with the translational
invariance along the $x$-direction, the wavefunction of even-$N$
ZPNRs is specified as
\begin{eqnarray}
\psi_{n,k_x}(r)&=&\frac{C}{\sqrt{L_x}}\sum_{m=1}^{N}\left(
\begin{array}
[c]{cc}%
-s z^{-1}\varphi_{A}(p,m)e^{ik_{x}x_{m,A}} \\
\varphi_{B}(p,m)e^{ik_{x}x_{m,B}}
\end{array}
\right)\nonumber\\
&=&\frac{C'}{\sqrt{L_x}}\sum_{m=1}^{N}\left(
\begin{array}
[c]{cc}%
-s\sin[p(N+1-m)]e^{ik_{x}x_{m,A}} \\
\sin(pm)e^{ik_{x}x_{m,B}}
\end{array}
\right),
\end{eqnarray}
where $C'$=$[\sum_{m=1}^N\sin^2(pm)]^{-1/2}/\sqrt{2}$ is the
normalization coefficient and $s=\pm 1$ indicates the parity of the
subbands. Notably, Eq. (18) is the wavefunction for bulk states of
even-$N$ ZPNRs. For edge states, the wavefunction ($p$=$i\beta$) is
\begin{eqnarray}
\psi_{n,k_x}(r)=\frac{C_e}{\sqrt{L_x}}\sum_{m=1}^{N}\left(
\begin{array}
[c]{cc}%
-s \sinh[\beta(N+1-m)]e^{ik_{x}x_{m,A}} \\
\sinh(\beta m)e^{ik_{x}x_{m,B}}
\end{array}
\right),
\end{eqnarray}
where $C_e$=$[\sum_{m=1}^N\sinh^2(\beta m)]^{-1/2}/\sqrt{2}$ is also a
normalization coefficient. By contrast, owing to the absence of
the $C_{2x}$ symmetry, there is no such simple expression of the
wavefunction for odd-$N$ ZPNRs.
\subsection{Optical property and transition selection rules}
In order to probe the band structure of ZPNRs calculated above, we
study their optical response in this subsection. One useful physical
quantity to understand the optical property is the joint density of
states (JDOS) representing all possible optical transitions among
the subbands, which is generally given by
\begin{equation}
D_J(\omega)=\frac{g_s}{
L_x}\sum_{n,n',k_x}[f(E_{n,k_x})-f(E_{n',k_x})]\delta(E_{n,k_x}-E_{n',k_x}+\hbar\omega),
\end{equation}
where the sum runs over all states $|n,k_x\rangle$ and
$|n',k_x\rangle$, $g_s=2$ accounts for the spin degeneracy, $L_x$ is the ribbon
length, $\hbar \omega$ the photon energy, and
$f(E)=1/[e^{(E-E_F)/k_BT}+1]$ the Fermi-Dirac distribution
function with Boltzmann constant $k_B$ and temperature $T$. Here, we
take a Gaussian broadening
$\frac{1}{\Gamma\sqrt{2\pi}}\text{exp}[-(E_{n,k_x}-E_{n',k_x}+\hbar
\omega)^2/2\Gamma^2]$ to approximate the $\delta$-function, where
$\Gamma$ is a phenomenological constant accounting for the energy
level broadening factor. Meanwhile, assuming the incident light is
polarized along the longitudinal ($x$-) direction, the optical
conductance based on the Kubo formula is given by \cite{Ando1,Ando2}
\begin{equation}
\begin{aligned}
\sigma(\omega)=&\frac{g_s \hbar e^2}{iL_x}\sum_{n,n',k_x}
\frac{[f(E_{n,k_x})-f(E_{n',k_x})]|\langle
n,k_x|v_x|n',k_x\rangle|^{2}}{(E_{n,k_x}-E_{n',k_x})(E_{n,k_x}-E_{n',k_x}+\hbar\omega+i\Gamma)},
\end{aligned}
\end{equation}
where $v_x$=$\frac{1}{i\hbar}\frac{\partial H}{\partial k_x}$ is the
velocity operator, which is valid and independent of the band
structure model, and $|n,k_x\rangle=\phi(r)\varphi(K)$ \cite{Burt} is the total
electron wavefunction in a ZPNR. Here $\phi(r)$ is the envelope
function, which describes the slowly varying motion of the shared
electrons in the crystal, while $\varphi(K)$ is the band-edge
wavefunction (BEW), built directly from the atomic orbitals, which
describes the fast motion in the crystal. In a ZPNR, $\varphi(K)$ is
composed of $|s\rangle$, $|p_x\rangle$, $|p_y\rangle$, and
$|p_z\rangle$ atomic orbitals with different weights \cite{Ruo-Yu
Zhang,WeifengLi}. For linearly polarized light, the optical
transition matrix elements satisfy $\langle
n,k_x|v_x|n',k_x\rangle$=$\langle
\psi_{n,k_x}|v_x|\psi_{n',k_x}\rangle\langle
\varphi_n(K)|\varphi_{n'}(K)\rangle$. Obviously,
$v_{n,n'}(k_x)$=$\langle \psi_{n,k_x}|v_x|\psi_{n',k_x}\rangle$
determines the optical transition selection rules. A zero matrix
element $v_{n,n'}(k_x)$ means a forbidden transition. The inner
product between the two BEWs is subband dependent; it only
affects the amplitude of the optical conductance but does not change
the optical selection rules. We take the inner product around the
$\Gamma$-point
($\langle\varphi_n(\Gamma)|\varphi_{n'}(\Gamma)\rangle$) as an
approximation and treat it as a constant. This approximation has
also been used in previous work \cite{Tony2} for 2D phosphorene.
We have omitted this constant in our calculations because the
specific expressions of the BEWs in ZPNRs are currently unknown. This
approximation does not change the essential physics, i.e., the
even-odd dependent optical selection rule, reported here. Note that
in some topologically non-trivial systems, the dipole optical matrix
elements are related to the winding number \cite{Likunshi,Tingcao}. But there is
no such effect in phosphorene because it is a topologically
trivial system. The real part of $\sigma(\omega)$ describes the
optical absorption when a laser beam is incident on the sample.
Moreover, we can obtain the dielectric function
$\varepsilon(\omega)$ from optical conductance by using
$\varepsilon(\omega)$=1+$\frac{4\pi i}{\omega}\sigma(\omega)$
\cite{Peteryu}.
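For illustration, the JDOS of Eq. (20) with the Gaussian-broadened
$\delta$-function can be evaluated on a $k_x$ grid as follows (a
sketch, not the code used here; the band energies $E[n,k_x]$ are
assumed precomputed and the $g_s/L_x$ prefactor is omitted):
\begin{verbatim}
# Sketch: JDOS of Eq. (20) with Gaussian broadening Gamma (eV).
import numpy as np
from scipy.special import expit  # overflow-safe Fermi function

KB = 8.617333e-5  # Boltzmann constant, eV/K

def jdos(E, EF, omegas, Gamma=0.004, T=4.0):
    occ = expit(-(E - EF) / (KB * T))       # f(E_{n,kx})
    dE = E[:, None, :] - E[None, :, :]      # E_n - E_n'
    df = occ[:, None, :] - occ[None, :, :]  # f(E_n) - f(E_n')
    out = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        g = np.exp(-(dE + w) ** 2 / (2 * Gamma ** 2))
        out[i] = np.sum(df * g) / (Gamma * np.sqrt(2 * np.pi))
    return out
\end{verbatim}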
\begin{figure}
\includegraphics[width=0.47\textwidth,bb=75 3 740 547]{Fig3.eps}
\caption{The band structures of bare (a)10-ZPNR and (c)11-ZPNR,
where the (red/blue) dashed/solid lines represent the
symmetric/antisymmetric states. (b) and (d) show the spatial
distribution of the wave function of the subband $n_v=1$ (left
panel) and $n_c=1$ (right panel) corresponding to the states at
$k_x$=0 indicated by the red and blue dots in (a) and (c),
respectively.}
\end{figure}
In the optical transition process, the selection rules determined by the
matrix elements $v_{n,n'}(k_x)$ are the most important information.
The integral of the velocity matrix elements $V_{n,n'}$=$\int dk_x
|v_{n,n'}(k_x)|^2$ is proportional to the optical transition
probability between the $n$th and $n'$th subbands. Generally, the
selection rule is always constrained by the symmetry of the system.
Hence, in order to obtain a general optical selection rule for
ZPNRs, we first check their lattice symmetry. According to Fig. 1,
we find that the lattice symmetry of a $N$-ZPNR depends on whether
$N$ is even or odd. In particular, the even-$N$ ones possess a
$C_{2x}$: $(x,y,z)$$\rightarrow$$(x,-y,-z)$ operation with respect to the
ribbon central axis (see the dotted horizontal line in Fig. 1). This
is equivalent to the symmetry operation $\sigma_{zx}\sigma_{xy}$,
where $\sigma_{zx}$ and $\sigma_{xy}$ are the mirror symmetry
operators corresponding to the $xz$- and $xy$-planes,
respectively. However, the odd-$N$ ones do not have this symmetry.
In even-$N$ ZPNRs, the constraint on the eigenstates $\langle
x,y,z|n,k_x\rangle$ imposed by $C_{2x}$ symmetry is $C_{2x}\langle
x,y,z|n,k_x\rangle$=$\langle x,-y,-z|n,k_x\rangle$. Assuming
$\lambda$ is the eigenvalue of $C_{2x}$ operator, we have
\begin{equation}
\langle x,y,z|n,k_x\rangle=(C_{2x})^2\langle x,y,z|n,k_x\rangle=\lambda^2\langle x,y,z|n,k_x\rangle.
\end{equation}
Then we obtain $\lambda^2$=1, i.e., $\lambda$=$\pm$1, where $+/-$
means the even/odd parity provided by the $C_{2x}$ operator. This
indicates that $\langle x,y,z|n,k_x\rangle$ is either symmetric or
antisymmetric along the $y$ and $z$ directions, namely
$\langle-y,-z|n,k_x\rangle$=$\pm$$\langle y,z|n,k_x\rangle$. Thus,
we can classify the eigenstates for even-$N$ ZPNRs as even or odd
parity according to the eigenvalues of the $C_{2x}$ operator.
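This classification can be checked numerically: in the basis ordering
$(|1A\rangle,|1B\rangle,\cdots,|NA\rangle,|NB\rangle)$ adopted above,
$C_{2x}$ exchanges site $mA$ with $(N+1-m)B$, so an eigenvector has a
definite parity if it is (anti)symmetric under the corresponding
permutation. A minimal sketch:
\begin{verbatim}
# Sketch: C2x parity of an eigenvector c of the 2N x 2N Hamiltonian.
import numpy as np

def c2x_parity(c, N, tol=1e-6):
    P = np.zeros((2 * N, 2 * N))
    for m in range(1, N + 1):
        iA, iB = 2 * (m - 1), 2 * (m - 1) + 1
        jA, jB = 2 * (N - m), 2 * (N - m) + 1  # site N+1-m
        P[iA, jB] = 1.0  # mA -> (N+1-m)B
        P[iB, jA] = 1.0  # mB -> (N+1-m)A
    if np.allclose(P @ c, c, atol=tol):
        return +1  # even parity
    if np.allclose(P @ c, -c, atol=tol):
        return -1  # odd parity
    return 0  # no definite parity (e.g., odd-N ribbons)
\end{verbatim}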
\begin{figure}
\includegraphics[width=0.48\textwidth,bb=18 281 609 524]{Fig4.eps}
\caption{The integral of the optical transition matrix elements
$V_{n,n'}$=$\int dk_x |v_{n,n'}(k_x)|^2$ for (a) the $n_v$=1 subband
to the $n_c$=1 one and (b) the $n_v$=2 subband to the $n_c$=2 one as
a function of the ribbon width $N$, where the insets show magnified views of $V_{n,n'}$ at large $N$.}
\end{figure}
In order to confirm the above argument on the symmetry and parity
for the systems, we present the band structure and the wavefunction
in real space of the first subband in the conduction and valence
bands for 10- and 11-ZPNR in Figs. 3(a-b) and 3(c-d), respectively.
The wavefunction corresponding to states indicated by the red (blue)
dots in Figs. 3(a) and 3(c) are shown in the left (right) panels in
Figs. 3(b) and 3(d), respectively. According to the left (right)
panel in Fig. 3(b), we find that the first subband in the conduction
(valence) band is even (odd) under $C_{2x}$ transformation. By
checking the eigenstates in other subbands, we observe that the
parity of the wavefunctions alternates between odd [(blue) solid
lines] and even [(red) dashed lines] as the subband
index $n$ increases. This is consistent with previous results obtained by
the first-principles calculation \cite{Zahra,Tran}. Hence, the
parity of the subband in the conduction (valence) band is related to
its subband index via $(-1)^{n}[(-1)^{(n+1)}]$. Further, under the
$C_{2x}$ operation, the velocity operator $v_x$ is even, i.e.,
$C_{2x}:v_x\rightarrow v_x$. Hence, the condition for a nonzero
matrix element $v_{n,n'}(k_x)$ is that the parities of the
initial ($|n,k_x\rangle$) and final ($|n',k_x\rangle$) states are
the same. In other words, only the transitions among the states with
identical parity are allowed. This can also be verified by
calculating the optical transition matrix element. For example,
using the relation $\mathbf{v}=\frac{i}{\hbar}[\mathbf{r},H]$
combined with the wavefunction Eq. (18), the inter-band optical
transition matrix element between the bulk states is \cite{Peteryu}
\begin{equation}
\langle \psi_v |v_x|\psi_c\rangle=\frac{i}{\hbar}\langle \psi_v |Hx-xH|\psi_c\rangle,
\end{equation}
where $\psi_{c/v}$ is the wavefunction in Eq. (18) or
(19). After some algebra, we have
\begin{equation}
\langle \psi_{n,kx}^v|v_x|\psi_{n',kx}^c\rangle=
\left\{
\begin{array}{ll}
\frac{i}{\hbar}\frac{C'^2}{L_x}(A_{t_1}+A_{t_3}+A_{t_4}),\;\; s'=s \\
0,\;\; s'\neq s,
\end{array}
\right.
\end{equation}
where
\begin{equation*}
\begin{aligned}
&A_{t_1}=-4i t_{1}b\sin(bk_{x})\sum_{m=1}^{N}\sin(pm)\sin[p^{\prime}(N+1-m)],\\
&A_{t_3}=-4i t_{3}b\sin(bk_{x})\sum_{m=1}^{N-2}\sin(pm)\sin[p^{\prime}(N-1-m)],\\
&A_{t_4}=8i t_{4}b\sin(bk_{x})\cos(p')\sum_{m=1}^{N}\sin(pm)\sin(p^{\prime}m).
\end{aligned}
\end{equation*}
Here $b=a_1/2$; the detailed calculation of Eq. (24) is
presented in Appendix A.
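A direct transcription of Eq. (24) reads as follows (a sketch only;
the wavenumbers $p$, $p'$ from Eq. (11) and the parities $s$, $s'$
are supplied by the caller, and the prefactor $C'^2/L_x$ is kept as a
free parameter):
\begin{verbatim}
# Sketch of Eq. (24); hbar in eV*s, lengths in angstrom.
import numpy as np

T1, T3, T4 = -1.22, -0.205, -0.105  # eV
HBAR = 6.582119569e-16

def v_cv(p, pp, s, sp, N, kx, pref=1.0, a1=3.32):
    if s != sp:
        return 0.0  # forbidden for opposite parities
    b = a1 / 2.0
    m = np.arange(1, N + 1)
    At1 = -4j * T1 * b * np.sin(b * kx) * np.sum(
        np.sin(p * m) * np.sin(pp * (N + 1 - m)))
    m3 = np.arange(1, N - 1)  # m = 1 .. N-2
    At3 = -4j * T3 * b * np.sin(b * kx) * np.sum(
        np.sin(p * m3) * np.sin(pp * (N - 1 - m3)))
    At4 = 8j * T4 * b * np.sin(b * kx) * np.cos(pp) * np.sum(
        np.sin(p * m) * np.sin(pp * m))
    return 1j / HBAR * pref * (At1 + At3 + At4)
\end{verbatim}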
From Eq. (24), we can explicitly see that only the inter-band
transitions between bulk states with the same symmetry are
allowed. Using the wavefunction in Eq. (19), we obtain the same
selection rule $s=s'$ for transitions between the edge bands as
well as between the bulk bands and the edge bands. Hence, we
conclude that only transitions between subbands with the same
parity are allowed. Consequently, in even-$N$ ZPNRs, the inter-
(intra-) band selection rule is $\Delta n$=$n-n'$=odd (even). This is
in good agreement with the above analysis based on the lattice
symmetry. Importantly, although the band structure of
odd-$N$ ZPNRs is similar to that of even-$N$ ones as shown in Fig.
3(c), the optical selection rule is qualitatively different from
that of even-$N$ ZPNRs. According to the left (right) panel in Fig.
3(d), by checking the eigenstates over the whole band, we find
that no subband possesses a definite parity in the 11-ZPNR due to
the absence of $C_{2x}$ symmetry. Thus, the optical transitions in
odd-$N$ ZPNRs between two arbitrary subbands are all possible. In
order to illustrate the even-odd dependent optical selection rule
more clearly, in Fig. 4 we show the integral of the optical
transition matrix elements $V_{n,n'}$ as a function of the ribbon
width $N$, where (a) is for $V_{n_v=1,n_c=1}$ and (b) for
$V_{n_v=2,n_c=2}$, respectively, and the insets show magnified views of $V_{n,n'}$ at large $N$. Physically, $V_{n,n'}$ is
proportional to the optical transition probability between the $n$th
and $n'$th subbands. According to the figure, we find that the
transition probability oscillates with the ribbon width $N$ and shows
an even-odd $N$ dependent feature. The transitions between the
subband $n_v$=1 (2) and $n_c$=1 (2) are forbidden in even $N$-ZPNRs
due to the presence of the $C_{2x}$ symmetry. In contrast, the
transitions between the subband $n_v$=1 (2) and $n_c$=1 (2) are
allowed in odd $N$-ZPNRs due to the absence of the $C_{2x}$
symmetry. This even-odd dependent selection rule is also reflected
in the optical absorption spectrum, which will be discussed in the
next section.
\section{Numerical Results and Discussions}
In this section, we present some numerical examples for the optical
absorption spectrum of ZPNRs and discuss the corresponding results.
We take $N$=10 and 11 to represent the even and odd cases,
respectively, which would not qualitatively influence the results
here. The temperature is 4 K and the level broadening $\Gamma$ is 4
meV throughout the calculations unless otherwise specified. In all following
figures, the green solid line (if available) indicates the Fermi
level.
\begin{figure}
\includegraphics[width=0.48\textwidth,bb=63 3 762 586]{Fig5.eps}
\caption{The inter-band JDOS [(red) dash-dotted lines] and the
optical absorption [(blue) solid lines] as a function of the
incident photon energy with $\sigma_0=2e^2/h$ for (a) 10-ZPNR and (b)
11-ZPNR. The Fermi level $E _F$ is chosen as $-$0.3086 eV lying
between the two bands of edge states. The peaks (labeled by 1, 2,
$\cdots$ 6) are associated with the subband transitions illustrated
in Fig. 3(a). (c) The optical absorption spectra for the 20-ZPNR
[(orange) solid line] and 21-ZPNR [(purple) dashed line].}
\end{figure}
As discussed in Sec. IIC, the inter- (intra-) band optical
transition selection rule in even-$N$ ZPNRs satisfies $\Delta
n$=$n-n'$=odd (even) due to the $C_{2x}$ symmetry. On the contrary,
the optical transitions in odd-$N$ ZPNRs between two arbitrary
subbands are all possible resulting from the $C_{2x}$ symmetry
breaking. Keeping this in mind is important to understand the
optical properties of even-$N$ ZPNRs. Fig. 5 shows the inter-band
JDOS and the optical absorption spectrum for (a) 10-ZPNR and (b)
11-ZPNR with Fermi energy $E_F=-$0.3086 eV lying between the edge
states, respectively. As shown by the (red) dash-dotted line in Fig.
5(a), we see peaks in the JDOS spectrum at different photon energies,
known as van Hove singularities. The JDOS peaks range from the
mid-infrared (155-413 meV) to the visible region due to the edge
states and the quantum confinement, which is different from the
2D phosphorene case \cite{YBZhangPL}. However, there is no optical
absorption around zero frequency, seemingly at odds with the fact
that ZPNRs are metallic. The reason is that the transition between
the edge states is forbidden by the $C_{2x}$ symmetry in even-$N$
ZPNRs since their parities are different from each other. Compared
with the JDOS, we find that more peaks are missing in the optical
absorption spectra Re$\sigma(\omega)$ [the (blue) solid line] due to
the optical selection rule $\Delta n$=odd arising from the $C_{2x}$
symmetry, which is similar to that in ZGNRs \cite{Han,Chung,Saroka}.
The remaining optical absorption peaks (labeled 1, 2, $\cdots$ 6)
originate from the allowed transitions between subbands
with the same parity, which are schematically illustrated in Fig.
3(a). In contrast, as shown in Fig. 5(b), the optical transitions
among subbands are all possible for the 11-ZPNR owing to the $C_{2x}$
symmetry breaking. All the optical absorption peaks appear in
one-to-one correspondence with the JDOS peaks. Owing to the edge states, the
absorption peaks range from the mid-infrared to the visible
region. However, the absorption peak in the mid-infrared
frequency (the first peak) disappears for wider ribbons as shown in
Fig. 5(c), which is different from that in ZGNRs
\cite{Han,Chung,Saroka}. The reason is twofold: i) the
edge states become degenerate for wider ZPNRs and ii) unlike that in
ZGNRs, the edge states of ZPNRs are slightly dispersed [see Fig.
3(a)] due to the electron-hole asymmetry. This fact means that only
two $k_x$ states contribute to the optical absorption for a certain
Fermi level, leading to zero optical conductance. Again, from Fig.
5(c), we find that there are more absorption peaks for the 21-ZPNR
than for the 20-ZPNR, arising from the $C_{2x}$ symmetry breaking.
Moreover, it should be noted that there is a slight discrepancy
between the JDOS peaks and the optical absorption peaks because
the optical transition matrix element $v_{n,n'}(k_x)$ depends on the
subband dispersions $\partial E/\partial k_x$.
\begin{figure}
\includegraphics[width=0.48\textwidth,bb=22 42 736 538]{Fig6.eps}
\caption{The intra-band JDOS [(red) dash-dotted lines] and the
optical absorption [(blue) solid lines] as a function of the
incident photon energy for (a) 10-ZPNR and (b) 11-ZPNR. The band
structure and the corresponding optical transitions are shown in (c)
and (d) for 10-ZPNR ($E_F$=$-$4.0351 eV) and 11-ZPNR
($E_F$=$-$3.7241 eV), respectively.}
\end{figure}
Figure 6 shows the intra-band JDOS [(red) dash-dotted line] and
optical absorption spectrum [(blue) solid line] for (a) 10-ZPNR and
(b) 11-ZPNR with the corresponding band structures and Fermi levels
shown in (c) and (d), respectively. As depicted in Fig. 6(a), the first
JDOS peak for the 10-ZPNR, located at $\hbar\omega=$0.604 eV, comes from
the transition between the $n_v$=6 subband and the $n_v$=5 one. But
there is no absorption peak at the same frequency because the
parities of the two subbands are different [see Fig. 6(c)] and the
transitions are forbidden by the $C_{2x}$ symmetry. In other words,
this transition violates the intra-band optical transition selection
rule ($\Delta n$=even) for even-$N$ ZPNRs. By the same token, the
second and third JDOS peaks are contributed by the transitions
between the subbands with the same parities [see Fig. 6(c)], hence
the corresponding absorption peaks appear [see the (blue) solid
line]. In contrast, we find that the optical absorption peaks
are almost all present for the 11-ZPNR [see Fig. 6(b)] owing to the
$C_{2x}$ symmetry breaking, which means that optical
transitions are in principle possible among all subbands. The
corresponding transitions are shown in Fig. 6(d). On the other hand,
some of the matrix elements $\langle n,k_x|v_x|n',k_x\rangle$ may be
tiny (weak), e.g., $\langle 6(7),k_x|v_x|4(5),k_x\rangle$, and
the corresponding absorption peaks are then missing [see
Fig. 6(b)].
\begin{figure}
\includegraphics[width=0.48\textwidth,bb=20 40 735 537]{Fig7.eps}
\caption{The inter-band JDOS [(red) dash-dotted line] and optical
absorption spectrum [(blue) solid line] for (a) 10-ZPNR ($E_F$=-0.31
eV) and (b) 11-ZPNR ($E_F=-$0.2747 eV) with a uniform vertical
electric field $E_v$=0.1V/$\r{A}$, respectively, where the (black)
dashed line is the absorption spectrum of the bare ZPNRs. The band
structures of the 10- and 11-ZPNRs under the electric field are shown in (c)
and (d), respectively, where the Fermi levels for both cases
lie between the two edge states.}
\end{figure}
Next, we turn to the effect of externally applied electric fields on
the optical properties of ZPNRs. Fig. 7 depicts the inter-band JDOS
[(red) dash-dotted line] and optical absorption spectra [(blue)
solid line] for (a) 10-ZPNR and (b) 11-ZPNR under a uniform vertical
electric field (VEF) with strength $E_v$=0.1V/$\r{A}$, where the
corresponding band structures with the optical transition
indications are shown in (c) and (d), respectively. The Fermi levels
for both cases lie between the edge states. In real
experiments, the VEF corresponds to a top-gate or substrate
effect. It may be generated by using a polar semiconductor
interface \cite{Dong Zhang}. Owing to the puckered lattice structure
of ZPNRs, the band structure of a ZPNR under a VEF is even-odd
dependent \cite{Zhoubl} since the edge states of even (odd)-$N$
ribbons are located on different (the same) sub-layers. The VEF opens a
gap between the two edge bands for even ribbons [see Fig. 7(c)], but
for odd ones the two edge bands remain (nearly) degenerate [see
Fig. 7(d)]. Further, the VEF also breaks the $C_{2x}$ symmetry in
even-$N$ ZPNRs. These features are also reflected in the optical
absorption spectrum. From Fig. 7(a), we find that several extra
absorption peaks [the (blue) solid line] appear due to the $C_{2x}$
symmetry breaking by the VEF compared with the bare 10-ZPNR [see the
(black) dashed line]. In particular, the first absorption peak in the
mid-infrared region is greatly enhanced due to the lifting of the
degeneracy of the edge states. In comparison, as shown in Fig. 7(b),
the absorption spectrum of 11-ZPNR is slightly changed compared to
the bare case [also see the (black) dashed line] since the band
structure is nearly unaffected by the VEF. These features offer a
useful approach to identify the even-odd parity of ZPNR samples by
experimentally detecting the optical absorption under a VEF.
\begin{figure}
\includegraphics[width=0.48\textwidth,bb=25 12 735 514]{Fig8.eps}
\caption{The inter-band JDOS [(red) dash-dotted line] and optical
absorption spectrum [(blue) solid line] for 10-ZPNR with impurities
localized at (a) one edge (the 1st atomic row) with $E_F$=-0.1954 eV
and (b) the center (the 10th row) with $E_F=-$0.3058 eV,
respectively, where the impurity potential $U_i$ is 0.5 eV and 1.5
eV corresponding to (a) and (b), and the (black) dashed line
indicates the optical spectrum of a pristine 10-ZPNR. Panels (c) and
(d) show the band structures corresponding to (a) and (b), respectively.}
\end{figure}
Experimentally, it is difficult to avoid impurities and defects in
samples. This may consequently affect the optical properties of
ZPNRs by changing the band structure or breaking the $C_{2x}$
symmetry. Figs. 8(c) and 8(d) show the band structure of the 10-ZPNR
with impurities distributed on one edge (the 1st atomic row)
and at the center (the 10th row), respectively. Since each zigzag
chain of a ZPNR contains two phosphorus atomic rows, a $N$-ZPNR has
2$N$ atomic rows. We model
the impurity effect by adding an impurity potential $U_i$ to the
on-site energies of the corresponding impurity atoms, which is widely
used in previous works \cite{zouyl,L.L.Li}. As shown in Fig. 8(c)
for impurities located on the edge, we find that the nearly
degenerate edge states are split [see the (orange) solid line]
due to the variation of the on-site energies, but the other subbands
remain unchanged. This is consistent with the result obtained by the
first-principles calculation \cite{Pooja,guocx}. In contrast,
comparing Fig. 8(d) with Fig. 3(a), when impurities are localized at the
center, the subbands contributed by the impurity atoms are shifted [see
the (orange) solid line] but the other subbands remain unchanged.
This means that the impurities only have a local effect on the
electronic structure of a ZPNR. However, they play an important
role in the optical absorption spectrum because of the lattice
symmetry breaking. Figs. 8(a) and 8(b) show the inter-band JDOS
[(red) dash-dotted line] and optical absorption spectrum [(blue)
solid line] for the 10-ZPNR with impurities located at one edge (the
1st atomic row) with $E_F$=-0.1954 eV and the center (the 10th row)
with $E_F=-$0.3058 eV, respectively. The impurities localized at the
edge break the $C_{2x}$ symmetry since the wavefunctions
corresponding to most of the subbands are partially distributed on
the edge. Hence, we observe that the first and some extra optical
absorption peaks reappear [see the (blue) solid line] compared
with the pristine 10-ZPNR shown in Fig. 8(a) [(black) dashed line].
This is similar to the VEF effect discussed above.
Similarly, from Fig. 8(b), we also find some extra peaks when the
impurities are localized at the center. However, the absorption peak at
the mid-infrared frequency (the first peak) is still missing
although the $C_{2x}$ symmetry is broken in this case. The reason is
that the edge states are mainly localized on the edge atoms;
consequently, the band structure is nearly unaffected by
impurities localized on the center atoms. For the 11-ZPNR, the optical
absorption spectrum is only slightly changed by the impurities,
so we do not present the result here for brevity.
\begin{figure}
\includegraphics[width=0.48\textwidth,bb=25 30 746 517]{Fig9.eps}
\caption{The inter-band JDOS [(red) dash-dotted line] and optical
absorption spectrum [(blue) solid line] for (a) 10-ZPNR
($E_F=-$0.2228 eV) and (b) 11-ZPNR ($E_F=-$0.2151 eV) under a
uniform TEF with strength $E_t$=0.008 V/$\r{A}$, respectively, where
the Fermi levels for both cases lie between the two edge states
and the (black) dashed line represents the absorption spectrum for
bare ZPNRs. The band structures of 10- and 11-ZPNR under TEF are
shown in (c) and (d), respectively.}
\end{figure}
Finally, a transverse electric field (TEF) can induce a Stark effect
(potential difference) arising from the finite width of ZPNRs
\cite{ezawa}, which can significantly change the band
structure, especially the edge bands. This effect has been
experimentally observed for few-layer BPs \cite{BingchenDeng}. As a
result, the TEF can also change the optical properties of ZPNRs by
breaking the $C_{2x}$ symmetry. Fig. 9 displays the
inter-band JDOS [(red) dash-dotted line] and optical absorption
spectrum [(blue) solid line] for (a) 10-ZPNR and (b) 11-ZPNR under a
uniform TEF with strength $E_t$=0.008 V/$\r{A}$. The
corresponding band structures are shown in Figs. 9(c) and 9(d),
respectively. As shown in the figure, we find that in the presence
of the TEF the optical absorption peaks are shifted compared to the
bare ribbons [see the (black) dashed line]. In Figs. 9(c) and 9(d),
unlike the VEF case, we can see that the degeneracy of the edge states
is lifted by the Stark effect for both the 10- and 11-ZPNR. Owing to
the $C_{2x}$ symmetry breaking, all possible absorption peaks in
the 10-ZPNR corresponding to the JDOS appear [see Fig. 9(a)], which
means that the optical absorption of the 10-ZPNR is greatly enhanced,
especially the absorption peak in the mid-infrared region.
Further, the first absorption peak for both the 10- and 11-ZPNR is
greatly enhanced due to the degeneracy lifting, as shown in Figs. 9(a)
and 9(b). Hence we conclude that the effect of the TEF on the optical
absorption in ZPNRs is different from that of the VEF or impurity. A
TEF can induce different potentials on all atoms within a
super-cell, which leads to a global $C_{2x}$ symmetry breaking.
\section{Summary}
In summary, we have theoretically studied the electronic and optical
properties of ZPNRs under a linearly polarized light along the
longitudinal direction based on the TB Hamiltonian and Kubo formula.
We have obtained analytically the energy spectra of ZPNRs and the
optical transition selection rules based on the lattice symmetry
analysis. Owing to the $C_{2x}$ symmetry, the eigenstates of
even-$N$ ZPNRs are transversely either symmetric or antisymmetric,
which makes their optical response qualitatively different from that
of the odd-$N$ ones. In particular, the inter- (intra-) band selection
rule for even-$N$ ZPNRs is $\Delta n=$ odd (even) since the parity
of the wavefunction corresponding to the conduction (valence)
band is $(-1)^{n}[(-1)^{(n+1)}]$ (with subband index $n$),
as dictated by the $C_{2x}$ symmetry. For odd-$N$ ZPNRs, however,
optical transitions are possible among all subbands. Further,
the edge states play an important role in the optical absorption and
are involved in many of the absorption peaks. The optical absorption
of even-$N$ ZPNRs can be enhanced by the substrate and impurity
effect as well as the transverse electric field via breaking the
$C_{2x}$ symmetry, while the optical absorption of odd-$N$ ones can
be effectively tuned by lattice defects or external electric fields.
Our findings provide a further understanding of the electronic states and
optical properties of ZPNRs, which is essential for the interpretation of optical experimental data on ZPNR
samples.
\section{Acknowledgments}
This work was supported by the National Natural Science Foundation
of China (Grant Nos. 11804092, 11774085, 61674145, 11704118,
11664010), and China Postdoctoral Science Foundation funded project
Grant No. BX20180097, and Hunan Provincial Natural Science
Foundation of China (Grant No. 2017JJ3210).
\section*{Appendix A}
\setcounter{equation}{0}
\renewcommand{\theequation}{A\arabic{equation}}
In this appendix, we calculate the optical transition matrix
elements in Eq. (24). Utilizing the relation
$\textbf{v}=\frac{i}{\hbar}[\textbf{r},H]$ combined with the wavefunction in Eq. (18), the
optical matrix element can be written as \cite{Peteryu}
\begin{equation}
\langle \psi_v |v_x|\psi_c\rangle=\frac{i}{\hbar}\langle \psi_v |Hx-xH|\psi_c\rangle,
\end{equation}
where $\psi_{c/v}$ is the wavefunction in Eq. (18) or
(19). According to Eq. (18), the transition matrix element between
the bulk states is
\begin{widetext}
\begin{equation}
\begin{aligned}
\langle \psi_{n,k_x}^v|v_x|\psi_{n',k_x}^c\rangle=&\frac{C^{\prime 2}}{L_{x}}\frac{i}{\hbar}\sum_{m=1}^{N}\sum_{n=1}^{N}\Bigg\{(x_{A,n}-x_{A,m}%
)e^{-ik_{x}x_{A,m}}e^{ik_{x}x_{A,n}%
}ss^{\prime}\sin[p(N+1-m)]\sin[p^{\prime}(N+1-n)]\left\langle A_{m}\right\vert
H\left\vert A_{n}\right\rangle \\
& +(x_{B,n}-x_{B,m})e^{-ik_{x}%
x_{B,m}}e^{ik_{x}x_{B,n}}\sin(pm)\sin(p^{\prime}n)\left\langle B_{m}%
\right\vert H\left\vert B_{n}\right\rangle \\
& -s(x_{B,n}-x_{A,m})e^{-ik_{x}%
x_{A,m}}e^{ik_{x}x_{B,n}}\sin[p(N+1-m)]\sin(p^{\prime}n)\left\langle
A_{m}\right\vert H\left\vert B_{n}\right\rangle \\
& -s^{\prime}(x_{A,n}-x_{B,m})e^{-ik_{x}x_{B,m}}e^{ik_{x}x_{A,n}}\sin(pm)\sin[p^{\prime}%
(N+1-n)]\left\langle B_{m}\right\vert H\left\vert A_{n}\right\rangle \Bigg\},\\
\end{aligned}
\end{equation}
where $m(n)$ is the atom site index, and $s(s')$ indicates the parity of the subbands. There are five hoppings:
$\left\langle A_{m}\right\vert H\left\vert B_{n}\right\rangle =$ $\left\langle
B_{m}\right\vert H\left\vert A_{n}\right\rangle =t_{1}$ for $n=m$,
$\left\langle A_{m}\right\vert H\left\vert B_{n}\right\rangle =$ $\left\langle
B_{m}\right\vert H\left\vert A_{n}\right\rangle =t_{2}$ for $n=m\pm1$,
$\left\langle A_{m}\right\vert H\left\vert B_{n}\right\rangle =$ $\left\langle
B_{m}\right\vert H\left\vert A_{n}\right\rangle =t_{3}$ for $n=m\pm2$,
$\left\langle A_{m}\right\vert H\left\vert A_{n}\right\rangle =$ $\left\langle
B_{m}\right\vert H\left\vert B_{n}\right\rangle =t_{4}$ for $n=m\pm1$, and
$\left\langle A_{m}\right\vert H\left\vert B_{n}\right\rangle =$ $\left\langle
B_{m}\right\vert H\left\vert A_{n}\right\rangle =t_{5}$ for $n=m\pm1$.
Then, Eq. (A2) can be written as
\begin{equation}
\begin{aligned}
\langle \psi_{n,k_x}^v|v_x|\psi_{n',k_x}^c\rangle &=\frac{i}{\hbar}\frac{C'^2}{L_{x}}(A_{t_1}+A_{t_3}+A_{t_4}),
\end{aligned}
\end{equation}
where $A_{t_1}$, $A_{t_3}$ and $A_{t_4}$ represent the terms of the
transition matrix related to the hoppings $t_1$, $t_3$ and $t_4$, and
the corresponding terms can be written as
\begin{equation}
\begin{aligned}
A_{t_1}=&-\sum_{m=1}^{N}s\{t_{1}be^{ik_{x}b}\sin[p(N+1-m)]\sin(p^{\prime}
m)-t_{1}be^{-ik_{x}b}\sin[p(N+1-m)]\sin(p^{\prime} m)\}\\
& +s^{\prime}\{t_{1}be^{ik_{x}b}\sin(pm)\sin[p^{\prime
}(N+1-m)]-t_{1}be^{-ik_{x}b}\sin(pm)\sin[p^{\prime
}(N+1-m)]\}\\
=&-2it_{1}b\sin(bk_{x})\sum_{m=1}^{N}\{s\sin[p(N+1-m)]\sin(p^{\prime}m)+s^{\prime
}\sin(pm)\sin[p^{\prime}(N+1-m)]\},
\end{aligned}
\end{equation}
where the sum over $m$ runs from $1$ to $N$. Defining $n=N+1-m$, we
find that $n$ also runs from $1$ to $N$ when $m\in[1,N]$. Applying
the summation transformation $n=N+1-m$, $A_{t_1}$ can be rewritten as
\begin{equation}
\begin{aligned}
A_{t_1}=&-2it_{1}b\sin(bk_{x})\{\sum_{n=1}^{N}s\sin(pn)\sin[p^{\prime
}(N+1-n)]+\sum_{m=1}^{N}s^{\prime}\sin(pm)\sin[p^{\prime}(N+1-m)]\}\\
=&\left\{
\begin{array}
[c]{c}%
-4i t_{1}b\sin(bk_{x})\sum_{m=1}^{N}\sin(pm)\sin[p^{\prime
}(N+1-m)],\qquad s=s^{\prime}\\
\qquad \qquad 0,\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad s\neq s^{\prime}.%
\end{array}
\right.
\end{aligned}
\end{equation}
Meanwhile, the term $A_{t_3}$ is
\begin{equation}
\begin{aligned}
A_{t_3}=&-\sum_{m=3}^{N}s\{be^{ik_{x}b}\sin[p(N+1-m)]\sin[p^{\prime}%
(m-2)]t_{3}-be^{-ik_{x}b}\sin[p(N+1-m)]\sin[p^{\prime
}(m-2)]t_{3}\}\\
&-\sum_{m=1}^{N-2}s^{\prime}\{be^{ik_{x}b}\sin(pm)\sin[p^{\prime}(N+1-m-2)]t_{3}%
-be^{-ik_{x}b}\sin(pm)\sin[p^{\prime}(N+1-m-2)]t_{3}\}\\
=&-2ib\sin(bk_{x})t_{3}\{\sum_{m=3}^{N}s\sin[p(N+1-m)]\sin[p^{\prime}%
(m-2)]+\sum_{m=1}^{N-2}s^{\prime}\sin(pm)\sin[p^{\prime}(N+1-m-2)]\},
\end{aligned}
\end{equation}
in this case, the atoms at the edges are excluded because the
hopping links of $t_3$ extend beyond one zigzag chain. Applying a
summation transformation similar to that for $A_{t_1}$, we have
\begin{equation}
\begin{aligned}
A_{t_3}=&-2ib\sin(bk_{x})t_{3}\{\sum_{n=1}^{N-2}s\sin(pn)\sin[p^{\prime
}(N+1-n-2)]+\sum_{m=1}^{N-2}s^{\prime}\sin(pm)\sin[p^{\prime}(N+1-m-2)]\}\\
=&\left\{
\begin{array}
[c]{c}%
-4i t_{3}b\sin(bk_{x})\sum_{m=1}^{N-2}\sin(pm)\sin[p^{\prime
}(N-1-m)],\qquad s=s^{\prime}\\
\qquad \qquad 0,\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad s\neq s^{\prime}.%
\end{array}
\right.
\end{aligned}
\end{equation}
Finally, the $A_{t_4}$ term is
\begin{equation}
\begin{aligned}
A_{t_4}=&\sum_{m=1}^{N}\{ss^{\prime}be^{ik_{x}b}\sin[p(N+1-m)]\sin[p^{\prime
}(N+1-m+1)]t_{4}+ss^{\prime}be^{ik_{x}b}\sin[p(N+1-m)]\sin[p^{\prime
}(N+1-m-1)]t_{4}\\
&-ss^{\prime}be^{-ik_{x}b}\sin[p(N+1-m)]\sin[p^{\prime
}(N+1-m+1)]t_{4}-ss^{\prime}be^{-ik_{x}b}\sin[p(N+1-m)]\sin[p^{\prime
}(N+1-m-1)]t_{4}\\
& +be^{ik_{x}b}\sin(pm)\sin[p^{\prime}(m-1)]t_{4}%
+be^{ik_{x}b}\sin(pm)\sin[p^{\prime}(m+1)]t_{4} \\ &-be^{-ik_{x}b}\sin(pm)\sin[p^{\prime}(m-1)]t_{4}%
-be^{-ik_{x}b}\sin(pm)\sin[p^{\prime}(m+1)]t_{4}\\
=&\sum_{m=1}^{N}4i t_{4}b\sin(bk_{x})\cos
p'\{ss^{\prime}\sin[p(N+1-m)]\sin [p^{\prime}(N+1-m)]+\sin(pm)\sin
(p^{\prime}m)\}.
\end{aligned}
\end{equation}
Here, we have used the relation $\sin(x)+\sin(y)=2\sin[(x+y)/2]\cos[(x-y)/2]$ to simplify it. Similarly, replacing the summation index $N+1-m$ with $n$, we obtain
\begin{equation}
\begin{aligned}
A_{t_4}=&\sum_{n=1}^{N}4i ss^{\prime}t_{4}b\sin(bk_{x})\sin(pn)\sin
(p^{\prime}n)\cos p'+\sum_{m=1}^{N}4i t_{4}b\sin(bk_{x})\sin(pm)\sin
(p^{\prime}m)\cos p'\\
=&\left\{
\begin{array}
[c]{c}%
8i t_{4}b\sin(bk_{x})\cos p'\sum_{m=1}^{N}\sin(pm)\sin(p^{\prime
}m),\qquad s=s^{\prime}\\
\qquad \qquad 0,\qquad \qquad \qquad \qquad \qquad \qquad \qquad s\neq s^{\prime}.%
\end{array}
\right.
\end{aligned}
\end{equation}
Therefore, the transition matrix element is
\begin{equation}
\langle \psi_{n,k_x}^v|v_x|\psi_{n',k_x}^c\rangle=
\left\{
\begin{array}{ll}
\frac{C'^2}{L_x}\frac{i}{\hbar}(A_{t_1}+A_{t_3}+A_{t_4}),\;\; s'=s \\
\qquad 0, \qquad \qquad\;\; s'\neq s,
\end{array}
\right.
\end{equation}
where
\begin{equation*}
\begin{aligned}
&A_{t_1}=-4i t_{1}b\sin(bk_{x})\sum_{m=1}^{N}\sin(pm)\sin[p^{\prime}(N+1-m)],\\
&A_{t_3}=-4i t_{3}b\sin(bk_{x})\sum_{m=1}^{N-2}\sin(pm)\sin[p^{\prime}(N-1-m)],\\
&A_{t_4}=8i t_{4}b\sin(bk_{x})\cos(p')\sum_{m=1}^{N}\sin(pm)\sin(p^{\prime}m).
\end{aligned}
\end{equation*}
From Eq. (A10), we can explicitly see that only the inter-band transitions between the bulk states with the same symmetry are allowed. Using the wavefunction in Eq. (19), we obtain the same selection rule $s=s'$ for transitions between the edge bands as well as between the bulk bands and the edge bands. Hence, in even-$N$ ZPNRs, we conclude that only the transitions between subbands with the same parity are allowed.
\end{widetext}
\section{Introduction}
In the era of database astronomy, the construction of spectral energy distributions (SEDs) from the ultra-violet to the mid-infrared for large samples of stars is straightforward, requiring little user-input or effort. Modern tools such as the VO SED Analyzer \citep{bay08:aa492} can even detect infrared excesses for thousands of candidates at a time in a completely automated fashion. Many sub-fields have benefited from the ease-of-use of catalog photometry, though they are not without pitfalls. Searches for infrared excesses from warm (1000\,K), circumstellar dust provide a good case-study of the benefits and drawbacks of analyzing SEDs using only catalog photometry.
Circumstellar dust is a signpost for planetary systems, indicating the on-going process of planetary formation around pre-main and main-sequence stars \citep{ken12:mnras426,pat14:apjs212,cot16:apjs225,bin17:mnras469}, and illuminating the post-main sequence destruction of remnant planetary systems around white dwarf stars \citep{deb11:apjs197,hoa13:apj770,bar14:apj786,den17:apj849}. The frequency of circumstellar dust around stellar sources informs planetary occurrence rates in instances where direct detection is not feasible. These searches rely heavily on data from the \emph{Wide-field Infrared Survey Explorer} (\emph{WISE}; \citealt{wri10:aj140}), which produced the only all-sky survey at the wavelengths where warm dust is most apparent ($\lambda\geq$\,3\,$\mu$m). But the coarse spatial resolution of \emph{WISE} leads to a high probability of source confusion, contaminating samples of \emph{WISE}-selected infrared excesses with false positives and skewing statistical studies of warm dust frequency.
Estimates of contamination by source confusion for \emph{WISE}-selected dusty infrared excesses around main-sequence stars indicate false-positive rates as high as 70\% \citep{sil18:apj868}. Dusty infrared excesses around white dwarf stars are much fainter than their main-sequence counterparts, and typically only detected in \emph{W1} and \emph{W2} bands (see \citealt{far16:nar71} for a recent review). Their faint magnitudes push the boundaries of the source-confusion limited detection thresholds of the AllWISE surveys. More concerning, as the \emph{Spitzer Space Telescope} \citep{wer04:apjs154} reaches its end-of-life, the ability to confirm \emph{WISE} infrared excesses for large samples may be lost entirely. The effectiveness of the next generation observatory, the \emph{James Webb Space Telescope} \citep{gar06:ssr123}, to mimic the survey imaging capability of \emph{Spitzer} will be limited by initial slew times that are an order of magnitude larger. This is likely to mean \emph{JWST} cannot support this science effectively for large samples of dusty white dwarfs as are currently being identified with \emph{Gaia} \citep{reb19:mnras489}.
\begin{figure*}[t!]
\gridline{\fig{ATLAS22561.eps}{0.495\textwidth}{ }\fig{EC02566.eps}{0.495\textwidth}{}}
\vspace{-1.2cm}
\gridline{\fig{EC03103.eps}{0.495\textwidth}{}\fig{EC05276.eps}{0.495\textwidth}{}}
\vspace{-1.2cm}
\gridline{\fig{SDSS00021.eps}{0.495\textwidth}{}\fig{SDSS08304.eps}{0.495\textwidth}{}}
\vspace{-1.2cm}
\gridline{\fig{SDSS13054.eps}{0.495\textwidth}{}\fig{SDSS13570.eps}{0.495\textwidth}{}}
\vspace{-0.5cm}
\caption{\added{Ground-based near-infrared and \emph{Spitzer} imaging of the eight false positive candidates is shown in order of increasing wavelength}. From left to right we show \emph{J}, \emph{K}$_s$, IRAC-Ch$\,$1 and IRAC-Ch$\,$2 images centered on the AllWISE source position for each target with a 7.8\arcsec\, circle over-plotted to visualize the \emph{WISE} beam size (1.3\,$\times$\,FWHM in \emph{W1}) \added{as a proxy for the source confusion limit}. The AllWISE pipeline includes an active deblending routine that can resolve up to two sources within this separation, but none of our targets (including those not shown here) were flagged for active deblending. \added{The images succinctly demonstrate that near-infrared imaging is insufficient to rule out source confusion in the \emph{WISE} \emph{W1} and \emph{W2} bands.}\label{fig:imsequence}}
\end{figure*}
In this paper, we present \emph{Spitzer} follow-up of a sample of 22 \emph{WISE}-selected infrared excess candidates around white dwarf stars and discuss the efficacy of techniques to limit the contamination of \emph{WISE}-selected infrared excesses by source confusion. This sample approaches the faint limit of the AllWISE surveys, making it of broader impact to studies of source confusion amongst \emph{WISE}-selected infrared excesses. Using the higher-resolution \emph{Spitzer} data, we confirm the \emph{WISE} infrared excesses in 14/22 systems, with the remaining systems all showing nearby sources within the \emph{WISE} beam.
Prior to their \emph{Spitzer} observations, all of our targets were vetted by examining ground-based near-infrared imaging and astrometric shifts to probe for clear instances of confused \emph{WISE} photometry. None of the eight contaminated systems showed nearby sources in their ground-based near-infrared imaging, demonstrating that such imaging is insufficient to rule out source confusion in the \emph{WISE} bands. We find that the astrometric information is a more useful indicator of the potential for source confusion, but only when considering the full astrometric uncertainty of the surveys involved. Even when applied carefully, we demonstrate that these techniques will not result in a clean sample of excesses, and studies based on \emph{WISE}-selected infrared excesses should always consider a level of contamination when interpreting sample properties.
\section{\emph{Spitzer} View of \emph{WISE} Infrared Excess Candidates}
Our targets were selected from a handful of studies that applied different criteria to identify the infrared excesses (\citealt{den17:apj849}, Gentile-Fusillo et al. in prep). The common property of our targets is an infrared excess in the \emph{WISE} \emph{W1} and \emph{W2} bands consistent with a warm, compact dust disk around a white dwarf star. The \emph{Spitzer} photometry is superior to the \emph{WISE} photometry in both sensitivity and, more importantly, spatial resolution, allowing us to test the possibility that a given \emph{WISE} excess is the result of source confusion. For each target, we searched for instances of multiple sources within the \emph{WISE} beam, and compared the \emph{Spitzer} photometry against stellar models to confirm the \emph{WISE}-selected excess.
\subsection{IRAC Imaging and Photometry}
Under program 14100, we collected 3.6 and 4.5 $\mu$m photometry of 22 dusty white dwarf candidates using the Infrared Array Camera (IRAC; \citealt{faz04:apjs154}) with \emph{Spitzer} in Cycle 14. Ten frames were taken using 30\,s exposures with the medium-sized cycling dither pattern, resulting in 300\,s of total integration in each channel. We produced fully calibrated mosaic images for each target using the MOPEX software package \citep{mak06:spie6274} following the recipes outlined for point-source extraction in the \emph{Spitzer} Data Analysis Cookbook version 6.0. PSF-fitted photometry was conducted using APEX, and the error in the measured flux was summed in quadrature with a 5\% calibration uncertainty \citep{far08:apj674}. It has been demonstrated that well-dithered observations are robust against intra-pixel flux variations at the sub-percent level \citep{wil19:mnras487} so we did not apply any such corrections. The measured fluxes are presented in Table \ref{tab:excesstable}.
For each target, we examined the IRAC-Ch$\,$1 and Ch$\,$2 mosaic images for multiple sources within the \emph{WISE} beam, centered on the AllWISE detection. The critical distance for resolving neighboring sources is 1.3$\,\times\,$the full-width half-maximum of the point-spread function of a given band (7.8\arcsec\,for \emph{W1}). Within this separation, the AllWISE pipeline relies on an active deblending procedure to detect instances of source confusion, triggered by an unsatisfactory fit to the intensity distribution during the point-source fitting photometry routine\footnote{\url{http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4c.html}}. None of our targets were flagged for the active deblending routine so we adopted a 7.8\arcsec\,radius as our limit for potential source confusion.
Eight of our 22 targets have multiple sources within this limit indicating the AllWISE photometry was potentially confused. In Figure \ref{fig:imsequence}, we show 10\arcsec$\times$10\arcsec\, cutouts of the publicly available near-infrared \emph{J} and \emph{K}$_{s}$ and IRAC-Ch$\,$1 and Ch$\,$2 images of these eight targets. We note that in all eight, the nearby contaminants are not detected in any of the near-infrared images. We discuss the efficacy of near-infrared imaging for limiting contamination in \emph{WISE}-selected samples in Section \ref{subsec:nir}.
\subsection{Comparison with Stellar Models}
We constructed SEDs for each target utilizing data from the \emph{Galaxy Evolution Explorer} (\emph{GALEX}; \citealt{mar05:apj619}), Sloan Digital Sky Survey \citep{ahn14:apjs211}, VST-ATLAS survey \citep{sha15:mnras451}, Panoramic Survey Telescope and Rapid Response System \citep{cha16:arxiv}, SkyMapper Southern Survey \citep{wol18:pasa35}, UKIRT Infrared Deep Sky Survey \citep{law07:mnras379}, VISTA Hemisphere Survey (VHS; \citealt{irw04:spie5493, ham08:mnras384, cro12:aap548}), and the AllWISE surveys \citep{cut13:ycat2328}. We de-reddened the photometry using a standard prescription \citep{gen19:mnras482} and converted the magnitudes into fluxes using the published zero-points for each bandpass.
Most of the objects in our sample do not have a published spectrum to help us choose an appropriate stellar model, and have instead only been classified as white dwarf stars. The `EC' objects were first identified with low-resolution spectrograms as part of the Edinburgh-Cape Blue Object Survey \citep{sto97:mnras287}, and later confirmed with targeted follow-up \citep{den17:apj849}. The `ATLAS' and `SDSS' objects were identified as high-probability white dwarf candidates via their photometry and proper motion \citep{gir11:mnras417, gen15:mnras448, gen17:mnras469}. All of our objects were also included in the \emph{Gaia} white dwarf catalog of \cite{gen19:mnras482}, which includes estimates of effective temperature and surface gravity assuming both hydrogen- and helium-dominated atmospheres.
For our stellar models, we utilized the pure hydrogen-dominated white dwarf model spectra of \cite{koe10:memsai81}, with the effective temperature and surface gravity of each star taken from the hydrogen model fits to the \emph{Gaia} photometry \citep{gen19:mnras482}. It should be emphasized that in our comparison of the model to the SED, the model parameters were not re-fit to the photometry; rather, the surface gravity and effective temperature were held fixed and the model was then scaled to fit the optical photometry. Because the goal of this exercise was only to identify the systems with an infrared excess, rather than to fit or describe the infrared excess, this approach was sufficient.
We determined the flux excess of each target in the IRAC-Ch$\,$1 and Ch$\,$2 bands using the standard formula:
\begin{equation}
\chi = \frac{F_{\text{obs}}-F_{\text{mod}}}{\sqrt{\sigma_{\text{obs}}^2 + \sigma_{\text{mod}}^2}}
\end{equation}
where $F_{\text{obs}}$ and $F_{\text{mod}}$ are the observed and model fluxes and $\sigma_{\text{obs}}$ and $\sigma_{\text{mod}}$ their respective uncertainties, and deemed those that have a Ch$\,$1 or Ch$\,$2 flux excess greater than 4$\sigma$ and clean IRAC-Ch$\,$1 and Ch$\,$2 images to be \emph{Spitzer}-confirmed excesses. Targets that showed IRAC photometry consistent with the stellar model and had multiple sources within our 7.8\arcsec\, confusion limit were deemed the result of confused \emph{WISE} photometry. We present an example SED in Figure \ref{fig:ec03103_sed}, which shows an instance of a contaminated excess produced by source confusion. The contaminating sources for this target, EC\,03103, are clearly identifiable in Figure \ref{fig:imsequence}. The remainder of the SEDs are shown in Appendix \ref{appendix}, and in Table \ref{tab:excesstable} we identify the remaining cases of confused \emph{WISE} photometry.
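For illustration, this calculation reduces to a few lines of Python; the model flux and uncertainty in the example below are hypothetical values chosen only to demonstrate the 4$\sigma$ criterion.
\begin{verbatim}
import math

def excess_sigma(f_obs, sig_obs, f_mod, sig_mod):
    """Flux excess significance; all fluxes in micro-Jy."""
    return (f_obs - f_mod) / math.sqrt(sig_obs**2 + sig_mod**2)

# Hypothetical example: an observed flux of 86 +/- 5 uJy against a
# model prediction of 30 +/- 2 uJy yields chi ~ 10.4, well above the
# 4-sigma threshold adopted in the text.
chi = excess_sigma(86.0, 5.0, 30.0, 2.0)
is_excess = chi > 4.0
\end{verbatim}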
Among our sample of 22 targets, we identify eight \emph{WISE}-selected excesses that are the result of source confusion, for a nominal contamination rate of 36\%. It is worth re-emphasizing that our sample had already been vetted for obvious cases of source confusion prior to observation with \emph{Spitzer}, so this 36\% contamination rate only includes cases of source confusion which we were \emph{unable} to rule out with ground-based data. The effectiveness of these vetting techniques is discussed below.
\begin{figure}
\epsscale{1.2}
\plotone{ec0310sed.eps}
\caption{The SED of EC\,03103 demonstrates a case of confused \emph{WISE} photometry (orange) erroneously being classified as an excess. The \emph{Spitzer} photometry of the white dwarf (red) is consistent with the stellar model, and the confused sources that produced the \emph{WISE} excess are clearly resolved with \emph{Spitzer} in Figure \ref{fig:imsequence}. The remainder of the spectral energy distributions are shown in Appendix \ref{appendix}. \label{fig:ec03103_sed}}
\end{figure}
\begin{deluxetable*}{lcccccccc}
\tablecaption{\emph{Spitzer} and \emph{WISE} fluxes for each candidate, separated into \emph{Spitzer} confirmed excesses and confused \emph{WISE} photometry. In the final two columns, we present the \emph{Gaia} Figure of Merit (FoM) and separation from expected position collected from the official \emph{Gaia}-AllWISE cross-match, discussed in Section \ref{subsec:astro}\label{tab:excesstable}}
\tablehead{\colhead{} & \colhead{} & \multicolumn2c{\emph{Spitzer}} & \multicolumn2c{\emph{WISE}} & \colhead{} & \multicolumn2c{\emph{Gaia}}\\
\colhead{Target Name} & \colhead{\emph{Gaia} WD Designation\tablenotemark{a}} & \colhead{Ch$\,$1} & \colhead{Ch$\,$2} & \colhead{\emph{W1}} & \colhead{\emph{W2}} & \colhead{S/N} & \colhead{FoM} &\colhead{Separation}\\
\colhead{} & \colhead{} & \colhead{($\mu$Jy)} & \colhead{($\mu$Jy)} & \colhead{($\mu$Jy)} & \colhead{($\mu$Jy)} & \colhead{(\emph{W1})} & \colhead{} & \colhead{(\arcsec)}}
\startdata
\cutinhead{\emph{Spitzer} Confirmed Excess}
ATLAS\,00254 & WD\,J002540.01--393454.56 & 39$\,\pm\,$3 & 42$\,\pm\,$3 & 34$\,\pm\,$5 & 33$\,\pm\,$9 & 7.3 & 5.5 & 0.66 \\
ATLAS\,02325 & WD\,J023252.01--095745.86 & 49$\,\pm\,$3 & 40$\,\pm\,$3 & 48$\,\pm\,$5 & 41$\,\pm\,$10 & 10.8 & 7.1 & 0.19 \\
ATLAS\,10552 & WD\,J105524.50--023721.13 & 86$\,\pm\,$5 & 86$\,\pm\,$5 & 104$\,\pm\,$7 & 98$\,\pm\,$13 & 16.4 & 0.2 & 0.83 \\
ATLAS\,12123 & WD\,J121236.94--105355.07 & 49$\,\pm\,$3 & 47$\,\pm\,$3 & 43$\,\pm\,$6 & 46$\,\pm\,$12 & 7.8 & 6.2 & 0.47 \\
ATLAS\,15131 & WD\,J151312.71--152352.87 & 35$\,\pm\,$3 & 38$\,\pm\,$3 & 36$\,\pm\,$6 & 30$\,\pm\,$12 & 6.9 & 5.4 & 0.65 \\
ATLAS\,22120 & WD\,J221202.88--135239.96 & 156$\,\pm\,$8 & 156$\,\pm\,$8 & 132$\,\pm\,$7 & 145$\,\pm\,$13 & 19.3 & 8.7 & 0.07 \\
ATLAS\,23403 & WD\,J234036.64--370844.72 & 169$\,\pm\,$9 & 161$\,\pm\,$9 & 155$\,\pm\,$7 & 158$\,\pm\,$11 & 24.4 & 9.4 & 0.05 \\
EC\,01071 & WD\,J010933.16--190117.56 & 95$\,\pm\,$6 & 79$\,\pm\,$5 & 89$\,\pm\,$6 & 99$\,\pm\,$11 & 15.7 & 5.6 & 0.46 \\
EC\,01129 & WD\,J011501.17--520744.67 & 55$\,\pm\,$4 & 34$\,\pm\,$2 & 71$\,\pm\,$6 & 27$\,\pm\,$9 & 14.2 & 6.6 & 0.29 \\
EC\,21548\tablenotemark{b} & WD\,J215823.88--585353.81 & 199$\,\pm\,$11 & 151$\,\pm\,$8 & 205$\,\pm\,$8 & 171$\,\pm\,$10 & 29.1 & -- & -- \\
SDSS\,01190 & WD\,J011909.99+104454.09 & 89$\,\pm\,$5 & 87$\,\pm\,$5 & 90$\,\pm\,$6 & 92$\,\pm\,$11 & 16.0 & 7.5 & 0.22 \\
SDSS\,09355 & WD\,J093553.30+105722.97 & 33$\,\pm\,$3 & 32$\,\pm\,$2 & 37$\,\pm\,$6 & 40$\,\pm\,$12 & 6.5 & 5.0 & 0.85 \\
SDSS\,09514 & WD\,J095144.01+074957.41 & 76$\,\pm\,$5 & 77$\,\pm\,$4 & 65$\,\pm\,$6 & 77$\,\pm\,$12 & 11.2 & 5.5 & 0.56 \\
SDSS\,13125 & WD\,J131251.36+295535.98 & 45$\,\pm\,$3 & 48$\,\pm\,$3 & 38$\,\pm\,$5 & 43$\,\pm\,$10 & 8.4 & 6.6 & 0.31 \\
\cutinhead{Confused \emph{WISE} Photometry}
ATLAS\,22561 & WD\,J225612.92--131938.83 & 91$\,\pm\,$5 & 60$\,\pm\,$4 & 119$\,\pm\,$7 & 74$\,\pm\,$13 & 17.1 & 0.4 & 0.72 \\
EC\,02566 & WD\,J025859.58--175020.33 & 40$\,\pm\,$3 & 22$\,\pm\,$2 & 48$\,\pm\,$5 & 56$\,\pm\,$8 & 11.9 & 3.4 & 0.77 \\
EC\,03103 & WD\,J031121.31--621515.72 & 81$\,\pm\,$5 & 53$\,\pm\,$3 & 157$\,\pm\,$6 & 128$\,\pm\,$8 & 30.4 & 0.0 & 0.42 \\
EC\,05276 & WD\,J052912.10--430334.49 & 71$\,\pm\,$4 & 41$\,\pm\,$3 & 112$\,\pm\,$5 & 65$\,\pm\,$8 & 23.3 & 3.0 & 0.45 \\
SDSS\,00021 & WD\,J000216.18+073350.30 & 30$\,\pm\,$3 & 19$\,\pm\,$2 & 40$\,\pm\,$6 & 37$\,\pm\,$12 & 7.7 & 5.6 & 0.55 \\
SDSS\,08304\tablenotemark{c} & WD\,J083047.28+001041.51 & 28$\,\pm\,$3 & 27$\,\pm\,$2 & 26$\,\pm\,$6 & 35$\,\pm\,$11 & 5.0 & 0.3 & 2.03 \\
SDSS\,13054 & WD\,J130542.73+152541.16 & 37$\,\pm\,$3 & 23$\,\pm\,$2 & 80$\,\pm\,$6 & 57$\,\pm\,$12 & 14.7 & 5.0 & 0.52 \\
SDSS\,13570 & WD\,J135701.68+123145.62 & 9$\,\pm\,$2 & 6$\,\pm\,$1 & 21$\,\pm\,$5 & 18$\,\pm\,$9 & 5.2 & 4.5 & 0.92 \\
\enddata
\tablenotetext{a}{\cite{gen19:mnras482}}
\tablenotetext{b}{The \emph{Gaia}-AllWISE cross-match returned no results for EC\,21548, despite an AllWISE detection within 0.5\arcsec of the expected position. This case is discussed in Section \ref{subsec:astro}.}
\tablenotetext{c}{The measured IRAC-Ch$\,$1 and Ch$\,$2 fluxes are confused with a background galaxy.}
\end{deluxetable*}
\section{Mitigating Contamination in \emph{WISE}-selected Samples}
As \emph{Spitzer} nears the end of its operational lifetime, it is worth considering which techniques are effective at separating the clean from the confused amongst \emph{WISE}-selected infrared excess samples. Recent works have explored this subject using samples of main-sequence stars \citep{pat17:aj153,sil18:apj868}, but the infrared excesses exhibited by dusty debris around white dwarf stars are much fainter, and typically only detected in the \emph{W1} and \emph{W2} bands. Furthermore, white dwarf infrared excess searches are often limited to a few dozen candidates, so statistical methods for isolating outliers (such as demonstrated by \citealt{pat17:aj153}) are untenable. In the following sections, we consider a few commonly employed strategies and discuss their effectiveness based on our classifications with \emph{Spitzer}.
\subsection{Ground-based Near-infrared Imaging \label{subsec:nir}}
In the absence of space-based follow-up, ground-based near-infrared imaging can be used to search for instances of multiple sources within the \emph{WISE} imaging beam. The Two Micron All Sky Survey \citep{skr06:aj131} is insufficient in both depth and resolution for these purposes. The UKIDSS Large Area Survey \citep{law07:mnras379} and the VISTA-VHS \citep{mcm13:msngr154} have depths of \emph{K}$\approx18.2$ mag and \emph{K}$_s\approx19.8$ mag, and their images have proven useful for quantifying levels of source confusion (e.g. \citealt{deb11:apj729}, \citealt{den16:apj831}). In the absence of publicly available imaging, targeted programs can also be used to cull samples of \emph{WISE}-selected infrared excesses \citep{bar12:apj760}. Near-infrared imaging is preferred to optical in order to get as close as possible to the bandpass of \emph{WISE} images. Ultracool dwarfs only become apparent beyond 1\,$\mu$m \citep{bar15:aap577} and dusty background galaxies can rise in flux as a power law at the \emph{WISE} wavelengths, escaping detection at optical and even near-infrared wavelengths.
Prior to their selection for follow-up with \emph{Spitzer}, all 22 of our targets were vetted for nearby sources within the \emph{WISE} beam using high-quality, ground-based near-infrared images. In Figure \ref{fig:imsequence}, we show the \emph{J} and \emph{K}$_s$-band images for the eight contaminated targets. It is apparent from these image sequences that a clean near-infrared image is insufficient to confirm a \emph{WISE}-selected infrared excess candidate. Near-infrared imaging is however a valuable tool for ruling out \emph{WISE}-selected infrared excess candidates in cases where a clear, nearby source can be identified. It should always be considered for vetting candidates when available.
\subsection{Astrometric Separation\label{subsec:astro}}
Another method to assess the potential for source confusion of a \emph{WISE}-selected infrared excess is to compare the expected position of the target to its detected AllWISE position. A sufficiently bright and nearby contaminant can be expected to shift the centroid of the detected source in the \emph{WISE} images, indicating source confusion \citep{wil17:mnras468,wil18:mnras481}.
Prior to their \emph{Spitzer} observations, our candidates were also vetted for large separations between their expected, proper motion-corrected \emph{Gaia} position and their detected AllWISE position. All but one candidate were found within 1\arcsec\, of their proper motion-corrected positions. The contamination rate in our sample indicates that at the sub-arcsecond level, the raw separation between the expected and detected position is a poor indicator of source confusion. This can be seen by comparing the separations in Table \ref{tab:excesstable}, where there is large scatter and overlap between the confirmed and confused samples. The cause of this scatter is the wide range of \emph{WISE} astrometric uncertainty among our targets, a by-product of our sample lying near the faint end of the AllWISE detection limits. Incorporating this astrometric uncertainty is essential for discriminating clean and confused \emph{WISE} photometry, as discussed below.
\subsection{The \emph{Gaia} Figure of Merit as a Confusion Discriminant}
The astrometric uncertainty of \emph{WISE} is known to be inversely proportional to the signal-to-noise (S/N) of the detection. For the \emph{W1} band, this relationship can be approximated as $3.0\arcsec/(\mathrm{S/N})$ \citep{cut13:allwise,deb11:apjs197}. At the 5$\sigma$ detection limits of AllWISE, the astrometric uncertainty reaches 0.6\arcsec, meaning that in samples of a few hundred one reasonably expects several true detections of objects at separations greater than 0.5\arcsec. Conversely, and perhaps more detrimental, an object with high S/N within a separation of 0.5\arcsec\,could in fact be several standard deviations away from its expected position. Both cases emphasize that raw separations should not be directly compared between bright and faint objects; instead, the individual astrometric uncertainty must be considered.
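To make this scaling concrete, the short sketch below evaluates the significance of a positional offset given the \emph{W1} S/N, neglecting the comparatively small \emph{Gaia} positional uncertainty; the two examples use values from Table \ref{tab:excesstable}.
\begin{verbatim}
def w1_astrometric_sigma(snr):
    """Approximate AllWISE W1 positional uncertainty (arcsec)."""
    return 3.0 / snr

def separation_significance(sep_arcsec, snr):
    """Offset in units of the W1 astrometric uncertainty."""
    return sep_arcsec / w1_astrometric_sigma(snr)

# ATLAS 10552: 0.83" at S/N = 16.4 is a ~4.5-sigma perturbation,
# while SDSS 09355: 0.85" at S/N = 6.5 is only ~1.8 sigma.
print(separation_significance(0.83, 16.4))  # ~4.5
print(separation_significance(0.85, 6.5))   # ~1.8
\end{verbatim}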
The framework developed for probability-based cross-matches provides a useful way to incorporate the astrometric uncertainty into the evaluation of whether or not the \emph{WISE} astrometric position is likely perturbed (see \citealt{wil18:mnras481} for an example). Additionally, the positional accuracy and proper motions provided by the \emph{Gaia} Data Release 2 \citep{gaia16:aap595,gaia18:aa616} provide an excellent reference position. As part of \emph{Gaia} DR2, cross-matched catalogs between several optical and near-infrared surveys were produced based on probabilistic, nearest-neighbor approaches \citep{mar17:aap607, mar19:aap621} that incorporate the astrometric uncertainty of each survey, the epoch differences between the catalogs, and the probability of randomly finding a nearby, unrelated counterpart in a survey given the local source count density.
\begin{figure}
\epsscale{1.2}
\plotone{fom_vs_snr.eps}
\caption{The \emph{Gaia} Figure of Merit is plotted against the AllWISE S/N for each object, with confirmed excesses shown as circles and excesses due to \emph{WISE} source confusion as crosses. The color scale represents the separation between the expected position and the AllWISE detection. Candidates with a high \emph{W1} S/N but low FoM score are likely cases of confused AllWISE photometry. \label{fig:fomvssnr}}
\end{figure}
The cross-match algorithm works by first searching for all possible counterparts (dubbed neighbors) in a given catalog within 5$\sigma$ of the combined astrometric uncertainty of the object in \emph{Gaia} and the neighbors in the catalog of interest. The Figure of Merit (FoM) is computed for each potential neighbor by comparing the probabilities of discovery of the object at the measured separation and the probability of chance alignment. The counterpart with the highest FoM is selected as the match and reported in the \emph{bestNeighbor} table \citep{mar17:aap607}. All neighbors for each cross-match are listed in the corresponding \emph{Neighborhood} table.
There is no fixed threshold for the FoM score with which to evaluate the goodness of a match; that is to say, the FoM does not translate directly into a likelihood. For the AllWISE catalog, this dimensionless parameter ranges from $7.0\times10^{-5}$ to 15.5 \citep{mar19:aap621}, with a strong dependence on the astrometric uncertainty of the counterpart in AllWISE. As the AllWISE astrometric uncertainty is inversely proportional to the S/N of the detection, one expects a relationship between the \emph{W1} S/N and the \emph{Gaia} FoM score. We queried the \emph{Gaia} \emph{Neighborhood} catalog for each of our targets and collected the recorded separation and FoM score of the best neighbor identified in the \emph{Gaia} cross-match.
Figure \ref{fig:fomvssnr} demonstrates a strong relationship between the \emph{Gaia} FoM score and the \emph{W1} S/N, where the majority of the outliers are cases of confused AllWISE photometry. Based on this, we conclude that excesses with S/N $>\,$10 but FoM $<\,$4 are likely the result of source confusion. There is one object in this region, ATLAS\,10552, that is a confirmed excess. A closer inspection of the images and SED for ATLAS\,10552 indicates it is the rare case where the AllWISE photometry was confused in addition to the white dwarf having a true infrared excess: there is a faint, nearby source, and the IRAC fluxes are slightly below the AllWISE fluxes.
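As a minimal sketch, this cut can be applied directly to the values in Table \ref{tab:excesstable}; the helper function below is purely illustrative.
\begin{verbatim}
# A subset of (name, W1 S/N, Gaia FoM) values from Table 1.
candidates = [
    ("ATLAS10552", 16.4, 0.2),  # confirmed excess (false positive)
    ("ATLAS22561", 17.1, 0.4),  # confused WISE photometry
    ("EC03103",    30.4, 0.0),  # confused WISE photometry
    ("SDSS13054",  14.7, 5.0),  # confused WISE photometry
]

def likely_confused(snr, fom, snr_min=10.0, fom_max=4.0):
    """Flag candidates with high S/N but a low Figure of Merit."""
    return snr > snr_min and fom < fom_max

for name, snr, fom in candidates:
    print(name, likely_confused(snr, fom))
# Flags the first three, but misses SDSS13054, whose multiple faint
# contaminants produce little net astrometric perturbation.
\end{verbatim}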
Another object, EC\,21548, returned no neighbors in the \emph{Gaia}-AllWISE cross-match, i.e., no AllWISE source is associated with the \emph{Gaia} detection within 5$\sigma$ of the combined astrometric uncertainty. The AllWISE photometry we associated with EC\,21548 corresponds to a source found at a separation of 0.5\arcsec\,from the expected position of EC\,21548. The \emph{Spitzer} images show a single source near the expected position of EC\,21548, and the IRAC-Ch$\,$1 and Ch$\,$2 fluxes agree with the AllWISE \emph{W1} and \emph{W2} fluxes, leaving it unclear why the \emph{Gaia} and nearest AllWISE coordinates are so discrepant. Its exclusion from the cross-match could indicate some unaccounted-for systematic uncertainty in the AllWISE astrometry, or it could simply be spurious. Whatever the case, it is another good example of a target that would have been erroneously rejected by the astrometric uncertainty cut proposed above.
In addition to the two confirmed excesses that would have been rejected, a few cases of confused \emph{WISE} photometry are not distinguished by this method. SDSS\,00021, SDSS\,13054, and SDSS\,13570 all lie near the sample of confirmed infrared excesses. The first is a case of a statistically weak infrared excess, and can be discarded for the purpose of evaluating this technique. Referencing the \emph{Spitzer} images in Figure \ref{fig:imsequence}, we see that the remaining two have multiple sources contaminating the AllWISE photometry, resulting in a smaller positional perturbation than cases where one contaminant is responsible for the AllWISE positional offset.
In general, the \emph{Gaia} FoM is a useful discriminant for identifying confused \emph{WISE} photometry, having correctly identified five of the eight confused sources in our sample. Applying this technique would have come at a cost, though, as two confirmed excesses would have been rejected, and the two cases of multiple contaminants that produce little astrometric perturbation would have been missed. These results emphasize that even advanced astrometric methods will fail to produce clean samples of \emph{WISE}-selected infrared excesses.
\subsection{Proper Motion Comparison}
Related to the astrometric test, one can also compare the proper motions measured by \emph{WISE} and \emph{Gaia} to test the validity of a \emph{WISE}-selected infrared excess \citep{deb19:apjl872}. This effectively repeats the astrometric experiment with a series of independent measurements over time. Given its six-month baseline, the initial \emph{WISE} proper motions are not sufficient for comparison with \emph{Gaia}, but the continued observations of the \emph{NEOWISE} mission \citep{mai14:apj792} have provided a six-year baseline allowing for improved motion measurements. The CatWISE Preliminary catalog \citep{eis19:arxiv} provides new photometry and proper motion measurements using the original AllWISE processing techniques for data collected between 2010 and 2016, providing a factor of ten improvement over the original AllWISE proper motion measurements, in addition to improving the depth and positional accuracy of sources as compared to AllWISE.
The proper motion accuracy in CatWISE is 10 mas\,yr$^{-1}$ for bright sources, 30 mas\,yr$^{-1}$ at \emph{W1}\,$\approx$\,15.5 mag, and 100 mas\,yr$^{-1}$ at \emph{W1}\,$\approx$\,17 mag, so an object must either be sufficiently bright or have a sufficiently high proper motion to perform this test. Two of our objects meet this criterion, EC\,03103 and EC\,05276, and their reported proper motions are given in Table \ref{tab:pmtable}. Both objects have discrepant proper motions in \emph{Gaia} and CatWISE, consistent with their classification of having confused \emph{WISE} photometry. Unfortunately, the sample size is not sufficient to evaluate the efficacy of this technique, but the two cases of confirmed source confusion demonstrate that it is a worthwhile check for large surveys of \emph{WISE} infrared excesses.
\begin{deluxetable}{lr}
\tablecaption{Comparison of \emph{Gaia} and CatWISE proper motion measurements for two candidates in our sample. All proper motion measurements are given in units of mas\,yr$^{-1}$\label{tab:pmtable}}
\tablehead{\multicolumn1l{EC\,03103} & \colhead{}}
\startdata
\emph{Gaia} Source ID & 4720876181720327808 \\
\emph{Gaia} DR2 $\mu_\alpha$\,cos($\delta$) & 404.8$\,\pm\,$0.2 \\
\emph{Gaia} DR2 $\mu_\delta$ & 57.6$\,\pm\,$0.2 \\
CatWISE Source Name & J031122.06-621515.2 \\
CatWISE $\mu_\alpha$\,cos($\delta$) & 279.9$\,\pm\,$21.2 \\
CatWISE $\mu_\delta$ & 80.3$\,\pm\,$19.3\\
\hline
\sidehead{EC\,05276}
\hline
\emph{Gaia} Source ID & 4805782462481529600 \\
\emph{Gaia} DR2 $\mu_\alpha$\,cos($\delta$) & -37.3$\,\pm\,$ 0.1 \\
\emph{Gaia} DR2 $\mu_\delta$ & 15.3$\,\pm\,$0.1 \\
CatWISE Source Name & J052912.09-430334.8 \\
CatWISE $\mu_\alpha$\,cos($\delta$) & -397.3$\,\pm\,$34.4 \\
CatWISE $\mu_\delta$ & 358.5$\,\pm\,$35.7\\
\enddata
\end{deluxetable}
\section{Conclusions}
Among the sample of 22 \emph{WISE}-selected dusty white dwarf candidates, we find that eight are the result of source confusion, despite our attempts at vetting the sample prior to \emph{Spitzer} observation. We show that ground-based, near-infrared imaging is insufficient for detecting the contaminants in our sample, but should still be employed when vetting candidates to rule out more obvious cases of source confusion. Astrometric filtering of candidates on the fainter end of the \emph{WISE} catalog should also take into account the astrometric uncertainty, and we demonstrate the utility of filtering candidates using the Figure of Merit metric from the official \emph{Gaia}-AllWISE cross-match.
However, even when applying these techniques in combination one will fail to produce a clean sample of \emph{WISE}-selected infrared excesses, and care must be taken when interpreting the statistical properties of \emph{WISE}-selected infrared excesses. The fact remains that \emph{WISE}-selected infrared excess candidates should be treated as guilty until proven innocent. \deleted{and the best available facility capable of passing judgment is \emph{Spitzer}}\added{The confusion limit is inherent to the \emph{WISE} telescope and cannot be remedied by advanced processing. Future studies of \emph{WISE}-selected infrared excesses utilizing the new co-adds and increased depth of the continued NEOWISE mission \citep{sch19:apjs240} could suffer from even higher contamination rates, as the survey depth is pushed further and further past the confusion limit.}
The 14 confirmed excesses in our sample could also provide a substantial increase to the known sample of dusty white dwarf stars, which currently stands between 40 and 50 systems \citep{far16:nar71}. We emphasize that our confirmation does not signify their status as dusty white dwarf stars, as we cannot preclude the possibility of a brown dwarf companion as the source of the infrared excess. To date, all confirmed dusty white dwarf stars have also shown signs of active accretion detectable as atmospheric metals, and the search for these is a necessary step in solidifying an infrared excess as circumstellar dust. Only one of the 14 \emph{Spitzer}-confirmed excesses in our sample has a literature detection of metals (EC\,01071; \citealt{den17:apj849}), and we are currently pursuing high-resolution spectroscopic follow-up of the remaining candidates.
\acknowledgments
We would like to acknowledge Boris G\"{a}nsicke for comments and suggestions which improved this manuscript, and the anonymous referee for providing a swift and helpful report. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.\\
\vspace{5mm}
\facilities{IRSA, Spitzer, WISE, Gaia}
\software{astropy \citep{ast13:aap558}}
\clearpage
\section{INTRODUCTION}
Due to the prominent demand for both quality and safety in surgery, it is essential for surgeon trainees to achieve required proficiency levels before operating on patients~\cite{roberts2006evolution}.
An absence of adequate training can significantly compromise the clinical outcome, which has been shown in numerous studies~\cite{reznick2006teaching,aggarwal2010training,birkmeyer2013surgical}.
Effective training and reliable methods to assess surgical skills are thus critical in supporting trainees in technical skill acquisition~\cite{darzi2001assessment}.
Simultaneously, current surgical training is undergoing significant changes with a rapid uptake of minimally invasive robot-assisted surgery.
However, despite advances of surgical technology, most assessments of trainee skills are still performed via outcome-based analysis~\cite{bridgewater2003surgeon}, structured checklists, and rating scales~\cite{goh2012gears, aghazadeh2015GEARS,niitsu2013OSAT}. Such assessment requires large amounts of expert monitoring and manual ratings, and can be inconsistent due to biases in human interpretations~\cite{reiley2011objective}. Considering the increasing attention to the efficiency and effectiveness of assessment and targeted feedback, conventional methods are no longer adequate in advanced surgery settings~\cite{vedula2017objective}.
Modern robot-assisted surgical systems are able to collect large amounts of sensory data from surgical robots or simulators~\cite{moustris2011robotsurgery}. These high-volume data could reveal valuable information related to the skills and proficiencies of the operator.
However, analyzing such complex surgical data can be challenging.
Specifically, surgical motion profiles are by nature nonlinear, non-stationary stochastic processes~\cite{cheng2015time,klonowski2009everything} with large variability, both throughout a procedure as well as within repetitions of the same type of surgical task (e.g., suture throws)~\cite{reiley2009task}.
In addition, the high dimensionality of the data creates an additional challenge for accurate and robust skill assessments~\cite{reiley2011objective}. Further, although several surgical assessment methods have been developed, methods to autonomously coach the trainee are lacking.
Towards this aim, there is a great need to develop techniques for quicker and more effective surgical skill acquisition~\cite{kassahun2016surgical,vedula2017objective}.
In this paper, we are particularly interested in online skill assessment methods that could pave the way for autonomous surgical coaching.
\vspace{-10pt}
\subsection{Previous Approaches in Objective Skill Assessment}
Different objective skill assessment techniques have been reported in the literature~\cite{kassahun2016surgical}. Current approaches with a focus on surgical motions can be divided into two main categories: 1) descriptive statistical analysis, and 2) predictive modeling-based methods.
Descriptive statistical analysis aims to compute features from motion observations that quantitatively describe skill levels. Specifically, summary features, such as movement time~\cite{judkins2009objective,liang2017motion,trejos2014force}, path length~\cite{judkins2009objective}, motion jerk~\cite{liang2017motion}, curvature~\cite{judkins2009objective}, etc., are widely used and have been shown to correlate highly with surgical skills. Other novel measures of motion, such as energy expenditure~\cite{poursartip2017analysis}, semantic labels~\cite{ershad2016meaningful}, tool orientation~\cite{sharon2017ori_basedmetrics}, force~\cite{trejos2014force}, etc., can also provide discriminative information in measuring skills.
However, this approach involves manual feature engineering, and requires task-specific knowledge and significant effort to design optimal skill metrics~\cite{shackelford2017metrics}. In fact, defining the best metrics to capture adequate information while generalizing across different types of surgery or groups of surgeons remains an open problem~\cite{judkins2009objective,kassahun2016surgical,fard2018automated,stefanidis2009metrics}.
In contrast to descriptive analysis, predictive modeling-based methods aim to predict surgical skills from motion data. These methods can be further categorized into 1) descriptive and 2) generative modeling. In descriptive modeling, models are learnt by transforming raw motion data into intermediate interpretations and summary features. Coupled with advanced feature selection, these pre-defined representations are subsequently fed into learning models as inputs for skill assessment. In the literature, machine learning (ML) algorithms such as $k$-nearest neighbors (kNN), logistic regression (LR), support vector machines (SVM), and linear discriminant analysis (LDA) are commonly explored for this modeling. Such algorithms have yielded skill prediction accuracies between 61.1\% and 95.2\%~\cite{chmarra2010skills,vedula2016taskseg,fard2018automated,poursartip2017energy}.
Forestier \emph{et al.} developed a novel vector space model (VSM) to assess skills via learning from the \textit{bag of words}, a collection of discretized local features (strings) obtained from motion data~\cite{forestier2017jigsaw}. In~\cite{brown2017using}, Brown \emph{et al.} explored an ensemble approach, which combines multiple ML algorithms for modeling, and was able to predict rating scores with moderate accuracies (51.7\% to 75.0\%).
More recently, Zia \emph{et al.} utilized nearest neighbor (NN) classifiers with a novel feature fusion (texture-, frequency- and entropy-based features) and further improved skill assessment with accuracy ranging from 99.7\% to 100\%~\cite{zia2018skill}.
Although the descriptive modeling-based approaches demonstrate their value in revealing skill patterns and underlying operation structures, model accuracy and validity are typically limited by the quality of the extracted features. Considering the complex nature of surgical motion profiles, critical information can be discarded in the feature extraction and selection process.
Alternatively, in generative modeling, temporal motion data are usually segmented into a series of predefined rudimentary gestures for certain surgical tasks. Using generative modeling algorithms, such as the Hidden Markov Model (HMM) and its variants, class-specific skill models have been trained for each skill level, achieving accuracies ranging from 94.4\% to 100\%~\cite{reiley2009task,tao2012sparse}.
However, the segmentation of surgical gestures from surgeon motions can be a strenuous process. HMMs usually require large amounts of time and computational effort for parameter tuning and model development.
Further, one typical deficiency is that the skill assessment is obtained at the global task level, i.e., at the end of each operation, requiring an entire observation for each trial. This drawback potentially undermines the goal of efficient online surgical skill assessment.
\vspace{-10pt}
\subsection{Proposed Approach}
Deep learning, also referred to as deep structured learning, is a set of learning methods that allow a machine to automatically process and learn from input data via hierarchical layers from low to high levels~\cite{2015natureDL,2015DL_review}.
These algorithms perform feature self-learning to progressively discover abstract representations during the training process.
Due to its superiority in complex pattern recognition, this approach dramatically improves the state of the art. Currently, deep learning models have achieved success in strategic games~\cite{silver2016alphaGO}, speech recognition~\cite{graves2013speech}, medical imaging~\cite{esteva2017dermatologist}, health informatics~\cite{Ng2017cardiologist}, and more.
In the study of robotic surgical training, DiPietro~\emph{et al.} were the first to apply deep learning, based on Recurrent Neural Networks, to gesture and high-level task recognition~\cite{dipietro2016recognizing}.
Still, relatively little work has been done to explore deep learning approaches for surgical skill assessment.
\begin{figure*}[tb]
\centering
\includegraphics [width=0.95\linewidth,clip,trim=5pt 20pt 0pt 35pt]{fig1}
\caption{ {\bf An end-to-end framework for online skill assessment in robot-assisted minimally-invasive surgery.} The framework utilizes window sequences of multivariate motion data as an input, recorded from robot end-effectors, and outputs a discriminative assessment of surgical skills via a deep learning architecture.}
\label{fig: fig1}
\end{figure*}
In this paper, we introduce and evaluate the applicability of deep learning for proficient surgical skill assessment. Specifically, a novel analytical framework with a deep surgical skill model is proposed to directly process multivariate time series via automatic learning. We hypothesize that this learning-based approach can exploit the intrinsic motion characteristics underlying surgical skill and promote optimal performance in online skill assessment systems.
Fig.~\ref{fig: fig1} shows the end-to-end pipeline framework. Without performing manual feature extraction and selection, latent feature learning is automatically employed on multivariate motion data and directly outputs classifications. To validate our approach, we conduct experiments on the public robotic surgery dataset, JIGSAWS~\cite{gao2014JIGSAW}, analyzing three independent training tasks: \textit{Suturing} (SU), \textit{Needle-passing} (NP), and \textit{Knot-tying} (KT).
To the best of our knowledge, it is the first study to employ a deep architecture for an objective surgical skill analysis. The main contributions of this paper can be summarized as:
\begin{enumerate}
\item[--] A novel end-to-end analytical framework with deep learning for skill assessment based on high-level analysis of surgical motion.
\item[--] Experimental evaluation of our proposed deep skill model.
\item[--] Application of data augmentation to mitigate the limitations of the small-scale JIGSAWS dataset, discussion of the effect of labeling approaches on assessment accuracy, and exploration of validation schemes applicable to deep-learning-based development.
\end{enumerate}
In the remainder of this paper we first present our proposed approach and implementation details in Section~\ref{sec: model}.
We then conduct experiments on the JIGSAWS dataset to validate the model in Section~\ref{sec: experiments}. Data pre-processing, training, and evaluation approaches are given. Then, we present our results in Section~\ref{sec: results} and discussion in Section~\ref{sec: discussion}. Last, we conclude this paper in Section \ref{sec:conclusion}.
\begin{figure*}[tb]
\centering
\includegraphics [width=1\linewidth,clip,trim=0pt 40pt 0pt 5pt]{fig2}
\caption{ {\bf Illustrations of the proposed deep architecture using a 10-layer convolutional neural network.} The window width $W$ used in this example is 60. Starting from the inputs, this model consists of three conv-pool stages with a convolution and max-pooling each, one flatten layer, two fully-connected layers, and one softmax layer for outputs. Note that the max-pooling dropout (with probability of 20\%) and fully-connected dropout (with probability of 50\%) is applied during training.}
\label{fig: fig2}
\end{figure*}
\section{DEEP SURGICAL SKILL CLASSIFICATION MODEL}
\label{sec: model}
Our deep learning model for surgical skill assessment is motivated by studies in multiple domains~\cite{2015DL_review,langkvist2014review,gamboa2017deeptimeseries}. In this section, we introduce a deep architecture using a Convolutional Neural Network (CNN) to assess surgical skills via end-to-end classification.
\subsection{Problem Formulation}
Here, the assessment of surgical skills is formalized as a supervised three-class classification problem, where the input is a multivariate time series (MTS) of motion kinematics measured from surgical robot end-effectors, $X$, and the output is the predicted label representing the corresponding expertise level of the trainee, which can be one-hot encoded as $y \in \{1: ``Novice'', 2: ``Intermediate'', 3: ``Expert''\}$. Typically, ground-truth skill labels are acquired from expert ratings, crowdsourcing, or self-reported experience.
The objective cost function for training the network is defined as a multinomial cross-entropy cost, $J$, as shown in Eq.~\ref{Eq: cross-entropy}.
\begin{equation}
\label{Eq: cross-entropy}
J(\theta) = -\displaystyle\sum_{i=1}^{m}\displaystyle\sum_{k=1}^{K}{ 1 \{y^{(i)}=k \} \log{p(y^{(i)}=k | x^{(i)}; \theta )} }
\end{equation}
where $m$ is the total number of training examples, $K$ is the class number, $K=3$, and $p(y^{(i)}=k | x^{(i)}; \theta)$ is the conditional likelihood that the predicted label $y^{(i)}$ on a single training example $x^{(i)}$ is assigned to class $k\in K$, given specific trained model parameters $\theta$.
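For reference, a minimal NumPy sketch of this cost follows, where \texttt{probs} holds the softmax outputs and \texttt{labels} the integer class indices.
\begin{verbatim}
import numpy as np

def cross_entropy_cost(probs, labels):
    """Multinomial cross-entropy summed over the training examples.

    probs:  (m, K) array of softmax outputs p(y = k | x; theta)
    labels: (m,) array of integer class indices in {0, ..., K-1}
    """
    m = probs.shape[0]
    # Probability assigned to the true class of each example.
    p_true = probs[np.arange(m), labels]
    return -np.sum(np.log(p_true))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
print(cross_entropy_cost(probs, np.array([0, 1])))  # ~0.58
\end{verbatim}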
\subsection{Model Architecture}
The architecture of the proposed neural network consists of five types of layers: convolutional layer, pooling layer, flatten layer, fully-connected layer and softmax layer.
Fig.~\ref{fig: fig2} shows a 10-layer working architecture and the parameter settings used in the network. Note that the depth of the network was chosen by trial and error during the training/validation procedure.
The network takes as input a window of length $W$ sliced from the $C$-channel sensory measurements, i.e., a $W\times C$ matrix, where $C$ is the number of channels, or dimensions, of the input time series.
Then, input samples are first processed by three convolution-pooling (Conv-pool) stages, where each stage consists of a convolution layer and a max-pooling layer.
Each convolution layer has a different number of kernels of size 2, and each kernel is convolved with the layer's input matrix with a stride of 1. Specifically, the first convolution layer (\textit{Conv1}) filters the $W\times 38$ input matrix with 38 kernels; the second (\textit{Conv2}) filters the output of the previous layer with 76 kernels; and the third (\textit{Conv3}) filters with 152 kernels. To reduce the dimensionality of the feature maps and avoid overfitting, each convolution is followed by a max-pooling operation. The max-pooling operations take the output of the convolution layer as input and downsample the extracted feature maps, replacing each local input patch with the maximum value of each channel over the patch. The pooling size is set to 2 with a stride of 2.
In this network, we use the rectified linear unit (ReLU) as the activation function to add nonlinearity in all convolutional layers and fully-connected layers~\cite{nair2010relu}. Finally, we apply a softmax logistic regression to produce a probability distribution over the three classes at the output layer.
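For concreteness, a minimal Keras sketch of this architecture is given below. The widths of the two fully-connected layers are not specified above, so the 256 and 128 units shown are illustrative assumptions, and the dropout applied after each pooling layer approximates the max-pooling dropout described in the next subsection.
\begin{verbatim}
from keras.models import Sequential
from keras.layers import (Conv1D, MaxPooling1D, Dropout,
                          Flatten, Dense)

W, C, K = 60, 38, 3  # window width, input channels, classes

model = Sequential([
    Conv1D(38, 2, strides=1, activation='relu',
           input_shape=(W, C)),                   # Conv1
    MaxPooling1D(pool_size=2, strides=2),
    Dropout(0.2),                                 # max-pooling dropout
    Conv1D(76, 2, strides=1, activation='relu'),  # Conv2
    MaxPooling1D(pool_size=2, strides=2),
    Dropout(0.2),
    Conv1D(152, 2, strides=1, activation='relu'), # Conv3
    MaxPooling1D(pool_size=2, strides=2),
    Dropout(0.2),
    Flatten(),
    Dense(256, activation='relu'),                # assumed width
    Dropout(0.5),                                 # fully-connected dropout
    Dense(128, activation='relu'),                # assumed width
    Dropout(0.5),
    Dense(K, activation='softmax'),
])
\end{verbatim}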
\subsection{Implementation}
To implement the proposed architecture, the deep learning skill model is trained from scratch, without requiring any pre-trained model. The network is implemented using the Keras library with a Tensorflow backend under Python 3.6~\cite{chollet2015keras}. We first initialize the parameters at each layer using the Xavier initialization method~\cite{2015DL_review}: biases are initialized to zero, and the weights at each layer are drawn from a Gaussian distribution with mean 0 and variance $1/N$, where $N$ is the number of neurons in the previous layer.
During the optimization, our network is trained end-to-end by minimizing the multinomial cross-entropy cost between the predicted and ground-truth labels, as defined in Eq.~\ref{Eq: cross-entropy}, with a learning rate $\epsilon$ of 0.0001. To train the network efficiently, we run mini-batch updates of gradient descent, which compute the parameter updates on a subset of the training data at each iteration~\cite{li2014minibatch}. The mini-batch size is set to 600, and a total of 300 training epochs are run.
The network parameters are optimized by the Adam solver~\cite{kingma2014adam}, which computes adaptive learning rates for each parameter via estimates of the first and second moments of the gradients. The exponential decay rates of the first and second moment estimates are set to 0.9 and 0.999, respectively.
Also, to achieve better generalization and model performance, we apply stochastic dropout regularization to our neural network during training, in which components of the outputs of specific layers are randomly dropped with a specified probability~\cite{srivastava2014dropout}. This method has proven effective at reducing over-fitting in complex deep learning models~\cite{wu2015maxpolldrop}. In this study, we implement two dropout strategies: max-pooling dropout on the max-pooling layers after the ReLU non-linearity, and fully-connected dropout on the fully-connected layers. The dropout probabilities for the max-pooling and fully-connected dropout are set to 0.2 and 0.5, respectively.
As mentioned above, the hyper-parameters used for the CNN implementation include the learning rate, mini-batch size, number of epochs, number of filters, stride and size of kernels, and the dropout rates in the max-pooling and fully-connected layers. These hyper-parameters are chosen and fine-tuned using a validation set split from the training data. We save the best model, as evaluated on the validation data, in order to obtain optimal prediction performance.
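A sketch of the corresponding training configuration follows; the placeholder arrays stand in for the crops and one-hot labels produced by the data preparation described in Section~\ref{sec: experiments}, and the validation split shown is an assumption.
\begin{verbatim}
import numpy as np
from keras.optimizers import Adam

# Placeholder data standing in for the prepared window crops.
x_train = np.random.randn(1200, 60, 38)
y_train = np.eye(3)[np.random.randint(3, size=1200)]
x_val = np.random.randn(300, 60, 38)
y_val = np.eye(3)[np.random.randint(3, size=300)]

model.compile(optimizer=Adam(lr=1e-4, beta_1=0.9, beta_2=0.999),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=600, epochs=300,
                    validation_data=(x_val, y_val))
\end{verbatim}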
\section{EXPERIMENT SETUP}
\label{sec: experiments}
\subsection{Dataset}
Our dataset comes from the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), the only publicly available minimally invasive surgical database, which was collected from the \textit{da Vinci} tele-robotic surgical system~\cite{gao2014JIGSAW,ahmidi2017dataset}.
The \textit{da Vinci} robot comprises two master tool manipulators (MTMs) on the left and right sides, two patient-side slave manipulators (PSMs), and an endoscopic camera arm. Robot motion data are captured (sampling frequency 30 Hz) as multivariate time series with 19 measurements for each end-effector: tool tip Cartesian positions ($x$, $y$, $z$), rotations (denoted by a $3 \times 3$ matrix $R$), linear velocities ($v_x$, $v_y$, $v_z$), angular velocities ($\omega_x'$, $\omega_y'$, $\omega_z'$), and the gripper angular velocity $\alpha$. Details of the JIGSAWS kinematic motion data are summarized in Table~\ref{tab: JIGSAW}.
The dataset contains recordings from eight surgeons with varying robotic surgical experience. Each surgeon performed three different training tasks, namely, \textit{Suturing} (SU), \textit{Knot-tying} (KT), and \textit{Needle-passing} (NP), and repeated each task five times. All three tasks are typically standard components in surgical skill training curricula~\cite{gao2014JIGSAW}. An illustration of the three operation tasks is shown in Fig.~\ref{fig: fig3}.
The two ways in which skill labels are reported in the JIGSAWS dataset are: (1) self-proclaimed skill labels based on practice hours, with \textit{expert} reporting greater than 100 hours, \textit{intermediate} between 10 and 100 hours, and \textit{novice} reporting less than 10 hours of total surgical robotic operation time, and (2) a modified global rating scale (GRS) ranging from 6 to 30, manually graded by an experienced surgeon.
In this study, we use the self-proclaimed skill levels and the GRS-based skill levels as the ground-truth labels for each surgical trial, respectively. In order to label surgeons' skill levels using GRS scores, inspired by~\cite{fard2018automated}, thresholds of 15 and 20 are used to divide surgeons into \textit{novice}, \textit{intermediate}, and \textit{expert} in the tasks of \textit{Needle-passing} and \textit{Knot-tying}, and thresholds of 19 and 24 are used in \textit{Suturing}.
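A minimal sketch of this labeling scheme follows; the handling of scores that fall exactly on a threshold is our assumption, as it is not specified above.
\begin{verbatim}
GRS_THRESHOLDS = {
    "Suturing":       (19, 24),
    "Needle-passing": (15, 20),
    "Knot-tying":     (15, 20),
}

def grs_to_label(task, score):
    """Map a GRS score (6-30) to a skill label for a given task."""
    low, high = GRS_THRESHOLDS[task]
    if score < low:
        return "Novice"
    elif score < high:
        return "Intermediate"
    return "Expert"

print(grs_to_label("Suturing", 26))  # -> "Expert"
\end{verbatim}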
\begin{table*}[tb]
\begin{centering}
\renewcommand\arraystretch{1.6}
\renewcommand\tabcolsep{11pt}
\footnotesize
\caption{{\bf Variables of sensory signals from end-effectors of \textit{da Vinci} robot.} These variables are captured as multivariate time series data in each surgical operation trial.}
\label{tab: JIGSAW}
\centering
\begin{tabular}{lcccc}
\toprule[\heavyrulewidth]
\multicolumn{2}{c}{ \bf{End-effector Category}} & \multicolumn{1}{c}{ \bf{Description} } & \bf{Variables} & \bf{Channels} \\
\midrule[\heavyrulewidth]
\multirow{2}{*}{\parbox{4cm}{\bf{Master Tool \\ Manipulator (MTM)}}} & \textit{MTM1} & \multirow{2}{*}{\parbox{5cm}{Positions (3), rotation matrix (9), velocities (6) of tool tip, gripper angular velocity (1)}} & \multirow{2}{*}{\parbox{2.5cm}{$x$, $y$, $z$, $R \in {\rm I\!R}^{3\times 3}$, \\ $v_x$, $v_y$, $v_z$, $\omega_x'$, $\omega_y'$, $\omega_z'$, $\alpha$}} & \multirow{2}{*}{$19\times2$}\\
& \textit{MTM2} & & & \\
\midrule[\heavyrulewidth]
\multirow{2}{*}{\parbox{4cm}{\bf{Patient-side\\Manipulator (PSM)}}} & \textit{PSM1} & \multirow{2}{*}{\parbox{5cm}{Positions (3), rotation matrix (9), velocities (6) of tool tip, gripper angular velocity (1)}} & \multirow{2}{*}{\parbox{2.5cm}{$x$, $y$, $z$, $R \in {\rm I\!R}^{3\times 3}$, \\ $v_x$, $v_y$, $v_z$, $\omega_x'$, $\omega_y'$, $\omega_z'$, $\alpha$}} & \multirow{2}{*}{$19\times2$} \\
& \textit{PSM2} & & & \\
\bottomrule[\heavyrulewidth]
\end{tabular}
\end{centering}
\end{table*}
\begin{figure*}[tb]
\centering
\includegraphics [width=0.85\linewidth,clip,trim=0pt 0pt 0pt 0pt]{fig3}
\caption{{\bf Snapshots of operation tasks during robot-assisted minimally invasive surgical training}. The operations are implemented using the \textit{da Vinci} robot and are reported in JIGSAWS~\cite{gao2014JIGSAW}: (A) \textit{Suturing}, (B) \textit{Needle-passing}, (C) \textit{Knot-tying}.}
\label{fig: fig3}
\vspace{-0.3cm}
\end{figure*}
\subsection{Data Preparation \& Inputs}
\paragraph{$Z$-normalization} Due to differences in the scaling ranges and offset effects of the sensory data, the data fed into the neural network are first normalized with a $z$-normalization process. Each channel of raw data, $x$, is normalized individually as $z = \frac{x- \mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and standard deviation of the vector $x$. This normalization can be performed online on each batch of sensory data fed to the network.
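A minimal NumPy sketch of this per-channel normalization follows; the small constant guarding against zero-variance channels is an implementation detail added here.
\begin{verbatim}
import numpy as np

def z_normalize(x, eps=1e-8):
    """Per-channel z-normalization of a (time, channels) array."""
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    return (x - mu) / (sigma + eps)
\end{verbatim}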
\paragraph{Data Augmentation} One challenge in developing a robust skill model with our approach is the lack of large-scale data samples in JIGSAWS, where the number of labeled samples is only 40 in total (8 subjects with 5 trial repetitions) for each surgical task. Generally, deep learning can suffer from overfitting if the size of the available dataset is limited~\cite{2015natureDL}. To overcome this problem, data augmentation is introduced to prevent overfitting and improve the generalization of the deep learning model.
This has been seen so far mostly in image recognition, where methods such as scaling, cropping, and rotating are used~\cite{krizhevsky2012imagenet,he2016deep}. Inspired by the computer vision community, similar augmentation techniques have been applied to time series data to enlarge small datasets and increase decoding accuracy \cite{cui2016MCNN,le2016dataaug,um2017data}.
In this study, to support the network in learning, we adapted the augmentation strategy and introduced a two-step augmentation process before inputting data into our network.
First, following $z$-normalization, we treated the surgical motion data from the master (MTMs) and patient-side manipulators (PSMs) as two distinct sample instances, while the class labels for each trial were preserved. This procedure is also appropriate in cases where the MTMs and PSMs are not directly correlated (e.g., due to position scaling or other differences in robot control).
Then, we carried out label-preserving cropping with a sliding window, in which motion sub-sequences were extracted by sliding a fixed-size window through each trial. The annotation of each window is identical to the class label of the original trial from which the sub-sequence was extracted. One advantage of this approach is that it yields larger-scale sets for robust training and testing of the network. This technique also allows us to format the time series as equal-length inputs, regardless of the varied lengths of the original recordings.
The pseudo-code of the sliding-window cropping algorithm is shown in Algorithm~\ref{alg: cropwindow}, where $X$ is the input motion data, $s$ is the set of output crops (i.e., sub-sequences), $W$ is the sliding-window width, and $L$ is the step size. After trial-and-error experimentation, we chose a window width $W=60$ and a step size $L=30$ in this work.
Overall, applying the aforementioned data augmentation to the original dataset yielded 6290, 6780, and 3542 crops for \textit{Suturing}, \textit{Needle-passing}, and \textit{Knot-tying}, respectively, each of which serves as a new data sample for the network. The numbers of crops differ across tasks because the original recording lengths vary across trials in JIGSAWS.
\algnewcommand\algorithmicinput{\textbf{INPUT:}}
\algnewcommand\INPUT{\item[\algorithmicinput]}
\algnewcommand\algorithmicoutput{\textbf{OUTPUT:}}
\algnewcommand\OUTPUT{\item[\algorithmicoutput]}
\makeatletter
\newcommand{\let\@latex@error\@gobble}{\let\@latex@error\@gobble}
\makeatother
\begin{figure}[tb]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{Sliding-window Cropping Algorithm}
\label{alg: cropwindow}
\begin{algorithmic}[1]
\INPUT raw time series $X$, \textit{stepSize} $L$, \textit{windowWidth} $W$
\OUTPUT sub-sequences $s =$ SlidingWindow$(X, L, W)$
\State \textbf{initialization} $m:= 0$, $n:=0$
\State $s: = empty$
\While {$m+W \leq \textit{length}(X)$}
\State $s[n]:= X[m:(m+W-1)]$
\State $m:= m + L, n:=n+1$
\EndWhile
\State \Return sub-sequences $s$
\end{algorithmic}
\end{algorithm}
\vspace{-0.8cm}
\end{figure}
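For completeness, an equivalent NumPy implementation of Algorithm~\ref{alg: cropwindow} is sketched below.
\begin{verbatim}
import numpy as np

def sliding_window(X, L, W):
    """Crop a (time, channels) array X into windows of width W, step L."""
    crops = [X[m:m + W] for m in range(0, len(X) - W + 1, L)]
    return np.stack(crops)  # shape: (num_crops, W, channels)

X = np.random.randn(300, 38)           # a mock trial recording
crops = sliding_window(X, L=30, W=60)  # -> shape (9, 60, 38)
\end{verbatim}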
\subsection{Training \& Testing}
To validate the model classification, we adopt two individual validation schemes in this work: \textit{Leave-one-supertrial-out (LOSO)} and \textit{Hold-out}. The objective of the comparison is to search for the best validation strategy suitable for system development in the case of deep learning. Based on each cross-validation setting, we train and test a surgical skill model for each surgical task, \textit{Suturing} (SU), \textit{Knot-tying} (KT), and \textit{Needle-passing} (NP).
\paragraph{Leave-one-supertrial-out (LOSO) cross-validation:} This technique repeatedly leaves out a single subset for testing across multiple partitions. Specifically, a supertrial, $i$, defined as the subset of examples combining the $i$-th trials from all subjects for a given surgical task~\cite{gao2014JIGSAW}, is left out for testing, while the union of the remaining examples is used for training. This process is repeated over five folds, leaving out each of the five supertrials in turn. The average of the performance measures (see Section 3.4 for definitions) over the five test sets is reported, giving an aggregated classification result. As a widely used validation strategy, \textit{LOSO} cross-validation is valuable for evaluating the robustness of a skill assessment method.
\paragraph{Hold-out:} Unlike the \textit{LOSO} cross-validation, the \textit{Hold-out} strategy conducts a single train/test split, as is commonly adopted for deep learning models when large datasets are available. In this work, a single subset consisting of one of the five trials from each surgeon, for a given surgical task, is held out throughout the training and used for testing. To reduce bias and avoid potential overfitting, we randomly select the held-out trial from the five repetitions of each subject.
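As an illustration, supertrial folds for the \textit{LOSO} scheme can be constructed as sketched below, assuming each sample is annotated with the trial index (1--5) of the recording from which it was cropped.
\begin{verbatim}
import numpy as np

def loso_folds(trial_ids, n_trials=5):
    """Yield (train_idx, test_idx) pairs, one per supertrial."""
    trial_ids = np.asarray(trial_ids)
    for t in range(1, n_trials + 1):
        test = np.where(trial_ids == t)[0]
        train = np.where(trial_ids != t)[0]
        yield train, test
\end{verbatim}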
\subsection{Modeling Performance Measures}
To compare the model performance, classifications are evaluated using four common metrics (Eq.~\ref{eq1})~\cite{sammut2011encyclopedia,kumar2012assessing,ahmidi2017dataset}: the average \textit{accuracy} -- ratio between the sum of correct predictions and the total number of predictions; \textit{precision} -- ratio of correct positive predictions ($T_p$) and the total positive results predicted by the classifier $(T_p + F_p)$;
\textit{recall} -- ratio of positive predictions ($T_p$) and the total positive results in the ground-truth $(T_p + F_n)$; and \textit{f1-score} -- a weighted harmonic average between \textit{precision} and \textit{recall}.
\begin{equation}
\label{eq1}
\begin{aligned}
& \textit{precision} = \frac{T_p}{T_p + F_p} \\[6pt]
& \textit{recall} = \frac{T_p}{T_p + F_n} \\[6pt]
& \textit{f1-score} = \frac{2\,(\textit{recall} \cdot \textit{precision})}{\textit{recall} + \textit{precision}} \\[6pt]
\end{aligned}
\end{equation}
where $T_p$ and $F_p$ are the numbers of true positives and false positives, $T_n$ and $F_n$ are the numbers of true negatives and false negatives, for a specific class.
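For reference, the per-class metrics of Eq.~\ref{eq1} can be computed directly from the label vectors; a short sketch (our own code, not tied to any particular library) is:
\begin{verbatim}
import numpy as np

def per_class_metrics(y_true, y_pred, label):
    """Precision, recall and f1-score of one class, following Eq. (1)."""
    tp = np.sum((y_pred == label) & (y_true == label))
    fp = np.sum((y_pred == label) & (y_true != label))
    fn = np.sum((y_pred != label) & (y_true == label))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)
\end{verbatim}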
In order to assess the computing effort involved in model classification, we measure the running time of skill models to classify all samples in the entire testing set. In the \textit{LOSO} scheme, the running time is measured as the average value from the five-fold cross-validation.
\begin{figure*}[tb]
\centering
\includegraphics [width=1\linewidth,clip,trim=0pt 5pt 5pt 10pt]{fig4}
\caption{ {\bf Confusion matrices of classification results in three surgical training tasks. (A) self-proclaimed skill classification, (B) GRS-based skill classification.} Element value $(i,j)$ and color represent the probability of predicted skill label $j$, given the ground-truth skill label $i$ (self-proclaimed in (A), GRS-based in (B)), where $i$ and $j$ $ \in \{1 :``Novice", 2:``Intermediate", 3 :``Expert" \}$. The diagonal corresponds to correct predictions.}
\label{fig: fig4}
\end{figure*}
\section{RESULTS}
\label{sec: results}
We evaluate the proposed deep learning approach for self-proclaimed skill classification and GRS-based skill classification using the JIGSAWS dataset.
The confusion matrices of classification results are obtained from the testing set under the \textit{LOSO} scheme.
We compare our results with the state-of-the-art classification approaches in Table~\ref{tab: review_perform}.
It is important to mention that in order to obtain a valid benchmarking analysis, the classifiers investigated in this study are selected among the skill assessment using JIGSAWS motion data and evaluated based on the same \textit{LOSO} validation.
Fig.~\ref{fig: fig4} (A) shows the results of three-class self-proclaimed skill classification. The proposed deep learning skill model achieved high-accuracy prediction performance. Specifically, our model obtained accuracies of 93.4\%, 89.8\%, and 84.9\% in tasks of \textit{Suturing}, \textit{Needle-passing} and \textit{Knot-tying}, respectively, using a window crop with 2-second duration containing 60 time steps ($W=60$).
In contrast to our per-window assessment, the highest accuracies reported in the literature range from 99.9\% to 100\%, obtained by a descriptive model using entropy features computed over the entire observation of a full operation trial.
For the GRS-based skill classification, as shown in Fig.~\ref{fig: fig4} (B), the proposed approach can achieve higher accuracy than others (92.5\%, 95.4\%, and 91.3\% in \textit{Suturing}, \textit{Needle-passing} and \textit{Knot-tying}). Specifically, the deep learning model outperformed $k$-nearest neighbors (k-NN), logistic regression (LR), and support vector machine (SVM), with the accuracy improvements ranging from $2.89\%$ to $22.68\%$ in \textit{Suturing}, and $10.94\%$ to $21.09\%$ in \textit{Knot-tying}.
To study the capability of our proposed approach for online skill decoding, we further evaluate the performance of proposed approach using the input sequences with varying lengths. We repeated our experiment for the self-proclaimed skill classification with different sizes of sliding window: $W1=30$, $W2=60$ and $W3=90$.
Modeling performance of window sizes together with the average running time taken for self-proclaimed skill classification is reported in Table~\ref{tab: modelresults}.
The results show that our deep learning skill model can offer advantages over traditional approaches, providing highly time-efficient skill classification on a per-window basis, without requiring the full observation of surgical motion for each trial (per-trial basis).
Also, a higher average accuracy can be found with an increase of sliding window size. Specifically, the 3-second sliding window containing 90 time steps ($W3=90$) can obtain better results compared to 2-second window ($W2=60$), with average accuracy improvements of $0.75\%$ in \textit{Suturing}, $0.56\%$ in \textit{Needle-passing} and $2.38\%$ in \textit{Knot-tying}, respectively.
Furthermore, in order to characterize the roles of two validation schemes, we repeat the above modeling process using \textit{Hold-out} strategy. Table~\ref{tab: modelresults} shows the comparison of self-proclaimed skill classification under \textit{LOSO} cross-validation and \textit{Hold-out} schemes.
\begin{table*}[tb]
\centering
\caption{ {\bf Comparison of existing algorithms employed for skill assessment using motion data from JIGSAWS dataset.} We benchmark the results in terms of accuracy based on \textit{LOSO} cross-validation. Models conducting classification on the trial level are categorized as \textit{per-trial basis}.
}
\label{tab: review_perform}
\renewcommand\arraystretch{1.35}
\renewcommand\tabcolsep{11.5pt}
\begin{tabular}{lccccccc}
\toprule[\heavyrulewidth]
\multirow{2}{*}{\textbf{Author}}& \multirow{2}{*}{\centering \textbf{Algorithm}} & \multirow{2}{*}{ \parbox{1.5cm}{\center \textbf{Labeling\\Approach}}} & \multirow{2}{*}{\parbox{1.5cm}{\centering \textbf{Metric\\ Extraction}}} & \multicolumn{3}{c}{\textbf{Accuracy}} & \multicolumn{1}{c}{\multirow{2}{*}{\bf{Characteristics}} }\\ \cmidrule(lr){5-7}
& & & & \textbf{SU} & \textbf{NP} & \textbf{KT} & \\
\midrule[\heavyrulewidth]
\multirow{2}{*}{\parbox{1.2cm}{Lingling\\\textit{2012}~\cite{tao2012sparse}}} & \multirow{2}{*}{S-HMM} & \multirow{2}{*}{\textit{Self-proclaim} } & \multirow{2}{*}{ \parbox{1.5cm}{\centering{gesture\\segments}}} & \multirow{2}{*}{97.4} & \multirow{2}{*}{96.2} & \multirow{2}{*}{94.4} & \multirow{2}{*}{ \parbox{3cm}{\textbullet\ generative modeling \\ \textbullet\ segment-based \\ \textbullet\ per-trial basis}}\\
& & & & &\\
\midrule[\heavyrulewidth]
\multirow{2}{*}{\parbox{1.2cm}{Forestier\\\textit{2017}~\cite{forestier2017jigsaw}}} & \multirow{2}{*}{VSM} & \multirow{2}{*}{\textit{Self-proclaim}} & \multirow{2}{*}{ \parbox{1.5cm}{\centering {bag of words\\features}}} & \multirow{2}{*}{89.7} & \multirow{2}{*}{96.3} & \multirow{2}{*}{61.1} & \multirow{2}{*}{ \parbox{3cm}{\textbullet\ descriptive modeling \\ \textbullet\ feature-based \\ \textbullet\ per-trial basis}}\\
& & & & &\\
\midrule[\heavyrulewidth]
\multirow{2}{*}{\parbox{1.2cm}{Zia \textit{2018}\\~\cite{zia2018skill}}} & \multirow{2}{*}{NN} & \multirow{2}{*}{\textit{Self-proclaim}} & \multirow{2}{*}{ \parbox{1.5cm}{\centering{entropy\\features}}} & \multirow{2}{*}{100} & \multirow{2}{*}{99.9} & \multirow{2}{*}{100} & \multirow{2}{*}{ \parbox{3cm}{\textbullet\ descriptive modeling \\\textbullet\ feature-based \\ \textbullet\ per-trial basis}}\\
& & & & &\\
\midrule[\heavyrulewidth]
\multirow{3}{*}{\parbox{1.2cm}{Fard \textit{2017}\\~\cite{fard2018automated}}} & $k$-NN & \multirow{3}{*}{\textit{GRS-based} } & \multirow{3}{*}{ \parbox{1.5cm}{\centering{movement\\features}}} & 89.7 & \textit{N/A} & 82.1 & \multirow{3}{*}{ \parbox{3cm}{ \textbullet\ descriptive modeling \\\textbullet\ feature-based\\ \textbullet\ two-class skill only \\ \textbullet\ per-trial basis}}\\
& LR & & & 89.9 & \textit{N/A} & 82.3 & \\
& SVM & & & 75.4 & \textit{N/A} & 75.4 & \\
\midrule[\heavyrulewidth]
\multirow{3}{*}{\parbox{1.2cm}{\textbf{Current study}}} & \multirow{3}{*}{ \centering CNN} & \multirow{1}{*}{\textit{Self-proclaim} } & \multirow{3}{*}{ \parbox{1.5cm}{\centering \textit{N/A}}} & \multirow{1}{*}{ 93.4 } & \multirow{1}{*}{ 89.8} & \multirow{1}{*}{ 84.9 } & \multirow{1}{*}{ \parbox{3cm}{\textbullet\ deep learning modeling \\ \textbullet\ no manual feature \\\textbullet\ per-window basis \\ \textbullet\ online analysis}} \\
& & \multirow{2}{*}{\textit{GRS-based}} & & \multirow{2}{*}{ 92.5 } & \multirow{2}{*}{ 95.4} & \multirow{2}{*}{ 91.3 } & \\
& & & & &\\
\midrule[\heavyrulewidth]
\end{tabular}
\end{table*}
\begin{table*}[tb]
\centering
\caption{ {\bf Summary table showing self-proclaimed skill classification performance based on different validation schemes and sliding windows.} Window size is set as $W1=30$, $W2=60$ and $W3=90$. Running time quantifies the computing effort involved in classification. Bold numbers denote best results regarding f1-score, accuracy, and running time.
}
\label{tab: modelresults}
\renewcommand\arraystretch{0.86}
\renewcommand\tabcolsep{12pt}
\begin{tabular}{llccccccc}
\toprule[\heavyrulewidth]
\multirow{2}{*}{\textbf{Task}} & \multirow{2}{*}{\parbox{1.0cm}{ \textbf{Validation Scheme}}} & \multirow{2}{*}{\parbox{0.8cm}{ \textbf{Window Size}}} & \multicolumn{3}{c}{\textbf{F1-score}} & \multirow{2}{*}{\textbf{Accuracy}} & \multirow{2}{*}{\parbox{1.5cm}{\centering \textbf{Running Time ($ms$)}}} \\ \cmidrule(lr){4-6}
& & & \textbf{Novice} & \textbf{Interm.} & \textbf{Expert} & & \\
\midrule[\heavyrulewidth]
\multirow{6}{*}{\parbox{1cm}{\textbf{Suturing}}} & \multirow{3}{*}{LOSO} & $W1$ & 0.94 & 0.83 & 0.95 & 0.930 & \textbf{146.45}\\
& & $W2$ & 0.94 & 0.83 & 0.97 & 0.934 & 185.40\\
& & $W3$ & \textbf{0.95} & \textbf{0.86} & \textbf{0.96} & \textbf{0.941} & 247.01\\ \cmidrule(lr){2-8}
& \multirow{3}{*}{Hold-out} & $W1$ & 0.98 & 0.92 & 0.94 & 0.961 & \textbf{98.10}\\
& & $W2$ & \textbf{0.99} & 0.94 & 0.96 & 0.972 & 146.40\\
& & $W3$ & \textbf{0.99} & \textbf{0.98} & \textbf{0.97} & \textbf{0.983} & 194.79\\
\midrule
\multirow{6}{*}{\parbox{1cm}{\textbf{Needle-\\passing}}} & \multirow{3}{*}{LOSO} & $W1$ & 0.95 & 0.73 & 0.88 & 0.889 & \textbf{153.36}\\
& & $W2$ & 0.95 & 0.75 & \textbf{0.90} & 0.898 & 194.98\\
& & $W3$ & \textbf{0.96} & \textbf{0.76} & 0.89 & \textbf{0.903} & 248.03\\ \cmidrule(lr){2-8}
& \multirow{3}{*}{Hold-out} & $W1$ & 0.97 & 0.80 & 0.91 & 0.919 & \textbf{113.49}\\
& & $W2$ & \textbf{0.98} & 0.81 & 0.91 & 0.925 & 169.72\\
& & $W3$ & \textbf{0.98} & \textbf{0.86} & \textbf{0.94} & \textbf{0.945} & 207.12 \\
\midrule
\multirow{6}{*}{\parbox{1cm}{\textbf{Knot-tying}}} & \multirow{3}{*}{LOSO} & $W1$ & 0.90 & 0.57 & 0.90 & 0.847 & \textbf{101.83}\\
& & $W2$ & 0.90 & 0.62 & \textbf{0.92} & 0.849 & 138.25\\
& & $W3$ & \textbf{0.92} & \textbf{0.64} & 0.91 & \textbf{0.868} & 147.38 \\ \cmidrule(lr){2-8}
& \multirow{3}{*}{Hold-out} & $W1$ & 0.87 & 0.42 & 0.91 & 0.803 & \textbf{74.5}\\
& & $W2$ & \textbf{0.88} & \textbf{0.48} & \textbf{0.92} & \textbf{0.817} & 113.55 \\
& & $W3$ & \textbf{0.88} & 0.47 & 0.91 & 0.816 & 139.39\\
\bottomrule[\heavyrulewidth]
\end{tabular}
\end{table*}
\section{DISCUSSION}
\label{sec: discussion}
Recent trends in robot-assisted surgery have promoted a great need for proficient approaches for objective skill assessment~\cite{vedula2017objective}. Although several analytical techniques have been developed, efficiently measuring surgical skills from complex surgical data still remains an open problem.
In this paper, our primary goal is to introduce and evaluate the applicability of a novel deep learning approach towards online surgical skill assessment. Compared to conventional approaches, our proposed deep learning model reduced dependency on the complex manual feature design or carefully-tuned gesture segmentation. Overall, deep learning skill models, with appropriate design choices, yielded competitive performance in both accuracy and time efficiency.
\subsection{Validity of our deep learning model for objective skill assessment}
For results shown in Fig.~\ref{fig: fig4} (A) and (B), we note that both \textit{Suturing} and \textit{Needle-passing} are associated with better results than \textit{Knot-tying} in both self-proclaimed skill classification and GRS-based skill classification, indicating that \textit{Knot-tying} is a more difficult task for assessment. For self-proclaimed skill classification, the majority of misclassification errors occurred during the \textit{Knot-tying} task, where self-proclaimed \textit{Intermediate} trials were misclassified as \textit{Novice}. As shown in Fig.~\ref{fig: fig4}(A), the confusion for \textit{Intermediate} is pronounced, with a probability of $0.34$ of being misclassified as \textit{Novice}.
This could be attributed to the fact that the self-proclaimed skill labels, which are based on hours spent in robot operations, may not accurately reflect the ground-truth knowledge of expertise.
Evidently, the classification using GRS-based skill labels generally performs better than that using self-proclaimed skill labels. Our results indicate that skill labels that more accurately reflect true surgeon expertise might help to further improve the overall accuracy of skill assessment.
As shown in Table~\ref{tab: review_perform}, high classification accuracy can be achieved by a few existing methods using generative modeling and descriptive modeling. Specifically, a generative model, sparse HMM (S-HMM), is able to give high predictive accuracy ranging from 94.4\% to 97.4\%. This result might benefit from a precise description of motion structures and pre-defined gestures in each task. However, such an approach requires prerequisite segmentation of motion sequences, as well as different complex class-specific models for each skill level~\cite{tao2012sparse}.
Second, descriptive models can sometimes provide highly accurate results, for instance via novel entropy features. However, the deficiency is that significant domain-specific knowledge and development is required to manually define the most informative features, a choice that directly affects the final assessment accuracy. This deficiency could also explain the large variance in accuracy across other studies (61.1\%-100\%), which are sensitive to the choice of predefined features, as shown in Table~\ref{tab: review_perform}.
Our analysis also focuses on the optimal sliding windows needed to render an efficient assessment. The duration of time steps in each window should roughly correspond to the minimum time required to decode skills from input signals. Usually, technical skill is assessed at the trial level; however, a quicker and more efficient acquisition may enable immediate feedback to the trainee, possibly improving learning outcomes.
Overall, our findings suggest that the per-window-based classification in this work is well-applicable for online settings.
A smaller window size allows for a faster running speed and less delay due to the lighter computing expense. In contrast, a larger window size implies an increase in delay due to larger network complexity and higher computing effort involved in decoding. Specifically, as shown in Table~\ref{tab: modelresults}, within the \textit{LOSO} validation scheme, the network can classify the entire testing dataset within 133.88 $ms$ for $W1$ and 172.87 $ms$ running time for $W2$, while it required 214.14 $ms$ running time for $W3$ to classify the samples.
However, it is important to mention that a higher accuracy can be achieved with increasing window size. In particular, there seem to be more gains in the \textit{Knot-tying} analysis, where the highest accuracy improvement of $2.24\%$ was obtained from $W2$ to $W3$. This result might be due to the fact that more information about motion dynamics is contained in larger crops, thus allowing for improved decoding accuracy.
We suggest that this trade-off between decoding accuracy and time efficiency could be a factor of interest in online settings of skill assessment.
\subsection{Comparison of Validation Schemes}
We investigated the validity of two different validation schemes for skill modeling. The differences between the two are non-trivial in the context of deep learning development.
Noticeably, \textit{LOSO} cross-validation gives a reliable estimate of system performance. However, the \textit{Hold-out} scheme, which uses a random subset of surgical trials as a hold-out, demonstrates relatively larger variances among results. This can be explained by the differences among the randomly selected examples in the \textit{Hold-out} validation.
Nevertheless, the \textit{Hold-out} shows consistency with the results in \textit{LOSO} scheme across different tasks and window sizes, as shown in Table~\ref{tab: modelresults}.
It is important to note that given a large dataset, the \textit{LOSO} cross-validation might be less efficient for model assessment. In this scenario, the computing load of \textit{LOSO} modeling increases substantially, which may not be suitable for complex deep architectures. However, the \textit{Hold-out} scheme only needs to run once and is less computationally expensive in modeling.
\subsection{Limitations}
Despite the progress in the present work, there still exist some limitations of deep learning models for proficient online skill assessment.
First, as confirmed by our results, the classification accuracy of supervised deep learning relies heavily on the labeled samples. The primary concern in this study lies with the JIGSAWS dataset and the lack of strict ground-truth labels of skill levels. It is important to mention that there is a lack of consensus in the ground-truth annotation of surgical skills. In the GRS-based labeling, skill labels were annotated based on a predefined cut-off threshold of GRS scores; however, no commonly accepted cutoff exists.
For future work, a refined labeling approach with stronger ground-truth knowledge of surgeon expertise may further improve the overall skill assessment~\cite{sun2017revisiting,dockter2017minimally}.
Second, we will search for a detailed optimization of our deep architecture, parameter settings and augmentation strategies to better handle motion time-series data and improve the online performance further.
In addition, the interpretability of automatically learned representations is currently limited due to the black-box nature of deep learning models. It would be interesting to investigate a visualization of deep hierarchical representations to understand hidden skill patterns, so as to better justify the decision taken by a deep learning classifier.
\section{CONCLUSION}
\label{sec:conclusion}
The primary contributions of this study are: (1) a novel data-driven deep architecture for an active classification of surgical skill via end-to-end learning, (2) an insight into accuracy and time efficiency improvements for online skill assessment, and (3) application of data augmentation and exploration of validation schemes feasible for deep skill modeling.
Taking advantage of recent technical advances, our approach has several desirable properties and is extremely valuable for online skill assessment.
First, a key benefit is an end-to-end skill decoding, learning abstract representations of surgery motion data with automatic recognitions of skill levels. Without a priori dependency on engineered features or segmentation, the proposed model achieved comparable results to previously reported methods.
It yielded highly competitive time efficiency given relatively small crops (1$-$3 second window with 30$-$90 time steps), which were computationally feasible for online assessment and immediate feedback in training.
Furthermore, we demonstrated that an improvement of modeling performance could be achieved by the optimization of design choices. An appropriate window size could provide better results in \textit{Knot-tying} with a 2.24\% accuracy increase. Also, the development of deep skill models might benefit from the \textit{Hold-out} strategy, which requires less computing effort than the \textit{LOSO} cross-validation, especially in the case where large datasets are involved.
Overall, the ability to automatically learn abstract representations from raw sensory data with high predictive accuracy and fast processing speed, makes our approach well-suited for online objective skill assessment.
The proposed deep model can be easily integrated into the pipeline of robot-assisted surgical systems and could allow for immediate feedback in personalized surgical training.
\section*{Acknowledgment}
This work is supported by National Science Foundation (NSF\#1464432).
\section*{Conflict of interest }
The authors, Ziheng Wang and Ann Majewicz Fey, declare that they have no conflict of interest.
\section*{Ethical approval}
For this type of study formal consent is not required.
\section*{Informed consent}
This article does not contain patient data.
\bibliographystyle{spbasic}
A chemical reactor is a system in which a set of chemical reactions and the
mixing of species take place. Chemical engineers are interested in finding the most cost-efficient reactor for a given chemical reaction. Fritz Horn in 1964 was the first to reduce this optimisation problem to the study of its
feasible set~\cite{Horn}. He called this feasible set the \textit{attainable region} of a system. By definition, this region is the set of all realisable states of the reaction network in question with a given starting point. Over the past half
century, the optimisation problem in the field of chemical reaction networks has
been principally developed by Martin Feinberg \cite{Fei, Fei02}, Roy
Jackson \cite{HJ}, David Glasser, and Diane Hildebrandt \cite{MGHGM}. More recently, the field of
chemical reaction networks has gathered a lot of attention in the mathematics
community \cite{Bor, CDSS, DJFN, JS}, and is a fast growing field.
\medskip
In this article, we formalise the definition of attainable regions and characterise them for some special systems using well known notions from algebraic geometry. To the best of the author's knowledge, this is the first rigorous mathematical treatment of attainable regions. We aim for our contribution to help towards a better understanding of the convex hulls of trajectories of dynamical systems.
\medskip
In the subsequent Section~2 we first set up the basic notation and define the attainable region for a general chemical reaction network. In Section~3 we then characterise the attainable region for linear systems. We show that for linear systems the convex hull of the trajectories are the attainable regions. In particular, we show that the feasible set of the reactor-optimisation problem for a class of linear systems can be expressed as the feasible set of a semidefinite program---using the language of algebraic geometry, the attainable region is a
{\em spectrahedral shadow}. We then move on to Section~4 where we discuss a number of computational experiments on weakly reversible systems with a single linkage class, that is, chemical reaction
networks given by a strongly connected digraph. These experiments enable us to formulate a new conjecture about attainable regions in the non-linear case.
The attainable region is a convex object, and to understand it we
would like to understand its faces.
In Section~5, we use one such approach to
study the faces of the convex hulls of trajectories of
weakly reversible systems whose convex bodies have dimension 3, 4 and~5. The article ends with a discussion of our results and an outlook into future applications of this new rigorous treatment of chemical reaction networks.
\section{Notation}
Throughout this article we follow the standard notation from chemistry and
denote \textit{chemical species} by $X_1, X_2, \dots, X_s$ for some $s\in \mathbb{N}$. Each of these
species has a concentration $x_1(t),
x_2(t), \dots, x_s(t) \in \mathbb{R}^{\geq 0}$, respectively, at any time $t$ for $t\in [0,\infty)$. A {\em chemical complex} is a linear combination with non-negative integer coefficients of chemical species.
As defined in \cite{CDSS}, a \textit{chemical reaction network} (CRN) is a directed graph $G$ with vertex set
$V=\{1,2,\dots, n \}$ and edge set $E \subseteq \{(i,j) \in V \times V \ : i\neq j
\}$. In such a graph the vertex
$i\in V$ represents a chemical complex, and the edges indicate that a reaction takes place from one
complex to the other. In addition, the edges are weighted by their reaction rates.
\begin{example}\label{eg} \rm
The following figure shows a network of chemical reactions.
\begin{center}
\schemestart
\subscheme{$X_2$ + 2$X_5$}
\arrow(be--ac){<=>[$\kappa_5$][$\kappa_6$]}[135]
\subscheme{$X_1$ + $X_3$}
\arrow(@be--d){<=>[$\kappa_3$][$\kappa_4$]}[45]
\subscheme{$X_4$}
\arrow(@ac--@d){<=>[$\kappa_1$][$\kappa_2$]}
\schemestop
\end{center}
\bigskip
Here, $X_i$ are the species for $i\in\{1,2,\dots, 5\}$. The chemical complexes are
\{$X_2+2X_5$\}, \{$X_1 +X_3$\}, and \{$X_4$\}. The labels
$\kappa_i$ for $i \in\{1,2,\dots, 6\}$ are the corresponding rates of reactions.
$\hfill \square$
\smallskip
\end{example}
Given such a reaction network, we are interested in the evolution of the
concentrations of the species over time, dictated by the mass-action kinetics.
For $s$ species and $n$ complexes in a network, let henceforth $Y=(y_{ij})$ denote the $n\times s$
matrix with the entry $y_{ij}$ being the coefficient of the $j$-th species in the $i$-th complex. We associate with the vertex $i$ of a CRN the monomial
$x^{y_i}=x_1^{y_{i1}}x_2^{y_{i2}}\cdots x_s^{y_{is}}$. This is a simple transformation of the linear combination defining complexes in the CRN which enables us to write
the dynamics for the mass-action kinetics as
\begin{equation}\label{eqn:ds}
\dot{x}=\frac{dx}{dt}=\Psi(x)\cdot A_{\kappa}\cdot Y
\end{equation}
where $\Psi(x)=\begin{bmatrix}x^{y_1}& x^{y_2}&\cdots &x^{y_n}
\end{bmatrix}$ and $A_{\kappa}
=(\kappa_{ij})$ is a matrix with $ij$-th
entry given by the rate of reaction from the $i$-th complex to
the $j$-th complex for $i\neq j$, and $\sum_j \kappa_{ij}=0$ for all $i$. This matrix is
the negative of the Laplacian of the weighted digraph $G$.
\begin{example_contd}[\ref{eg}]
In the network illustrated in \cref{eg}, the monomials corresponding to the complexes are $x_1x_3\text{, }x_4\text{ and }x_2x_5^2$ and hence,
\begin{gather*}
\Psi(x)=\begin{bmatrix}x_1x_3&x_4&x_2x_5^2\end{bmatrix}\text{, }
A_\kappa=\begin{bmatrix}
-\kappa_1-\kappa_5 & \kappa_1 & \kappa_5 \\
\kappa_2 & -\kappa_2-\kappa_4 & \kappa_4\\
\kappa_6 & \kappa_3 & -\kappa_6-\kappa_3
\end{bmatrix}\text{, }
Y=\begin{bmatrix}
1&0&1&0&0 \\
0&0&0&1&0\\
0&1&0&0&2
\end{bmatrix}.
\end{gather*}
Using the notation established, the dynamics of the above network is given by the system of ODEs below
\begin{equation}
\begin{split}
\dot{x_1}&=\frac{dx_1}{dt}=(-\kappa_1 - \kappa_5) x_1 x_3 + \kappa_2 x_4 + \kappa_6 x_2 x_5^2 \\
\dot{x_2}&=\frac{dx_2}{dt}=\kappa_5 x_1 x_3 + \kappa_4 x_4 + (-\kappa_3 - \kappa_6) x_2 x_5^2\\
\dot{x_3}&=\frac{dx_3}{dt}=(-\kappa_1 - \kappa_5) x_1 x_3 + \kappa_2 x_4 + \kappa_6 x_2 x_5^2 \\
\dot{x_4}&=\frac{dx_4}{dt}=\kappa_1 x_1 x_3 + (-\kappa_2 - \kappa_4) x_4 + \kappa_3 x_2 x_5^2 \\
\dot{x_5}&=\frac{dx_5}{dt}=2 (\kappa_5 x_1 x_3 + \kappa_4 x_4 + (-\kappa_3 - \kappa_6) x_2 x_5^2).
\end{split}
\end{equation}
\end{example_contd}
Let $y_j$ be the vector given by the $j$-th row of the matrix $Y$. Consider the linear subspace in $\mathbb{R}^s$ spanned by $y_j-y_i$ whenever $(i,j)\in E.$ This space is called the \emph{stoichiometry subspace} and we will henceforth denote it by $P.$
For a given dynamical system we always denote the initial value of the system at time $t=0$ by $x_0=x(0)\in \mathbb{R}^s_{> 0}$. The trajectory that starts at $x_0$ stays in the affine subspace $(x_0 + P)\cap \mathbb{R}^s_{\geq 0}$.
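As a small numerical check (a NumPy sketch with 0-based indices; the variable names are ours), the stoichiometry subspace of the network in \cref{eg} has dimension 2, so every trajectory is confined to a 2-dimensional affine slice of $\mathbb{R}^5$:
\begin{verbatim}
import numpy as np

# Rows of Y are the complexes of Example 1: X1+X3, X4, X2+2X5.
Y = np.array([[1, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 2]])
E = [(0, 1), (0, 2), (1, 2)]                  # reaction pairs
D = np.array([Y[j] - Y[i] for (i, j) in E])   # spans P
print(np.linalg.matrix_rank(D))               # -> 2 = dim(P)
\end{verbatim}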
We call a subset
$S\subset \mathbb{R}^s$
\textit{forward closed} if, whenever the initial condition of
the dynamical system satisfies $x_0\in S$, all future values are contained in $S$. In formulae, $x_0 \in S$ implies $x(t)\in S$ for all $t\geq 0$. In particular, the non-negative orthant of $\mathbb{R}^s$ is forward closed.
In this
work, we aim to characterise the set of all
species concentrations attainable from continuously running the reaction
according to the dynamics and from mixing the concentrations of the species at all
times. This approach to the reactor optimisation problem has been explored and discussed in \cite{MGHGM}. We approach this problem by building on a new mathematical definition of this attainable region, and we study these regions for various kinds of dynamical systems.
\begin{definition}\rm
The {\em attainable region}, $\mathcal{A}(x_0)$ is the smallest convex forward
closed
subset of $\mathbb{R}^s$ that contains the point $x_0$.
\end{definition}
By construction, the attainable
region is a convex subset of the closed non-negative
orthant of the concentration space
$\mathbb{R}^s$ of the chemical species. In the section that follows we first discuss the attainable regions of linear dynamical systems.
\section{Linear systems}
A dynamical system as in (\ref{eqn:ds}) is called \textit{linear} when $n=s$ and $Y$ is the identity
matrix. In this case each of the complexes is a different single-unit species.
\smallskip
\begin{example}\label{eglin} \rm
The following graph illustrates a linear system of three species.
\begin{center}
\schemestart
\subscheme{$X_3$}
\arrow(be--ac){<=>[$\kappa_{13}$][$\kappa_{31}$]}[135]
\subscheme{$X_1$ }
\arrow(@be--d){<=>[$\kappa_{32}$][$\kappa_{23}$]}[45]
\subscheme{$X_2$}
\arrow(@ac--@d){<=>[$\kappa_{12}$][$\kappa_{21}$]}
\schemestop
\end{center}
\bigskip
For the purpose of illustration, let now $\kappa_{12}=6,\kappa_{21}=1,\kappa_{32}=6,\kappa_{23}=1,
\kappa_{13}=3,\kappa_{31}=3$. From (\ref{eqn:ds}), we can express the
dynamics of this system as
$$
\begin{bmatrix}
\dot{x}_1 &
\dot{x}_2 &
\dot{x}_3
\end{bmatrix}=
\begin{bmatrix}
x_1&
x_2&
x_3
\end{bmatrix}\cdot \begin{bmatrix}
-9 & 6 & 3\\
1 &-2 & 1\\
3 & 6 & -9
\end{bmatrix}
$$
If $A_{\kappa}$ is diagonalisable, the solution to such a system is given by
\begin{equation}
\label{eq:linsol}
{x(t)}=\sum_{k=1}^n ({x}_0 \cdot {r}_k){l}_k \exp(\lambda_k t)
\end{equation}
where ${l}_k$ and ${r}_k$ are the left and right eigenvectors of $A_{\kappa}$
corresponding
to eigenvalues $\lambda_k$, respectively,
and
${x}_0$ is the initial vector; for details see page 11 of \cite{CK}.\\
This gives
$$
\begin{bmatrix}
x_1&
x_2&
x_3
\end{bmatrix} =
\begin{bmatrix}
\tfrac{9}{4}e^{-8t}-\tfrac{3}{2}e^{-12t}+\tfrac{5}{4}\\[2pt] -\tfrac{9}{2}e^{-8t}+\tfrac{15}{2}\\[2pt] \tfrac{9}{4}e^{-8t}+\tfrac{3}{2}e^{-12t}+\tfrac{5}{4}\\
\end{bmatrix}^\top$$
with ${x}_0=\begin{bmatrix} 2 & 3 & 5 \end{bmatrix}$ as the starting
vector.
For $t=0$, we see that $\begin{bmatrix}
x_1&
x_2&
x_3
\end{bmatrix} = \begin{bmatrix} 2 & 3 & 5 \end{bmatrix}$ and as $t\rightarrow \infty$, this system continuously travels to
the stable point $\begin{bmatrix} 5/4 & 15/2 & 5/4 \end{bmatrix}$. On implicitising the parametric equation in $t$, we obtain
\begin{equation}\label{eq:lintraj}
x_1+x_2+x_3-10 =0 \text{ and } 8x_2^3-99x_2^2+324x_2x_3+324x_3^2-270x_2-3240x_3+4725 = 0.
\end{equation}
These two equations fully describe the trajectory of the linear system from $x_0$ to the
stable point on the plane cut out by $x_1+x_2+x_3-10 =0.$ A similar observation can be made for the system with a different starting point. We will prove later
that the convex hull of this curve is the attainable region and this region can also be expressed as a so-called
spectrahedral shadow.
$\hfill \square$
\end{example}
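The closed form (\ref{eq:linsol}) is easy to check numerically. The following NumPy sketch (our own verification code for \cref{eglin}) evaluates $x(t)=x_0\exp(tA_{\kappa})$ and its eigen-expansion:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[-9., 6., 3.], [1., -2., 1.], [3., 6., -9.]])
x0 = np.array([2., 3., 5.])

def x(t):                       # row-vector dynamics: x(t) = x0 exp(tA)
    return x0 @ expm(t * A)

print(x(0.0))                   # -> [2. 3. 5.]
print(x(50.0))                  # -> approx. [1.25 7.5 1.25]

lam, R = np.linalg.eig(A)       # columns of R: right eigenvectors
L = np.linalg.inv(R)            # rows of L: matching left eigenvectors
t = 0.7
expansion = sum((x0 @ R[:, k]) * np.exp(lam[k] * t) * L[k, :]
                for k in range(3))
print(np.allclose(expansion, x(t)))   # -> True
\end{verbatim}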
\smallskip
We henceforth denote by $C$ the solution of a dynamical system. This is the trajectory of the dynamics. In \cref{eglin} the trajectory is given by \cref{eq:lintraj} restricted from $x_0$ to the stable point.
The convex hull, $S=\conv (C)$, of $C$ is the smallest convex set in the concentration
space $\mathbb{R}^s$ containing the solution $C$. In the lemma below we can now show that for
linear chemical reactions, the convex hull of the solution of the dynamics is
forward closed. In words, for linear systems every point on any trajectory that starts at some point in the convex hull $S$ and follows the dynamics of the system is contained in this convex hull.
\begin{lemma}\label{fc}
The convex hull of the trajectory of a linear dynamical system is forward closed.
\end{lemma}
\begin{proof}
Any point $c$ in the convex hull, $\conv (C)=S \subset \mathbb{R}^s$, of the trajectory $C$ can, by Carath\'eodory's theorem, be
expressed as $c=\sum_i\mu_i c_i$, where $c_i$ are points on the trajectory, $
\mu_i\geq 0$, and $\sum_{i=1}^{s+1}\mu_i =1$ for
$i \in \{1,2,\dots, s+1\}.$
First let us consider the case where the Laplacian is diagonalisable as in \cref{eglin}. With $c$ as starting point, the new trajectory, as in (\ref{eq:linsol}), is given by
\begin{equation}
\label{eq:ls}
{x(t)}=\sum_k \left(( \sum_i\mu_i c_i)\cdot {r}_k \right){l}_k
\exp(\lambda_k t)= \sum_i \mu_i \left(\sum_k ( c_i\cdot {r}_k ){l}_k
\exp(\lambda_k t)\right)
\end{equation}
which is a convex combination of points on the trajectories started at the $c_i$. Since each $c_i$ lies on $C$, these trajectories are time-shifted tails of $C$ itself, so $x(t)\in S$ for all $t\geq 0$. Thus, $S$ is forward closed.
For the dynamical system $\dot{x}=x\cdot A_{\kappa}$ where $A_{\kappa}$ is not diagonalisable, we perform a coordinate change by a matrix $U$ such that $UA_{\kappa}U^{-1}$ is in Jordan canonical form; see Section 1.3 of \cite{CK}.
It is enough to consider a single Jordan block $J$. The solution is then given by $x(t)=x_0\, U^{-1}\exp(tJ)\, U$. Proceeding as above with $c=\sum_i\mu_i c_i$ as the starting point, where $c_i=x_0\, U^{-1}\exp(t_iJ)\, U$ are points on the trajectory, we obtain
\begin{equation}
\begin{split}
x(t)&=\Big(\sum_i \mu_{i}c_i\Big) \, U^{-1}\exp(tJ)\, U\\
&=\Big(\sum_i \mu_{i}\,x_0\, U^{-1}\exp(t_iJ)\, U\Big)\, U^{-1}\exp(tJ)\, U\\
&=\sum_i \mu_{i}\,x_0\, U^{-1}\exp((t_i+t)J)\, U,
\end{split}
\end{equation}
again a convex combination of points on the trajectory. This gives us that the convex hull of the trajectory of a linear dynamical system is forward closed.
\end{proof}
For the linear system with $x_0$ as the initial point, by \cref{fc} the attainable region $\mathcal{A}(x_0)$ is the convex hull of the trajectory.
Next, we give a condition on the Laplacian of a linear reaction network for which the convex hull of the trajectory is a semi-algebraic set. A semi-algebraic set in $\mathbb{R}^s$ is the solution set of finitely many polynomial inequalities as:
$\mathcal{S}=\{x\in \mathbb{R}^s \mid f_1(x)\geq 0, \ldots,f_n(x)\geq 0 \}$ where $f_i\in \mathbb{R}[x_1,\ldots, x_s]$ for all $i\in \{1,\ldots, n\}$. These sets are very well understood objects in algebraic geometry and can sometimes be represented as a spectrahedral shadow \cite{Sch1}.
A \textit{spectrahedral shadow} is a convex set
$S\subset \mathbb{R}^m$ that can be expressed by a linear matrix inequality:
$$
S=\{(x_1,x_2,\ldots,x_m)\in \mathbb{R}^m| \text{ }\exists \text{ } (y_1,y_2,\ldots,y_p)\in \mathbb{R}^p :A_0+\sum_ix_iA_i+
\sum_jy_jB_j \succcurlyeq 0\}
$$
where $A_0$, $A_i$ and $B_j$ are real symmetric $n\times n$ matrices for $i\in\{1,2,\ldots, m\}$, and $j\in\{1,2,\ldots, p \}$. We use the symbol $A\succcurlyeq 0$ to denote that the matrix $A$ is positive
semidefinite. This is equivalent to $A$ having non-negative eigenvalues. In order to prove \cref{ss} we need the following useful fact on these semi-algebraic sets.
\begin{remark}\label{rem} \rm
Let $\phi:\mathbb{R}^m\rightarrow\mathbb{R}^n$ be an affine-linear map and $S\subset \mathbb{R}^m$ be
a spectrahedral shadow. The linear image $\phi(S)\subset \mathbb{R}^n$ is a spectrahedral shadow.
\end{remark}
A spectrahedral shadow is a linear projection of the feasible set of a semidefinite
program; the feasible set itself is called a \textit{spectrahedron}. Expressing the attainable region as a spectrahedral shadow has the
advantage that good bounds for the
optimisation of a linear objective function can be computed by semidefinite programming.
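To illustrate this advantage, the following sketch in Python uses the \texttt{cvxpy} package to maximise a linear functional over a spectrahedral shadow, here the projection of the $3\times 3$ elliptope onto two of its off-diagonal entries (an illustrative set, not one arising from a reaction network):
\begin{verbatim}
import cvxpy as cp

x1, x2, y = cp.Variable(), cp.Variable(), cp.Variable()
M = cp.bmat([[1,  x1, x2],
             [x1, 1,  y ],
             [x2, y,  1 ]])                    # spectrahedron: M >= 0
problem = cp.Problem(cp.Maximize(x1 + x2), [M >> 0])
problem.solve()                                # a semidefinite program
print(problem.value)                           # -> 2.0, at x1 = x2 = 1
\end{verbatim}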
\begin{proposition}\label{ss}
The convex hull of the trajectory of a linear chemical reaction network whose Laplacian has rational eigenvalues is a spectrahedral shadow.
\end{proposition}
\begin{proof}
Consider a rational curve $C : I \longrightarrow \mathbb{R}^n$ given by $t \mapsto
(t^{a_1},t^{a_2},\ldots,t^{a_n})$ in $\mathbb{R}^n$ over an interval $I \subset \mathbb{R}
$ where $a_i$ are positive rational numbers for $i\in \{1,2,\dots, n \}$.
For $0\leq t\leq 1$ this is a semialgebraic set $S$ of dimension 1.
By Theorem 6.1 in Claus Scheiderer's paper \cite{Sch}, the closure of convex hull of $S$ is a spectrahedral
shadow.
If the eigenvalues $\lambda_i$ of the Laplacian of a linear chemical reaction network are rational, then they are real, and substituting $u=e^{-t}\in(0,1]$ turns (\ref{eq:linsol}) into a monomial parametrisation with rational exponents $a_i=-\lambda_i\geq 0$. The trajectory of the dynamical system is then the image of $S$ under the linear map
$\phi : S\longrightarrow \mathbb{R}^s$, given by the matrix
whose $i$-th column vector is the transpose of the row vector $(({x}_0 \cdot {r}_i){l}_i).$
The convex hull of the trajectory of a linear chemical reaction network is the linear
image of convex hull of $S$ and
therefore, by \cref{rem} is a spectrahedral shadow.
\end{proof}
Using \cref{fc} and \cref{ss}, in the theorem below, we characterise a
class of linear systems for which the attainable region is a spectrahedral shadow.
\begin{theorem}
The attainable region of a linear chemical reaction network whose Laplacian has rational eigenvalues is a spectrahedral shadow.
\end{theorem}
\begin{proof}
From \cref{ss}, we know that the convex hull of the trajectory of a linear chemical
reaction network whose Laplacian has rational eigenvalues is a spectrahedral
shadow. Also, for linear dynamical systems the convex hull is forward closed by
\cref{fc}. Therefore, the attainable region is the convex hull of the trajectory and
is a spectrahedral shadow.
\end{proof}
For a linear system whose Laplacian has rational eigenvalues, we can hence obtain an exact expression of its attainable region as a spectrahedral shadow. This enables us to use powerful methods of real algebraic geometry to study the properties of these sets.
One future stream of research, which we will not pursue in this text, is an extension of the above result to linear systems whose Laplacian has real, rather than rational, eigenvalues. To the best of the author's knowledge this is not yet known. A property of this type would be an important step towards understanding the attainable region of a general dynamical system.
\section{Weakly Reversible Chemical Reaction Networks}
Following \cite{CDSS}, a chemical reaction network is called {\em weakly reversible} if each connected component of the underlying directed graph is strongly connected. Following the usual terminology from graph theory, a directed graph is strongly connected if there is a directed path between any two of its vertices.
In this article we will restrict ourselves to weakly reversible systems whose underlying graph has only one strongly connected component. These are called {\em linkage class one} systems. For these systems we conjecture the following:
\begin{conjecture}
For weakly reversible systems with linkage class one the convex hull of the trajectory reaching a positive stable point is forward closed.
\end{conjecture}
In order to provide computational evidence for this conjecture we followed a two-step procedure, outlined below. All computations were performed using the freely available software $\mathtt{SAGE}$ \cite{Sg}.
\medskip
\paragraph{Step one} Given $n$ vertices, we generate a random
digraph. This graph is usually not strongly connected. We then add edges randomly
between the strongly connected components of the generated graph to make it
strongly connected. To each
vertex of the graph we associate a monomial in $s$ indeterminates up to
a degree $d$. This represents the chemical complex at that vertex as introduced in Section~2.
These monomials are the entries of a matrix $\Psi (x)$ and the powers in the monomials give the matrix $Y$ in (\ref{eqn:ds}).
We obtain the matrix $A_{\kappa}$ by assigning random positive edge
weights. These three matrices now fully specify a random dynamical system for a weakly reversible CRN.
We numerically integrate the obtained dynamical system in $\mathtt{SAGE}$
using the Runge-Kutta 4 method. In order for it to effectively integrate we keep the
degree of monomials below~5. For higher
values of $d$, one may use a higher order Runge-Kutta method for integration.
This integration gives us points that lie on the solution $C$ of the system. Because we want to make a statement about the convex hull of the trajectory, we
now construct a polytope in dimension $s$ which is the convex hull of the points obtained.
$\mathtt{SAGE}$ uses the cdd library for
this.
In our computations, we computed $10,000$ points per trajectory. The trailing points
are closer to each other than the initial points, so we tailored the set of points for
which we compute the convex hull $S$. Using a random point $c$ in $S$ as the initial point, for the same system we integrate again to get a new set of points on the new trajectory $C^{\prime}$ and ask if
$S$ contains
the points on $C^{\prime}$.
This was done for various trajectories in $\mathbb{R}^s$ for $s=2,3,4,5,6.$
During these computations we faced various challenges. Most of these pertained to
the numerical nature of the computations and to the large number of
points. In particular, the computations were not always feasible in dimensions higher than $s=6$. Computing the polytope becomes harder for a large set of points and this required us to tailor the set of points accordingly.
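In outline, one run of step one can be sketched as follows (a simplified Python/NumPy re-implementation, not the original $\mathtt{SAGE}$ code; in particular, strong connectivity is forced here by adding a directed cycle, and hull membership is tested with a linear program up to numerical tolerance):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def random_wr_system(n, s, d):
    """Random strongly connected mass-action system: random exponent
    matrix Y and a random negative Laplacian A containing an n-cycle."""
    Y = rng.integers(0, d, size=(n, s))
    A = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
    A += np.roll(np.eye(n), 1, axis=1)       # directed cycle 0->1->...->0
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))      # row sums zero
    return Y, A

def f(x, Y, A):                              # x' = Psi(x) A Y
    return np.prod(x ** Y, axis=1) @ A @ Y

def rk4(x0, Y, A, h=1e-3, steps=10000):
    xs, x = [x0], x0
    for _ in range(steps):
        k1 = f(x, Y, A); k2 = f(x + h/2*k1, Y, A)
        k3 = f(x + h/2*k2, Y, A); k4 = f(x + h*k3, Y, A)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        xs.append(x)
    return np.array(xs)

def in_hull(points, c):
    """c in conv(points) iff mu >= 0, sum(mu) = 1, points^T mu = c."""
    A_eq = np.vstack([points.T, np.ones(len(points))])
    res = linprog(np.zeros(len(points)), A_eq=A_eq,
                  b_eq=np.append(c, 1.0))
    return res.status == 0
\end{verbatim}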
\paragraph{Step two} It was proven in \cite{DJFN} and elaborated upon again in \cite{Bor} that every weakly
reversible chemical reaction network has at least one positive steady state. During our computations in step one,
we observed all the systems to be converging to a steady state. Moreover, the trajectories
starting from any interior point also converged to the \textit{same}
point. This may, however, be due to the fact that the random graph we generated almost always had a single stationary point. This leads us to \cref{prob}.
Since the computations were numerical, as the
dimensions got higher it became difficult to compute the polytope for more than
the first 100 points. Therefore, in the
second part of
the computations, we attempted to double-check
the points which in step one were found not to lie in the convex hull, possibly due to errors in the integration or in computing the convex hull with floating-point numbers.
The trailing points on $C^{\prime}$, although reported as not contained in $S$ in many instances, were found to lie very close to some point
on the starting trajectory.
Secondly, since we had tailored our set of points, we re-ran the check with a different subset of points on $C$
for which we computed the convex hull. In some instances this new polytope contained the points that
were not contained in the first polytope.
From the various computations we have evidence, at least in lower dimensions, that for
strongly connected graphs the convex hull of the trajectory is forward closed. These computations also compel us to ask the following question:
\begin{problem}\label{prob}
In a weakly reversible system with random edge weights and a random starting point, how likely is that the stoichiometry space has multistationary points?
\end{problem}
Or put differently, in the space of weakly reversible systems in a given dimension $d$, how big is the space of systems that have multiple stationary points?
This seems a fairly hard question for general weakly reversible systems, and to
date not much is known about this problem. Similar questions have been asked for
one-dimensional stoichiometry space in \cite{JS}. In general, it would be useful to
be able to characterise the systems that have multiple stationary points. Such a characterisation may give us
insight into the systems where the convex hull of trajectories is not forward closed
and the attainable region is greater than the convex hull.
\section{Facial Structure}
In the previous section, we conjectured that for chemical reaction networks given by strongly connected graphs, the attainable region is the convex hull of the trajectory. To understand this object using convex algebraic geometry it is imperative to study its {\em faces}. For parametrized curves, one such approach was suggested by Cynthia Vinzant in Section 5.2 of her PhD dissertation \cite{Vin}. We give the details of this below.
Let $C$ be a parametrized curve given by $
\textbf{g}=(g_1(t), \dots , g_m(t))$ for $t\in \mathcal{D}$. Here, $\mathcal{D}\subseteq \mathbb{R}$ is a closed interval and $g_i(t)$ are univariate polynomials in $t$ for $i\in \{1,2,\ldots ,m\}$.
The $r$-th {\em face-vertex set} Face($r$) of the curve $C$ is defined to be
$$\{(d_1,\dots, d_r)\in \mathcal{D}^r | \text{ } \textbf{g}(d_1),\dots , \textbf{g}(d_r)
\text{ are the vertices of a face of the convex hull of $C$} \}.$$
For $p\leq r$, let $d_1,\ldots ,d_p \in \intr (\mathcal{D})$ be interior points and
$d_{p+1},\ldots ,d_r \in \partial\mathcal{D}$ be boundary points. As the $d_i$ vary in $\mathcal{D}$, the face-vertex set Face($r$)
is always contained in the variety cut out by
\begin{equation}\label{eq:facevset3}
\text{minors}\left( n+1,\begin{pmatrix}
\, 1 & \ldots & 1 & 0 & \ldots & 0 \\
\, \textbf{g}(d_1) & \ldots & \textbf{g}(d_r) & \textbf{g}^{\prime}(d_1) & \ldots & \textbf{g}^{\prime}(d_p)
\end{pmatrix}
\right).
\end{equation}
This describes a variety in $\mathcal{D}^r$ that contains the set Face($r$) for the convex hull of $C$.
In this section, we apply this approach to dynamical systems and illustrate it in the examples below. This method has not previously been used to understand such convex hulls. Note that for any curve $C$, if $c_1,\ldots , c_r$ are points on the curve that form the vertex set of some face of the convex hull of $C$, then the
\begin{equation}\label{eq:facevset2}
\text{minors}\left( n+1,\begin{pmatrix}
\, 1 & \ldots & 1 & 0 & \ldots & 0 \\
\, c_1 & \ldots & c_r & c^{\prime}_{1} & \ldots & c^{\prime}_{p}
\end{pmatrix}
\right)
\end{equation}
vanish, where $c^{\prime}_{i}$ denotes the tangent vector at $c_i$. We will exploit this fact and give a representation of the faces.
\bigskip
In our case, we only had points on the curve, which makes it difficult to express faces as a variety. For a curve in $s$ dimensions we were able to look at the following cases:
\begin{itemize}
\item Face($\frac{s+1}{2}$) if $s$ is odd.
\item Face($\frac{s}{2}+1$) if $s$ is even.
\end{itemize}
The above two conditions make the matrix in (\ref{eq:facevset2}) a square matrix and the corresponding faces are then given by the vanishing of the determinant.
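For a curve whose convex body is 3-dimensional, this determinant test is a few lines of code (a Python sketch; it assumes the points and tangents have already been expressed in coordinates on the 3-dimensional stoichiometry subspace, so that the matrix is square):
\begin{verbatim}
import numpy as np

def face2_sign(c_i, c_j, dc_i, dc_j):
    """Sign of the 4x4 determinant deciding Face(2): c_i, c_j are
    3-vectors on the curve, dc_i, dc_j the tangent vectors there."""
    top = np.hstack([np.ones(2), np.zeros(2)])
    bottom = np.column_stack([c_i, c_j, dc_i, dc_j])
    return np.sign(np.linalg.det(np.vstack([top, bottom])))
\end{verbatim}
Sweeping this over all pairs $i\leq j$ of sample points and plotting the sign reproduces pictures such as \cref{Fig:test1}.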
We used the software $\mathtt{Mathematica}$~\cite{Mat} to plot the sign of the determinant for all combinations of points on the curve. We illustrate this for curves in dimensions 3, 4 and 5 below. These curves are given by ODEs which satisfy the condition in the following lemma due to \cite{HT}.
\medskip
\begin{figure}[t]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.4\linewidth]{edgecurve-1.pdf}
\captionof{figure}{Face(2) of a curve in 3-space.}
\label{Fig:test1}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.4\linewidth]{edgecurve5-1.pdf}
\captionof{figure}{Face(3) with initial point as one vertex of the 3-face for a curve in 4-space.}
\label{fig:test2}
\end{minipage}
\end{figure}
\begin{lemma}\rm
A dynamical system $\dot{\textbf{x}}=\textbf{f(x)}$ where each $f_i$ is a polynomial in $s$ variables arises from a CRN with mass-action kinetics if and only if every monomial in $f_i$ with negative coefficient is divisible by $x_i$ for all $i\in\{1,2,\ldots,s\}.$
\end{lemma}
By this lemma there exists a chemical reaction network for each of the systems in the examples below.
\begin{example}\rm
Consider the following system with initial point $x_{0}=(10,8,9,2)$,
\begin{equation}
\begin{split}
\dot{x_1} & =-2x_1^2 - 6x_1x_4 + 10x_3x_4 \\
\dot{x_2} & =x_1^2 - 8x_2x_3\\
\dot{x_3} & =x_1^2 + 6x_1x_4 - 9x_3x_4 \\
\dot{x_4} & = 8x_2x_3 - x_3x_4.
\end{split}
\end{equation}
The solution of this system lies in an affine translate of the stoichiometry
subspace, which has dimension 3; hence the convex hull has dimension 3. To find the curve of Face(2),
we consider the matrix given by
\begin{equation}\label{eq:facevset4}
\begin{pmatrix}
\, 1 & 1 & 0 & 0 \\
\, c_{3i} & c_{3j} & c^{\prime}_{3i} & c^{\prime}_{3j}
\end{pmatrix}
\end{equation}
as in (\ref{eq:facevset2}) for $i,j\in \{2,3, \ldots , 2000 \}$ and $i\leq j$. We plot this in \cref{Fig:test1}, where blue and red represent a negative and a positive sign of the determinant, respectively. The boundary separating the red and blue areas represents the Face(2) of this system.
$\hfill\square$
\end{example}
\bigskip
Next, we consider a curve with a 4-dimensional convex body.
\begin{example}\rm
Consider the following system with initial point $x_{0}=(5,8,6,2)$,
\begin{equation}
\begin{split}
\dot{x_1} & =-10x_1^2 + 12x_2x_3 + 6x_3^2 + 4x_3x_4 - 5x_1 \\
\dot{x_2} & =2x_1^2 - 8x_2x_3 + x_1\\
\dot{x_3} & =8x_1^2 - 8x_2x_3 - 6x_3^2 + 5x_1 \\
\dot{x_4} & = - 8x_3x_4 + 4x_1.
\end{split}
\end{equation}
For this system we consider the representation of faces that always have the initial point as one of their vertices. This is given by the matrix in \cref{eq:facevset6} with the initial point as the boundary point. The boundary of the red and the blue areas in \cref{fig:test2} gives the curve describing the Face(3) of the system: every point on this curve represents a face of the convex hull having the initial point as one of its three vertices. The matrix considered is
\begin{equation}\label{eq:facevset6}
\begin{pmatrix}
\, 1 & 1 &1 & 0 & 0 \\
\, c_{3i} & c_{3j} &x_0& c^{\prime}_{3i} & c^{\prime}_{3j}
\end{pmatrix}
\end{equation}
for $i,j\in \{2, \ldots , 2000 \}$ and $i\leq j$.
$\hfill\square$
\end{example}
\begin{figure}[t]
\begin{center}
\vspace{-0.2in}
\includegraphics[width=3.5cm]{edgecurve3d-1.pdf}\hspace*{2.3em}
\includegraphics[width=3.5cm]{edgecurve3d-2.pdf}\hspace*{2.3em}
\includegraphics[width=3.5cm]{edgecurve3d-3.pdf}
\vspace{-0.3in}
\end{center}
\caption{\label{fig:drei}
Face(3) of a 5-dimensional convex body}
\end{figure}
The following example depicts the Face(3) of a trajectory in 5 dimensions.
\begin{example}\rm The system given by
\begin{equation}
\begin{split}
\dot{x_1} & =4x_3x_4x_6 - 8x_1x_6^2 + 2x_2^2 + 4x_3x_5 \\
\dot{x_2} & =-10x_2^2x_4 + 4x_3x_4x_6 + 4x_1x_6^2 - 12x_2^2 + 6x_6^2\\
\dot{x_3} & =5x_2^2x_4 - 6x_3x_4x_6 + 6x_1x_6^2 - 4x_3x_5 + 2x_6^2 \\
\dot{x_4} & =-4x_2^2x_4 - 4x_3x_4x_6 + 2x_1x_6^2 + 2x_6^2\\
\dot{x_5} & =4x_2^2x_4 + 4x_1x_6^2 - 4x_3x_5 \\
\dot{x_6} & = x_2^2x_4 + 2x_3x_4x_6 - 14x_1x_6^2 + 12x_2^2 + 8x_3x_5 - 8x_6^2
\end{split}
\end{equation}
has a stoichiometry subspace of dimension 5. \Cref{fig:drei} shows the sign of the determinant of
\begin{equation}
\begin{pmatrix}
\, 1 & 1 &1 & 0 & 0 &0 \\
\, c_{5i} & c_{5j} &c_{5k}& c^{\prime}_{5i} & c^{\prime}_{5j} & c^{\prime}_{5k}
\end{pmatrix}
\end{equation}
for $i,j,k\in \{2,\ldots , 200\}$ and $i<j<k.$
$\hfill\square$
\end{example}
This adaptation alone is not sufficient
for understanding the convex hulls. When
applied to the trajectories it could not be used to give a representation of all the
faces, and therefore, for curves coming from a dynamical system, it does not
give a general description. Clearly, there are some rich veins of research here which can be pursued much further.
\section{Discussion}
This work is motivated by an optimisation problem in chemistry, namely the one of finding the most cost-efficient reactor. It is of great interest for industrial chemists to find the optimum reactor while improving the reaction efficiency. The feasible set of this problem is a convex object. Since this region is a geometric object this problem lies on the interface of chemistry and mathematics. The formalism that we have established in this paper now provides the basis to describe and explore the properties of these convex sets coming from chemistry, using the language of algebraic geometry and characterise them. For certain linear systems we could express this
convex object as a spectrahedral shadow. However, by results due to Claus Scheiderer in \cite{Sch1} it is not
possible to express every convex object as a spectrahedral shadow.
There are a number of intriguing new streams of research coming out of our analysis. We conjectured that the attainable region of weakly reversible systems with linkage class one is the convex hull of the trajectory. In the future, we hope to work towards resolving this conjecture and to give a representation of
these regions. One possible way of tackling this problem could be via an approximation of this region by
the semidefinite representable sets. As a second step, it would also be very interesting to study the
systems where the attainable region is larger than the convex hull of the trajectory. In particular,
understanding the attainable regions of the systems with multistationary points
may prove to be insightful. Giving a representation by way of studying the faces is yet another interesting problem for convex hulls coming from such trajectories.
\bigskip
\begin{small}
\noindent
{\bf Acknowledgements.}
The author would like to express her gratitude to Bernd Sturmfels for suggesting the problem, and providing valuable advice and support along the way. She is grateful to Christiane G\"orgen for feedback and useful comments on the draft of the manuscript. She is thankful to Amir Ali Ahmadi, Anne Shiu and Cynthia Vinzant for their help and useful discussions.
The author was funded by the
International Max Planck Research School {\em Mathematics in the
Sciences} (IMPRS).
\end{small}
\label{sec:NAO_description}
This section describes the method for direction of arrival (DOA) estimation in tasks 1 and 2, that was performed with the Nao robot array.
In this paper, the same spherical coordinate system is used as described in \cite{lollmann2018locata}, denoted by $(r,\theta,\phi)$, where $r$ is the distance from the origin, and $\theta$ and $\phi$ are the elevation and azimuth angles, respectively. Consider an array of $Q$ omni-directional microphones, representing the array mounted on Nao. In this case, let $\{\mathbf{r}_q\equiv(r_q,\theta_q,\phi_q)\}_{q=1}^{Q}$ denote the microphone positions, arranged according to the configuration used in the LOCATA challenge for Nao \cite{lollmann2018locata}.
In addition, a sound field comprising $L$ far-field sources is also considered,
arriving from directions $\{\Psi_{l}\equiv(\theta_{l},\phi_{l})\}_{l=1}^{L}$. These $L$ sources
can represent the direct sound from speakers in a room and the reflections
due to objects and room boundaries. In this case, the sound
pressure measured by the array can be described in the short-time Fourier transform (STFT) domain
as \cite{van2002optimum}
\begin{align}
\mathbf{p}(\tau,\omega)=\mathbf{V}(\omega,\mathbf{\Psi})\mathbf{s}(\tau,\omega)+\mathbf{n}(\tau,\omega),\label{eq:pressureModel_vantrees-1}
\end{align}
where $\mathbf{p}(\tau,\omega)=\begin{bmatrix}
p(\tau,\omega,\mathbf{r}_1),p(\tau,\omega,\mathbf{r}_2),\ldots,p(\tau,\omega,\mathbf{r}_Q)
\end{bmatrix}^T$ is a $Q\times1$ vector holding the recorded sound pressure, $\mathbf{s}(\tau,\omega)=\begin{bmatrix}s_{1}(\tau,\omega),s_{2}(\tau,\omega),\ldots,s_{L}(\tau,\omega)\end{bmatrix}^{T}$
is an $L\times1$ vector holding the source signal amplitudes, $\mathbf{V}(\omega,\mathbf{\Psi})$
is a $Q\times L$ matrix holding the steering vectors between
each source and microphone and with $\mathbf{\Psi}=\begin{bmatrix}
\Psi_{1} , \Psi_{2} ,\ldots, \Psi_{L}
\end{bmatrix}^T$ denoting the DOAs of the sources, $\mathbf{n}(\tau,\omega)=\begin{bmatrix}n_{1}(\tau,\omega),n_{2}(\tau,\omega),\ldots,n_{Q}(\tau,\omega)\end{bmatrix}^{T}$
is a $Q\times1$ vector holding the noise components, $\tau$ and
$\omega$ are the time and frequency indices, respectively, and
$(\cdot)^{T}$ denotes the transpose operator.
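For a single TF bin, the model (\ref{eq:pressureModel_vantrees-1}) is straightforward to simulate; in the NumPy sketch below the array sizes are illustrative and the steering phases are placeholders rather than the actual array manifold of Nao:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Q, L = 12, 2                                 # microphones, sources
V = np.exp(1j * rng.uniform(0, 2*np.pi, (Q, L)))       # placeholder V
s = (rng.standard_normal(L) + 1j*rng.standard_normal(L)) / np.sqrt(2)
n = 0.05 * (rng.standard_normal(Q) + 1j*rng.standard_normal(Q))
p = V @ s + n                                # measured pressure vector
\end{verbatim}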
\begin{comment}
The microphone domain direct-path dominance test (MD-DPD) first identifies
time-frequency (TF) bins in the STFT domain for which the direct-path
from a single speaker is dominant. Then, MUSIC \cite{MUSIC} with a signal subspace
of single dimension is applied to each of the selected bins yielding
DOA estimation for each bin. Finally, the DOA estimations from the
different bins are fused together using K-means to obtain the final
DOAs estimation.
\end{comment}
The signals recorded by the Nao robot array were transformed
to the STFT domain with a Hanning window of 512 samples (32 ms), and
with an overlap of 50\%. A focusing process was then applied to this
measured pressures vector in order to remove the frequency dependence
of the steering matrices across every $J_{\omega}=15$ adjacent frequency
indexes. The purpose of the focusing process is to enable the implementation
of frequency-smoothing while preserving the spatial information. The
focusing was performed by multiplying the sound pressure vector at each
frequency index, $\omega$, with a focusing transformation $\mathbf{T}\left(\omega,\omega_{0}\right)$
that satisfies
\begin{equation}
\mathbf{T}\left(\omega,\omega_{0}\right)\mathbf{V}(\omega,\mathbf{\Psi})=\mathbf{V}(\omega_0,\mathbf{\Psi}),\label{eq:focusing eq-1}
\end{equation}
where $\omega_{0}$ is the center frequency in the frequency-smoothing
range. The focusing transformations were computed in advance according
to \cite{MD_DPD_Submitted} using spherical harmonics (SH) order of $N=4$. With ideal
focusing, the cross-spectrum matrix of the focused sound pressure can be
written as \cite{MD_DPD_Submitted}
\begin{equation}
\mathbf{S}_{\tilde{p}}\left(\tau,\omega\right)=\mathbf{V}(\omega_0,\mathbf{\Psi})\mathbf{S}_{s}\left(\tau,\omega\right)\mathbf{V}(\omega_0,\mathbf{\Psi})^{H}+\mathbf{S}_{\widetilde{n}}\left(\tau,\omega\right)\label{eq:focused pressure cross-spectrum}
\end{equation}
where $\mathbf{S}_{\tilde{p}}\left(\tau,\omega\right)=E\left[\mathbf{T}\left(\omega,\omega_{0}\right)\mathbf{p}\left(\tau,\omega\right)\mathbf{p}\left(\tau,\omega\right)^{H}\mathbf{T}\left(\omega,\omega_{0}\right)^{H}\right]$,
$\mathbf{S}_{s}\left(\tau,\omega\right)=E\left[\mathbf{s}\left(\tau,\omega\right)\mathbf{s}\left(\tau,\omega\right)^{H}\right]$,
$\mathbf{S}_{\tilde{n}}\left(\tau,\omega\right)=E\left[\mathbf{T}\left(\omega,\omega_{0}\right)\mathbf{n}\left(\tau,\omega\right)\mathbf{n}\left(\tau,\omega\right)^{H}\mathbf{T}\left(\omega,\omega_{0}\right)^{H}\right]$, and $(\cdot)^{H}$ is the Hermitian operator.
In practice, an averaging across $J_{\tau}=3$ time frames is used
to approximate the expectation. A frequency-smoothing is then applied
to $\mathbf{S}_{\tilde{p}}\left(\tau,\omega\right)$ by averaging
across $J_{\omega}=15$ frequency bins. Denoting the smoothed variables
by an overline, i.e. $\overline{\mathbf{S}}\left(\tau,\omega\right)=\sum_{j_{\omega}=0}^{J_{\omega}-1}\mathbf{S}\left(\tau,\omega-j_{\omega}\right)$,
the smoothed focused cross-spectrum matrix can be written as
\begin{equation}
\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)=\mathbf{V}(\omega_{0},\mathbf{\Psi})\,\overline{\mathbf{S}_{s}}\left(\tau,\omega\right)\,\mathbf{V}(\omega_{0},\mathbf{\Psi})^{H}+\overline{\mathbf{S}_{\widetilde{n}}}\left(\tau,\omega\right).\label{eq:smoothed focused cross-spectrum}
\end{equation}
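As a rough illustration of this step, the sketch below estimates $\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)$ from an STFT tensor, applying precomputed focusing matrices, averaging over $J_{\tau}$ time frames to approximate the expectation, and smoothing over $J_{\omega}$ frequency bins; the tensor layout and the overall $1/(J_{\tau}J_{\omega})$ normalisation, which does not affect eigenvalue ratios, are conveniences of this sketch.
\begin{verbatim}
import numpy as np

def smoothed_focused_cross_spectrum(P, T_foc, tau, omega,
                                    J_tau=3, J_omega=15):
    # P:     (n_frames, n_freqs, Q) STFT pressure vectors p(tau, omega)
    # T_foc: (n_freqs, Q, Q) focusing matrices T(omega, omega_0)
    Q = P.shape[-1]
    S = np.zeros((Q, Q), dtype=complex)
    for jw in range(J_omega):               # frequency smoothing
        w = omega - jw
        for jt in range(J_tau):             # expectation ~ time average
            pf = T_foc[w] @ P[tau - jt, w]  # focused pressure vector
            S += np.outer(pf, pf.conj())
    return S / (J_tau * J_omega)
\end{verbatim}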
The purpose of the frequency-smoothing operation is to restore the
rank of the source cross-spectrum matrix, $\mathbf{S}_{s}\left(\tau,\omega\right)$,
which is singular when coherent sources, such as reflections, are
present. After applying focusing and frequency-smoothing, the effective-rank
\cite{roy2007effective} of $\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)$
reflects the number of sources $L$, and the noise subspace can be
correctly estimated \cite{MD_DPD_Submitted}. Time-frequency (TF) bins in which the direct-path
is dominant are identified in a similar way to those proposed in the direct-path dominance (DPD)
test \cite{nadiri2014localization}
\[
\mathfrak{\mathcal{A}}_{\text{MD-DPD}}=\left\{ \left(\tau,\omega\right):\frac{\lambda_{1}\left(\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)\right)}{\lambda_{2}\left(\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)\right)}>\mathcal{TH}_{\text{MD-DPD}}\right\} ,
\]
where $\lambda_{1}\left(\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)\right)$
and $\lambda_{2}\left(\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)\right)$
are the largest and the second largest eigenvalues of $\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)$,
and $\mathcal{TH}_{\text{MD-DPD}}$ is the test threshold, chosen independently
for each recording, to ensure that 5\% of all available bins pass
the test. Then, MUSIC with a signal subspace of single dimension was
applied to each of the bins in $\mathcal{A}_{\text{MD-DPD}}$. The noise
subspace was estimated by the singular values decomposition of $\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)$.
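A per-bin sketch of the test and of the subsequent single-source MUSIC step could read as follows; the DOA grid and the steering vectors at $\omega_0$ are assumed to be precomputed, and an eigendecomposition is used, which coincides with the singular value decomposition for the Hermitian matrix $\overline{\mathbf{S}_{\tilde{p}}}\left(\tau,\omega\right)$.
\begin{verbatim}
import numpy as np

def dpd_music_doa(S, steer_grid, grid_doas, thresh):
    # S:          (Q, Q) smoothed focused cross-spectrum of one TF bin
    # steer_grid: (G, Q) steering vectors V(omega_0, Psi) on a DOA grid
    # grid_doas:  (G, 2) the (theta, phi) grid
    eigval, eigvec = np.linalg.eigh(S)      # ascending eigenvalues
    if eigval[-1] / eigval[-2] <= thresh:
        return None                         # bin fails the MD-DPD test
    En = eigvec[:, :-1]                     # (Q, Q-1) noise subspace
    # MUSIC spectrum with a one-dimensional signal subspace
    proj = np.abs(steer_grid.conj() @ En) ** 2
    spectrum = 1.0 / proj.sum(axis=1)
    return grid_doas[np.argmax(spectrum)]
\end{verbatim}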
Next, k-means clustering was performed with the DOA estimates from
the bins that passed the test. For task 1, a single speaker was present,
thus, k-means clustering has been performed with a single cluster.
For task 2, the number of clusters was chosen to be equal to the number of sources, which was estimated for each recording by examining the scatter of DOA estimates on an azimuth-elevation grid, and was therefore effectively assumed to be known a priori. This was performed in order to focus on the performance
of the DOA estimation process rather than on source number estimation. Finally, since the sources in tasks
1 and 2 are known to be stationary, the final DOA estimates have been
associated with a unique source identifier for all timestamps, regardless of source activity.
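The fusion step could then be sketched as below; note that plain Euclidean k-means on $(\theta,\phi)$ pairs ignores the azimuth wrap-around, which is harmless for well-separated clusters but would otherwise call for an angular distance.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def cluster_doas(doa_estimates, n_sources):
    # doa_estimates: (n_bins, 2) (theta, phi) pairs of selected bins
    km = KMeans(n_clusters=n_sources, n_init=10).fit(doa_estimates)
    return km.cluster_centers_              # one DOA per source
\end{verbatim}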
\section{DOA estimation with the Eigenmike array}
\label{sec:EM_description}
This section describes the method for DOA estimation in tasks 1 and 2, which was performed with the Eigenmike array.
The sound pressure system model described in (\ref{eq:pressureModel_vantrees-1}) can be used with $r_q=r$ for all $q=1,\ldots,Q$ and with the same STFT parameters, such that it now describes a spherical array. This formulation can facilitate the processing of signals in the SH domain \cite{SHdomain_cite1,SHdomain_cite2,rafaely2015fundamentals}, which was performed up to SH order $N=3$. Following that, plane wave decomposition was performed, leading to \cite{khaykin2009coherent}:
\begin{align}
\mathbf{a_{nm}}(\tau,\omega)=\mathbf{Y}^H(\mathbf{\Psi}) \mathbf{s}(\tau,\omega)+\tilde{\mathbf{n}}(\tau,\omega),
\label{eq:SHPWDModel}
\end{align}
where $$\mathbf{a_{nm}}(\tau,\omega)=\begin{bmatrix}
a_{00}(\tau,\omega),a_{1(-1)}(\tau,\omega),a_{10}(\tau,\omega),\ldots,a_{NN}(\tau,\omega)
\end{bmatrix}^T$$ is a $(N+1)^2\times 1$ vector holding the recorded plane wave density (PWD) coefficients in the SH domain, $
\mathbf{Y}^H(\mathbf{\Psi})=\begin{bmatrix}
\mathbf{y}^*(\Psi_1) , \mathbf{y}^*(\Psi_2) , \ldots , \mathbf{y}^*(\Psi_L)
\end{bmatrix}
$ is the $(N+1)^2\times L$ steering matrix in this domain, with its columns
$
\mathbf{y}(\Psi_l)=\big[Y_0^0(\Psi_l),Y_{1}^{-1}(\Psi_l),\ldots, Y_{N}^{N}(\Psi_l)\big]^T
$
holding the SH functions $Y_n^m(\cdot)$ of order $n$ and degree $m$. These functions are assumed to be order limited to $N$, which usually holds when both $N=\lceil kr\rceil$ and $(N+1)^2\leq Q$ \cite{truncation_n_ceil_kr,rafaely2015fundamentals}, where $k$ is the wavenumber. The noise components in this domain are described by the $(N+1)^2\times 1$ vector $\tilde{\mathbf{n}}(\tau,\omega)$, where $(\cdot)^*$ denotes the complex conjugate. In this challenge, this plane-wave decomposition was performed in a similar manner to the R-PWD method, described in \cite{Robust_PWD_AlonRafa} (equation (2.27)).
Next, the local TF correlation matrices are computed for every TF bin by \cite{nadiri2014localization}:
\begin{align}
\tilde{\mathbf{S}}_a(\tau,\omega)=&\frac{1}{J_{\tau}J_{\omega}}\sum_{j_{\omega}=0}^{J_{\omega}-1}\sum_{j_{\tau}=0}^{J_{\tau}-1}\mathbf{a_{nm}}(\tau-j_{\tau},\omega-j_{\omega})\nonumber\\
&\times\mathbf{a_{nm}}^H(\tau-j_{\tau},\omega-j_{\omega}),
\label{eq:Ra_Model}
\end{align}
where $J_{\tau}$ and $J_{\omega}$ are the number of time and frequency bins for the averaging, respectively. The values that were chosen for this array are $J_{\tau}=2$ and $J_{\omega}=15$.
Notice in (\ref{eq:Ra_Model}) that frequency smoothing is performed directly without focusing matrices, in this domain \cite{FSdecor2_SH}.
The direct-path dominance enhanced plane-wave decomposition (DPD-EDS) test is designed for PWD measurements in the SH domain, and it uses the local TF correlation matrix $\tilde{\mathbf{S}}_a(\tau,\omega)$, as in (\ref{eq:Ra_Model}). With the aim of identifying TF bins dominated by the direct sound, it was shown in \cite{DPD_EDS_Submitted} that, under some conditions, the dominant eigenvector of $\tilde{\mathbf{S}}_a(\tau,\omega)$, denoted by $\mathbf{u}_1(\tau,\omega)$, may approximately satisfy
\begin{align}
\mathbf{u}_1(\tau,\omega) \propto \mathbf{y}^*(\Psi_1),
\label{eq:u1}
\end{align}
where $\Psi_1$ is the direction of the direct sound in the TF bin.
Motivated by (\ref{eq:u1}), identifying a bin dominated by the direct sound can be achieved by examining $\mathbf{u}_1(\tau,\omega)$ and measuring to what extent it represents a single plane wave. In this challenge, this has been performed by the following MUSIC-based measure
\begin{align}
\mathcal{EDS}(\tau,\omega)=\underset{\Omega}{\text{max}}\,\frac{1}{\norm{ \mathbf{P}_{\mathbf{u}_{1}(\tau,\omega)}^{\perp}\mathbf{y}^{*}(\Omega) }^2},
\label{eq:EDS_measure_MUSIC}
\end{align}
where $\mathbf{P}_{\mathbf{u}_{1}(\tau,\omega)}^{\perp}$
is the projection into the subspace which is orthogonal to $\mathbf{u}_{1}(\tau,\omega)$.
Next, the following DPD-EDS test has been performed:
\begin{align}
\mathcal{A}_{\text{EDS}}=\Big\{ (\tau,\omega): \mathcal{EDS}(\tau,\omega)>\mathcal{TH}_{\text{EDS}} \Big\},
\label{eq:EDS_thr}
\end{align}
where $\mathcal{TH}_{\text{EDS}}$ is the test threshold, which should satisfy $\mathcal{TH}_{\text{EDS}}\gg 1$, and in this challenge was chosen for each recording separately, to ensure that $2.5\%$ of all available bins pass the test.
Similarly to the previous section, a DOA estimation from each TF bin is given by the argument $\Omega$ that maximizes $\mathcal{EDS}(\tau,\omega)$,
\begin{align}
\Omega_{\text{EDS}}(\tau,\omega)=\underset{\Omega}{\text{arg\,max}}\;\frac{1}{\norm{ \mathbf{P}_{\mathbf{u}_{1}(\tau,\omega)}^{\perp}\mathbf{y}^{*}(\Omega) }^2},\qquad \forall\, (\tau,\omega)\in\mathcal{A}_{\text{EDS}},
\label{eq:DOAest_EDS}
\end{align}
already computed in (\ref{eq:EDS_measure_MUSIC}). For further information on the DPD-EDS test, the reader is referred to \cite{DPD_EDS_Submitted,DPD_SPW_EUSIPCO}.
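A per-bin sketch of the measure in (\ref{eq:EDS_measure_MUSIC}) and of the resulting test and DOA estimate is given below; the grid of conjugated SH steering vectors $\mathbf{y}^{*}(\Omega)$ is assumed to be precomputed.
\begin{verbatim}
import numpy as np

def eds_doa(S_a, sh_grid, grid_doas, thresh):
    # S_a:     ((N+1)^2, (N+1)^2) local TF correlation matrix
    # sh_grid: (G, (N+1)^2) conjugated SH steering vectors y*(Omega)
    eigval, eigvec = np.linalg.eigh(S_a)
    u1 = eigvec[:, -1]                      # dominant eigenvector
    # || P_perp y*(Omega) ||^2 with P_perp = I - u1 u1^H
    coef = sh_grid @ u1.conj()              # inner products u1^H y*
    resid = sh_grid - coef[:, None] * u1[None, :]
    power = (np.abs(resid) ** 2).sum(axis=1)
    g = np.argmin(power)                    # maximiser of 1 / power
    return grid_doas[g] if 1.0 / power[g] > thresh else None
\end{verbatim}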
The process of producing the final DOA estimates is performed similarly to the process described for the Nao robot array in the previous section, using k-means clustering.
For most recordings, an analysis frequency range of $[400,6000]\,$Hz was employed, with the exception of several recordings where the frequency range was reduced to $[400,4000]\,$Hz which seemed to yield more tightly dense clusters of DOA estimates.
When the development data of the Eigenmike recordings was analyzed, a relatively constant bias of $+8^{\circ}$ in the azimuth angle, and $-5^{\circ}$ in the elevation angle, relative to the ground truth data, was present. Hence, this bias was subtracted from the final DOA estimates that were calculated with the evaluation data, for all recordings.
\bibliographystyle{IEEEtran}
\section{Introduction}
Scanning capacitance microscopy (SCM) is a useful imaging technique that allows one to acquire topographical features of micro- and nano-sized samples. One of the early SCM instruments, based upon a commercial product (RCA CED video Disc),\cite{mat} is made of a capacitive sensor driven by an ultra-high frequency oscillator (500 MHz or higher), a sharp tip, and a particular sample to be imaged, all of which are in a feedback loop to maintain a maximum resonant output. Primary applications include: surface characterizations of nano structures\cite{will,step} and the profiling of both conductors and insulators, particularly to acquire semiconductor ($p$- and $n$-type) dopant density profiles,\cite{will2,gian} with sub-100 nm resolution. More recently, researchers have developed an integrated capacitance bridge for enabling quantum capacitance measurements at room temperature.\cite{haze}
SCM is essentially a near-field capacitive sensor providing either the direct tip-sample capacitance ($C$), or its gradient with respect to the change of a tip-sample separation ($dC/dx$), the latter being a more common choice for imaging due to its high sensitivity achieved by a lock-in technique. Therefore, it is easy to recognize SCM as a variant of atomic force microscopy (AFM), since the tip-sample {\em electric capacitance} is directly related to the tip-sample {\em electric force}, which is proportional to the gradient of the capacitance itself. For this reason, SCM is frequently employed as another operating mode of atomic force microscopes, providing a direct measure of classical electrostatics between a tip and the surface of a sample.
Previously, we have reported the usefulness of a relaxation oscillator for measuring the capacitance between two metal plates in an attempt to characterize the absolute separation between them.\cite{hankins} Here, we extend our capacitance measurements to a topological mapping of the surface of a sample in both contact and non-contact modes. Our simple, low-cost SCM provides a valuable experimental platform in the undergraduate laboratory where students gain critical exposure to nano-scale imaging techniques. We present precision calibrations of the relaxation oscillator, and successful 2-D surface scans of machined grooves and an American coin.
\section{Relaxation oscillator}
The relaxation oscillator, shown in the top of Fig. 1, consists of a comparator, an external capacitance to be measured, and three resistors. In our previous application,\cite{hankins} there was an internal capacitor $C_{\rm{int}}=47$ pF connected to $C_{\rm{ext}}$ in parallel; however, in the present oscillator, only the capacitance of an external source---plus possible parasitic capacitance---contributes to the characteristic oscillation period set by the $RC$ constant, where $R$ is the 100-k$\Omega$ resistor above the negative input of the op-amp. The other two 100-k$\Omega$ resistors, just below the positive input of the op-amp, act as a voltage divider, with $V_{\rm{+}}$ being just half of the maximum output of the oscillator $V_{\rm{out}}$. The total `capacitance' of the circuit, combining both $C_{\rm{ext}}$ and parasitic capacitance, repeats charging-discharging cycles at $V_{\rm{-}}$, which result in square waves at the oscillator output, as described in the bottom of Fig. 1.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth,clip]{fig1a.eps}
\includegraphics[width=0.8\columnwidth,clip]{fig1b.eps}
\vspace{0.5cm}
\caption{Circuit diagram of a relaxation oscillator (top) and a plot of the oscillator and capacitor outputs, $V_{\rm{out}}$ and $V_{\rm{-}}$, respectively, captured by an oscilloscope (bottom). The total capacitance can be expressed as $C_{\rm{tot}}=C_{\rm{ext}}+C_{\rm{para}}$, where $C_{\rm{para}}$ represents possible parasitic capacitance present in the external wiring. Together with the 100 k$\Omega$-resistor, the $RC$ charging-discharging cycle is established. Note that the maximum of the capacitor output is half of $V_{\rm{out}}$ according to our 1:1 voltage divider.}
\label{fig1}
\end{figure}
To understand the circuit qualitatively, let us assume that $V_{\rm{out}}$ is initially held at a constant maximum voltage $V_0$. At this point, $V_{+}$ is exactly ${\frac{1}{2}V_{0}}$. The external capacitor then starts charging up and the voltage increases at $V_{-}$. Note that, as long as $V_{-}<V_{+}$, the output remains constant at $V_{0}$, and that the voltage across the capacitor keeps on increasing until $V_{-}>V_{+}$, at which point the oscillator output swings to $-V_{0}$. Once the oscillator output flips, the capacitor starts discharging. As long as the output remains at $-V_{0}$, the capacitor continues to discharge until $V_{+}>V_{-}$. This cycle repeats itself.
Quantitatively, we can derive the relationship between the period of the oscillation and the $RC$-time constant.\cite{ham} Applying Kirchhoff's rule and Ohm's law to the $RC$ components of the op-amp, we get: $V_{\rm{out}}=V_{-}+V_{\rm{R}}$, where $V_{\rm{R}}$ is the voltage drop across the resistor. This leads to
\begin{equation}
V_{\rm{out}}-V_{-}=RC\frac{dV_{-}}{dt},
\end{equation}
where $C$ is the total capacitance of the system and $R$ is the 100 k$\Omega$-resistor. The period of the oscillation $T$ can be calculated by considering one half-period during which the capacitor output at $V_{-}$ swings from $-\frac{1}{2}V_0$ to $+\frac{1}{2}V_0$. Solving Eq. 1 with initial conditions: (i) $V_{-}=-\frac{V_0}{2}$; and (ii) $V_{\rm{out}}=V_0$ at $t$=0 yields
\begin{equation}
V_{-}=V_0(1-\frac{3}{2}e^{-t/RC}).
\end{equation}
The time elapse, when $V_{-}$ reaches $+\frac{1}{2}V_0$, is exactly a half-period (e.g. $t=T/2$), so the period of the oscillation is immediately obtained as
\begin{equation}
T=2\ln(3)RC.
\end{equation}
A few remarks are in order with regard to Eq. 3. First, the period of the relaxation oscillator is directly proportional to the capacitance being measured. To give a rough estimate, for $C=10$ pF and $R=100$ k$\Omega$, a typical period to be measured is on the order of 10 $\mu$s (see Section III). Second, Eq. 3 contains the DC offset (i.e. $T_0$ or baseline value of $T$) due to the parasitic capacitance. That is, $T=2\ln(3)R(C_{\rm{ext}}+C_{\rm{para}})$, where the total capacitance is the sum of the {\em variable} external capacitance and the {\em static} parasitic capacitance originating from the circuit wiring and other contacts. Finally, to obtain rough estimates of resolution related to differential capacitance, we take the inverse of the derivative of Eq. 3 with respect to $C_{\rm{ext}}$
\begin{equation}
\frac{\partial{C_{\rm{ext}}}}{\partial{T}}=\frac{1}{2\ln(3)R},
\end{equation}
which gives approximately 5 pF/$\mu$s resolution. For a $100$-MHz (or 0.01-$\mu$s) counter, the lower bound of capacitance sensitivity is 0.01 pF, which is comparable to, or even better than, that of some commercially available RCL meters. In reality, the actual sensitivity is limited by drifts in measurements, such as those caused by temperature and acoustic vibrations, as noticed in our own measurements (see Section IV).
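As a sanity check on these numbers, a few lines of Python reproduce the slope of Eq. 3 and the period-to-capacitance conversion; the baseline $T_0$ used here is a representative value of the parasitic offset found in the calibration below.
\begin{verbatim}
import math

R = 100e3                          # ohms
slope = 2 * math.log(3) * R        # s/F, from Eq. 3

def capacitance_from_period(T, T0=5.2e-6):
    # external capacitance (F) from a measured period T (s),
    # after subtracting the parasitic-capacitance baseline T0
    return (T - T0) / slope

print(1e-6 / slope * 1e12)         # ~4.6 pF per microsecond of period
\end{verbatim}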
In selecting an op-amp, slew rate is an important consideration as the oscillator output rapidly swings between two extrema (e.g. positive to negative). We have chosen the LM7171 as a comparator due to its fast slew rate (4100 V/$\mu$s) and low-noise performance (14 nV/$\sqrt{\rm{Hz}}$). Our previous choice, the OP27, despite its lower-noise performance (3 nV/$\sqrt{\rm{Hz}}$), has a slew rate of only about 3 V/$\mu$s, which overrides the lower bound of capacitance set by the counter. Other options, such as the AD790 and AD8561, which are specifically designed to operate as comparators, provide ultra-fast rise and fall times on the order of nano-seconds. Another approach for building a relaxation oscillator has been proposed by Liu \textit{et al.}\cite{liu} In their configuration, the oscillator operates on two op-amps, one as an integrator for active measurements of the $RC$ constant and the other as a comparator to measure the charging-discharging cycle. While the two approaches are equally viable, we have chosen the single-comparator configuration because of its stable outputs and its simplicity.
\section{Experimental setup and calibrations}
To perform scanning capacitance microscopy, a stage is constructed with three mechanical actuators (two Z825BV and one Z812BV from Thorlabs), which move in the three Cartesian axes, as shown in Fig. 2. Each actuator has 50 nm resolution and travel ranges of 25 mm and 12 mm in lateral and vertical translations, respectively. It is important to ensure good insulation and rigid placement of various parts in the vicinity of tip and sample, because the relaxation oscillator is found to be extremely delicate. For example, it can be disturbed by the tiny capacitance change caused by a person walking near the circuit. We have used a glass tube (Vitro tube) to secure the probe tip, and the various wires connecting to the relaxation oscillator are kept as short as possible to further minimize interference.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth,clip]{fig2.eps}
\caption{Schematic of the experimental setup: The XYZ stage is controlled by the PC to move in relation to the probe tip which is held at a constant location in space. The sample and the probe tip form $C_{\rm{ext}}$ and consequently determine the period of the relaxation oscillator (RO). The period is measured by the counter (Keithley 2015 THD Multimeter) and collected by the PC via a MATLAB data acquisition program. The etched tungsten has a tip size on the order of 2 $\mu$m, which can be readily estimated from the initial diameter of the non-etched wire of 500 $\mu$m. See inset.}
\label{fig2}
\end{figure}
The probe tip is electrochemically etched on a tungsten wire in 2 M sodium hydroxide inside a teflon cell.\cite{kim,mcd} A positive electrode (+2 V) is connected to the top of the tungsten wire with a negative electrode (ground) in the solution. The electrical energy coupled with the basic solution induces effective etching, during which the wire surrounded by the meniscus is etched more slowly than the wire deeper in the solution. Because of this, the tungsten wire becomes thinner towards the tip. The process continues until the weight of the base of the wire exceeds the tensile strength of the etched portion of the wire. When the base falls off, it leaves a sharp needle-like tip, as shown in Fig. 2. Alternatively, a paperclip can be used as a probe tip by cutting its end diagonally, which also produces a relatively sharp tip point.
To calibrate the relaxation oscillator, we used a number of precision capacitors (Vishay Sprague 5\% tolerance), across a range of capacitance values from 1 pF to 100 pF. This direct calibration, shown in the top of Fig. 3, gives a calibration factor of $\alpha=0.206\pm0.032$ $\mu$s/pF (or 4.87$\pm0.75$ pF/$\mu$s) with $T_0=5.2\pm1.0$ $\mu$s, which enables a conversion of period in seconds (s) to the actual unit of capacitance (F). From the standard deviation of the mean (SDOM) of $N$=50 measurements, our oscillator reaches a resolution of 0.001 pF in about 3 seconds. We have also confirmed the capacitance of the precision capacitors using an RCL meter (Fluke PM6303A).
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth,clip]{fig3.eps}
\caption{Direct calibration using precision capacitors of known capacitance (top) and the parallel-plate calibration by mapping the periods at different separation distances between the probe tip and the sample surface (bottom). By fitting the data from the direct calibration to a linear function (i.e. first-order polynomial), we obtain a period-to-capacitance calibration factor: $\alpha$=0.206 $\mu$s/pF. The parallel-plate calibration is somewhat imprecise, because the tip-sample geometry is not precisely known {\em a priori}. The deviation from the $1/d$ fit is evident as the tip-probe separation is decreased.}
\label{fig3}
\end{figure}
An alternative way to calibrate the relaxation oscillator is to employ a parallel-plate system in which capacitance is expected to obey $C=\epsilon_0A/d$ (with $A$ being the effective area of the system) and then to fit the resulting data with a power law form: $T=T_0+\beta/(d-d_0)^{\gamma}$, where $T_0$ is the offset period, $d_0$ is a point of contact, and $\gamma=1$. The pre-factor, $\beta$, is directly related to physical parameters, such as $\epsilon_0$ and $A$. The bottom of Fig. 3 is generated by first bringing the tip almost in contact with the sample, in this case approximately 100 $\mu$m, and then stepping further away in increments of 1 $\mu$m until it reaches a maximum distance determined by the range of the linear actuator. The graph, though strongly displaying the $1/d$ relationship at larger distances (i.e. away from the other plate), suffers a severe deviation from the expected power law at short distances, possibly due to finite parallelism. For this reason, the $1/d$ calibration is not as reliable as the direct calibration using capacitors of known capacitance. From the fit, we obtain: $T_0=4.946\pm0.001$ $\mu$s, $d_0=1.332\pm0.001$ mm, and $\beta=0.035\pm0.001$ $\mu$s$\cdot$mm (or $0.039\pm0.001$ pF$\cdot$mm using the calibration factor $\alpha$). Since $\beta\equiv\epsilon_0A$, we estimate the effective diameter of the probe tip to be roughly $D_{\rm{eff}}=(2.2\pm0.1)$ mm. This is about a factor of three larger than the actual diameter of the probe tip measured to be $D_{\rm{act}}=(0.8\pm0.1)$ mm. In fact, this discrepancy is not surprising, because the precise geometry of the probe tip cannot be known {\em a priori} and, as a consequence, validity of the parallel-plate model does not hold (i.e. $D_{\rm{act}}\neq D_{\rm{eff}}$). The calibration involving a parallel-plate system is an interesting subject in its own right and enables critical assessments of a finite degree of parallelism manifested by a possible deviation from the expected power law.\cite{hankins,mcd}
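A sketch of this fit, using SciPy on synthetic data generated from the model itself (with parameters close to those quoted above), might read as follows; with real data, \texttt{d} and \texttt{T} would be the recorded separations and periods.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(d, T0, d0, beta):
    # power-law form with gamma = 1: T = T0 + beta / (d - d0)
    return T0 + beta / (d - d0)

rng = np.random.default_rng(1)
d = np.linspace(1.45, 2.3, 40)                 # separations [mm]
T = model(d, 4.95, 1.33, 0.035) \
    + 0.001 * rng.standard_normal(d.size)      # periods [us]

(T0, d0, beta), pcov = curve_fit(model, d, T, p0=(5.0, 1.3, 0.04))
perr = np.sqrt(np.diag(pcov))                  # 1-sigma uncertainties
\end{verbatim}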
\section{Temperature drifts}
We have found a striking correlation between capacitance and temperature. The tip-sample capacitance at a fixed position, as measured by the period of the relaxation oscillator, is shown to be inversely related to the resistance of a 10-k$\Omega$ thermistor (TH10k from Thorlabs), and hence positively correlated with the temperature of the op-amp. This behavior is clearly seen in Fig. 4. The dotted line corresponds to the resistance of a thermistor mounted inside the relaxation oscillator circuit box during an 11-day period. The actual relationship between the resistance of the thermistor and temperature is rather complicated.\cite{thorlab} But, to give a rough estimate, we have $\Delta T_{\rm{C}}/\Delta R\sim$2 C$^{\circ}$/k$\Omega$ at room temperature ($\sim20 ^\circ$C), translating into a 2 C$^{\circ}$ change over the course of the 11-day period. This variation in temperature then corresponds to a $\Delta C=0.3$ pF change in capacitance.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth,clip]{fig4.eps}
\caption{Plot of thermistor resistance (i.e. temperature) and period (i.e. capacitance) data over the course of 11 days displaying a strong correlation between them: As a result, a small drift of room temperature strongly affects the capacitance measurements. For temperature measurements, we have used a thermistor whose resistance is inversely proportional to temperature. Hence, the actual temperature and the capacitance are proportional to each other (e.g. positively correlated).}
\label{fig4}
\end{figure}
To suppress the temperature-driven fluctuations, it is possible to implement a temperature control system in which one side of the op-amp is in contact with a thermoelectric coupler (TEC), while the other side is in contact with a heat sink. A proportional-integral-derivative (PID) controller then actively stabilizes the system to a set temperature. Building a PID controller circuit is relatively simple and provides great stability ($\sim10$ mK).\cite{caltech}
In the absence of a temperature-stabilizing mechanism, it becomes necessary to quantify the degree of correlation to account for any temperature-driven capacitance drift.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth,clip]{fig5.eps}
\caption{A closer look of the temperature-capacitance correlation over a 15-hour period: The top graph plots both resistance and period over time to show the inverse relationship between them, while the lower graph plots them against each other to extract the linear correlation coefficient, which is calculated to be $\vert r\vert=0.99$ with a slope of 18.5 k$\Omega$/$\mu$s. The variation in period during 15 hours corresponds to an overall change in capacitance of 0.05 pF. }
\label{fig5}
\end{figure}
Because our typical surface scan (in the non-contact mode) takes on the order of several hours, we focus on the capacitance-temperature fluctuation over a 15-hour period (top of Fig. 5) and obtain the linear correlation coefficient (bottom of Fig. 5) using the definition\cite{taylor}: $r\equiv\frac{\sigma_{xy}}{\sigma_x\sigma_y}$, where $\sigma_{xy}=\sum(x_i-\bar{x})(y_i-\bar{y})/N$ is the covariance of $x$ (period) and $y$ (resistance) and $\sigma_x=\sqrt{\sum(x_i-\bar{x})^2/N}$ and $\sigma_y=\sqrt{\sum(y_i-\bar{y})^2/N}$ represent the standard deviations of $x$ and $y$, respectively, with each sum running over $N$ measurements. The obtained linear correlation coefficient is 0.99 with a best slope of 18.5 k$\Omega$/$\mu$s, or in terms of the actual unit of capacitance and temperature: 0.2 pF/C$^{\circ}$. This slope can then be directly used to correct any temperature-driven fluctuations.
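In practice this amounts to a few lines of NumPy, where \texttt{x} and \texttt{y} stand for the logged period and resistance time series.
\begin{verbatim}
import numpy as np

def linear_correlation(x, y):
    # Pearson r = sigma_xy / (sigma_x sigma_y);
    # equivalent to np.corrcoef(x, y)[0, 1]
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).mean() / (dx.std() * dy.std())

def drift_slope(x, y):
    # least-squares slope of y against x (e.g. kOhm per microsecond),
    # used to subtract the temperature-driven drift from the periods
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).sum() / (dx * dx).sum()
\end{verbatim}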
\section{Scanning result I: Non-contact mode}
Presented in Fig. 6 is the result of scanning over a brass sample machined with the letters `S' and `U' (for Seattle University). The two letters are convex-shaped with a depth of 2 mm. For this particular scan, a paperclip is used as the probe tip and is moved across the whole sample at a constant height in two dimensions (5 mm by 4 mm). Hence, the variation in capacitance is directly caused by the change in depth of the sample itself and represents a topographical map of the sample surface. Data are collected approximately every 100 $\mu$m for a total of 7200 points.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth,clip]{fig6.eps}
\caption{Surface plot of a two-dimensional scan in constant-height mode: The probe tip is held at a fixed height and the stage containing the sample is moved in relation to it. Capacitance (period) measurements are conducted at regularly spaced intervals. The capacitance data directly correspond to height variations and subsequently produces a topographical map of the sample being scanned. The distance between pixels (i.e. lateral resolution) is 100 $\mu$m. A clear contrast of depth is obtained with the letters `SU' distinctly resolved.}
\label{fig6}
\end{figure}
The main advantage of this method is that the tip and the sample never come in direct contact. Reliable scans are obtained as long as the tip-sample distance is maintained sufficiently small ($<$10 $\mu$m). Although the method is completely ``touch-free'' and thus non-destructive, several disadvantages also exist: First, as we have previously discussed, the effect of the temperature drift over time causes the capacitance to fluctuate. The capacitance fluctuation is quite inevitable in the non-contact method, and either active temperature stabilization or off-line treatment of data using the linear correlation coefficient becomes necessary. Second, the system is extremely susceptible to acoustic disturbance. For example, the period measurement can be affected by something as minor as a person walking within a few feet of the experimental setup. It might be useful to place the setup inside an enclosure on a vibration-control table, but our scanning setup sat on an optical table without any enclosure. Finally, it is significantly less capable of scanning sharp cliffs or valleys in a sample, because the probe tip is not only interacting with the section of the sample immediately below it, but also with portions of the sample in the immediate vicinity of the relevant point. Depending on the topography of the sample, the surrounding (secondary) portion of the sample may be much closer to the tip than the relevant (primary) portion of the sample to be imaged, leading to an inflated period measurement.
\section{Scanning result II: Contact mode}
The contact method, a product of which is shown in Fig. 7 as a two-dimensional scan over the face of Abraham Lincoln on a US penny, actually brings the probe tip into direct contact with the sample. In this contact method, we (i) record the height at which a sharp fall in period measurements occurs; (ii) retract the tip a set distance away from the initial location, usually about 1 $\mu$m; and (iii) approach a new point to find the height at which the sharp fall occurs at that location. The procedure is repeated until a full topographical map of height variations is acquired over the entire surface of a sample.
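In pseudocode form, the scan loop could be implemented as below; \texttt{stage} and \texttt{read\_period} are hypothetical interfaces to the actuators and to the counter, and the 50\% drop criterion is an illustrative stand-in for the sharp fall in period that signals contact.
\begin{verbatim}
def contact_scan(stage, read_period, xs, ys,
                 drop=0.5, retract_um=1.0, z_step_um=0.05):
    # height map from the 'sharp fall in period' contact criterion
    heights = {}
    for x in xs:
        for y in ys:
            stage.move_xy(x, y)
            baseline = read_period()      # free-standing period
            z = 0.0
            while read_period() > drop * baseline:
                z -= z_step_um            # approach in 50 nm steps
                stage.move_z(z)
            heights[(x, y)] = z           # contact height at this pixel
            stage.move_z(z + retract_um)  # retract before moving on
    return heights
\end{verbatim}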
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth,clip]{fig7.eps}
\caption{Surface plot of a two-dimensional scan of a US penny in contact mode. At a given lateral position, the probe tip comes into physical contact with a point on the sample surface. The heights at which the capacitance (period) value rapidly falls are recorded and subsequently generate a topographical map of the sample. This method is somewhat slower, but yields a much better resolution, because it is insensitive to externally-driven fluctuations. The distance between each pixel is 45 $\mu$m for this run.}
\label{fig7}
\end{figure}
The main advantage to this method is that it is significantly more precise and reliable than the non-contact method. While the non-contact method is extremely vulnerable to external noises, the contact method is almost impervious to them, making it drift-free. In this method, the {\em contrast} resolution and {\em lateral} resolution are limited by the resolution of the linear actuation (50 nm) and the size of the employed tip (2 $\mu$m), respectively. However, one major disadvantage is that, owing to the probe tip making repeated contacts with the sample surface, the end tip will blunt over time and, consequently, the lateral resolution degrades as the scanning progresses. Additionally, delicate portions of the sample could be damaged by the light, yet frequent, touch of the probe tip.
In principle, the capacitance measurement is not at all necessary to perform contact microscopy. The only requirement is the capability to measure the contact transition when a sharp tip `touches' the surface of a sample. For this reason, a simple, battery-operated current-meter can be built to replace the present relaxation oscillator. There, the contrast resolution is still set by the minimum distance of the z-axis translation, and the lateral resolution is still limited by the size of the probe tip.
\section{Conclusion}
We have reported results of scanning capacitance microscopy using a relaxation oscillator. Successful surface topography of both machined grooves and a coin has been obtained in the non-contact and contact modes with a spatial resolution of 100 $\mu$m and 45 $\mu$m, respectively. Our simple approach provides an excellent opportunity for students wishing to gain laboratory exposure to nano-scale imaging and microscopy techniques. To advance the present technique to the next level, the linear mechanical actuators can be replaced by a set of picometer-resolution piezoelectric translators (PZTs), and a template-assisted, electrochemically-synthesized nanowire, whose diameter is as small as 10 nm, can be attached to the tungsten probe tip.\cite{kim} Despite some apparent challenges, such as manipulating an individual nanowire for attachment and maintaining a sharp contact point, the ultra-small tip, aided by high-precision PZTs, is within experimental reach and could drastically improve the presently achieved resolution by more than three orders of magnitude.
\begin{acknowledgments}
This work is supported by the Clare Luce Boothe Research Program (MP), the M. J. Murdock Charitable Trust, Pat and Mary Welch (ST and AH), and the Research at Undergraduate Institutions through the National Science Foundation (WJK). We also thank Charlie Rackson for careful proofreading of our manuscript.
\end{acknowledgments}
\section{A Patterson-Sullivan construction of equilibrium states}
\label{sect:construction}
We refer to \cite[Chap.~3, 6, 7]{PauPolSha15} and \cite[Chap.~2, 3,
4]{BroParPau19} for details and complements on this section.
Let $X$ be (see \cite{BroParPau19} for a more general framework)
$\bullet$~ either a complete, simply connected Riemannian manifold
$\wt M$ with dimension $m$ at least $2$ and pinched sectional
curvature at most $-1$,
$\bullet$~ or (the geometric realisation of) a simplicial tree $\maths{X}$
whose vertex degrees are uniformly bounded and at least $3$. In this
case, we respectively denote by $E\maths{X}$ and $V\maths{X}$ the sets of vertices
and edges of $\maths{X}$. For every edge $e$, we denote by $o(e),t(e),
\overline{e}$ its original vertex, terminal vertex and opposite edge.
Let us fix an indifferent basepoint $x_*$ in $\wt M$ or in $V\maths{X}$.
Recall (see for instance \cite{BriHae99}) that a geodesic ray or line
in $X$ is an isometric map from $[0,+\infty[$ or $\maths{R}$ respectively
into $X$, that two geodesic rays are {\it asymptotic} if they stay at
bounded distance one from the other, and that the {\it boundary at
infinity} of $X$ is the space $\partial_\infty X$ of asymptotic
classes of geodesic rays in $X$ endowed with the quotient topology of
the compact-open topology. When $X=\wt M$, up to a translation
factor, two asymptotic geodesic rays converge exponentially fast one
to the other, and $\partial_\infty \wt M$ is homeomorphic to the
sphere $\maths{S}_{m-1}$ of dimension $m-1$. When $X$ is a tree, up to a
translation factor, two asymptotic geodesic rays coincide after a
certain time, and $\partial_\infty X$ is homeomorphic to a Cantor set.
For every $x$ in $X$, the Gromov-Bourdon {\em visual distance} $d_x$
on $\partial_\infty X$ seen from $x$ (inducing the topology of
$\partial_\infty X$) is defined by
$
d_x(\xi,\eta)=
\lim_{t\rightarrow+\infty} e^{\frac{1}{2}(d(\xi_t,\,\eta_t)-d(x,\,\xi_t)-d(x,\,\eta_t))}\;,
$
where $\xi,\eta\in\partial_\infty X$ and $t\mapsto \xi_t,\eta_t$ are
any geodesic rays converging to $\xi,\eta$ respectively. The visual
distances seen from two points of $X$ are Lipschitz equivalent.
Let $\Gamma$ be a discrete group of isometries of $X$ which is {\it
nonelementary}, that is, does not preserve a subset of cardinality
at most $2$ in $X\cup\partial_\infty X$. When $X=\wt M$, this is
equivalent to $\Gamma$ being not virtually nilpotent. When $X$ is a tree,
we furthermore assume that $X$ has no nonempty proper invariant subtree
(this is not an important restriction, as one may always replace $X$
by its unique minimal nonempty invariant subtree), and that $\Gamma$ does
not map an edge to its opposite one.
The {\it limit set} $\Lambda\Gamma$ of $\Gamma$ is the smallest nonempty
closed invariant subset of $\partial_\infty X$, which is the
complement of the orbit $\Gamma x_*$ in its closure $\overline{\Gamma x_*}$,
in the compactification $X\cup\partial_\infty X$ of $X$ by its
boundary at infinity.
\medskip
\noindent{\bf Examples. } (1) Let $\wt M$ be a symmetric space with
negative curvature, e.g.~the real hyperbolic plane ${\HH}^2_\RR$, and let $\Gamma$
be an arithmetic lattice in $\operatorname{Isom}(\wt M)$, e.g.~$\Gamma = \operatorname{PSL}_2(\maths{Z})$
acting by homographies on the upper halfplane model of ${\HH}^2_\RR$ with
constant curvature $-1$ (see for instance \cite{Katok92}, and
\cite{Margulis91} for a huge amount of examples).
(2) For every prime power $q$, let $\maths{X}$ be the regular tree of degree
$q+1$, and let $\Gamma=\operatorname{PGL}_2(\maths{F}_q[Y])$, acting on $\maths{X}$ seen as the
Bruhat-Tits tree $\maths{X}_q$ of $\operatorname{PGL}_2$ over the local field $\maths{F}_q((Y^{-1}))$
(see for example \cite{Serre83}, and \cite{BasLub01} for a huge amount
of examples).
\begin{center}
\input{fig_modular.pdf_t}
\end{center}
Note that the pictures of the quotients $\Gamma\backslash X$ are very similar in
the above two special examples, in particular
$\bullet$ the lengths of the closed horocycle quotients in
$\operatorname{PSL}_2(\maths{Z})\backslash {\HH}^2_\RR$ go exponentially to $0$ (they are equal to
$e^{-t}$ where $t$ is the distance of the horocycle quotient to the
orbifold point of order $2$),
$\bullet$ the orders of the vertex stabilisers along a geodesic ray in
$\maths{X}_q$ lifting the quotient ray $\operatorname{PGL}_2(\maths{F}_q[Y])\backslash\maths{X}_q$ increase
exponentially (they are equal to $c\,q^n$ where $c$ is a constant and
$n$ is the distance of the vertex to the origin of the ray), see for
instance \cite[\S 15.2]{BroParPau19}.
\medskip
\noindent{\bf Remark. } Note that we allow torsion in $\Gamma$, as this is in particular
important in the tree case; we allow $\Gamma\backslash X$ to be noncompact;
and we allow $\Gamma$ not to be a lattice, which gives in the tree
case the possibility to have almost any (metrisable, compact, totally
disconnected) space of ends and almost any type of asymptotic growth of
the quotient $\Gamma\backslash X$ (linear, polynomial, exponential, etc), see
loc. cit.
Recall that $\Gamma$ is a lattice in $X$ if either the Riemannian volume
$\operatorname{Vol}(\Gamma\backslash \wt M)$ of the quotient orbifold $\Gamma\backslash \wt M$ is
finite, or if the {\it graph of groups volume}
$$
\operatorname{Vol}(\Gamma\backslash\!\!\backslash \maths{X})= \sum_{[x]\in\Gamma\backslash V\maths{X}} \;\;\frac{1}{{\operatorname{Card}}(\Gamma_x)}
$$
(where $\Gamma_x$ is the stabiliser of $x$ in $\Gamma$) of the quotient
graph of groups $\Gamma\backslash\!\!\backslash \maths{X}$ is finite. Note the analogy, in the two
special examples above, between the computation of (most of) the
volume of $\operatorname{PSL}_2(\maths{Z})\backslash {\HH}^2_\RR$ as a converging integral of the
lengths of the closed horocycle quotients and of the volume of
$\operatorname{PGL}_2(\maths{F}_q[Y])\backslash\!\!\backslash\maths{X}_q$ (which does converge by a geometric
series argument).
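For instance, assuming as recalled above that the vertex stabiliser orders along a lifted geodesic ray grow like $c\,q^n$, the tail of the volume sum is dominated by the geometric series
$$
\sum_{n\geq 0} \;\frac{1}{c\,q^{n}}\;=\;\frac{1}{c}\;\frac{q}{q-1}\;<\;+\infty\;.
$$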
\medskip
\noindent{\bf The phase space. }
Let ${\cal G} X$ be the space of geodesic lines $\ell:\maths{R}\rightarrow X$ in $X$, such
that, when $X$ is a tree, $\ell(0)$ is a vertex, endowed with the
$\operatorname{Isom}(X)$-invariant distance (inducing its topology) defined by
$$
d(\ell, \ell')=
\int_{-\infty}^{+\infty} d(\ell(t),\ell'(t))\;e^{-2 |t|}\,dt\;,
$$
and with the $\operatorname{Isom}(X)$-equivariant {\it geodesic flow}, which is
the one-parameter group of homeo\-morphisms
$$
\flow t :\ell\mapsto \{s \mapsto \ell(s+t)\}
$$
for all $\ell\in{\cal G} X$, with continuous time parameter $t\in\maths{R}$ if
$X=\wt M$ and discrete time parameter $t\in\maths{Z}$ if $X$ is a tree. We
again call {\it geodesic flow} and denote by $(\flow t)_t$ the
quotient flow on the {\it phase space} $\Gamma\backslash {\cal G} X$.
Note that the map from the unit tangent bundle $T^1\wt M$ endowed with
Sasaki's metric to ${\cal G} \wt M$, which associates to a unit tangent vector
$v$ the unique geodesic line whose tangent vector at time $t=0$ is
$v$, is an $\operatorname{Isom}(\wt M)$-equivariant bi-Hölder-continuous\footnote{In
order to deal with noncompactness issues, a map $f$ between two
metric spaces is {\it Hölder-continuous} if there exist $c,c'>0$
and $\alpha\in\;]0,1]$ such that for every $x,y$ in the source space,
if $d(x,y)\leq c$, then $d(f(x),f(y))\leq c' d(x,y)^\alpha$.}
homeomorphism, by which we identify the two spaces from now on.
\medskip
\noindent
{\bf Potentials on the phase space. } We now introduce the
supplementary data (with physical origin) that we will consider on
our phase space. Assume first that $X=\wt M$. Let $\wt F:T^1\wt M\rightarrow
\maths{R}$ be a {\it potential}, that is, a $\Gamma$-invariant,
bounded\footnote{see \cite[\S 3.2]{BroParPau19} for a weakening of
this assumption} Hölder-continuous real map on $T^1\wt M$.
Two potentials $\wt F,\wt F^* :T^1\wt M\rightarrow \maths{R}$ are {\it cohomologous}
(see for instance \cite{Livsic72}) if there exists a
Hölder-continuous, bounded, differentiable along flow lines,
$\Gamma$-invariant function $\wt G :T^1\wt M\rightarrow \maths{R}$, such that, for every
$v\in T^1\wt M$,
$
\wt F^*(v)-\wt F(v)=\frac{d}{dt}_{\mid t=0}\wt G(\flow{t}v)\;.
$
For every $x,y\in \wt M$, let us define (with the
obvious convention of being $0$ if $x=y$) the integral of $\wt F$
between $x$ and $y$, called the {\it amplitude} of $\wt F$ between $x$
and $y$, to be
\begin{center}
\raisebox{0.5cm}{
$\displaystyle\int_x^y\wt F= \int_{0}^{d(x,y)} \wt F(\flow t v) \;dt$}
~~~~~~~~~ \input{fig_geodsegm.pdf_t}
\end{center}
where $v$ is the tangent vector to the geodesic segment from $x$ to $y$.
Now assume that $X$ is a tree. Let $\wt c: E\maths{X}\rightarrow \maths{R}$ be a
(logarithmic) {\it system of conductances} (see for instance
\cite{Zemanian91}), that is, a $\Gamma$-invariant, bounded real map on
$E\maths{X}$. Two systems of conductances $\wt c,\wt c^*: E\maths{X}\rightarrow \maths{R}$ are
{\it cohomologous} if there exists a $\Gamma$-invariant function
$\wt f : V\maths{X}\rightarrow \maths{R}$, such that for every $e\in E\maths{X}$
$
\wt c^*(e)-\wt c(e)=\wt f(t(e))-\wt f(o(e))\;.
$
For every $\ell\in {\cal G} X$, we denote by $e^+_0(\ell)=\ell([0,1])\in
E\maths{X}$ the first edge followed by $\ell$, and we define $\wt F:{\cal G} X\rightarrow
\maths{R}$ as the map $\ell\mapsto \wt c(e^+_0(\ell))$.
For every $x,y\in V\maths{X}$, we now define the {\it amplitude} of
$\wt F$ between $x$ and $y$, to be
\begin{center}
\raisebox{0.5cm}{
$\displaystyle\int_x^y\wt F= \sum_{i=1}^{k} \;\;\wt c(e_i)$}
~~~~~~~~~ \input{fig_edgepath.pdf_t}
\end{center}
where $(e_1,e_2,\dots, e_k)$ is the geodesic edge path
in $\maths{X}$ between $x$ and $y$.
In both cases, we will denote by $F :\Gamma\backslash {\cal G} X\rightarrow\maths{R}$ the function
on the phase space induced by $\wt F$ by taking the quotient modulo
$\Gamma$, that we call the {\it potential} on $\Gamma\backslash {\cal G} X$. Note that
we make no assumption of reversibility on $F$.
\medskip
\noindent
{\bf Cohomological invariants. } Let us now introduce three
cohomological invariants of the potentials on the phase space.
The {\it pressure} of $F$ is the physical complexity associated with
the potential $F$ defined by
\begin{center}
\fcolorbox{blue}{white}{
$\displaystyle
P_F= \sup_{\;\mu \;\;(\flow t)_t {\textrm{-invariant~proba~on~}}
\Gamma\backslash {\cal G} X} \;\big(\; h_\mu + \int_{\Gamma\backslash {\cal G} X} F\;d\mu\;\big)
$ }
\end{center}
where $h_\mu$ is the metric entropy\footnote{The metric entropy
$h_\mu$ is the upper bound, for all measurable countable partitions
$\xi$ of $\Gamma\backslash {\cal G} X$, of
$$
\lim_{k\rightarrow+\infty}\;\;\frac{1}{k}\; H_\mu(\xi\vee \flow{-1}\xi\vee \cdots \vee \flow{-k}\xi)
$$
where $H_\mu(\xi)= - \sum_{E\in \xi} \mu(E) \ln \mu(E)$ is Shannon's
entropy of the countable partition $\xi$, see for instance
\cite{KatHas95}, and the join $\xi\vee\xi'$ of two partitions $\xi$
and $\xi'$ is the partition by the nonempty intersections of an
element of $\xi$ and an element of $\xi'$.} of $\mu$ for the time
$1$ map $\flow 1$ of the geodesic flow.
The {\it critical exponent} of $F$ is the weighted (by the exponential
amplitudes) orbital growth rate of the group $\Gamma$, defined by
\begin{center}
\fcolorbox{blue}{white}{
$\displaystyle \delta_F= \lim_{n\rightarrow+\infty}\;\frac{1}{n}\;\ln\;\Big(
\sum_{\gamma\in\Gamma,\;n-1< d(x_*,\gamma x_*)\leq n} \;\;
\exp\big(\int_{x_*}^{\gamma x_*} \wt F\;\big)\Big)\;. $ }
\end{center}
Note that the critical exponent $\delta_0$ of the zero potential is
the usual critical exponent of the group $\Gamma$ (see for instance
\cite{Paulin97d}). We have $\delta_F\in\;]-\infty,+\infty[$ since
$$
\delta_0+\inf \wt F\leq \delta_F\leq \delta_0+\sup \wt F\;.
$$
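These bounds follow termwise: since $d(x_*,\gamma x_*)\,\inf \wt F\leq \int_{x_*}^{\gamma x_*}\wt F\leq d(x_*,\gamma x_*)\,\sup \wt F$, the sum over the $n$-th annulus in the definition of $\delta_F$ is squeezed, up to a multiplicative constant independent of $n$, between $e^{n\inf \wt F}$ and $e^{n\sup \wt F}$ times the corresponding sum for the zero potential, and one concludes by taking $\frac{1}{n}\ln$ and letting $n\rightarrow+\infty$.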
Note that $\delta_{F\circ\iota}=\delta_F$ where $\iota : {\cal G} X\rightarrow {\cal G} X$
is the involutive {\it time reversal map} defined by $\ell\mapsto
\{t\mapsto \ell(-t)\}$.
The {\it period} for the potential $F$ of a periodic orbit ${\cal O}$ of
the geodesic flow $(\flow t)_t$ on $\Gamma\backslash {\cal G} X$ is $\int_{\cal O} F=
\int_{\ell(0)}^{\ell(t_{\cal O})}\;\wt F$ where $\ell\in{\cal G} X$ maps to
${\cal O}$ and $t_{\cal O} =\inf\{t>0\;:\;\Gamma \flow{t}\ell=\Gamma \ell\}$ is the
{\it length} of the periodic orbit ${\cal O}$. The {\it Gurevich
pressure} of $F$ is the growth rate of the exponentials of
periods for $F$ of the periodic orbits, defined by
\begin{center}
\fcolorbox{blue}{white}{ $\displaystyle \P_F^{\rm Gur}=
\lim_{n\rightarrow+\infty}\;\frac{1}{n}\;\ln\; \sum_{{\cal O}\;:\; t_{\cal O}\leq n,\;
{\cal O}\cap W\neq \emptyset} \;\;\exp\big(\int_{{\cal O}} F\big)\;,$ }
\end{center}
where the sum is taken over the periodic orbits ${\cal O}$ of $(\flow
t)_t$ on $\Gamma\backslash {\cal G} X$ with length at most $n$ and meeting $W$, where
$W$ is any relatively compact open subset of $\Gamma\backslash {\cal G} X$ meeting the
nonwandering set of the geodesic flow (recall that we made no
assumption of compactness on the phase space).
Note that the above three limits exist, and are independent of the
choices of $x_*$ and $W$, and depend only on the cohomology class of
the potential $F$.
The following result proved in \cite[Theo.~4.1 and 6.1]{PauPolSha15}
extends the case of the zero potential due to Otal and Peigné
\cite{OtaPei04}.
\begin{theo}[Paulin-Pollicott-Schapira] If $X=\wt M$ has pinched sectional
curvatures with uniformly bounded derivatives,\footnote{This
assumption on the derivatives was forgotten in the statements of
\cite{OtaPei04, PauPolSha15}, but is used in the proofs.} then
\begin{center}
\fcolorbox{red}{white}{ $\displaystyle P_F=\delta_F=\P_F^{\rm Gur}$. }
\end{center}
\end{theo}
Note that the dynamics of the geodesic flow $(\flow t)_t$ on the phase
space $\Gamma\backslash {\cal G} X$ is very chaotic. In particular, there are lots
$(\flow t)_t$-invariant measures on $\Gamma\backslash {\cal G} X$. We give two basic
examples, and we will then contruct, using potentials, a huge family
of such measures.
\medskip
\noindent{\bf Examples. } (1) If $X=\wt M$, then the {\it Liouville
measure} $m_{\rm Liou}$ on $T^1M=\Gamma\backslash (T^1\wt M)$ is the measure
on $T^1M$ which disintegrates, with respect to the canonical footpoint
projection $T^1M\rightarrow M$, over the Riemannian measure $\operatorname{vol}_M$ of the
orbifold $M= \Gamma\backslash \wt M$, with conditional measures on the fibers
the spherical measures $\operatorname{vol}_{T^1_xM}$ on the (orbifold) unit tangent
spheres at the points $x$ in $M$:
\begin{center}
\fcolorbox{blue}{white}{ $\displaystyle d m_{\rm Liou}(v)= \int_{x\in M}
d\operatorname{vol}_{T^1_xM}(v)\;\;d \operatorname{vol}_M(x)$. }
\end{center}
(2) For every periodic orbit ${\cal O}$ of the geodesic flow $(\flow t)_t$
on $\Gamma\backslash {\cal G} X$, we denote by $\L_{\cal O}$ the Lebesgue
measure\footnote{If the length of ${\cal O}$ is $T$ and if $v\in T^1\wt M$
maps into ${\cal O}$ by the canonical projection $T^1\wt M\rightarrow T^1M$, the
Lebesgue measure $\L_{\cal O}$ of ${\cal O}$ is the pushforward by $t\mapsto
\Gamma\flow t v$ of the Lebesgue measure on $[0,T]$.} (when $X=\wt M$)
or counting measure (when $X$ is a tree) of ${\cal O}$. This is a $(\flow
t)_t$-invariant measure on $\Gamma\backslash {\cal G} X$ with support ${\cal O}$.
\medskip The main class of invariant measures we will study is the
following one, and the terminology has been mostly introduced by
Sinai, Ruelle, Bowen, see for instance \cite{Ruelle04}. A $(\flow
t)_t$-invariant probability measure $\mu$ on the phase space $\Gamma\backslash
{\cal G} X$ is an {\it equilibrium state} if it realizes the upper bound
defining the pressure of $F$, that is, if
$$
h_\mu + \int_{\Gamma\backslash {\cal G} X} F\;d\mu = P_F\;.
$$
The remainder of this section is devoted to the problems of {\bf
existence, uniqueness and explicit construction} of equilibrium
states.
\medskip
\noindent
{\bf Gibbs cocycles. } As for instance defined by Hamenstädt, the
(normalised) {\it Gibbs cocycle} of the potential $F$ is the function
$C:\partial_\infty X\times \wt M\times \wt M\rightarrow \maths{R}$ when $X=\wt M$ or
the function $C:\partial_\infty X\times V\maths{X}\times V\maths{X}\rightarrow \maths{R}$ when
$X$ is a tree, defined by the following limit of difference of
amplitudes for the renormalised potential
\begin{center}
\raisebox{1.3cm}{
$\displaystyle(\xi,x,y)\mapsto C_\xi(x,y)=
\lim_{t\rightarrow+\infty}\int_y^{\xi_t}(\wt F-\delta_F)-\int_{x}^{\xi_t}
(\wt F -\delta_F)$,}
~~~~~~~~~ \input{fig_cocycle.pdf_t}
\end{center}
where $t\mapsto \xi_t$ is any geodesic ray converging to $\xi$. The
limit does exist. The Gibbs cocycle is $\Gamma$-invariant (for the
diagonal action) and locally Hölder-continuous. It does satisfy the
cocycle property $C_\xi(x,z)=C_\xi(x,y)+C_\xi(y,z)$ for all
$x,y,z$. Furthermore, there exist constants $c_1, c_2>0$ (depending
only on the bounds of $\wt F$ and on the pinching of the sectional
curvature, when $X=\wt M$) such that if $d(x,y)\leq 1$, then
$C_\xi(x,y)\leq c_1d(x,y)^{c_2}$. See \cite[\S 3.4]{BroParPau19}.
\medskip
\noindent
{\bf Patterson densities. } A (normalised) {\it Patterson density} of
the potential $F$ is a $\Gamma$-equiv\-ariant family
$(\mu_{x})_{x\in X}$ of pairwise absolutely continuous (positive,
Borel) measures on $\partial_\infty X$, whose support is $\Lambda\Gamma$,
such that
\begin{equation}\label{eq:Pattersondensity}
\gamma_*\mu_x=\mu_{\gamma x}{\rm ~~~and~~~}
\frac{d\mu_x}{d\mu_y}(\xi) = e^{-C_\xi(x,\,y)}
\end{equation}
for every $\gamma\in\Gamma$, for all $x,y\in X$, and for
(almost) every $\xi\in\partial_\infty X$.
Patterson densities do exist and they satisfy the following Mohsen's
shadow lemma (see for instance \cite[\S 4.1]{BroParPau19}):
\smallskip\noindent
\begin{minipage}{7cm}
\begin{center}
\input{fig_shadow.pdf_t}~~~~~~~~~~~~~~~~
\end{center}
\end{minipage}
\begin{minipage}{7.9cm}
Define the {\it shadow} ${\cal O}_xE$ seen from $x$ of a subset $E$ of $X$
as the set of points at infinity of the geodesic rays from $x$ through
$E$. Then for every $x\in X$, if $r>0$ is large enough, there exists
$\kappa>0$ such that for every $\gamma\in\Gamma$, we have
\end{minipage}
\begin{center}
\raisebox{1.3cm}{\fcolorbox{green}{white}{
$\displaystyle
\frac{1}{\kappa}\;\exp\Big(\int_x^{\gamma x}(\wt F-\delta_F)\Big)\leq
\mu_x\big({\cal O}_xB(\gamma x,r)\big)\leq
\kappa\;\exp\Big(\int_x^{\gamma x}(\wt F-\delta_F)\Big)$}
\hfill (2)}\addtocounter{equation}{1}
\end{center}
\vspace*{-0.5cm}
\noindent
{\bf Gibbs measures. } The {\it Hopf parametrisation} of $X$ at $x_*$
is the map from ${\cal G} X$ to $(\partial_\infty X\times \partial_\infty
X-{\rm Diag})\times R$, where $R=\maths{R}$ if $X=\wt M$ and $R=\maths{Z}$ if $X$
is a tree, defined by
\begin{center}
\raisebox{1.3cm}{
\hspace{-1cm}$\displaystyle\ell\mapsto (\ell_-,\ell_+,t)$}
~~~~~~~~~~~~~~~~~~~~~~~~~\input{fig_hopf.pdf_t}
\end{center}
where $\ell_-$, $\ell_+$ are the original and terminal points at
infinity of the geodesic line $\ell$, and $t$ is the algebraic
distance along $\ell$ between the footpoint $\ell(0)$ and the closest
point to $x_*$ on the geodesic line. It is a Hölder-continuous
homeomorphism (for the previously defined distances). Up to
translations on the third factor, it does not depend on the basepoint
$x_*$ and is $\Gamma$-invariant, see for instance \cite[\S 2.3 and \S
3.1]{BroParPau19}. The geodesic flow acts by translations on the
third factor.
\medskip
Let $(\mu_{x})_{x\in X}$ and $(\mu^\iota_{x})_{x\in X}$ be Patterson
densities for the potentials $F$ and $F\circ \iota$ respectively,
where $\iota:\Gamma\ell\mapsto \Gamma\{t\mapsto \ell(-t)\}$ is the time
reversal on the phase space $\Gamma\backslash {\cal G} X$. We denote by $C^\iota$ the
Gibbs cocycle of the potential $F\circ \iota$. We denote by $dt$ the
Lebesgue or counting measure on $R$. The measure on ${\cal G} X$ defined
using the Hopf parametrisation by
\begin{center}
\fcolorbox{blue}{white}{
$\displaystyle
d\wt m_F(\ell)= \frac{d\mu^\iota_{x_*}(\ell_-)\;d\mu_{x_*}(\ell_+)\;dt}
{\exp\big(\,C^\iota_{\ell_-}(x_*,\,\ell(0))+C_{\ell_+}(x_*,\,\ell(0))\,\big)}
$ }
\end{center}
is a $\sigma$-finite nonzero measure on ${\cal G} X$. By Equation
\eqref{eq:Pattersondensity} and by the invariance of the
measure $dt$ under translations, it is independent of the choice of
basepoint $x_*$, hence is $\Gamma$-invariant and $(\flow t)_t$-invariant.
Therefore it induces a $\sigma$-finite nonzero $(\flow t)_t$-invariant
measure on $\Gamma\backslash{\cal G} X$, called the {\it Gibbs measure} on the phase
space and denoted by $m_F$.
\medskip
\noindent{\bf Examples. } (1) When $F=0$, then the Gibbs measure is
called the Bowen-Margulis measure (see for instance \cite{Roblin03}).
(2) When $X=\wt M$ and $\wt F$ is the {\it unstable Jacobian}, that
is, for every $v\in T^1\wt M$,
\begin{center}
\raisebox{1.2cm}{\fcolorbox{blue}{white}{
$\displaystyle
\wt F^{\rm su}(v)=-\;\frac{d}{dt}_{\mid t=0}\ln\Big(
\begin{array}{c}
{\rm Jacobian~of~restriction~of~} \flow t{\rm ~to}\\
{\rm strong~unstable~leaf~} W^{su}(v)
\end{array}\Big),
$ }}~~~~~~ \input{fig_unstable.pdf_t}
\end{center}
we have the following result (see \cite[\S 7]{PauPolSha15}, in
particular for weaker assumptions). When $M$ has variable sectional
curvature, the Liouville measure and the Bowen-Margulis measure might
be quite different. The following result in particular says that the
huge family of Gibbs measures interpolates between the Liouville
measure and the Bowen-Margulis measure. This sometimes provides common
proofs of properties satisfied by both the Liouville measure and the
Bowen-Margulis measure.
\begin{theo}[Paulin-Pollicott-Schapira] If $X=\wt M$ has pinched sectional
curvatures with uniformly bounded derivatives, then $\wt F^{\rm su}$
is Hölder-continuous and bounded. If $\wt M$ has a cocompact lattice
and if $(\flow t)_t$ is completely conservative\footnote{That is,
every wandering set has measure zero.} for the Liouville measure,
then
\begin{center}
\fcolorbox{red}{white}{ $\displaystyle m_{F^{\rm su}} = m_{\rm Liou}$. }
\end{center}
\end{theo}
The following result, due to Bowen and Ruelle when $M$ is compact and
to Otal-Peigné \cite{OtaPei04} when $F=0$, completely solves the
problems of existence, uniqueness and explicit construction of
equilibrium states, see \cite[\S 6]{PauPolSha15}.
\begin{theo}[Paulin-Pollicott-Schapira] Assume that $X=\wt M$ has pinched
sectional curvatures with uniformly bounded derivatives.\footnote{This
assumption on the derivatives was forgotten in the statements of
\cite{OtaPei04, PauPolSha15}.} If the Gibbs measure $m_F$ is finite,
then $\overline{m_F}=\frac{m_F}{\|m_F\|}$ is the unique equilibrium
state. Otherwise, there is no equilibrium state.
\end{theo}
We refer to Section \ref{subsect:variational} for an analogous
statement when $X$ is a tree, whose proof uses completely different
techniques.
\section{Basic ergodic properties of Gibbs measures}
\label{sect:ergodic}
We refer to \cite[Chap.~3, 5, 8]{PauPolSha15} and
\cite[Chap.~4]{BroParPau19} for details and complements on this
section.
\subsection{The Gibbs property}
\label{sect:Gibbsproperty}
In this section, we justify the terminology of Gibbs measures used above.
For every $\ell\in \Gamma\backslash{\cal G} X$, say $\ell=\Gamma\wt \ell$, for every
$r>0$ and for all $t,t'\geq 0$, the (Bowen or) {\it dynamical ball}
$B(\ell;t,t',r)$ in the phase space $\Gamma\backslash{\cal G} X$ centered at $\ell$
with parameters $t,t',r$ is the image in $\Gamma\backslash{\cal G} X$ of the set of
geodesic lines in ${\cal G} X$ following the lift $\wt \ell$ at distance
less than $r$ in the time interval $[-t',t]$, that is, the image in
$\Gamma\backslash{\cal G} X$ of
\begin{center}
\fcolorbox{blue}{white}{
$\displaystyle B(\wt \ell;t,t',r)=\big\{\ell\,'\in {\cal G} X
\;:\; \sup_{s\,\in\,[-t',\,t]}\;
d_{X}(\,\wt \ell(s),\ell\,'(s)\,) <r\big\}$. }
\end{center}
The following definition of the Gibbs property is well adapted to the
possible noncompactness of the phase space $\Gamma\backslash {\cal G} X$. A $(\flow
t)_t$-invariant measure $m'$ on $\Gamma\backslash {\cal G} X$ satisfies the {\it Gibbs
property} for the potential $F$ with {\it Gibbs constant} $c(F)\in
\maths{R}$ if for every compact subset $K$ of $\Gamma\backslash {\cal G} X$, there exist
$r>0$ and $C_{K,r}\geq 1$ such that for all $t,t'\geq 0$ large enough,
for every $\ell$ in $\Gamma\backslash {\cal G} X$ with $\flow{-t'}\ell, \flow{t}
\ell\in K$, we have
\begin{center}
\fcolorbox{green}{white}{
$\displaystyle \frac{1}{C_{K,r}}\leq
\frac{m'\big(B(\ell;t,t',r)\big)}
{e^{\int_{-t'}^t \left(\,F(\flow{s}\ell)-c(F)\,\right)\,ds}}
\leq C_{K,r}$. }
\end{center}
The following result is due to \cite[\S 3.8]{PauPolSha15} when $X=\wt
M$ and \cite[\S 4.2]{BroParPau19} in general.
\begin{prop}\label{prop:gibbsgibbs}
The Gibbs measure $m_F$ satisfies the Gibbs property for $F$
with Gibbs constant $c(F)$ equal to the critical exponent $\delta_F$.
\end{prop}
Let us give a sketch of its proof, which explains the decorrelation of
the influence of the two points at infinity of the geodesic lines,
using the fact that the Gibbs measure is absolutely continuous with
respect to a product measure in the Hopf parametrisation. The key
geometric lemma is the following one.
\begin{lemm} For every $r>0$, there exists $t_r>0$ such that for all
$t,t'\geq t_r$ and $\ell\in{\cal G} X$, we have, using the Hopf
parametrisation at the footpoint $\ell(0)$,
$$
{\cal O}_{\ell(0)}B(\ell(-t'),r)\times {\cal O}_{\ell(0)}B(\ell(t),r)\times
\;]-1,1[\;\; \subset\; B(\ell;t,t',2r+2)
$$
$$
B(\ell;t,t',r)\;\subset \;{\cal O}_{\ell(0)}B(\ell(-t'),2r)\times
{\cal O}_{\ell(0)}B(\ell(t),2r)\times \;]-r,r[\;.
$$
\end{lemm}
Let us give a proof-by-picture of the first claim, the second one
being similar. See the following picture. If a geodesic line $\ell'$
has its points at infinity $\ell'_-$ and $\ell'_+$ in the shadows seen
from $\ell(0)$ of $B(\ell(-t'),r)$ and $B(\ell(t),r)$ respectively,
then by the properties of triangles in negatively curved spaces, if
$t$ and $t'$ are large, the image of $\ell'$ is close to the
union of the images of the geodesic rays from $\ell(0)$ to $\ell'_-$
and $\ell'_+$, which in turn pass close to $\ell(-t')$ and $\ell(t)$.
The control on the time parameter in the Hopf parametrisation then
says that $\ell'$ stays at bounded distance from $\ell$ in the time
interval $[-t',t]$.
\begin{center}
\input{fig_gibbsprop.pdf_t}
\end{center}
We now conclude the proof of Proposition \ref{prop:gibbsgibbs} by
using the boundedness of the Gibbs cocycles $C$ and $C^\iota$ on a
given compact subset $K$ in order to control the denominator in the
formula giving $\wt m_F$, and by using Mohsen's shadow lemma (see
Equation (2)) which estimates the Patterson measures of shadows of
balls.
\subsection{Ergodicity}
\label{sect:ergodicity}
In this section, we study the ergodicity property of the Gibbs
measures under the geodesic flow in the phase space.
The {\it Poincaré series} of the potential $F$ is
\begin{center}
\fcolorbox{blue}{white}{
$\displaystyle
Q_F(s)= \sum_{\gamma\in\Gamma}\;\;\exp\Big(\int_{x_*}^{\gamma x_*} (\wt F-s)\Big)\;.
$ }
\end{center}
It depends on the basepoint $x_*$, but its convergence or divergence
does not. It converges if $s>\delta_F$ and diverges for $s<\delta_F$,
by the definition of the critical exponent $\delta_F$.
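As a purely illustrative aside (not taken from the references above),
this dichotomy is easy to observe numerically in the simplest example:
for $F=0$ and $\Gamma$ a free group of rank $2$ acting on its Cayley tree,
we have ${\operatorname{Card}}\{\gamma\in\Gamma \;:\; d(x_*,\gamma x_*)=n\}
=4\cdot 3^{n-1}$, hence $\delta_F=\ln 3$, and the truncated Poincar\'e
series can be evaluated directly. A short Python sketch, where the
truncation bound is arbitrary:
\begin{verbatim}
import math

def truncated_poincare_series(s, n_max=200):
    # F = 0, Gamma = free group of rank 2 acting on its Cayley tree:
    # there are 4 * 3^(n-1) orbit points at distance n >= 1 from x_*
    return 1.0 + sum(4 * 3 ** (n - 1) * math.exp(-s * n)
                     for n in range(1, n_max + 1))

delta = math.log(3)  # critical exponent of this example
for s in [delta + 0.2, delta + 0.05, delta + 0.01]:
    print(s, truncated_poincare_series(s))
# the partial sums blow up as s decreases towards delta_F = ln 3
\end{verbatim}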
The following result has a long history, and we refer for instance to
\cite[\S 5]{PauPolSha15} and \cite[\S 4.2]{BroParPau19} for proofs,
and proofs of its following two corollaries.
\begin{theo}[Hopf-Tsuji-Sullivan-Roblin] The following assertions are
equivalent.
\begin{enumerate}
\item The Poincaré series of $F$ diverges at the critical exponent of
$F$~: $Q_F(\delta_F)=+\infty$.
\item The group action $(\partial_\infty X\times \partial_\infty
X-{\rm Diag},\mu^\iota_{x_*}\otimes \mu_{x_*},\Gamma)$ is ergodic and
completely conservative.
\item The geodesic flow on the phase space with the Gibbs measure
$(\Gamma\backslash{\cal G} X,m_F,(\flow t)_t)$ is ergodic and completely
conservative.
\end{enumerate}
\end{theo}
\begin{coro} If $Q_F(\delta_F)=+\infty$, then there exists a Patterson
density for $F$, unique up to a positive scalar. It is atomless, and
the diagonal in $\partial_\infty X\times \partial_\infty X$ has
measure $0$ for the product measure $\mu^\iota_{x_*}\otimes
\mu_{x_*}$.
\end{coro}
Let us give a sketch of the very classical proof of the first claim of
this corollary.
\medskip
\noindent{\bf Existence. } Using the properties of negatively curved
spaces, one can prove, denoting by ${\cal D}_x$ the Dirac mass at a point
$x$, that one can take
\begin{center}
\fcolorbox{green}{white}{ $\displaystyle
\mu_x=\lim_{s_i\rightarrow\,\delta_F^+}\;\frac{1}{Q_F(s_i)}\;\sum_{\gamma\in\Gamma}
\;\;\exp\Big(\int_{x}^{\gamma x_*} (\wt F-s_i)\Big)\;\;{\cal D}_{\gamma x_*}$, }
\end{center}
where the atomic measure before taking the limit is, when $x=x_*$, a
probability measure, hence has, for some sequence $(s_i)_{i\in\maths{N}}$ in
$]\delta_F,+\infty[$ converging to $\delta_F$, a weak-star converging
subsequence in the compact space of probability measures on the
compact space $X\cup\partial_\infty X$.
\medskip
\noindent{\bf Uniqueness. } Let $(\mu'_x)_{x}$ be another Patterson
density. Up to positive scalars, we may assume that $\mu_{x_*}$ and
$\mu'_{x_*}$ are probability measures. Then $(\omega_x=\frac{1}{2}
(\mu_x+\mu'_x))_{x}$ is a Patterson density, $\mu_{x_*}$ is absolutely
continuous with respect to $\omega_{x_*}$, and by ergodicity, the
Radon-Nikodym derivative $\frac{d\mu_{x_*}}{d\,\omega_{x_*}}$ is
almost everywhere constant, hence the probability measures $\mu_{x_*}$
and $\omega_{x_*}$ are equal, hence $\mu_{x_*}=\mu'_{x_*}$.
\begin{coro} If $m_F$ is finite, then $Q_F(\delta_F)=+\infty$ (hence $(\flow
t)_t$ is ergodic) and the normalised Gibbs measure $\overline{m_F}=
\frac{m_F}{\|m_F\|}$ is a cohomological invariant of the potential $F$.
\end{coro}
\subsection{Mixing}
\label{sect:mixing}
In this section, we study the mixing property of the Gibbs measures
under the geodesic flow in the phase space. Recall that the {\it
length spectrum} for the action of $\Gamma$ on $X$ is the subgroup of
$\maths{R}$ (hence of $\maths{Z}$ when $X$ is a tree) generated by the set of
lengths of the closed geodesics in $\Gamma\backslash X$ (or, in dynamical terms,
by the set of lengths of periodic orbits of the geodesic flow on the
phase space). See for instance \cite[\S 8.1]{PauPolSha15} when $X=\wt
M$ and \cite[\S 4.4]{BroParPau19} when $X$ is a tree for a proof of
the following result, which crucially uses the fact that the Gibbs
measure is absolutely continuous with respect to a product measure in
the Hopf parametrisation.
\begin{theo}[Babillot] If the Gibbs measure $m_F$ is finite, then the
following assertions are equivalent.
\begin{enumerate}
\item The Gibbs measure $m_F$ is mixing under the geodesic flow
$(\flow t)_t$.
\item The geodesic flow $(\flow t)_t$ is topologically mixing on its
nonwandering set in the phase space.
\item The length spectrum of $\Gamma$ is dense in $\maths{R}$ if $X=\wt M$ or
equal to $\maths{Z}$ if $X$ is a tree.
\end{enumerate}
\end{theo}
We summarise in the following result the known properties of the rate
of mixing of the geodesic flow in the manifold case when $X=\wt M$
(see \cite[\S 9.1]{BroParPau19}), referring to Section
\ref{sect:mixingrate} for the tree case, whose proof turns out to be
quite different.
Let $\alpha\in\;]0,1]$ and let ${\cal C}_{\rm b}^\alpha (Z)$ be the Banach
space\footnote{Recall that its norm (taking into account the
possible noncompactness of $Z$) is given by
$$
\|f\|_\alpha= \|f\|_\infty +
\sup_{\substack{x,\,y\,\in \,Z\\ 0<d(x,\,y)\leq 1}}
\frac{|f(x)-f(y)|}{d(x,y)^\alpha}\;.
$$} of bounded $\alpha$-H\"older-continuous functions on a metric
space $Z$. When $X=\wt M$, we will say that the (continuous time)
geodesic flow on the phase space $T^1M=\Gamma\backslash T^1 \wt M$ is
{\em exponentially mixing for the $\alpha$-H\"older regularity} or
that it has {\em exponential decay of $\alpha$-H\"older correlations}
for the potential $F$ if there exist $c',\kappa >0$ such that for all
$\phi,\psi\in {\cal C}_{\rm b}^\alpha (T^1 M)$ and $t\in\maths{R}$, we have
$
\Big|\int_{T^1 M}
\phi\circ\flow{-t}\;\psi\;d\overline{m_{F}}-
\int_{T^1 M}\phi\; d\overline{m_{F}}
\int_{T^1 M}\psi\;d\overline{m_{F}}\;\Big|
\le c'\;e^{-\kappa|t|}\;\|\phi\|_\alpha\;\|\psi\|_\alpha\,.
$
\begin{theo} Assume that $X=\wt M$ and that $M=\Gamma\backslash \wt M$ is
compact. Then the geodesic flow on the phase space $T^1M$ has
exponential decay of H\"older correlations if
\begin{itemize}
\item $M$ is two-dimensional, by \cite{Dolgopyat98},
\item $M$ is $1/9$-pinched and $F=0$, by
\cite[Coro.~2.7]{GiuLivPol13},
\item the potential $F$ is the unstable Jacobian $F^{\rm su}$, so
that, up to a positive scalar, $m_F$ is the Liouville measure
$m_{\rm Liou}$, by \cite{Liverani04}, see also \cite{Tsujii10},
\cite[Coro.~5]{NonZwo15} who give more precise estimates,
\item $M$ is locally symmetric by \cite{Stoyanov11}, see also
\cite{MohOh15} for some noncompact cases.
\end{itemize}
\end{theo}
Note that this gives only a very partial picture of the rate of mixing
of the geodesic flow in negative curvature, and it would be
interesting to have a complete result. Stronger results exist for the
Sobolev regularity when $\wt M$ is a symmetric space, $F=0$ and $\Gamma$
is an arithmetic lattice (the Gibbs measure then coincides, up to a
multiplicative constant, with the Liouville measure): see for instance
\cite[Theorem~2.4.5]{KleMar96}, using spectral gap properties given by
\cite[Theorem 3.1]{Clozel03}. But this still does not give a complete
answer.
\section{Coding and rate of mixing for geodesic flows on trees}
\label{sect:mixingrate}
We refer to \cite[Chap.~5 and 9.2]{BroParPau19} for details and
complements on this section.
From now on, we assume that $X$ is (the geometric realisation of) a
simplicial tree $\maths{X}$, and we write ${\cal G}\maths{X}$ instead of ${\cal G} X$. We
consider the discrete group $\Gamma$, the system of conductances $\wt c$
and the associated potential $F$ on the phase space $\Gamma\backslash {\cal G}\maths{X}$ as
introduced in Section \ref{sect:construction}.
The study of the rate of mixing of the (discrete time) geodesic flow
on the phase space uses coding theory. But since, as explained, we make
no assumption of compactness on the phase space, and no hypothesis of
being without torsion on the group $\Gamma$ in the huge class of examples
described in Section \ref{sect:construction}, the coding theory
requires more sophisticated tools than subshifts of finite type.
\subsection{Coding}
\label{subsect:coding}
Let ${\cal A}$ be a countable discrete set, called an {\it alphabet}, and
let $A= (A_{i,\,j})_{i,\,j\in{\cal A}}$ be an element in
$\{0,1\}^{{\cal A}\times{\cal A}}$, called a {\it transition matrix}. The
(two-sided, countable state) {\it topological shift}\footnote{We
prefer not to use the frequent terminology of {\it topological
Markov shift} as it could be misleading: many probability measures
invariant under general topological shifts do not satisfy the Markov
chain property that the probability to pass from one state to
another depends only on the previous state, not on all past states.}
with alphabet ${\cal A}$ and transition matrix $A$ is the topological
dynamical system $(\Sigma,\sigma)$, where $\Sigma$, called the {\it
shift space}, is the closed subset of the topological product space
${\cal A}^\maths{Z}$ of {\it $A$-admissible} two-sided infinite sequences, defined
by
$$
\Sigma=\big\{x=(x_n)_{n\in\maths{Z}}\in {\cal A}^\maths{Z}\;:\; \forall \;n\in\maths{Z},\;\;\;
A_{x_n,x_{n+1}}=1\big\}\;,
$$
and $\sigma: \Sigma\rightarrow \Sigma$ is the (two-sided) {\it
shift}\index{shift} defined by
$$
\forall\;x\in \Sigma,\;\forall\;n\in\maths{Z},\;\;\;\;(\sigma(x))_n=x_{n+1}\;.
$$
We endow $\Sigma$ with the
distance
$$
d(x,x')= \exp\big(-\sup\big\{n\in\maths{N}\;:\;\;
\forall\, i\,\in\,\{-n,\dots,n\},\;\;x_i\;=\;x'_i\big\}\,\big)\;.
$$
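As an illustrative aside, both the $A$-admissibility condition and this
distance are immediate to evaluate on finite windows of sequences. A
minimal Python sketch, with a toy alphabet and transition matrix, and
with the convention that the supremum over the empty set is $-1$:
\begin{verbatim}
import math

def is_admissible(word, A):
    # word: finite list of letters; A: dict (a, b) -> 0 or 1
    return all(A.get(p, 0) == 1 for p in zip(word, word[1:]))

def shift_distance(x, y, n_max):
    # x, y: dicts i -> letter on the window {-n_max, ..., n_max};
    # returns exp(-sup{n : x_i = y_i for |i| <= n}), the sup being
    # taken over n <= n_max, with sup(empty set) = -1
    n = -1
    while n < n_max and all(x[i] == y[i]
                            for i in range(-(n + 1), n + 2)):
        n += 1
    return math.exp(-n)

# toy alphabet {0, 1}, all transitions allowed except 1 -> 1
A = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
x = {i: i % 2 for i in range(-5, 6)}
y = dict(x); y[3] = 1 - y[3]      # first disagreement at i = 3
print(is_admissible([x[i] for i in range(-5, 6)], A))  # True
print(shift_distance(x, y, 5))    # exp(-2)
\end{verbatim}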
Let us denote by $\maths{Y}$ the (countable) quotient graph\footnote{The
fact that the canonical projection is a morphism of graphs is the
reason why we assumed $\Gamma$ to be acting without mapping an edge to
its inverse.} $\Gamma\backslash \maths{X}$. For every vertex or edge $x\in V\maths{Y}\cup
E\maths{Y}$, we fix a lift $\wt x$ in $V\maths{X}\cup E\maths{X}$, and we define
$G_x=\Gamma_{\wt x}$ to be the stabiliser of $\wt x$ in $\Gamma$.
\medskip
\noindent
\begin{minipage}{8.9cm}
For every $e\in E\maths{Y}$, we assume that $\widetilde{\overline{e}}
=\overline{\wt{e}}$. But there is no reason in general that
$\widetilde{t(e)} =t(\wt{e}\,)$. We fix $g_e\in\Gamma$ mapping
$\widetilde{t(e)}$ to $t(\wt e\,)$ (which does exist), and we denote
by $\rho_e: G_e= \Gamma_{\widetilde{e}}\rightarrow \Gamma_{\widetilde{t(e)}} =
G_{t(e)}$ the conjugation $g\mapsto g_e^{-1}\,g\,g_e$ by $g_e$ on
$G_e$ (noticing that the stabiliser $\Gamma_{\widetilde{e}}$ is contained in
the stabiliser $\Gamma_{t(\widetilde{e})}$).
\end{minipage}
\begin{minipage}{6cm}
\begin{center}
\input{fig_lift.pdf_t}
\end{center}
\end{minipage}
Let us try to code a geodesic line in the phase space $\Gamma\backslash{\cal G}\maths{X}$.
The natural starting point is to write it as $\Gamma\ell$ for some
$\ell\in{\cal G} \maths{X}$, that is, to choose one of its lifts. We then have to
construct a coding which is independent of the choice of this
lift. For every $i\in\maths{Z}$, let us denote by $f_i=\ell([i,i+1])$ the
$i$-th edge followed by $\ell$, and by $e_i$ (also denoted by
$e_{i+1}^-(\ell)$ for later use) its image by the canonical $p:\maths{X}\rightarrow
\maths{Y}=\Gamma\backslash\maths{X}$, which seems fit to be a natural part of the coding of
$\ell$. Since we will need to translate through our coding the fact
that $\ell$ is geodesic, hence has no backtracking, the edge $e_{i+1}$
(also denoted by $e_{i+1}^+(\ell)$ for later use) following $e_i$
seems to have a role to play.
\medskip
\begin{center}
\!\!\!\!\!\!\input{fig_codage.pdf_t}
\end{center}
Since the terminal point of $f_i$ is the original point of $f_{i+1}$,
the terminal point of $e_i$ is naturally also the original point of
$e_{i+1}$. But there is no reason for the terminal point of the
chosen lift $\widetilde{e_i}$ to also be the original point of the
chosen lift $\widetilde{e_{i+1}}$. Since $f_i$ and $\widetilde{e_i}$
both map by $p$ to $e_i$, we may fix $\gamma_i\in\Gamma$ such that $\gamma_i
f_i=\widetilde{e_i}$, for every $i\in\maths{Z}$.
Now, note that the vertex stabilizers in $\Gamma$ of vertices of $\maths{X}$
are in general nontrivial (and we explained in Section
\ref{sect:construction} that it is important to allow them to become
very large in order to have numerous dynamically interesting
noncompact quotients of simplicial trees). The construction (see the
above diagram) provides a natural element $g_{e_i}^{\;-1}\,\gamma_i\,
\gamma_{i+1}^{\;-1} \,g_{\;\overline{e_{i+1}}}$ which stabilises the
lifted vertex $\widetilde{t(e_i)}$, hence belongs to $G_{t(e_i)}$.
Since we made choices for the elements $\gamma_i$, the element
$g_{e_i}^{\;-1}\,\gamma_i\, \gamma_{i+1}^{\;-1} \,g_{\;\overline{e_{i+1}}}$
gives a well-defined double class $h_{i+1}(\ell)$ in
$\rho_{e_i}(G_{e_i})\backslash G_{t(e_i)}/\rho_{\,\overline{e_{i+1}}}
(G_{e_{i+1}})$, which also seems fit to be another natural piece of
the coding of $\ell$.
It turns out that this construction is indeed working. We take as
alphabet the (countable) set
$$
{\cal A}=\Big\{(e^-,h,e^+)\;:\;\begin{array}{l}
e^\pm\in E\maths{Y} {\rm ~with~} t(e^-)=o(e^+)\\
h\in \rho_{e^-}(G_{e^-})\backslash G_{o(e^+)}/\rho_{\,\overline{e^+}}(G_{e^+})
{\rm ~with~} h\neq [1] {\rm ~if~} \overline{e^+}=e^-
\end{array}\Big\}\;.
$$ This last assumption of conditional nontriviality of the double
class codes the fact that $\ell$ being a geodesic line, the edge
$f_{i+1}$ is not the opposite edge of $f_i$, though $e_{i+1}$ might be
the opposite edge of $e_i$. And since in the tree $\maths{X}$, being locally
geodesic implies being geodesic, it is very reasonable that we have
captured through our coding all the geodesic properties of the
geodesic lines and translated them into symbolic terms. We take as
transition matrix over the alphabet ${\cal A}$ the matrix with entries
$$
A_{(e^-,\,h,\,e^+),\,({e'}^-,\,h',\,{e'}^+)} =\left\{\begin{array}{l}
1 {\rm ~if~} e^+={e'}^-\\0 {\rm ~otherwise},\end{array}\right.
$$
which just says that we are glueing together the coding of pairs of
consecutive edges of the geodesic line. Note that since the tree is
locally finite, the transition matrix $A$ has finitely many nonzero
entries on each row and column, hence the associated shift space
$\Sigma$ is locally compact.
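As a concrete toy illustration of this gluing rule (the double classes
$h$ are treated as opaque labels here; no actual double cosets are
computed), the transition matrix is built as follows:
\begin{verbatim}
from itertools import product

def transition(a, b):
    # a = (e_minus, h, e_plus), b = (f_minus, h', f_plus):
    # A_{a,b} = 1 if and only if e_plus == f_minus
    return 1 if a[2] == b[0] else 0

# toy letters over two quotient edges "e" and "f"; the labels
# h1, h2, h3 stand in for double classes and play no role here
letters = [("e", "h1", "f"), ("f", "h2", "e"), ("f", "h3", "f")]
A = {(a, b): transition(a, b) for a, b in product(letters, repeat=2)}
for a in letters:
    # each row has finitely many nonzero entries (local finiteness)
    print(a, "->", [b for b in letters if A[(a, b)] == 1])
\end{verbatim}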
We then refer to \cite[\S 5.2]{BroParPau19} for a proof of the
following result, though almost everything is in the above picture!
We denote by $F_{\rm symb}:\Sigma\rightarrow\maths{R}$ the locally constant map
which associates to $\big((e^-_i, h_i, e^+_i)\big)_{i\in\maths{Z}}$ the
image $\wt c(\widetilde{e^+_0})$ of the lift of its first edge under
the system of conductances.
\begin{theo}\label{theo:coding} The map
$$
\Theta:\left\{\begin{array}{ccl}
\Gamma\backslash{\cal G}\maths{X} & \longrightarrow & \Sigma\\
\Gamma\ell & \mapsto &
\big((e^-_i(\ell), h_i(\ell), e^+_i(\ell))\big)_{i\in\maths{Z}}
\end{array}\right.
$$
is a bilipschitz homeomorphism, conjugating the time $1$ map of
the (discrete time) geodesic flow $(\flow t)_{t\in\maths{Z}}$ to the shift
$\sigma$. Furthermore,
\begin{enumerate}
\item $(\Sigma,\sigma)$ is topologically transitive,\footnote{This
comes from the assumption that there is no nontrivial proper
$\Gamma$-invariant subtree in $\maths{X}$, since then $\partial_\infty X=
\Lambda\Gamma$, implying that the nonwandering set of the geodesic flow
$(\flow t)_{t\in\maths{Z}}$ is the full phase space $\Gamma\backslash{\cal G}\maths{X}$.}
\item if the Gibbs measure $m_F$ is finite and if the length spectrum
of $\Gamma$ is equal to $\maths{Z}$, then the probability measure
$\maths{P}=\Theta_*\overline{m_F}$ is mixing for the shift $\sigma$ on
$\Sigma$,
\item the measure $\maths{P}$ satisfies the Gibbs property on
$(\Sigma,\sigma)$ with Gibbs constant $\delta_F$ for the potential
$F_{\rm symb}$.\footnote{That is, with a formulation adapted to the
possibility that the alphabet ${\cal A}$ may be infinite, for every finite
subset $E$ of the alphabet ${\cal A}$, there exists $C_E\geq 1$ such that
for all $p\leq q$ in $\maths{Z}$ and for every
$x=(x_n)_{n\in\maths{Z}}\in\Sigma$ such that $x_p,x_q\in E$, we have
$$
\frac {1}{C_E}\le\frac{\maths{P}([x_{p}, x_{p+1},\dots,x_{q-1}, x_{q}])}
{e^{-\delta_F(q-p+1)+\sum_{n=p}^{q}F_{\rm symb}(\sigma^n x)}} \le C_E\;,
$$
where $[x_{p}, x_{p+1},\dots,x_{q-1}, x_{q}]$ is the cylinder
$\{(y_n)_{n\in\maths{Z}}\in \Sigma\;:\;{\rm if~} p\leq n\leq q
{\rm ~then~} y_n=x_n\}$.}
\item if $(Z_n:x\mapsto x_n)_{n\in\maths{Z}}$ is the canonical random
process in symbolic dynamics, then the pair $((Z_n)_{n\in\maths{Z}},\maths{P})$
is not always a Markov chain.
\end{enumerate}
\end{theo}
This last claim has led to an erratum in the paper \cite{Kwon15}. The
pair $((Z_n)_{n\in\maths{Z}},\maths{P})$ is not a Markov chain for instance in
Example (2) at the beginning of Section \ref{sect:construction}, when
$\maths{X}=\maths{X}_q$ and $\Gamma=\operatorname{PGL}_2(\maths{F}_q[Y])$.\footnote{As noticed by
J.-P.~Serre \cite{Serre83}, the image of almost every geodesic line
of $\maths{X}$ in the quotient ray $\Gamma\backslash \maths{X}$ is a broken line which makes
infinitely many back-and-forths from the origin of the quotient ray.
\begin{center}
\input{fig_zigzag.pdf_t}
\end{center}
\nopagebreak
There is absolutely no way to predict the probability of behaviour
of the geodesic line image at a given time in terms of its recent
past probabilities (except that when it starts to go down, it has
to go down all the way to the origin).}
\subsection{Variational principle for simplicial trees}
\label{subsect:variational}
The first corollary of the coding results in the previous section is
the following existence and uniqueness result of equilibrium states
for the geodesic flow on the phase space $\Gamma\backslash {\cal G}\maths{X}$ for the
potential $F$.
\begin{coro}\label{coro:varprinctree} If $m_F$ is finite, then
$\overline{m_F}=\frac{m_F}{\|m_F\|}$ is the unique equilibrium state
for $F$ under the geodesic flow $(\flow t)_{t\in\maths{Z}}$ on $\Gamma\backslash
{\cal G}\maths{X}$, and furthermore
\begin{center}
\fcolorbox{red}{white}{ $\displaystyle P_F=\delta_F$. }
\end{center}
\end{coro}
We only give a sketch of a proof, referring to \cite[\S
5.4]{BroParPau19} for a complete one. We use the coding given in
Theorem \ref{theo:coding} with its properties (in particular the fact
that it satisfies the Gibbs property for a symbolic potential related
to the potential $F$).
Let $(\Sigma,\sigma)$ be a topological shift, with countable alphabet
${\cal A}$. A $\sigma$-invariant probability measure $m$ on $\Sigma$ is a
{\it weak\footnote{The terminology comes from the fact that the
assumptions bear only on the periodic points of $\sigma$.} Gibbs
measure} for a map $\phi:\Sigma\rightarrow \maths{R}$ with Gibbs constant $c(m)
\in\maths{R}$ if for every $a\in {\cal A}$, there exists a constant $c_a\geq 1$
such that for all $n\in\maths{N}-\{0\}$ and $x$ in the cylinder $[a]=
\{y=(y_n)_{n\in\maths{Z}}\in\Sigma\;:\;y_0=a\}$ such that $\sigma^n(x) =x$,
we have
$$
\frac {1}{c_a}\le\frac{m([x_{0}, x_{1},\dots, x_{n-1}])}
{e^{\sum_{i=0}^{n-1}\;(\,\phi(\sigma^i x)-c(m)\,)}} \le c_a\;.
$$
The following result of Buzzi is proved in
\cite[Appendix]{BroParPau19}, with a much weaker regularity assumption
on $\phi$, and it concludes the proof of Corollary
\ref{coro:varprinctree}.
\begin{theo}[Buzzi] Let $(\Sigma,\sigma)$ be a topological shift and
$\phi:\Sigma\rightarrow\maths{R}$ a bounded Hölder-continuous function. If $m$ is a
weak Gibbs measure for $\phi$ with Gibbs constant $c(m)$, then
$P_\phi=c(m)$ and $m$ is the unique equilibrium state for the
potential $\phi$.
\end{theo}
\subsection{Rate of mixing for simplicial trees}
\label{subsect:mixingrate}
Let us first recall the definition of an exponential mixing rate for
discrete time dynamical systems.
Let $(Z,m,T)$ be a dynamical system with $(Z,m)$ a metric probability
space and let $T:Z\rightarrow Z$ be a (not necessarily invertible) measure
preserving map. For all $n\in\maths{N}$ and $\phi,\psi\in\maths{L}^2(m)$, the
(well-defined) $n$-th {\it correlation coefficient} of $\phi,\psi$ is
$$
\operatorname{cov}_{m,\,n}(\phi,\psi)=
\int_{Z}(\phi\circ T^n)\;\psi\;dm-\int_{Z}\phi\; dm\;\int_{Z}\psi\;dm\;.
$$
Let $\alpha\in\;]0,1]$. As for the case of flows in Section
\ref{sect:mixing}, we will say that the dynamical system $(Z,m,T)$ is
{\it exponentially mixing for the $\alpha$-H\"older regularity} or
that it has {\it exponential decay of $\alpha$-H\"older
correlations} if there exist $c',\kappa >0$ such that for all
$\phi,\psi\in {\cal C}_{\rm b} ^\alpha(Z)$ and $n\in\maths{N}$, we have
$$
|\operatorname{cov}_{m,\,n}(\phi,\psi)|
\le c'\;e^{-\kappa\, n }\;\|\phi\|_\alpha\;\|\psi\|_\alpha\,.
$$
Note that this property is invariant under measure preserving
conjugations of dynamical systems by bilipschitz homeomorphisms. In
our case, $T$ will be either the time $1$ map of the geodesic flow
$(\flow t)_{t\in\maths{Z}}$ on the phase space $Z=\Gamma\backslash{\cal G}\maths{X}$ or the
two-sided shift $\sigma$ on a two-sided topological shift space
$\Sigma$ or (see below) the one-sided shift $\sigma_+$ on a one-sided
topological shift space $\Sigma_+$.
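As an aside, for a finite-state one-step Markov measure (a toy
stand-in, not one of the Gibbs measures considered here) and
observables depending only on the coordinate $x_0$, the correlation
coefficients are given in closed form by $\int (\phi\circ
\sigma_+^n)\,\psi\;dm=\sum_{i,j}\pi_i (P^n)_{ij}\,\psi(i)\,\phi(j)$,
and the exponential decay rate is governed by the second largest
eigenvalue modulus of the transition matrix $P$. A Python sketch,
assuming numpy, with arbitrary toy data:
\begin{verbatim}
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])          # toy stochastic matrix
w, V = np.linalg.eig(P.T)           # stationary distribution: pi P = pi
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

phi = np.array([1.0, -2.0])         # observables depending on x_0 only
psi = np.array([0.3, 1.7])

def cov(n):
    Pn = np.linalg.matrix_power(P, n)
    corr = sum(pi[i] * Pn[i, j] * psi[i] * phi[j]
               for i in range(2) for j in range(2))
    return corr - (pi @ phi) * (pi @ psi)

for n in [1, 2, 4, 8]:
    print(n, cov(n))  # decays like 0.4**n, the second eigenvalue of P
\end{verbatim}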
\medskip
The following result is one of the new results contained in the book
\cite{BroParPau19}. For every finite subset $E$ in $\Gamma\backslash V\maths{X}$, let
$\tau_E:\Gamma\backslash{\cal G}\maths{X}\rightarrow \maths{N}\cup\{+\infty\}$ be the first positive
passage time of geodesic lines in $E$, that is, the map
$$
\ell\mapsto\inf\{n\in\maths{N}-\{0\}\;:\; \flow{n}\ell(0)\in E\}\;.
$$
The following result says that if the tree quotient contains a
finite subset in which the geodesic lines with large return times have
an exponentially decreasing mass, then the (discrete time) geodesic
flow on the phase space has exponential decay of correlations. This
condition turns out to be quite easy to check on practical examples,
see for instance \cite[\S 9.2]{BroParPau19}.
\begin{theo}\label{theo:mixingratetree}
If $m_F$ is finite and mixing for $(\flow t)_{t\in\maths{Z}}$, if
there exist a finite subset $E$ in $\Gamma\backslash V\maths{X}$ and $c'',\kappa'>0$
such that
$$
\forall\;n\in\maths{N},\;\;\;m_F(\{\ell\in\Gamma\backslash{\cal G}\maths{X}\;:\;
\ell(0)\in E, \tau_E(\ell)\geq n\})\leq c''e^{-\kappa' n}\;,
$$
then for every $\alpha\in\;]0,1]$, the (discrete time) dynamical system
$(\Gamma\backslash{\cal G}\maths{X},m_F,(\flow t)_{t\in\maths{Z}})$ is exponentially mixing for
the $\alpha$-Hölder regularity.
\end{theo}
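Before turning to examples, let us note, as a purely illustrative
aside, that tails of first passage times are easy to probe
empirically. The sketch below estimates the proportion of sampled
orbits with $\tau_E\geq n$ for a toy dynamical system (a reflected
random walk standing in for the time-one geodesic flow, with $E$ a
single point); for an irreducible finite-state chain the tail is
indeed geometric:
\begin{verbatim}
import random

def first_passage(step, in_E, state, n_cap=10**4):
    # tau_E: first positive time at which the orbit of 'state' is in E
    for n in range(1, n_cap + 1):
        state = step(state)
        if in_E(state):
            return n
    return float("inf")

# placeholder dynamics: random walk on {0, ..., 20}, reflected at the
# endpoints, with E = {0}
step = lambda k: max(0, min(20, k + random.choice([-1, 1])))
in_E = lambda k: k == 0

samples = [first_passage(step, in_E, 0) for _ in range(5000)]
for n in [5, 10, 20, 40]:
    tail = sum(t >= n for t in samples) / len(samples)
    print(n, tail)  # roughly geometric decay in n
\end{verbatim}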
The hypothesis of Theorem \ref{theo:mixingratetree} is for instance
satisfied for Example (2) at the beginning of Section
\ref{sect:construction} with $\maths{X}=\maths{X}_q$ and $\Gamma =\operatorname{PGL}_2(\maths{F}_q[Y])$,
taking $E$ to consist of the origin of the modular ray $\Gamma\backslash \maths{X}_q$,
and using the exponential decay of the stabiliser orders along a lift
of the modular ray in $\maths{X}_q$. In this case, the quotient graph
$\Gamma\backslash\maths{X}$ has linear growth. We gave in \cite[page 193]{BroParPau19}
examples where the quotient graph $\Gamma\backslash\maths{X}$ has exponential
growth.
Here is an example where the quotient graph has quadratic growth, for
every even $q\geq 2$. The tree $\maths{X}$ is the regular tree of degree
$q+2$. The vertex group of the top-left vertex $x_*$ of the quotient
graph is $\maths{Z}/(\frac{q}{2}+1)\maths{Z}$. A set $E$ as in Theorem
\ref{theo:mixingratetree} consists of the three vertices at distance
at most $1$ from $x_*$. The vertex group of a vertex at distance
$k\geq 1$ from $x_*$ on the left vertical ray is $\maths{Z}/(q+1)^k\maths{Z}$. The
vertex group of a vertex not on the left vertical ray, at distance
$k\geq 1$ from $x_*$ is $\maths{Z}/q\maths{Z}\times\maths{Z}/(q+1)^{k-1}\maths{Z}$. The number
at the beginning of each edge represents the index of the edge group
inside the vertex group of its origin.
\begin{center}
\input{fig_youngtree.pdf_t}
\end{center}
Recall that two {\it growth functions} $f$ and $f'$, that is, two
increasing maps from $\maths{N}$ to $\maths{N}-\{0\}$, are {\it equivalent} if
there exist two integers $c\geq 1$ and $c'\geq 0$ such that for every
$n\in\maths{N}$ large enough, we have $f(\lfloor \frac{1}{c}\,n-c'\rfloor)
\leq f'(n)\leq f(c\,n+c')$. The {\it type of growth} of an infinite,
connected, locally finite graph $Y$ is the equivalence class of the
map $n\mapsto {\operatorname{Card}} \;B_{VY}(v_0,n)$, which does not depend on the
choice of a base point $v_0\in VY$, and depends only on the
quasi-isometry type of $Y$.
It is well known (see for instance \cite{Choucroun94b,Hughes04} or
\cite[\S 6.2]{GriNekSus00}) that every totally disconnected compact
metric space is homeomorphic to the boundary at infinity of a
simplicial tree with uniformly bounded degrees, and that any
increasing positive integer sequence $(a_n)_{n\in\maths{N}}$ with at most
exponential speed (that is, there exists $k\in\maths{N}$ such that
$a_{n+1}\leq ka_{n}$ for every $n\in\maths{N}$) is, up to the above
equivalence, the sequence of orders of the balls of an infinite rooted
simplicial tree with uniformly bounded degrees. Hence the following
result (not contained in \cite{BroParPau19}) says that we can realize
any space of ends, or any at most exponential type of growth, in the
quotient graph of an action of a group on a tree satisfying the
hypothesis of Theorem \ref{theo:mixingratetree}.
\begin{prop}\label{prop:allendsallgrowth}
For every rooted tree $({\cal T},*)$ with uniformly bounded degrees,
there exists a simplicial tree $\maths{X}$ and a discrete group $\Gamma$ of
automorphisms of $\maths{X}$ as in the beginning of Section
\ref{sect:construction} such that $\Gamma$ is a lattice, $\Gamma\backslash \maths{X}={\cal T}$
and the geodesic flow $(\flow t)_{t\in\maths{Z}}$ is exponentially mixing
for the $\alpha$-Hölder regularity on $\Gamma\backslash {\cal G}\maths{X}$ for the zero
potential.
\end{prop}
\noindent{\bf Proof. } We refer for instance to \cite[\S I.5]{Serre83} for
background on graphs of groups.
Let us fix $q\in\maths{N}$ large enough compared with the maximum degree $d$
of ${\cal T}$. We define a graph of groups $({\cal T},G_\bullet)$ with underlying
graph ${\cal T}$ as follows. For every vertex $v$ of ${\cal T}$ at distance $n$ of
the root $*$, we define $G_v=\maths{Z}/q^{n+1}\maths{Z}$. For every edge $e$ whose
closest vertex to the root $*$ is at distance $n$ from $*$, we define
$G_e=\maths{Z}/q^{n+1}\maths{Z}$. For every edge $e$ pointing away from the root,
we define the monomorphism $G_e\rightarrow G_{o(e)}$ to be the identity, and
the monomorphism $G_e\rightarrow G_{t(e)}$ to be the multiplication by
$q$, so that the index of $G_e$ in $G_{o(e)}$ is $1$ and the index
of $G_e$ in $G_{t(e)}$ is $q$.
Let $\Gamma$ and $\maths{X}$ be respectively the fundamental group (using the
root as the basepoint) and the Bass-Serre tree of the graph of groups
$({\cal T},G_\bullet)$.
Then the degrees of the vertices of $\maths{X}$ are at least $3$ (actually
at least $q$) and at most $q+d-1$, and for every $n$, we have
\begin{equation}\label{eq:expodecayreturn}
\sum_{x\in V{\cal T}\;:\; d(x,*)=n}\frac{1}{|G_x|}\leq d^n/q^n\;.
\end{equation}
Since $q$ is large compared to $d$, this implies that the volume of
$({\cal T},G_\bullet)$ is finite, hence $\Gamma$ is a lattice.
Since the potential is the zero potential, the Gibbs measure is the
Bowen-Margulis measure, and up to a positive scalar, the Patterson
density is, by \cite[Prop.~4.16]{BroParPau19}, the Hausdorff measure
of the visual distance $d_{x_*}$. Since $q$ is large compared to $d$,
the set of points at infinity of lifts in $\maths{X}$ of geodesic rays in
${\cal T}$
starting from the root has measure $0$ for the Patterson
density. Since for every edge in ${\cal T}$ pointing away from the root, the
index of its edge group in its original vertex group is $1$, almost
every geodesic line for the Bowen-Margulis measure (which is
absolutely continuous with respect to the product measure of the
Patterson densities on its two endpoints and the counting measure
along its image) maps in ${\cal T}$ to a path making infinitely many
back-and-forths from the root. If $E=\{*\}$ is the singleton in
$V\!{\cal T}$ consisting of the root, since $q$ is large compared to $d$,
Equation \eqref{eq:expodecayreturn} then shows that the hypothesis of
Theorem \ref{theo:mixingratetree} is satisfied, and this concludes the
proof of Proposition \ref{prop:allendsallgrowth}. \hfill$\Box$
\medskip
We conclude this survey with a sketch of the proof of Theorem
\ref{theo:mixingratetree}, referring to \cite[\S 9.2]{BroParPau19} for a
complete proof. We thank Omri Sarig for a key idea in the proof of
this theorem.
\medskip
\noindent{\bf Step 1. } The first step consists in passing from the
geometric dynamical system to a two-sided symbolic dynamical system,
using Section \ref{subsect:coding}.
\medskip
Let ${\cal A},A,\Sigma,\sigma,\Theta,\maths{P}$ be as given in Theorem
\ref{theo:coding} for the coding of the (discrete time) geodesic flow
on the phase space $\Gamma\backslash{\cal G}\maths{X}$. Let $\pi_+:\Sigma\rightarrow {\cal A}^\maths{N}$ be the
natural projection defined by $(x_n)_{n\in\maths{Z}}\mapsto (x_n)_{n\in\maths{N}}$,
let $(\Sigma_+,\sigma_+)$ be the one-sided topological shift
constructed as for the two-sided one with the same alphabet ${\cal A}$ and
same transition matrix $A$, with $\Sigma_+\subset {\cal A}^\maths{N}$. Let
$$
{\cal E}=\{(e^-,h,e^+)\in{\cal A} \;:\; t(e^-)=o(e^+)\in E\}
$$
which is a finite subset of the alphabet, and $\tau_{{\cal E}}:\Sigma_+ \rightarrow
\maths{N}$ the first positive passage time in ${\cal E}$ of the shift orbits, that
is, the map $x=(x_n)_{n\in\maths{N}}\mapsto \inf\{n\in\maths{N}-\{0\}\;:\;
x_n\in{\cal E}\}$.
The rate of mixing statement for two-sided symbolic dynamical systems,
that we will prove in Step 2, is the following one.
\begin{theo} \label{theo:critexpdecaysimpldynsymb} Let
$({\cal A},A,\Sigma,\sigma)$ be a locally compact transitive two-sided
topological shift, and let $\maths{P}$ be a mixing $\sigma$-invariant
probability measure with full support on $\Sigma$. Assume that
\begin{enumerate}
\item[(1)] for every $n \in\maths{N}$ and for every $A$-admissible finite
sequence $w= (w_0,\dots,w_n)$ in ${\cal A}$, the (measure theoretic)
Jacobian of the map
$$
f_w:\{(x_k)_{k\in\maths{N}}\in\pi_+(\Sigma)\;:\;x_0=w_n\}\rightarrow\{(y_k)_{k\in\maths{N}}
\in \pi_+(\Sigma)\;:\;y_0=w_0,\dots, y_n=w_n\}
$$
defined by $(x_0,x_1,x_2,\dots)\mapsto (w_0,\dots,w_n,
x_1,x_2,\dots)$, with respect to the restrictions of the pushforward
measure $(\pi_+)_*\maths{P}$, is constant;
\item[(2)] there exist a finite subset ${\cal E}$ of ${\cal A}$ and $c'',\kappa'
>0$ such that for every $n\in\maths{N}$, we have
$
\maths{P}\big(\{x\in \Sigma\;:\;x_0\in {\cal E} \;{\rm and}\;
\tau_{\cal E}(x)\geq n\}\big)\leq c''\;e^{-\kappa' n}\;.
$
\end{enumerate}
Then $(\Sigma,\sigma,\maths{P})$ has exponential decay of $\alpha$-H\"older
correlations for every $\alpha\in\;]0,1]$.
\end{theo}
Theorem \ref{theo:mixingratetree} follows from Theorem
\ref{theo:critexpdecaysimpldynsymb} by using the coding given in
Theorem \ref{theo:coding}. The verification of Assertion (2) is
immediate as it corresponds to the assumption of Theorem
\ref{theo:mixingratetree}. That of Assertion (1) is a bit
technical, using a strengthened version of Mohsen's shadow lemma for
trees.
\medskip
\noindent{\bf Step 2. } The second step consists in passing from the
two-sided symbolic dynamical system to a one-sided symbolic dynamical
system.
Let $(\Sigma_+,\sigma_+)$ be the one-sided topological shift with the
same alphabet ${\cal A}$ and same transition matrix $A$ as the two-sided one
in the statement of Theorem \ref{theo:critexpdecaysimpldynsymb}, with
\mbox{$\Sigma_+=\pi_+(\Sigma)$,} and let $\maths{P}_+=(\pi_+)_*\maths{P}$. Recall
that the {\it cylinders} in $\Sigma_+$ are the subsets defined for
$k\in\maths{N}$ and $w_0,\dots,w_k\in{\cal A}$ by
$$
[w_0,\dots,w_k]=\{x=(x_n)_{n\in\maths{N}}\in\Sigma_+\;:\; x_0=w_0,\dots,x_k=w_k \}\;.
$$
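For orientation: if $\maths{P}_+$ happens to be a one-step Markov measure
(which, by the last claim of Theorem \ref{theo:coding}, need not be the
case in our setting), then cylinder masses factorize and the Jacobian
appearing in Assertion (1) of the theorem below is automatically
constant. A toy Python sketch of this special case:
\begin{verbatim}
import numpy as np

pi = np.array([5 / 6, 1 / 6])       # toy stationary distribution
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])          # toy transition matrix

def cylinder_mass(w):
    # P_+([w_0, ..., w_k]) for a one-step Markov measure
    m = pi[w[0]]
    for a, b in zip(w, w[1:]):
        m *= P[a, b]
    return m

def jacobian(w):
    # Jacobian of f_w : [w_n] -> [w_0, ..., w_n]; in the Markov case
    # it is the same constant at every point of [w_n]
    return cylinder_mass(w) / pi[w[-1]]

w = [0, 0, 1]
print(cylinder_mass(w), jacobian(w))
\end{verbatim}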
The rate of mixing statement for one-sided symbolic dynamical systems,
that we will prove in Step 3, is the following one.
\begin{theo} \label{theo:critexpdecaysimpldynsymbonsesided} Let
$({\cal A},A,\Sigma_+,\sigma_+)$ be a locally compact transitive one-sided
topological shift, and let $\maths{P}_+$ be a mixing $\sigma_+$-invariant
probability measure with full support on $\Sigma_+$. Assume that
\begin{enumerate}
\item[(1)] for every $n\in\maths{N}$ and for every $A$-admissible finite
sequence $w= (w_0,\dots,w_n)$ in ${\cal A}$, the Jacobian of the map
between cylinders
$$
f_w:[w_n]\rightarrow[w_0,\dots,w_n]
$$
defined by $(x_0,x_1,x_2,\dots)\mapsto (w_0,\dots,w_n,
x_1,x_2,\dots)$, with respect to the restrictions of $\maths{P}_+$, is
constant;
\item[(2)] there exist a finite subset ${\cal E}$ of ${\cal A}$ and $c'',\kappa'
>0$ such that for every $n\in\maths{N}$, we have
$
\maths{P}_+\big(\{x\in \Sigma_+\;:\;x_0\in {\cal E} \;{\rm and}\;
\tau_{\cal E}(x)\geq n\}\big)\leq c''\;e^{-\kappa' n}\;.
$
\end{enumerate}
Then $(\Sigma_+,\sigma_+,\maths{P}_+)$ has exponential decay of
$\alpha$-H\"older correlations for every $\alpha\in\;]0,1]$.
\end{theo}
Theorem \ref{theo:critexpdecaysimpldynsymb} follows from Theorem
\ref{theo:critexpdecaysimpldynsymbonsesided} by a classical argument
due to Sinai and Bowen (and explained to the authors by Buzzi), saying
that if the one-sided symbolic dynamical system $(\Sigma_+,\sigma_+,
(\pi_+)_*\maths{P})$ is exponentially mixing, then so is the two-sided
symbolic dynamical system $(\Sigma,\sigma,\maths{P})$.
\medskip
\noindent{\bf Step 3. } The third and final step that we sketch is a
proof of Theorem \ref{theo:critexpdecaysimpldynsymbonsesided}, using
as main tool a Young tower argument.
We implicitly throw away from $\Sigma_+$ the measure zero subset of
points $x\in\Sigma_+$ whose orbit under the shift $\sigma_+$ does not
pass infinitely many times in the open nonempty finite union of
fundamental cylinders
$$
\Delta_0=\bigcup_{a\in{\cal E}}\;\;[a]\;.
$$
We denote by $\Phi :\Sigma_+\rightarrow\Delta_0$ the first positive time
passage map, defined by $x\mapsto \sigma_+^{\tau_{\cal E}(x)}(x)$. We denote
by $W$ the set of excursions outside ${\cal E}$, that is, the set of
$A$-admissible finite sequences $(w_0,\dots,w_n)$ in ${\cal A}$ such that
$w_0,w_n\in{\cal E}$ and $w_i\notin{\cal E}$ for $1\leq i\leq n-1$.
We have the following properties.
\begin{enumerate}
\item The set $\{[a]\;:\;a\in{\cal E}\}$ is a finite measurable partition of
$\Delta_0$. For every $a\in{\cal E}$, the set $\{[w]\;:\; w\in W, w_0=a\}$
is a countable measurable partition of $[a]$.
\item For every $w\in W$, the first positive passage time $\tau_{\cal E}$ is
positive on every excursion cylinder $[w]$, and if $w_n$ is the last
letter of $w$, then the restriction $\Phi:[w]\rightarrow[w_n]$ is a
bijection with constant Jacobian with respect to
$\maths{P}_+$ (actually much less is needed in order to apply Young's
arguments).
\item The first positive time passage map $\Phi$ satisfies strong
dilation properties on the excursion cylinders. More precisely, for
every excursion $w= (w_0,\dots,w_n)\in W$, for every $k\leq n-1$,
for all $x,y\in [w]$, we have $d(\Phi(x),\Phi(y))\geq e\; d(x,y)$
and $d(\sigma_+^kx,\sigma_+^ky)< d(\Phi(x),\Phi(y))$.
\end{enumerate}
Let us fix $\alpha\in\;]0,1]$. Then an adaptation of
\cite[Theo.~3]{Young99} implies that there exists $\kappa>0$ such
that for all $\phi,\psi\in {\cal C}_{\rm b} ^\alpha(\Sigma_+)$, there
exists $c_{\phi,\psi}>0$ such that for every $n\in\maths{N}$, we have
$$
|\operatorname{cov}_{\maths{P}_+,\,n}(\phi,\psi)|
\le c_{\phi,\psi}\;e^{-\kappa\, n }\;.
$$
An argument using the Principle of Uniform Boundedness due to Chazottes
then allows us to take $c_{\phi,\psi}=c'\;\|\phi\|_\alpha\;\|\psi\|_\alpha$
for some constant $c'>0$ independent of $\phi$ and $\psi$.
It is a widespread opinion that a relativistically invariant quantum theory of interacting particles has to be a (local) quantum field theory. Therefore we first have to specify what we mean by \lq\lq relativistic quantum mechanics\rq\rq . Relativistic quantum mechanics is based on a theorem by Bargmann which basically states that~\cite{Bargmann:54,Kei:91}:\\
{\em A quantum mechanical model formulated on a Hilbert space preserves probabilities in all inertial coordinate systems if and only if the correspondence between states in different inertial coordinate systems can be realized by a single-valued unitary representation of the covering group of the Poincar\'e group.}\\
According to this theorem one has succeeded in constructing a relativistically invariant quantum mechanical model, if one has found a representation of the (covering group of the) Poincar\'e group in terms of unitary operators on an appropriate Hilbert space. Equivalently one can also look for a representation of the generators of the Poincar\'e group in terms of self-adjoint operators acting on this Hilbert space. These self-adjoint operators should then satisfy the Poincar\'e algebra
\begin{eqnarray}\label{eq:PCalgebra}
&&[J^i,J^j]=\imath\, \epsilon^{ijk} J^k\, , \quad [K^i,K^j]=-\imath\, \epsilon^{ijk} J^k\, , \quad [J^i,K^j]=\imath\, \epsilon^{ijk} K^k\, ,\nonumber \\
&&\left[P^\mu,P^\nu\right] =0\, ,\quad [K^i,P^0]=-\imath\, P^i\, ,\quad [J^i,P^0]=0\, ,\nonumber\\
&&[J^i,P^j]=\imath\,\epsilon^{ijk}P^k\, ,\quad \left[K^i,P^j\right]=-\imath\, \delta_{ij}\,P^0\, .
\end{eqnarray}
$P^0$ and $P^i$ generate time and space translations, respectively, $J^i$ rotations and $K^i$ Lorentz boosts. From the last commutation relation it is quite obvious that, if $P^0$ contains interactions, $K^i$ or $P^j$ (or both) have to contain interactions too. The form of relativistic dynamics is then characterized by the interaction-dependent generators. Dirac~\cite{Dirac:49} identified three prominent forms of relativistic dynamics, the {\em instant form} (interactions in $P^0$, $K^i$, $i=1,2,3$), the {\em front form} (interactions in $P^-=P^0-P^3$, $F^1=K^1-J^2$, $F^2=K^2+J^1$) and the {\em point form} (interactions in $P^\mu$, $\mu=0,1,2,3$). In what follows we will stick to the point form, where $P^\mu$, the generators of space-time translations, contain interactions and $\vec{J}$, $\vec{K}$, the generators of Lorentz transformations, are interaction free. The big advantage of this form is that boosts and the addition of angular momenta become simple.
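Although the physically relevant representations are unitary and act on
infinite-dimensional Hilbert spaces, the algebra~(\ref{eq:PCalgebra})
itself can be checked symbolically on the finite-dimensional
(non-unitary) $5\times 5$ matrix representation acting on
$(x^0,x^1,x^2,x^3,1)^T$. The following sketch, assuming sympy, is merely
a consistency check of the sign conventions used above, not a quantum
mechanical representation:
\begin{verbatim}
import sympy as sp

eta = sp.diag(1, -1, -1, -1)        # metric (+,-,-,-)

def M(mu, nu):
    # Lorentz generator (M^{mu nu})^a_b =
    # i (eta^{mu a} delta^nu_b - eta^{nu a} delta^mu_b),
    # embedded in the 5x5 representation acting on (x, 1)^T
    m = sp.zeros(5, 5)
    for a in range(4):
        for b in range(4):
            m[a, b] = sp.I * (eta[mu, a] * sp.KroneckerDelta(nu, b)
                              - eta[nu, a] * sp.KroneckerDelta(mu, b))
    return m

def Pgen(mu):
    # translation generator: entries only in the last column
    p = sp.zeros(5, 5)
    for a in range(4):
        p[a, 4] = sp.I * eta[mu, a]
    return p

J = [M(2, 3), M(3, 1), M(1, 2)]     # rotations
K = [M(0, 1), M(0, 2), M(0, 3)]     # boosts
P = [Pgen(mu) for mu in range(4)]   # translations

com = lambda A, B: A * B - B * A
assert com(J[0], J[1]) == sp.I * J[2]
assert com(K[0], K[1]) == -sp.I * J[2]
assert com(J[0], K[1]) == sp.I * K[2]
assert com(K[0], P[0]) == -sp.I * P[1]
assert com(K[0], P[1]) == -sp.I * P[0]
assert com(J[2], P[1]) == sp.I * P[2]
assert com(P[1], P[2]) == sp.zeros(5, 5)
print("all spot checks consistent with the algebra")
\end{verbatim}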
For a single free particle and also for several free particles it is quite easy to find Hilbert-space representations of the Poincar\'e generators in terms of self-adjoint operators that satisfy the algebra given in Eq.~(\ref{eq:PCalgebra}), but what about interacting systems? Local quantum field theories provide a relativistically invariant description of interacting systems, but then one has to deal with a complicated many-body theory. It is less known that interacting representations of the Poincar\'e algebra can also be realized on an $N$-particle Hilbert space and one does not necessarily need a Fock space. A systematic procedure for implementing interactions in the Poincar\'e generators of an $N$-particle system such that the Poincar\'e algebra is preserved was suggested long ago by Bakamjian and Thomas~\cite{Bakamjian:53}. In the point form this procedure amounts to factorizing the four-momentum operator of the interaction-free system into a four-velocity operator and a mass operator, and then adding interaction terms to the mass operator:
\begin{equation}\label{eq:BT}
{P^\mu}={M} { V^\mu_{\mathrm{free}}}= ({ M_{\mathrm{free}}}+{ M_{\mathrm{int}}}) { V^\mu_{\mathrm{free}}}\,.
\end{equation}
Since the mass operator is a Casimir operator of the Poincar\'e group, the constraints on the interaction terms that guarantee Poincar\'e invariance become simply that $M_{\mathrm{int}}$ should be a Lorentz scalar and that it should commute with $V^\mu_{\mathrm{free}}$, i.e. $[M_{\mathrm{int}}, V^\mu_{\mathrm{free}}]=0$. Remarkably, this kind of construction allows for instantaneous interactions (\lq\lq interactions at a distance\rq\rq ). Similar procedures can also be carried out in the instant and front forms of relativistic dynamics such that the physical equivalence of all three forms is guaranteed in the sense that the different descriptions are related by unitary transformations~\cite{Shatnii:78}.
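To give an impression of how this is used in practice, the internal
part of a BT-type two-body mass operator can be represented on a grid
of relative momenta and diagonalized numerically. The sketch below
assumes numpy, equal constituent masses, a single (s-wave) channel and
a Gaussian interaction kernel chosen purely for illustration; none of
these choices is taken from the references:
\begin{verbatim}
import numpy as np

m = 0.25                     # constituent mass (GeV), toy value
kmax, N = 4.0, 200           # radial grid for |k| (GeV)
k, dk = np.linspace(1e-4, kmax, N, retstep=True)

# free part: M_free |k> = 2 sqrt(m^2 + k^2) |k>
M_free = np.diag(2.0 * np.sqrt(m ** 2 + k ** 2))

# toy Lorentz-scalar s-wave kernel <k'|| M_int ||k>
g, mu = -2.0, 0.5
V = g * np.exp(-((k[:, None] - k[None, :]) / mu) ** 2)

# symmetrized discretization with radial measure k^2 dk
w = k * np.sqrt(dk)
M = M_free + w[:, None] * V * w[None, :]

print(np.linalg.eigvalsh(M)[:3])  # lowest toy mass eigenvalues (GeV)
\end{verbatim}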
A very convenient basis for representing Bakamjian-Thomas (BT) type mass operators consists of velocity states
\begin{equation}
\vert \vec{v}; \vec{k}_1, \mu_1; \vec{k}_2, \mu_2; \dots; \vec{k}_N, \mu_N
\rangle \, ,\qquad \sum_{i=1}^N \,
\vec{k}_i = 0\, .
\end{equation}
These specify the state of an $N$-particle system by its overall velocity $\vec{v}$, the particle momenta $\vec{k}_i$ in the rest frame of the system and the spin projections $\mu_i$ of the individual particles. The physical momenta of the particles are then given by $\vec{p}_i=\overrightarrow{B(\vec{v}) k_i}$, where $B(\vec{v})$ is a canonical (rotationless) boost with the overall system velocity $\vec{v}$. Associated with this kind of boost is also the notion of \lq\lq canonical spin\rq\rq\ which fixes the spin projections $\mu_i$. $N$-particle velocity states, as introduced above, are eigenstates of the free $N$-particle velocity operator $V^\mu_{\mathrm{free}}$ and the free mass operator
\begin{equation}
M_{\mathrm{free}}\, \vert \vec{v};
\vec{k}_1, \mu_1; \vec{k}_{2}, \mu_{2};\dots\rangle = (\omega_1+\omega_2+\dots)\, \vert \vec{v};
\vec{k}_1, \mu_1; \vec{k}_{2}, \mu_{2};\dots\rangle\, ,
\end{equation}
with $\omega_i=\sqrt{m_i^2+\vec{k}_i^2}$. The overall velocity factors out in velocity-state matrix elements of BT-type mass operators,
\begin{eqnarray}
\langle \vec{v}^\prime; \vec{k}_1^\prime, \mu_1^\prime; \vec{k}_{2}^\prime,
\mu_{2}^\prime; \dots\vert M \vert \vec{v};
\vec{k}_1, \mu_1; \vec{k}_{2}, \mu_{2};\dots\rangle & &\nonumber\\
&&\hspace{-4.5cm}\propto\,v^0\, \delta^3(\vec{v}^\prime - \vec{v})\,
\langle \vec{k}_1^\prime, \mu_1^\prime; \vec{k}_{2}^\prime,
\mu_{2}^\prime; \dots\vert\vert M \vert\vert
\vec{k}_1, \mu_1; \vec{k}_{2}, \mu_{2};\dots\rangle\, ,
\end{eqnarray}
leading to the separation of overall and internal motion of the system.
\section{Cluster separability}
A central requirement for local relativistic quantum field theories is \lq\lq microscopic causality\rq\rq , i.e. the property that field operators at space-time points $x$ and $y$ should commute or anticommute, depending on whether they describe bosons or fermions, if these space-time points are space-like separated, i.e.
\begin{equation}
[\Psi(x),\Psi(y)]_{\pm}=0 \quad\hbox{for}\quad (x-y)^2<0\, .
\end{equation}
The crucial point here is that this must hold for arbitrarily small space-like distances. This condition requires an infinite number of degrees of freedom and can therefore not be satisfied in relativistic quantum mechanics with only a finite number of degrees of freedom. What replaces microscopic causality in the case of relativistic quantum mechanics is the physically more sensible requirement of \lq\lq macroscopic causality\rq\rq , often also called \lq\lq cluster separability\rq\rq . It roughly means that subsystems of a quantum mechanical system should behave independently if they are sufficiently space-like separated.
In order to phrase cluster separability in more mathematical terms, we start with an $N$-particle state $|\Phi\rangle$ with wave function $\phi(\vec{p}_1,\vec{p}_2,\dots,\vec{p}_N)$ and decompose this $N$-particle system into two subclusters $(A)$ and $(B)$. Next one introduces a separation operator $U^{(A)(B)}_\sigma$ with the property that
\begin{equation}
\lim_{\sigma\rightarrow\infty} \langle\Phi\vert U^{(A)(B)}_\sigma\vert \Phi\rangle =0\, .
\end{equation}
The role of the separation operator will become clearer by means of an example. Let us consider (space-like) separation by a canonical boost. In this case subsystem $(A)$ is boosted with velocity $\vec{v}$ and subsystem $(B)$ with velocity $-\vec{v}$. The action on the wave function is then
\begin{equation}
\Bigl(U^{(A)(B)}_{\vec{v}}\phi\Bigr)(\vec{p}_{i\in (A)},\vec{p}_{j\in (B)})=\phi\Bigl(\overrightarrow{B(-\vec{v})p}_{i\in(A)},\overrightarrow{B(\vec{v})p}_{j\in(B)}\Bigr)
\end{equation}
and one has to consider the limit $\sigma=|\vec{v}|\rightarrow \infty$ in the defining condition for the separation operator above.
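A canonical (rotationless) boost is an explicit $4\times 4$ matrix, so
this separation operation is straightforward to implement. A small
sketch, assuming numpy and units in which $c=1$:
\begin{verbatim}
import numpy as np

def canonical_boost(v):
    # rotationless boost B(v), |v| < 1, metric diag(1, -1, -1, -1)
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - v2)
    B = np.empty((4, 4))
    B[0, 0] = gamma
    B[0, 1:] = B[1:, 0] = gamma * v
    B[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(v, v) / v2
    return B

eta = np.diag([1.0, -1.0, -1.0, -1.0])
B = canonical_boost([0.3, 0.0, 0.4])
print(np.allclose(B.T @ eta @ B, eta))  # True: B preserves the metric

m = 0.25                                # toy mass (GeV)
k = np.array([np.sqrt(m ** 2 + 0.5 ** 2), 0.0, 0.0, 0.5])
p = B @ k                               # boosted four-momentum
print(p[0] ** 2 - p[1:] @ p[1:])        # ~ m**2, as it must be
\end{verbatim}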
Having introduced a separation operator we are now able to formulate cluster separability in a more formal way. In the literature one can find different notions of it. A comparatively weak, but physically plausible requirement is cluster separability of the scattering operator:
\begin{equation}
\mathrm{s-}\!\!\lim_{\sigma\rightarrow\infty} {U_\sigma^{(A)(B)}}^\dag\, S\, {U_\sigma^{(A)(B)}} = S^{(A)}\otimes S^{(B)}\, .
\end{equation}
It means that the scattering operator should factorize into the scattering operators of the subsystems after separation. For three-particle systems it has been demonstrated that this type of cluster separability can be achieved by a BT construction~\cite{Coester:65}.
A stronger requirement is that the Poincar\'e generators become additive, when the clusters are separated. In a weaker version this means for the four-momentum operator that
\begin{equation}
\lim_{\sigma\rightarrow \infty}\langle \Phi \vert {U_\sigma^{(A)(B)}}^\dag \Bigl(P^\mu-P^\mu_{(A)}\otimes I_{(B)}-I_{(A)}\otimes P^\mu_{(B)}\Bigr)\, U_\sigma^{(A)(B)} \vert\Phi\rangle = 0\,,
\end{equation}
the stronger version is that
\begin{equation}
\lim_{\sigma\rightarrow \infty}\Bigl\vert\Bigl\vert\Bigl(P^\mu-P^\mu_{(A)}\otimes I_{(B)}-I_{(A)}\otimes P^\mu_{(B)}\Bigr)\, U_\sigma^{(A)(B)} \vert\Phi\rangle\Bigr\vert\Bigr\vert=0\, .
\end{equation}
The BT construction violates both conditions already in the $2$+$1$-body case (i.e. particles 1 and 2 interacting and particle 3 free)~\cite{Mutze:78,Kei:91}. The reason for the failure can, in this case, essentially be traced back to the fact that the BT-type mass operator and the mass operator of the separated 2+1-particle system differ in the velocity-conserving delta functions. In the BT case it is the overall three-particle velocity which is conserved; in the separated case it is rather the velocity of the interacting two-particle system. The separation, however, is done by boosting with the velocity of the interacting two-particle system.
One may now ask, whether wrong cluster properties lead to observable physical consequences. From our studies of the electromagnetic structure of mesons we have to conclude that this is indeed the case~\cite{Biernat:2009my,GomezRocha:2012zd,Biernat:2014dea}. In these papers electron scattering off a confined quark-antiquark pair was treated within relativistic point form quantum mechanics starting from a BT-type mass operator in which the dynamics of the photon is also fully included. The meson current can then be identified in a unique way from the resulting one-photon-exchange amplitude which has the usual structure, i.e. electron current contracted with the meson current and multiplied with the covariant photon propagator. The covariant analysis of the resulting meson current, however, reveals that it exhibits some unphysical features which most likely can be ascribed to wrong cluster properties. For pseudoscalar mesons, e.g., its complete covariant decomposition takes on the form
\begin{equation}
\tilde{J}^\mu(\vec{p}_M^\prime;\vec{p}_M) =
(p_M+p^\prime_M)^\mu \, f(Q^2,s) + (p_e+p^\prime_e)^\mu
\, g(Q^2,s)\, .
\end{equation}
It is still conserved, transforms like a four-vector, but exhibits an unphysical dependence on the electron momenta which manifests itself in the form of an additional covariant (and a corresponding form factor) and a spurious Mandelstam-$s$ dependence of the form factors. Although unphysical, these features do not spoil the relativistic invariance of the electron-meson scattering amplitude. The Mandelstam-$s$ dependence of the physical and spurious form factors $f$ and $g$ is shown in Fig.~1. Since the spurious form factor $g$ is seen to vanish for large $s$ and the $s$-dependence of the physical form factor $f$ also becomes negligible in this case, it is suggestive to extract the physical form factor in the limit $s\rightarrow\infty$. This strategy was pursued in Refs.~\cite{Biernat:2009my,GomezRocha:2012zd,Biernat:2014dea}, where it led to sensible results. It gives a simple analytical expression for the physical form factor $F(Q^2)=\lim_{s\rightarrow\infty} f(Q^2,s)$ which agrees with corresponding front form calculations in the $q_\perp=0$ frame. Similar effects of wrong cluster properties on electromagnetic form factors were also observed in model calculations done within the framework of front form quantum mechanics~\cite{Keister:2011ie}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm]{fsB.pdf}\hspace{2.0cm}
\includegraphics[width=6cm]{gsB.pdf}
\caption{Mandelstam-$s$ dependence of the physical and spurious $B$ meson electromagnetic form factors $f$ and $g$ for various values of the (negative) squared four-momentum transfer $Q^2$~\cite{GomezRocha:2012zd}. The result has been obtained with a harmonic-oscillator wave function with parameters $a=0.55$~GeV, $m_b=4.8$~GeV, $m_{u,d}=0.25$~GeV.}
\end{center}\label{figure1}
\end{figure}
\section{Restoring cluster separability}
It is obviously the BT-type structure of the four-momentum operator (see Eq.~(\ref{eq:BT})) which guarantees Poincar\'e invariance on the one hand, but leads to wrong cluster properties on the other hand (if one has more than two particles). In order to show how this conflict may be resolved, let us consider a three-particle system with pairwise two-particle interactions. To simplify matters we will consider spinless particles and neglect internal quantum numbers. We start with the four-momentum operators of the two-particle subsystems,
\begin{equation}
P^\mu_{(ij)}=M_{(ij)} V^\mu_{(ij)}\,,\quad i,j=1,2,3\, ,\quad i\neq j\, ,
\end{equation}
which have a BT-type structure (i.e. $V^\mu_{(ij)}$ is free of interactions). Cluster separability holds for these subsystems, if the two-particle interaction is sufficiently short ranged. The third particle can now be added by means of the usual tensor-product construction
\begin{equation}
\tilde{P}^\mu_{(ij)(k)}=P^\mu_{(ij)}\otimes I_{(k)}+I_{(ij)}\otimes P^\mu_{(k)}\,.
\end{equation}
The individual four-momentum operators
$\tilde{P}^\mu_{(ij)(k)}$ describe 2+1-body systems in a Poincar\'e invariant way and also exhibit the right cluster properties. One may now think of adding all these four-momentum operators to end up with a four-momentum operator for a three-particle system with pairwise interactions:
\begin{equation}
\tilde{P}^\mu_3=\tilde{P}^\mu_{(12)(3)}+\tilde{P}^\mu_{(23)(1)}+\tilde{P}^\mu_{(31)(2)}-2 {P}^\mu_{3\,\mathrm{free}}\, .
\end{equation}
But the components of the resulting four-momentum operator do not commute,
\begin{equation}
\bigl[\tilde{P}^\mu_3,\tilde{P}^\nu_3\bigr]\neq 0 \quad \hbox{since} \quad [M_{(ij)\,\mathrm{int}},V^\mu_{(j)}]\neq 0\, .
\end{equation}
One can, of course, write the individual $\tilde{P}^\mu_{(ij)(k)}$ in the form
\begin{equation}
\tilde{P}^\mu_{(ij)(k)}=\tilde{M}_{(ij)(k)}\, \tilde{V}^\mu_{(ij)(k)}\quad\hbox{with} \quad\tilde{M}_{(ij)(k)}^2=\tilde{P}_{(ij)(k)}\cdot \tilde{P}_{(ij)(k)}\, ,
\end{equation}
but the four-velocities $\tilde{V}^\mu_{(ij)(k)}$ contain interactions and differ for different clusterings, so that an overall four-velocity cannot be factored out of $\tilde{P}^\mu_3$. The key observation is now that all these four-velocity operators have the same spectrum, namely the forward unit hyperboloid, which can be parametrized by $\vec{v}\in\mathbb{R}^3$. This implies that there exist unitary transformations which relate the four-velocity operators. One can find, in particular, unitary operators $U_{(ij)(k)}$ such that
\begin{equation}\label{eq:inter}
\tilde{V}^\mu_{(ij)(k)}=U_{(ij)(k)} V^\mu_3 U_{(ij)(k)}^\dag\, .
\end{equation}
With these unitary operators one can now define new three-particle momentum operators for a particular clustering,
\begin{equation}\label{eq:packing}
P^\mu_{(ij)(k)}:=U_{(ij)(k)}^\dag \tilde{P}^\mu_{(ij)(k)} U_{(ij)(k)}=U_{(ij)(k)}^\dag \tilde{M}_{(ij)(k)} U_{(ij)(k)} U_{(ij)(k)}^\dag \tilde{V}^\mu_{(ij)(k)} U_{(ij)(k)}= {M}_{(ij)(k)} V_3^\mu\, ,
\end{equation}
which already have BT-structure, i.e. with the free three-particle velocity factored out. From Eq.~(\ref{eq:packing}) it can be seen that the unitary operators $U_{(ij)(k)}$ obviously \lq\lq pack\rq\rq\ the interaction dependence of the four-velocity operators $\tilde{V}^\mu_{(ij)(k)}$ into the mass operator ${M}_{(ij)(k)}$. Therefore they were called \lq\lq packing operators\rq\rq\ by Sokolov in his seminal paper on the formal solution of the cluster problem~\cite{Sokolov:78}. The sum $({P}^\mu_{(12)(3)}+{P}^\mu_{(23)(1)}+{P}^\mu_{(31)(2)}-2 {P}^\mu_{3\,\mathrm{free}})$ describes a three-particle system with pairwise interactions; it now has BT-structure and thus satisfies the correct commutation relations. However, it still violates cluster separability. The solution is a further unitary transformation of the whole sum by means of $U=\prod U_{(ij)(k)}$, assuming that $U_{(ij)(k)}\rightarrow 1$ for separations $(ki)(j)$, $(jk)(i)$ and $(i)(j)(k)$. The final expression for the three-particle four-momentum operator, which has all the properties it should have, is:
\begin{eqnarray}
P^\mu_3 &:=&{U}\,\left[P^\mu_{(12)(3)} +P^\mu_{(23)(1)} +P^\mu_{(31)(2)}+{ P^\mu_{(123)\,\mathrm{int}}}-2P^\mu_{3\,\mathrm{free}}\right]\,{U^{\dagger}}\nonumber \\
&=&{ U}\left[ M_{(12)(3)}+ M_{(23)(1)}+ M_{(31)(2)}+{ M_{(123)\,\mathrm{int}}}-2 M_{3\,\mathrm{free}}\right]\,V_3^\mu\,{ U^{\dagger}}\nonumber\\&=&{ U}\,M_3\,V_3^\mu\,{ U^{\dagger}}\, .
\end{eqnarray}
If $U$ commutes with Lorentz transformations, it can be shown that such a \lq\lq generalized BT construction\rq\rq\ will satisfy relativity and cluster separability for $N$-particle systems.
In addition to the three-body force induced by $U$, which is of purely kinematical origin, we have also allowed for a genuine three-body interaction ${M_{(123)\,\mathrm{int}}}$. Since the $U_{(ij)(k)}$ will, in general, not commute, $U$ depends on the order of the $U_{(ij)(k)}$ in the product. For identical particles one should even take some kind of symmetrized product, for which also different possibilities exist~\cite{Sokolov:78,Kei:91}. This means that $P^\mu_3$ is, apart from the newly introduced three-body interaction ${M_{(123)\,\mathrm{int}}}$, not uniquely determined by the two-body momentum operators $P^\mu_{(ij)}$. There are even different ways to construct the packing operators $U_{(ij)(k)}$. All the unitary transformations leave, however, the on-shell data (binding energies, scattering phase shifts, etc.) of the two-particle subsystems untouched; they only affect their off-shell behavior.
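The key observation used above, that operators with the same spectrum are related by a unitary transformation, can be made tangible in a finite-dimensional toy model. The following Python sketch (an illustration added here, with numpy assumed available; the matrices only stand in symbolically for the four-velocity operators) builds two real symmetric matrices with a common spectrum and the orthogonal transformation which \lq\lq packs\rq\rq\ one into the other:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
spec = np.diag([0.5, 1.0, 2.0])        # a common spectrum

def random_orthogonal(n):
    # orthogonal factor of a random matrix
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

Q1, Q2 = random_orthogonal(3), random_orthogonal(3)
A = Q1 @ spec @ Q1.T                   # stands in for V
B = Q2 @ spec @ Q2.T                   # stands in for V-tilde

U = Q2 @ Q1.T                          # the "packing" transformation
print(np.allclose(U @ A @ U.T, B))     # True
\end{verbatim}
In infinite dimensions the existence of such unitaries is more subtle, but the mechanism is the same: equality of spectra is what makes the packing operators possible.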
The kind of procedure just outlined formally solves the cluster problem for three-body systems. Generalizations to $N>3$ particles and particle production have also been considered~\cite{Coester:82}. Its practical applicability, however, depends strongly on the capability to calculate the packing operators for a particular system. A possible procedure can also be found in Sokolov's paper. The trick is to split the packing operator further
\begin{equation}
U_{(ij)(k)}=W^\dag(M_{(ij)}) W(M_{(ij)\,\mathrm{free}})
\end{equation}
into a product of unitary operators which depend on the corresponding two-particle mass operators in a way to be determined. With this splitting one can rewrite Eq.~(\ref{eq:inter}) in the form
\begin{equation}
W(M_{(ij)\,\mathrm{free}}) V^\mu_3 W^\dag(M_{(ij)\,\mathrm{free}})=W(M_{(ij)})\tilde{V}^\mu_{(ij)(k)}W^\dag(M_{(ij)})\, .
\end{equation}
Since this equation should hold for any interaction the right- and left-hand sides can be chosen to equal some simple four-velocity operator, for which $V_{(ij)}^\mu\otimes I_k$ is a good choice. In order to compute the action of $W$ it is then convenient to take bases in which matrix elements of $V^\mu_3$, $V_{(ij)}^\mu\otimes I_k$ and $\tilde{V}^\mu_{(ij)(k)}$ can be calculated. This is the basis of (mixed) velocity eigenstates
\begin{equation}
|\vec{v}_{(12)};\vec{\tilde{k}}_1,\vec{\tilde{k}}_2,\vec{p}_3\rangle=
|\vec{v}_{(12)};\vec{\tilde{k}}_1,\vec{\tilde{k}}_2\rangle\otimes|\vec{p}_3\rangle
\end{equation}
of $M_{(ij)(k)\,\mathrm{free}}$ if one wants to calculate the action of $W(M_{(ij)\,\mathrm{free}})$ and corresponding eigenstates of $M_{(ij)(k)}$ if one wants to calculate the action of $W(M_{(ij)})$. It turns out that the effect of these operators is mainly to give the two-particle subsystem $(ij)$ the velocity $v_{(ij)(k)}$ of the whole three-particle system. After some calculations one finds out that the whole effect of the packing operator $U_{(ij)(k)}$ on the mass operator $\tilde{M}_{(ij)(k)}$ is just the replacement
\begin{equation}
\frac{1}{m_{(ij)}^{\prime\, 3/2} m_{(ij)}^{3/2}}
\, v^0_{(ij)}\delta^3(\vec{v}_{(ij)}^{\,\,\prime}-\vec{v}_{(ij)}) \rightarrow
\frac{\sqrt{v_{(ij)}^{\,\,\prime} \cdot {v}_{(ij)(k)}}}{m_{(ij)(k)}^{\prime\, 3/2}}
\frac{\sqrt{v_{(ij)} \cdot {v}_{(ij)(k)}}}{m_{(ij)(k)}^{3/2}}\, v^0_{(ij)(k)}\delta^3(\vec{v}_{(ij)(k)}^{\,\,\prime}-\vec{v}_{(ij)(k)})\,
\end{equation}
in the mixed velocity-state matrix elements. Here $m_{(ij)}$ and $m_{(ij)(k)}$ are the invariant masses of the free two-particle subsystem and the free three-particle system, $v_{(ij)}$ and $v_{(ij)(k)}$ the corresponding four-velocities.
\section{Summary and outlook}
We have given a short introduction into the field of relativistic quantum mechanics. It has been shown that the Bakamjian-Thomas construction, the only known systematic procedure to implement interactions such that Poincar\'e invariance of a quantum mechanical system is guaranteed, leads to problems with cluster separability for systems of more than two particles. Cluster separability is a physically sensible requirement for quantum mechanical systems which replaces microcausality in relativistic quantum field theories. We have discussed the physical consequences of wrong cluster properties, e.g., unphysical contributions in electromagnetic currents of bound states. Following the work of Sokolov we have sketched how a three-particle mass operator with pairwise interactions and correct cluster properties can be constructed. This is accomplished by a set of unitary transformations called packing operators. For the simplest case of three spinless particles we have explicitly calculated these packing operators. In a next step it is planned to use these results to see whether the problems encountered with electromagnetic bound-state currents can be cured by starting with a mass operator that has the correct cluster properties.
\section{Energy theorem}
\paragraph{}
A {\bf finite abstract simplicial complex} is a finite set of
non-empty sets which is closed under the operation of taking finite non-empty
subsets. The {\bf connection graph} $G'$ of such a {\bf simplicial complex}
$G$ has as the vertices the sets of $G$ and as the edge set the pairs of sets which intersect.
Given two simplicial complexes $G$ and $K$, their {\bf sum} $G \oplus K$ is the disjoint union and
the {\bf Cartesian product} $G \times K$ is the set of all set Cartesian products
$x \times y$, where $x \in G, y \in K$. While $G \times K$ is no longer a
simplicial complex if both factors are different from the one-point complex $K_1$,
it still has a connection graph $(G \times K)'$, the graph whose vertices are the sets $x \times y$
and in which two sets are connected if they intersect.
The {\bf Barycentric refinement} $(G \times K)_1$ of $G \times K$ is the Whitney
complex of the graph with the same vertex set as $(G \times K)'$ but where two sets are
connected if and only if one is contained in the other.
The matrix $L=L(G)=1+A$, where $A$ is the adjacency
matrix of $G'$, is the {\bf connection Laplacian} of $G$.
\paragraph{}
All geometric objects considered here are finite and combinatorial and
all operators are finite matrices. Only in the
last section, when we look at the Barycentric limit of the discrete lattice $\ZZ^d$, the operators become
almost periodic on a profinite group, but there will be universal bounds on the norm of the inverse.
One of the goals of this note is to extend the following theorem to the strong ring generated by
$G_{i_1} \times \cdots \times G_{i_k}$.
\begin{thm}[Unimodularity theorem]
For every simplicial complex $G$, the connection Laplacian $L(G)$ is unimodular.
\end{thm}
We have proven this in \cite{Unimodularity} inductively by building up
the simplicial complex $G$ as a discrete CW-complex starting with the
zero-dimensional skeleton, then adding one-dimensional cells,
reaching the one-dimensional skeleton of $G$, then adding triangles etc.
continuing until the entire complex is built up.
In every step, if a cell $x$ is added, this means that a ball $B(x)$ is
glued in along a sphere $S(x)$, the Fredholm determinant
${\rm det}(L)={\rm det}(1+A)$ of the adjacency matrix $A$ of $G'$
is multiplied by
$\omega(x) = 1-\chi(S(x)) =(-1)^{{\rm dim}(x)} \in \{ -1,1\}$,
where $S(x)$ is the unit sphere in the Barycentric refinement $G_1$ of $G$,
which is the Whitney complex of a graph.
The determinant of $L(G)$ is now equal to the {\bf Fermi characteristic}
$\prod_x \omega(x) \in \{-1,1\}$, a multiplicative analogue of the
{\bf Euler characteristic} $\sum_x \omega(x)$ of $G$. Having determinant $1$
or $-1$, the matrix is unimodular.
\paragraph{}
As a consequence of the unimodularity theorem,
the inverse $g=L^{-1}$ of $L$ produces {\bf Green functions} in the form of
integer entries $g(x,y)$ which can be seen as the {\bf potential energy}
between the two simplices $x,y$. The sum $V(x)=\sum_y g(x,y)$
is the {\bf potential} at $x$ as it adds up the potential energy $g(x,y)$ induced from the
other simplices. It can also be interpreted as a {\bf curvature} because the
Euler characteristic $\chi(G) = \sum_{x \in G} \omega(x)$ is the sum over
the Green function entries:
\begin{thm}[Energy theorem]
For every simplicial complex $G$, we have $\sum_x V(x) = \sum_{x,y} g(x,y) = \chi(G)$.
\end{thm}
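\paragraph{}
Both theorems are easy to check numerically for small complexes. The following Python sketch (an illustration, assuming numpy; the choice of the circular complex $C_4$ is arbitrary) builds the connection Laplacian $L=1+A$, confirms ${\rm det}(L) \in \{-1,1\}$ and sums the Green function entries:
\begin{verbatim}
import numpy as np

# the Whitney complex of the circular graph C_4
G = [frozenset(s) for s in ({1},{2},{3},{4},{1,2},{2,3},{3,4},{1,4})]

n = len(G)
L = np.eye(n, dtype=int)
for i in range(n):
    for j in range(n):
        if i != j and G[i] & G[j]:
            L[i, j] = 1                 # adjacency in the connection graph

chi = sum((-1) ** (len(x) - 1) for x in G)   # Euler characteristic, here 0
g = np.linalg.inv(L)

print(round(np.linalg.det(L)))          # 1: unimodularity
print(round(g.sum()), chi)              # 0 0: energy theorem
\end{verbatim}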
\paragraph{}
This formula is a Gauss-Bonnet formula, when the row sum is interpreted as a curvature.
It can also be seen as a {\bf Poincar\'e-Hopf formula}
because $V(x)=(-1)^{{\rm dim}(x)} (1-\chi(S(x))) = 1-\chi(S^-_f(x))$
is a {\bf Poincar\'e-Hopf} index for the {\bf Morse function} $f(x)=-{\rm dim}(x)$
on the Barycentric refinement $G_1$ of $G$. A function on a graph is a {\bf Morse function} if
every $S^-_f(x)$ is a discrete sphere, where $S^-_f(x)=\{ y \in S(x) \; | \; f(y)<f(x) \; \}$.
The proof reduces the energy theorem to the Poincar\'e-Hopf formula which is
dual to the definition of Euler characteristic: the Poincar\'e-Hopf index of $-f={\rm dim}$
is $\omega(x)$ and $\sum_x \omega(x)$ is the definition of Euler characteristic as we sum
over simplices. In the Barycentric refinement, where we sum over vertices, $\omega(x)$ can be
seen as a curvature. In order to reduce the energy theorem to Poincar\'e-Hopf, one has to
show that $V(x) = \sum_y g(x,y)$ agrees with $(-1)^{{\rm dim}(x)} g(x,x)$. See also
\cite{Spheregeometry} for an interpretation of the diagonal elements.
\paragraph{}
The {\bf Barycentric refined complex} $G_1$ is a set of subsets of the power set
$2^{G}$ of $G$. It consists of sets of sets in $G$ which are pairwise contained
in each other. It is the {\bf Whitney complex}
of the graph $G_1=(V,E)$, where $V=G$ and $E$ is the set of $(a,b)$ with
$a \subset b$ or $b \subset a$. Since $G_1$ is the Whitney complex of
a graph, we can then use a more intuitive picture of the complex, as graphs
can be drawn and visualized and are hardwired already as structures in
computer algebra systems. Also definitions become easier. To define the
inductive dimension for example, its convenient to define it for
simplicial complexes as the corresponding number for the Barycentric refined
complex, where one deals with the Whitney complex of a graph.
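\paragraph{}
A Python sketch (an illustration; the triangle complex is an arbitrary choice) of the Barycentric refinement graph, whose vertices are the simplices and whose edges are the strict containments:
\begin{verbatim}
from itertools import combinations

# the Whitney complex of the triangle K_3
G = [frozenset(s) for s in ({1},{2},{3},{1,2},{2,3},{1,3},{1,2,3})]

V = G                                   # simplices become vertices
E = [(x, y) for x, y in combinations(G, 2) if x < y or y < x]
print(len(V), "vertices and", len(E), "edges in G_1")   # 7 and 12
\end{verbatim}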
\section{The Sabidussi ring}
\paragraph{}
The {\bf strong product} of two finite simple graphs $G=(V,E)$ and $H=(W,F)$
is the graph $G \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} H = (V \times W, \{ ((a,b),(c,d)) \; | \; a=c, (b,d) \in F \}
\cup \{ ((a,b),(c,d)) \; | \; b=d, (a,c) \in E \}
\cup \{ ((a,b),(c,d)) \; | \; (a,c) \in E \; {\rm and} \; (b,d) \in F \})$.
It is an associative product introduced by Sabidussi \cite{Sabidussi}. Together with the
disjoint union $\oplus$ as addition on signed complexes,
it defines the {\bf strong Sabidussi ring of graphs}.
We have started to look at the arithmetic in \cite{ArithmeticGraphs}.
We will relate it in a moment to the strong ring
of simplicial complexes. In the Sabidussi ring of graphs,
the additive monoid $(\mathcal{G}_0,\oplus)$ of finite simple graphs with
zero element $(\emptyset,\emptyset)$ is first extended to a larger class $(\mathcal{G},\oplus)$
which is a Grothendieck group. The elements of this group are
then {\bf signed graphs}, where each connected component can have either a positive or
negative sign. The {\bf additive primes} in the strong ring are the connected components.
An element in the group, in which both additive primes $A$ and $-A$ appear, is equivalent
to a signed graph in which both components are deleted.
The {\bf complement} of a graph $G=(V,E)$ is denoted by
$\overline{G}=(V,\overline{E})$, where $\overline{E}$ is the set of pairs $(a,b)$
not in $E$.
\paragraph{}
The {\bf join} $G + H = (V \cup W, E \cup F \cup \{ (a,b), a \in V, b \in W \})$ is an
addition introduced by Zykov \cite{Zykov}. It is dual to the disjoint union
$G + H = \overline{ \overline{G} \oplus \overline{H}}$. The {\bf large product}
$G \star H = (V \times W, \{ ((a,b),(c,d)) \; | \; (a,c) \in E \; {\rm or} \; (b,d) \in F \})$
is dual to the strong product.
In other words, the dual to the strong ring $(\mathcal{G},\oplus,\mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} ,0,1)$ is the
{\bf large ring} $(\mathcal{G},+,\star,0,1)$. Because the complement operation $G \to \overline{G}$ is
invertible and compatible with the ring operations, the two rings are isomorphic.
The additive primes in the strong ring are the connected graphs.
Sabidussi has shown that every connected graph has a unique multiplicative prime factorization.
It follows that the strong ring of graphs is an integral domain.
It is not a unique factorization domain however. There are disconnected graphs
which can be written in two different ways as a product of two graphs.
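\paragraph{}
The duality $G + H = \overline{ \overline{G} \oplus \overline{H}}$ between the Zykov join and the disjoint union can be checked mechanically. A sketch with networkx (assumed available; the test graphs $C_4$ and $P_3$ are arbitrary):
\begin{verbatim}
import networkx as nx

G, H = nx.cycle_graph(4), nx.path_graph(3)

# the join, built directly: disjoint union plus all cross connections
J = nx.disjoint_union(G, H)             # relabels H to 4,5,6
J.add_edges_from((u, v) for u in range(4) for v in range(4, 7))

# the dual description through complements
D = nx.complement(nx.disjoint_union(nx.complement(G), nx.complement(H)))

print(nx.is_isomorphic(J, D))           # True
\end{verbatim}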
\paragraph{}
The Sabidussi and Zykov operations could also be defined for simplicial complexes but we don't need
this as it is better to look at the Cartesian product and relate it to the strong
product of connection graphs. But here is a definition:
the disjoint union $G \oplus H$ is just $G \cup H$ assuming that the simplices are disjoint.
The Zykov sum, or join is $G + H = G \cup H \cup \{ x \cup y \; | \; x \in G, y \in H \}$.
If $\pi_k$ denote the projections from the
{\bf set theoretical Cartesian product} $X \times Y$ to $X$ or $Y$,
the Zykov product can be defined as
$G \star H = \{ A \subset X \times Y \; | \; \pi_1(A) \in G \; {\rm or} \; \pi_2(A) \in H \}$.
It can be written as $G \star H = G \times 2^{V(H)} \cup 2^{V(G)} \times H$,
where $2^X$ is the {\bf power set} of $X$ and $V(G)= \bigcup_{A \in G} A$.
\section{The Stanley-Reisner ring}
\paragraph{}
The {\bf Stanley-Reisner ring} $S$ is a subring of $\bigcup_n \ZZ[x_1,x_2, \dots, x_n]/I_n$, where
$I_n$ is the ideal generated by the $x_i^2$ and by $f_G-f_H$, where $f_G, f_H$ are ring elements representing
isomorphic finite complexes $G,H$. The ring $S$ contains all elements for which $f(0,0, \dots, 0) = 0$.
It is a quotient ring of the
{\bf ring of chains} $C \subset \bigcup_n \ZZ[x_1,x_2, \dots, x_n]/J_n$, where $J_n$ is the ideal generated by the
squares $x_i^2$. A signed version of the ring of chains is used in algebraic topology. It is the {\bf free Abelian group}
generated by the finite simplices, which are represented by monomials in the ring.
The ring of chains $C$ is larger than the Stanley-Reisner ring as for example, the ring element $f=x-y$ is zero in $S$.
The Stanley-Reisner ring is helpful as
every simplicial complex $G$ and especially every Whitney complex of a graph can be described
algebraically with a polynomial: if $V= \cup_{A \in G} A =\{x_1, \dots, x_n\}$ is the base
set of the finite abstract simplicial complex $G$, define the monomial
$x_A = \prod_{x \in A} x$ and then $f_G=\sum_{A \in G} x_A$. We initially computed the product using
this algebraic picture in \cite{KnillKuenneth}.
\paragraph{}
The circular graph $G=C_4$ for example gives
$f_G = x+y+z+w+xy+yz+zw+wx$ and the triangle $H=K_3$ is described by $f_H= a+b+c+ab+ac+bc+abc$.
The Stanley-Reisner ring also contains elements like $7xy-3x+5y$ which are only chains and not simplicial complexes.
The addition $f_G + f_H$ is the disjoint union of the two complexes, where
different variables are used when adding two complexes. So, for $G=K_2=x+y+xy$ we have
$G+G = x+y+xy+a+b+ab$ and $G+G+G=x+y+xy+a+b+ab+u+v+uv$. The negative complex $-G$ is $-x-y-xy$.
The complex $G-G = -x-y-xy+a+b+ab$ is in the ideal divided out so that it becomes $0$ in the
quotient ring. The Stanley-Reisner ring is large. It contains elements like $xyz$ which can
not be represented as linear combinations $\sum_i a_i G_i$ of simplicial complexes $G_i$.
But we like to see the strong ring $R$ embedded in the full Stanley-Reisner ring $S$.
\paragraph{}
If $G$ and $H$ are simplicial complexes represented by polynomials $f_G,f_H$, then
the product $f_G f_H = f_{G \times H}$ is not a
simplicial complex any more in general. Take $f_{K_2} f_{K_2} = (a+b+a b)(c+d+c d)$ for example which is
$ a c + b c + a b c + a d + b d + a b d + a c d + b c d + a b c d$. We can not interpret $ac$ as a new
single variable as $bc$ is also there and their intersection $ac \cap bc = c$ could not be represented
in the complex. A simplicial complex is by definition closed for non-empty intersections as such an
intersection is a subset of both sets. We can still form the subring $R$ of $S$ generated by the
simplicial complexes and call it the {\bf strong ring}. The
reason for the name is that on the connection graph level it leads to the strong product of graphs.
The strong ring of simplicial complexes will be isomorphic to a subring of the
Sabidussi ring of graphs.
\paragraph{}
The Stanley-Reisner picture allows for a concrete implementation of the additive
Grothendieck group which extends the monoid given by the disjoint union as addition.
The Stanley-Reisner ring is usually a ring attached to a single geometric object.
The full Stanley-Reisner ring allows one to represent any element in the strong ring.
It is however too large for many concepts in combinatorial topology, where
finite-dimensional rings are used to describe a simplicial complex.
One can not see each individual element in $S$ as a geometric object on its own, as cohomology and
the unimodularity theorem fail on such an object. The chain $f=xy+yz+y$ for example is no simplicial complex.
Its connection graph is a complete graph for which the Fredholm determinant ${\rm det}(1+A)$ is zero.
Its boundary is not a subset of $f$; its connection graph is $K_3$, as all ingredients, the two edges and the
single vertex, intersect each other. The elements of the Stanley-Reisner picture behave like
measurable sets in a $\sigma$-algebra. One can attach to every $f$ in the full Stanley-Reisner ring an {\bf Euler characteristic}
$\chi(f) = -f(-1,-1, \dots, -1)$ which satisfies $\chi(f+g)=\chi(f)+\chi(g), \chi(f g) = \chi(f) \chi(g)$
but this in general does not have a cohomological analog via Euler-Poincar\'e. This only holds in the strong ring.
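\paragraph{}
A sympy sketch (an illustration; the complexes are arbitrary small examples) of the Stanley-Reisner encoding, of the functional $\chi(f) = -f(-1,\dots,-1)$ and of its multiplicativity:
\begin{verbatim}
from sympy import symbols, expand

x, y, z, w, a, b, c = symbols('x y z w a b c')

f_C4 = x + y + z + w + x*y + y*z + z*w + w*x   # circular complex C_4
f_K3 = a + b + c + a*b + a*c + b*c + a*b*c     # Whitney complex of K_3

def chi(f, vs):
    return -f.subs({v: -1 for v in vs})

print(chi(f_C4, (x, y, z, w)))                        # 0
print(chi(f_K3, (a, b, c)))                           # 1
print(chi(expand(f_C4*f_K3), (x, y, z, w, a, b, c)))  # 0 = 0*1
\end{verbatim}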
\paragraph{}
While the Cartesian product $G \times H$ of two simplicial complexes is not a simplicial complex any more,
the product can be represented by an element $f_G f_H$ in the Stanley-Reisner ring. The
{\bf strong ring} is defined as the subring $R$ of the full Stanley-Reisner ring $S$
which is generated by the simplicial complexes.
Every element in the strong ring $R$ is a sum $\sum_I a_I f_{G_I}$, where $a_I \in \ZZ$ and
for every finite subset $I \subset \mathbb{N}$, the notation $f_{G_I} = \prod_{i \in I} f_{G_i}$ is used.
The ring of chains contains $\sum_I a_I x_I$, with $x_I=x_{i_1} \cdots x_{i_k}$, where
$A_I =\{ x_{i_1}, \dots, x_{i_k} \}$ are finite subsets of $\{x_1,x_2, \dots \}$.
Most of the elements are not simplicial complexes any more. The ring of chains has been
used since the beginnings of combinatorial topology. But its elements can also be
described by graphs, connection graphs.
\paragraph{}
The strong ring has the empty complex as the zero element and the one-point complex $K_1$ as the
one element. The element $-K_1$ is the $-1$ element. The strong ring contains $\ZZ$ by identifying
the zero dimensional complexes $P_n$ with $n$. It contains
elements like $x+y+xy - (a+b+c+ab+bc+ac)$ which is a difference $K_2 - C_3$
of a Whitney complex $K_2$ and a non-Whitney complex $C_3$. The triangle $K_3$
is represented by $(a+b+c+ab+bc+ac+abc)$. The strong ring does not contain elements like
$x+y+xy - (y+z+yz)$ as in the latter case, we don't have a linear combination of
simplicial complexes as the sum is not a disjoint union.
The element $G=x+y+xy - (a+b+ab)$ is identified with the zero element $0$ as $G=K_2 - K_2$
and $G=x+y+xy + (a+b+ab) = K_2 + K_2$ can be written as $2 K_2 = P_2 \times K_2$,
where $P_2=u+v$ is the zero-dimensional complex representing $2$. The
multiplication honors signs so that for example $G \times (-H) = -G \times H$ and
more generally $a (G \times H) = (a G) \times H = G \times (a H)$ for any zero
dimensional signed complex $a = P_a$ which follows from the distributivity
$H \times (G_1 + G_2) = H \times G_1 + H \times G_2$.
\section{The connection lemma}
\paragraph{}
The following lemma shows that the {\bf strong ring of simplicial complexes}
is isomorphic to a subring of the {\bf Sabidussi ring}.
First of all, we can extend the notion of {\bf connection graph} from simplicial complexes to
products $G=G_{i_1} \times \cdots \times G_{i_n}$ of simplicial complexes and so to the
strong ring. The vertex set of $G$ is the {\bf Cartesian product} $V'=V_1 \times \cdots \times V_n$
of the base sets $V_i=V(G_i) = \bigcup_{A \in G_i} A$. Two different elements in $V'$ are connected
in the connection graph $G'$ if they intersect as sets.
This defines a finite simple graph $G'=(V',E')$. Also a multiple $\lambda G$ of a
complex is just mapped into the multiple $\lambda G'$ of the connection graph, if $\lambda$
is an integer.
\begin{lemma}[Connection lemma]
$(G \times H)' = G' \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} H'$.
\end{lemma}
\begin{proof}
In $G \times H$, two simplices $(a \times b), (c \times d)$ in the vertex set
$V = \{ (x,y) \; | \; x \in G, y \in H \}$ of $(G \times H)'$ are connected in
$(G \times H)'$ if $(a \times b) \cap (c \times d)$ is not empty.
Since $(a \times b) \cap (c \times d) = (a \cap c) \times (b \cap d)$, this is the case
if and only if both $a \cap c$ and $b \cap d$ are not empty. In the strong product
$G' \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} H'$, the vertices $(a,b)$ and $(c,d)$ are connected
if and only if either $a=c$ and $b \cap d \neq \emptyset$, or $b=d$ and $a \cap c \neq \emptyset$,
or $a \cap c \neq \emptyset$ and $b \cap d \neq \emptyset$. Since $a=c$ implies
$a \cap c = a \neq \emptyset$, the two conditions agree.
\end{proof}
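The lemma is also easy to confirm numerically. A Python sketch with networkx (an illustration; $K_2$ and $C_3$ are arbitrary small complexes):
\begin{verbatim}
import networkx as nx
from itertools import combinations

K2 = [frozenset(s) for s in ({1},{2},{1,2})]
C3 = [frozenset(s) for s in ({4},{5},{6},{4,5},{5,6},{4,6})]

def connection_graph(cells):
    g = nx.Graph()
    g.add_nodes_from(cells)
    g.add_edges_from((x, y) for x, y in combinations(cells, 2) if x & y)
    return g

# connection graph of the Cartesian product, from the definition
cells = [(x, y) for x in K2 for y in C3]
P = nx.Graph()
P.add_nodes_from(cells)
P.add_edges_from((p, q) for p, q in combinations(cells, 2)
                 if p[0] & q[0] and p[1] & q[1])

S = nx.strong_product(connection_graph(K2), connection_graph(C3))
print(nx.is_isomorphic(P, S))   # True
\end{verbatim}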
\paragraph{}
The strong connection ring is a sub ring of the {\bf Sabidussi ring}
$(\mathcal{G},\oplus,\mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} ,0,1)$, in which objects are signed graphs.
While the latter contains all graphs, the former, the {\bf strong connection ring}, only contains ring elements of the form $G'$,
where $G$ is an element of the strong ring. The lemma allows us to avoid
seeing the elements as a subspace of abstract finite CW complexes for which the Cartesian
product is problematic. Both the full Stanley-Reisner ring and the Sabidussi rings are too large.
The energy theorem does not hold in the full Stanley-Reisner ring, as the example $G=xy+yz+y$
shows, where $G'=K_3$ is the complete graph for which the Fredholm determinant is zero.
We would need to complete it to a simplicial complex like $G=xy+yz+x+y+z$ for which
the connection Laplacian $L(G)$ has a non-zero Fredholm determinant.
\paragraph{}
Given a simplicial complex $G$, let $\sigma(G)$ denote the {\bf connection spectrum} of $G$. It is
the spectrum of the connection Laplacian $L(G)$. The trace ${\rm tr}(L(G))$ is the number of cells in the complex.
It is a measure for the {\bf total spectral energy} of the complex. Like the potential
theoretic total energy $\chi(G)$, the connection spectrum and total energy are compatible with arithmetic.
The trace ${\rm tr}(G)$ is the total number of cells and can be written in the Stanley-Reisner
picture as $f_G(1,1,1, \dots , 1)$.
\begin{coro}[Spectral compatibility]
$\sigma(G \times H) = \sigma(G) \sigma(H)$.
\end{coro}
\begin{proof}
It is a general fact that the Fredholm adjacency matrices tensor under
the strong ring multiplication. This implies that spectra multiply.
\end{proof}
The trace is therefore a ring homomorphism from the strong ring $R$ to the integers but
since the trace ${\rm tr}(L(G))$ is the number of cells, this is obvious.
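\paragraph{}
A numerical illustration (a numpy sketch; the connection Laplacian of the complex $K_2$ serves as both factors):
\begin{verbatim}
import numpy as np

L1 = np.array([[1, 0, 1],
               [0, 1, 1],
               [1, 1, 1]])      # connection Laplacian of the complex K_2
L2 = L1                         # take the same factor twice

L = np.kron(L1, L2)             # connection Laplacian of K_2 x K_2

e1, e2, e = map(np.linalg.eigvalsh, (L1, L2, L))
print(np.allclose(np.sort(e), np.sort(np.outer(e1, e2).ravel())))
print(np.trace(L) == np.trace(L1) * np.trace(L2))   # cell counts multiply
print(round(np.linalg.det(L)))                      # 1: still unimodular
\end{verbatim}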
\paragraph{}
We see that from a spectral point of view, it is good to look at the Fredholm
adjacency matrix $1+A(G')$ of the connection graph. The operator $L(G)$ is an
operator on the same Hilbert space as the Hodge Laplacian $H$ of $G$ which is
used to describe cohomology algebraically. We will look at the Hodge Laplacian later.
\paragraph{}
As a consequence of the tensor property of the connection Laplacians, we also know
that both the unimodularity theorem as well as the energy theorem extend to the
strong connection ring.
\begin{coro}[Energy theorem for connection ring]
Every connection Laplacian of a strong ring element is unimodular and has
the property that the total energy is the Euler characteristic.
\end{coro}
\begin{proof}
Linear algebra tells that if $L$ is a $n \times n$ matrix and $M$ is a $m \times m$ matrix,
then $\det(L \otimes M) = \det(L)^m \det(M)^n$. If $L,M$ are connection
Laplacians, then $|\det(L)|=|\det(M)|=1$ and the product shares the property of having
determinant $1$ or $-1$.
\end{proof}
\paragraph{}
In order to fix the additive part, we have to define the Fredholm determinant
of $-G$, the negative complex to $G$. Since $\chi(-G)=-\chi(G)$, we have
$\omega(-x)=-\omega(x)$ for simplices and if we want to preserve the property
$\psi(G) = \prod_x \omega(x)$, we see that defining $\psi(-G) = {\rm det}(-L(G))$
is the right thing. The {\bf connection Laplacian} of $-G$ is therefore defined as
$-L(G)$. This also extends the energy theorem correctly.
The sum over all matrix entries of the inverse of $L$ is the Euler characteristic.
\section{The Sabidussi theorem}
\paragraph{}
An element $G$ in the strong ring $R$ is called an {\bf additive prime} if it can
not be decomposed as $G=G_1 \oplus G_2$ with both $G_i$ being non-empty. The
additive prime factorization is breaking $G$ into
{\bf connected components}.
A {\bf multiplicative prime} in the strong ring is an element which can not be
written as $G = G_1 \times G_2$, where both $G_i$ are not the one-element $K_1$.
\begin{thm}[Sabidussi theorem]
Every additive prime in the Sabidussi ring has a unique multiplicative
prime factorization.
\end{thm}
See \cite{Sabidussi}. See also \cite{ImrichKlavzar,HammackImrichKlavzar}
where also counter examples appear, if the connectivity assumption is dropped.
The reason for the non-uniqueness is that $\mathbb{N}[x]$ has no unique
prime factorization: $(1+x+x^2)(1+x^3) =(1+x^2+x^4)(1+x)$.
\paragraph{}
Can there be primes in the strong ring that are not primes in the Sabidussi ring?
The factors would not necessarily have to be simplicial complexes.
Is it possible that a simplicial complex can be factored into smaller
components in the Stanley-Reisner ring? The answer is no, because a simplicial complex $G$
is described by a polynomial $f_G$ which has linear parts. A product of two non-trivial factors does not have linear parts.
We therefore also have a unique prime factorization for connected components in the strong ring.
The Sabidussi theorem goes over to the strong ring.
\begin{coro}
Every additive prime in the strong ring has a unique multiplicative
prime factorization. The connected multiplicative primes in the strong ring
are the connected simplicial complexes.
\end{coro}
\paragraph{}
If we think of an element $G$ in the ring as a {\bf particle} and of $G \cup H$
as a {\bf pair of particles}, then the {\bf total spectral energy} is
the sum, as the eigenvalues add up. As the individual $L$ spectra
multiply, this provokes comparisons with the
{\bf Fock space} of particles: in quantum mechanics, when taking the
disjoint union of particle systems, the Hilbert space of the particles is
the product space. Here, if we look at the product of two spaces, then the
Hilbert space is the tensor product. In some sense,
``particles'' are elements in the strong ring. They are generated by
prime pieces of ``space''. These are the elements in that ring
which belong to simplicial complexes.
\paragraph{}
Whether this picture painting particles as objects generated by space has
merit as a model in physics is here not important. Our point of view
is purely mathematical: the geometric ``parts'' have {\bf topological
properties} like cohomology or energy or {\bf spectral properties} like
spectral energy attached to them.
More importantly, the geometric objects are elements in a
ring for which the algebraic operations are compatible with the
topological properties. The additive primes in the ring are the connected components, the
``particles" where each is composed of smaller parts thanks to a
unique prime factorization $G_1 \times \cdots \times G_n$ into {\bf multiplicative primes}.
These {\bf elementary parts or particles} are just the connected {\bf simplicial complexes}.
\section{Simplicial cohomology}
\paragraph{}
The {\bf Whitney complex} of a finite simple graph $G=(V,E)$ is the simplicial complex
in which the sets are the vertex sets of complete subgraphs of $G$. If $d$ denotes the
exterior derivative of the Whitney complex, then the {\bf Hodge Laplacian} $H=(d+d^*)^2$ decomposes
into blocks $H_k(G)$ for which $b_k(G)= {\rm dim}({\rm ker}(H_k))$ are the {\bf Betti numbers}, the
dimensions of the {\bf cohomology groups} $H^k(G) = {\rm ker}(d_k)/{\rm im}(d_{k-1})$. The
{\bf Poincar\'e-polynomial} of $G$ is defined as $p_G(x) = \sum_{k=0}^{\infty} b_k(G) x^k$.
For every connected complex $G$, define $p_{-G}(x)=-p_G(x)$. Complexes can now have negative
Betti numbers. There are also non-empty complexes with $p_G(x)=0$ like $G=C_4-C_5$.
\paragraph{}
The exterior derivative on a signed simplicial complex $G$ is defined as $df(x) = f(\delta x)$,
where $\delta$ is the boundary operation on simplices.
The incidence matrix can also be defined as $d(x,y) = 1$ if $x \subset y$ and the orientation
matches.
Note that this depends on the choice of the orientation of the simplices as we do not require any compatibility.
It is a choice of basis in the Hilbert space on which the Laplacian will work.
The {\bf exterior derivative} $d$ of a product $G_1 \times G_2$ of two complexes each
having the boundary operation $\delta_i$ is then given as
$$ df(x,y) = f(\delta_1 x,y) +(-1)^{{\rm dim}(x)} f(x,\delta_2y) \; . $$
The {\bf Dirac operator} $D=d+d^*$ defines then the Hodge Laplacian $H=D^2$.
Both the connection Laplacian and Hodge Laplacian live on the same space.
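\paragraph{}
For a single simplicial complex, this can be made concrete in a few lines of Python (a sketch of standard simplicial Hodge theory, assuming numpy; the circular complex $C_4$ serves as example):
\begin{verbatim}
import numpy as np

# the circular complex C_4, simplices as sorted tuples
G = [(1,), (2,), (3,), (4,), (1, 2), (2, 3), (3, 4), (1, 4)]

def simplices(G, k):
    return [c for c in G if len(c) == k + 1]

def boundary(G, k):
    # boundary matrix from k-simplices to (k-1)-simplices
    rows, cols = simplices(G, k - 1), simplices(G, k)
    d = np.zeros((len(rows), len(cols)))
    for j, c in enumerate(cols):
        for m in range(len(c)):
            d[rows.index(c[:m] + c[m+1:]), j] = (-1) ** m
    return d

top = max(len(c) for c in G) - 1
betti = []
for k in range(top + 1):
    nk = len(simplices(G, k))
    dk = boundary(G, k) if k > 0 else np.zeros((0, nk))
    dk1 = boundary(G, k + 1) if k < top else np.zeros((nk, 0))
    Hk = dk.T @ dk + dk1 @ dk1.T          # Hodge block H_k
    betti.append(nk - np.linalg.matrix_rank(Hk))

chi = sum((-1) ** (len(c) - 1) for c in G)
print(betti)                                      # [1, 1] for the circle
print(sum((-1)**k * b for k, b in enumerate(betti)) == chi)  # Euler-Poincare
\end{verbatim}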
\paragraph{}
The Hodge theorem directly goes over from simplicial complexes to elements in the strong
ring. Let $H=\oplus_{k=0} H_k$ be the block diagonal decomposition of the
Hodge Laplacian and $G = \sum_I a_I G_I$ the additive decomposition into connected components
of products $G_I = G_{i_1} \times \cdots \times G_{i_n}$ of simplicial complexes.
For every product $G_I$, we can write down a concrete exterior derivative $d_I$, Dirac operator $D(G_I)=d_I + d_I^*$
and Hodge Laplacian $H(G_I)= D(G_I)^2$.
\begin{propo}[Hodge relation]
We have $b_k(G_I) = {\rm dim}({\rm ker}(H_k(G_I)))$.
\end{propo}
The proof is the same as in the case of simplicial complexes as the cohomology of $G_I$
is based on a concrete exterior derivative. When adding connected components we
{\bf define} now
$$ b_k(\sum_I a_I G_I) = \sum_I a_I b_k(G_I) \; . $$
\paragraph{}
The strong ring has a remarkable compatibility with cohomology. In order to
see the following result, one could use discrete homotopy notions or then refer to the
classical notions and call two complexes homotopic if their geometric realizations are
homotopic. It is better however to stay in a combinatorial realm and ignore geometric
realizations.
\begin{thm}[Kuenneth]
The map $G \to p_G(x)$ is a ring homomorphism from the strong connection ring to $\ZZ[x]$.
\end{thm}
\begin{proof}
If $d_i$ are the exterior derivatives on $G_i$, we can write them as partial exterior
derivatives on the product space. We get from
$d f(x,y) = d_1 f(x,y) + (-1)^{{\rm dim}(x)} d_2 f(x,y)$
$$ d^* d f = d_1^* d_1 f + (-1)^{{\rm dim}(x)} d_1^* d_2 f
+ (-1)^{{\rm dim}(x)} d_2^* d_1 f + d_2^* d_2 f \; , $$
$$ d d^* f = d_1 d_1^* f + (-1)^{{\rm dim}(x)} d_1 d_2^* f
+ (-1)^{{\rm dim}(x)} d_2 d_1^* f + d_2 d_2^* f \; . $$
Therefore $H f = H_1 f + H_2 f + (-1)^{{\rm dim}(x)} (d_1^*d_2 + d_1 d_2^* + d_2^* d_1 + d_2 d_1^*) f(x,y)$.
Since Hodge theory gives an orthogonal decomposition into
$$ {\rm im}(d_i) \oplus {\rm im}(d_i^*) \oplus {\rm ker}(H_i), \quad {\rm ker}(H_i) = {\rm ker}(d_i) \cap {\rm ker}(d_i^*) \; , $$
there is a basis in which $H(v,w) = (H(G_1)(v), H(G_2)(w))$.
Every kernel element can be written as $(v,w)$,
where $v$ is in the kernel of $H_1$ and $w$ is in the kernel of $H_2$.
\end{proof}
The Kuenneth formula follows also from \cite{KnillKuenneth}
because the product is $(G \times H)_1$ and
the cohomology of the Barycentric refinement is the same.
It follows that the {\bf Euler-Poincar\'e formula} holds in general for elements in the
ring: the cohomological Euler characteristic $\sum_{k=0}^{\infty} b_k(G) (-1)^k$ is equal to
the combinatorial Euler characteristic $\sum_{k=0}^{\infty} v_k(G) (-1)^k$, where
$(v_0,v_1, \dots)$ is the $f$-vector of $G$. \\
There is also a cohomology for the higher Wu characteristics $\omega_k(G)$.
\section{Gauss-Bonnet, Poincar\'e-Hopf}
\paragraph{}
The definition $\sum_x \omega(x) = \chi(G)$ of Euler characteristic, with curvature $\omega(x) = (-1)^{{\rm dim}(x)}$
can be interpreted as a {\bf Gauss-Bonnet} result in $G_1$, the Barycentric refinement of a simplicial complex $G$.
If $G$ is the Whitney complex of a graph, then we have a small set of vertices $V$, the zero dimensional simplices
in $G$. Pushing the curvature from the simplices to the vertices $v$, then produces the curvature
$$ K(v) = \sum_{k=0}^{\infty} \frac{(-1)^k V_{k-1}}{(k+1)}
= 1 - \frac{V_0}{2} + \frac{V_1}{3} - \frac{V_2}{4} + \cdots \; , $$
where $V_k(v)$ is the number of $k$-dimensional simplices containing $v$ and $V_{-1}=1$ as there is an empty complex
contained in every complex. See \cite{cherngaussbonnet}.
The formula appeared already in \cite{Levitt1992} but without seeing it as a Gauss-Bonnet result.
Gauss-Bonnet makes sense for any simplicial complex $G$.
If $v$ is a $0$-dimensional element in $G$ and $V(v)$ is the number of simplices containing $v$, then the same
curvature works. It can be formulated more generally for any element in strong ring. The curvatures just multiply
in the product:
\begin{thm}[Gauss-Bonnet]
Given a ring element $G$. The curvature function $K$ supported on the zero-dimensional part $V$ of $G$
satisfies $\sum_{v} K(v) = \chi(G)$. If $G = A \times B$, and $v=(a,b)$ is a $0$-dimensional point in $G$,
then $K_G(v) = K_A(a) K_B(b)$.
\end{thm}
\begin{proof}
The proof is the same. Let us take the product $A \times B$ of two simplicial complexes.
We have $\chi(A \times B) = \sum_{x,y} \omega(x) \omega(y)$, where the sum is over all
pairs $(x,y) \in A \times B$ (the set theoretical Cartesian product). The sum does not change if we
distribute every value $\omega(x) \omega(y)$ equally to the zero-dimensional subparts. This gives the curvature.
\end{proof}
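A Python sketch of this curvature transfer (an illustration; the triangle complex is an arbitrary choice):
\begin{verbatim}
from fractions import Fraction

# Whitney complex of the triangle
G = [frozenset(s) for s in ({1},{2},{3},{1,2},{2,3},{1,3},{1,2,3})]
vertices = [x for x in G if len(x) == 1]

K = {v: Fraction(0) for v in vertices}
for x in G:                       # spread omega(x) over the vertices of x
    for v in vertices:
        if v <= x:
            K[v] += Fraction((-1) ** (len(x) - 1), len(x))

chi = sum((-1) ** (len(x) - 1) for x in G)
print(K, sum(K.values()) == chi)  # each K(v) = 1/3 and chi = 1
\end{verbatim}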
\paragraph{}
For Poincar\'e-Hopf \cite{poincarehopf}, we are given a locally injective function $f$ on $G$.
Define the {\bf Poincar\'e-Hopf index} $i_f(v)$
at a $0$-dimensional simplex $v$ in $G$ as $1-\chi(S_f^-(x))$, where
$S_f^-(v) = \{ x \in G \; | \; f(v)<f(x)$ and $v \subset x \}$ and the Euler characteristic is the usual
sum of the $\omega(y)$, where $y$ runs over the set $S_f^-(v)$. This can now be generalized to products:
\begin{thm}[Poincar\'e-Hopf]
Given a ring element $G$ and a locally injective function $f$ on $G$. The
index function $i_f$ supported on the zero-dimensional part $V$ of $G$ satisfies
$\sum_v i_f(v) = \chi(G)$. If $G=A \times B$ and $v=(a,b)$ is a $0$-dimensional point in $G$
then $i_f(v) = i_f(a) i_f(b)$.
\end{thm}
\begin{proof}
Also here, the proof is the same. Instead of distributing the original curvature values $\omega(x) \omega(y)$
equally to all zero dimensional parts, it is only thrown to the zero dimensional simplex $(a,b)$ for
which the function is minimal on $(x,y)$.
\end{proof}
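In code, this distribution principle is transparent (a Python sketch; the function values are an arbitrary injective choice):
\begin{verbatim}
# Whitney complex of the triangle, with an injective function on vertices
G = [frozenset(s) for s in ({1},{2},{3},{1,2},{2,3},{1,3},{1,2,3})]
f = {1: 0.3, 2: 0.7, 3: 0.1}

index = {v: 0 for v in f}
for x in G:                     # throw omega(x) to the f-minimal vertex
    index[min(x, key=f.get)] += (-1) ** (len(x) - 1)

chi = sum((-1) ** (len(x) - 1) for x in G)
print(index, sum(index.values()) == chi)   # indices sum to chi = 1
\end{verbatim}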
\paragraph{}
Also index averaging generalizes. Given any probability measure $P$ on locally injective functions $f$,
one can look at the expectation $K_P(v) = {\rm E}[i_f(v)]$ which can now be interpreted as a curvature
as it does not depend on an indifidual function $f$ any more. There are various natural measures
which produce the Gauss-Bonnet curvature $K(x)$. One is to look at the product measure on copies of $[-1,1]$
indexed by $G$ \cite{indexexpectation}.
Another is the set of all colorings, locally injective functions on $G$ \cite{colorcurvature}.
Let us formulate it for colorings:
\begin{thm}[Index averaging]
Averaging $i_f(x)$ over all locally injective functions on $G$
with uniform measure gives curvature $K(x)$.
\end{thm}
\paragraph{}
For Brouwer-Lefschetz \cite{brouwergraph}, we look at an endomorphism $T$
of an element $G$ in the strong ring. The definition of the {\bf Brouwer index} is
the same as in the graph case: first of all, one can restrict to the attractor of $T$
and get an automorphism $T$. For a simplex $x \in G$, define $i_T(x)={\rm sign}(T|x) \omega(x)$.
Because $T$ induces a permutation on the simplex $x$, the signature of $T|x$ is defined.
Also the definition of the {\bf Lefschetz number} $\chi_T(G)$ is the same. It is the super
trace on cohomology
$$ \chi_T(G) = \sum_{k=0}^{\infty} (-1)^k {\rm tr}(T|H^k(G)) \; . $$
\begin{thm}[Brouwer-Lefschetz]
$\sum_{x, T(x)=x} i_T(x) = \chi_T(G)$.
\end{thm}
\begin{proof}
The fastest proof uses the heat flow $e^{ -tH(G)}$ for the Hodge Laplacian.
The super trace ${\rm str}(H^k)$ is zero for $k>0$ by McKean-Singer \cite{knillmckeansinger}.
Define $l(t)={\rm str}(\exp(-tL) U_T)$, where
$U_Tf=f(T)$ is the Koopman operator associated to $T$. The function $l(t)$ is constant.
This heat flow argument proves Lefschetz because $l(0) = {\rm str}(U_T)$ is $\sum_{T(x)=x} i_T(x)$
and $\lim_{t \to \infty} l(t)=\chi_T(G)$ by Hodge.
\end{proof}
\paragraph{}
There are more automorphisms $T$ in $A \times B$ than product automorphisms $T_1 \times T_2$.
An example is if $A=B$ and $T( (x,y) ) = (y,x)$. One could have the impression at first that
such an involution does not have a fixed point, but it does. Let us for example take $A=B=K_2$.
The product $A \times B$ has $9$ elements and can be written as $(a+b+ab) (c+d+cd)$.
The space is contractible so that only $H^0(G)$
has positive dimension and we are in the special case of the Brouwer fixed point case.
The Lefschetz number $\chi_T(G)$ is equal to $1$. There must be a fixed point. Indeed, it is
the two dimensional simplex $((a,b) \times (c,d))$ represented in the Stanley-Reisner picture
as $abcd$.
\section{Wu characteristic}
\paragraph{}
Euler characteristic $\chi(G)=\omega_1(G)$ is the first of a sequence $\omega_k$ of {\bf Wu characteristics}.
The {\bf Wu characteristic} $\omega(G)=\omega_2(G)$ is defined for a simplicial complex $G$ as
$$ \sum_{x \sim y} \omega(x) \omega(y) , $$
where $\omega(x) = (-1)^{{\rm dim}(x)}$ and where the sum is taken over all intersecting simplices.
The notation fits as $\omega(K_n) = (-1)^{n-1}$, which follows from the fact that the Barycentric refinement
of the complete graph is a ball, a discrete manifold with sphere boundary, of dimension $n-1$. A general
formula for discrete manifolds with boundary is $\omega(G) = \chi(G) - \chi(\delta G)$.
Higher order versions $\omega_k(G)$ are defined similarly to $\omega(G)$.
We just have to sum over all $k$-tuples of simultaneously intersecting simplices in the complex.
While we have seen $\omega_k( (G \times H)_1 ) = \omega_k(G) \omega_k(H)$ and
of course $\omega_k(G \oplus H) = \omega_k(G) + \omega_k(H)$, this insight was obtained for the Cartesian
product $(G \times H)_1$ which was again a simplicial complex, the Whitney complex of a graph.
The product property especially implies that the Barycentric refinement $G_1$ has the same Wu
characteristics $\omega_k(G) = \omega_k(G_1)$. In other words, like Euler characteristic
$\chi=\omega_1$, also the Wu characteristic $\omega=\omega_2$ and higher Wu characteristics
$\omega_k(G)$ are {\bf combinatorial invariants}.
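\paragraph{}
A Python sketch (an illustration) which computes $\omega_2$ directly from the definition and confirms $\omega(K_n)=(-1)^{n-1}$ for small $n$:
\begin{verbatim}
from itertools import combinations

def complete_complex(n):
    V = range(1, n + 1)
    return [frozenset(c) for k in range(1, n + 1)
            for c in combinations(V, k)]

def wu(G):
    # omega(x) omega(y) = (-1)^(len(x)+len(y)), over intersecting pairs
    return sum((-1) ** (len(x) + len(y)) for x in G for y in G if x & y)

for n in (2, 3, 4):
    print(n, wu(complete_complex(n)))    # -1, 1, -1
\end{verbatim}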
\paragraph{}
The Wu characteristic can be extended to the strong ring. For simplicity, let us restrict to
$\omega=\omega_2$. The notation $\omega(x) = (-1)^{{\rm dim}(x)}$ is extended to pairs of
simplices as $\omega( (x,y) ) = \omega(x) \omega(y)$. So, $\omega$ is defined as a function
on the elements $(x,y)$ in the Cartesian product $G \times H$ of two simplicial complexes $G$
and $H$. We can not use the original definition of Wu characteristic for the product as
the product of two simplicial complexes is no simplicial complex any more; the multiplicative
primes in the ring are the simplicial complexes. Let us write $(x,y) \sim (a,b)$ if both $x \cap a \neq \emptyset$
and $y \cap b \neq \emptyset$. Now define
$$ \omega(G \times H) = \sum_{(x,y) \sim (a,b)} \omega((x,y)) \omega((a,b)) \; . $$
As this is equal to $\sum_{(x,y) \sim (a,b)}$ $\omega(x) \omega(y) \omega(a) \omega(b)$
which is $\sum_{x \sim a} \sum_{y \sim b}$ $\omega(x) \omega(a) \omega(y) \omega(b)$ or
$(\sum_{x \sim a} \omega(x) \omega(a)) \sum_{y \sim b} \omega(y) \omega(b)$, which is
$\omega(G) \omega(H)$, the product property is evident.
We can also define $\omega_k(-G) = -\omega_k(G)$ so that
\begin{propo}
All Wu characteristics $\omega_k$ are ring homomorphisms from the strong ring to $\ZZ$.
\end{propo}
\paragraph{}
The just seen nice compatibility of Wu characteristic with the ring arithmetic structure
renders the Wu characteristic quite unique among multi-linear valuations \cite{valuation}.
We have seen in that paper that for geometric graphs, there are analogue
{\bf Dehn-Sommerville relations} which are other valuations which are zero,
answering a previously unresolved question of \cite{Gruenbaum1970} from 1970.
But this requires the simplicial complexes to be discrete manifolds in the sense that every unit
sphere has to be a sphere. Dehn-Sommerville invariants are exciting in that they somehow defeat the fate of
exploding in the Barycentric limit, as they are zero from the beginning and
remain zero in the continuum limit. The local versions, the Dehn-Sommerville invariants of the unit spheres, are local quantities which
are {\bf zero curvature} conditions. One might wonder why Euler curvature is not defined for odd dimensional
manifolds for example. Indeed, Gauss-Bonnet-Chern is formulated only for even dimensional manifolds and
the definition of curvature involves a Pfaffian, which only makes sense in the even dimensional case. But
what really happens is that there are curvatures also in the odd dimensional case, they are just zero
due to Dehn-Sommerville. When writing \cite{cherngaussbonnet}, we were not aware of the Dehn-Sommerville
connection and had only conjectured that for odd dimensional geometric graphs the curvature is zero. It
was proven in \cite{indexexpectation} using discrete integral geometry seeing curvature as an average of
Poincar\'e-Hopf indices.
\paragraph{}
As a general rule, any result for Euler characteristic $\chi$ appears to generalize
to Wu characteristic. For Gauss-Bonnet, Poincar\'e-Hopf and the index expectation result linking the
two, also the proofs go over. Start with the definition of Wu characteristic as a
Gauss-Bonnet type result where $\omega_k(x)$ is seen as a curvature on simplices. Then
push that curvature down to the zero dimensional parts. Either equally, leading to a
{\bf curvature}, or then directed along a gradient field of a function $f$,
leading to {\bf Poincar\'e-Hopf indices}. Averaging over all functions, then essentially
averages over all possible ``distribution channels'' $f$ leading for a nice measure on
functions to a uniform distribution and so to curvature. The results and proofs generalize
to products.
\paragraph{}
Let us look at Gauss-Bonnet first for Wu characteristic:
\begin{thm}[Gauss-Bonnet]
Given a ring element $G$. The curvature function $K_k$ supported on the zero-dimensional part $V$ of $G$
satisfies $\sum_{v} K_k(v) = \omega_k(G)$. If $G = A \times B$, and $v=(a,b)$ is a $0$-dimensional point in $G$,
then $K_k(v) = K_A(a) K_B(b)$.
\end{thm}
\paragraph{}
For formulating Poincar\'e-Hopf for Wu characteristic,
we define for zero-dimensional entries $v=(a,b)$ the
stable sphere $S_f^-((a,b)) = \{ (x,y) \in G \times H \; | \; f((a,b))<f((x,y))$ and
$a \subset x, b \subset y \}$. This stable sphere is the join of the stable spheres.
The definition $i_{f,k}(v) = 1-\omega_k(S_f^-(v))$ leads now to
$i_{f,k}((a,b)) = i_{f,k}(a) i_{f,k}(b)$ and
\begin{thm}[Poincar\'e-Hopf]
Let $f$ be a Morse function, then $\omega_k(G) = \sum_v i_{f,k}(v)$,
where the sum is over all zero dimensional $v$ in $G$.
\end{thm}
\paragraph{}
When looking at the index expectation results $K(x) = {\rm E}[i_f(x)]$, one
could either directly prove the result or note that if we look at a direct
product $G \times H$ and take probability measures $P$ and $Q$ on functions of $G$
and $H$, then the random variables $f \to i_f(x)$ on the
two probability spaces $(\Omega(G),P)$ and $(\Omega(H),Q)$ are independent.
This implies ${\rm E}[i_f(x) i_f(y)] = {\rm E}[i_f(x)] {\rm E}[i_f(y)]$ and
so index expectation in the product:
\begin{thm}[Index expectation]
If the probability measure is the uniform measure on all colorings, then
curvature $K_{G \times H,k}$ is the expectation of Poincar\'e-Hopf indices
$i_{G \times H,f,k}$.
\end{thm}
\paragraph{}
The theorems of Gauss-Bonnet, Poincar\'e-Hopf and index expectation
are not restricted to Wu characteristic. They hold for any {\bf multi-linear
valuation}. By the multi-linear version of the discrete Hadwiger theorem \cite{KlainRota},
a basis of the space of valuations is given. Quadratic valuations for
example can be written as
$$ X(G) = \sum_{i,j} X_{ij} V_{ij}(G) \; , $$
where $X$ is a symmetric matrix and where the {\bf $f$-matrix} $V_{ij}$
counts the number of pairs $x,y$ of $i$-dimensional simplices
$x$ and $j$ dimensional simplices $y$ for which $x \cap y \neq \emptyset$.
\paragraph{}
To see how the $f$-vectors, the $f$-matrices and more generally the $f$-tensors
behave when we take products in the ring, its best to look at their
{\bf generating functions}. Given a simplicial complex $G$ with
$f$-vector $f(G) = (v_0(G),v_1(G), \dots)$, define the {\bf Euler polynomial}
$$ e_G(t) = \sum_{k=0}^{\infty} v_k(G) t^k $$
or the multi-variate polynomials like in the quadratic case
$$ V_G(t,s) = \sum_{k,l} V_{k,l}(G) t^k s^l \; , $$
which encodes the cardinalities $V_{k,l}(G)$ of intersecting $k$ and $l$
simplices in the complex $G$. The convolution of the $f$-vectors becomes
the product of Euler polynomials
$$ e_{G \times H} = e_G e_H \; . $$
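\paragraph{}
A sympy sketch (an illustration with $K_2$ and $C_3$) of this convolution identity:
\begin{verbatim}
from sympy import symbols, expand

t = symbols('t')

K2 = [frozenset(s) for s in ({1},{2},{1,2})]
C3 = [frozenset(s) for s in ({4},{5},{6},{4,5},{5,6},{4,6})]

def euler_poly(G):
    return sum(t ** (len(x) - 1) for x in G)

# the cells of the product are the pairs (x,y); dimensions add
e_prod = expand(sum(t ** (len(x) + len(y) - 2) for x in K2 for y in C3))
print(expand(e_prod - euler_poly(K2) * euler_poly(C3)) == 0)   # True
\end{verbatim}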
\paragraph{}
If we define the $f$-vector of $-G$ as $-f(G)$ and so the Euler polynomial
of $-G$ as $-p_G$, we can therefore say that
\begin{propo}
The Euler polynomial extends to a ring homomorphism from
the strong ring to the polynomial ring $\ZZ[t]$.
\end{propo}
\paragraph{}
This also generalizes to the multivariate versions. To the $f$-tensors
counting $k$-tuple intersections in $G$, we can associate polynomials in
$\ZZ[t_1, \dots, t_k]$ which encode the $f$-tensor and then have
\begin{propo}
For every $k$, the multivariate $f$-polynomial construction
extends to ring homomorphisms from the strong ring to $\ZZ[t_1, \dots, t_k]$.
\end{propo}
\paragraph{}
We see that like in probability theory, where moment generating functions or
characteristic functions are convenient as they ``diagonalize" the combinatorial
structure of random variables on product spaces (independent spaces), the
use of $f$-polynomials helps to deal with the combinatorics of the $f$-tensors
in the strong ring.
\paragraph{}
In order to formulate results which involve cohomology like the Lefschetz
fixed point formula, which reduces if $T$ is the identity to the Euler-Poincar\'e
formula relating combinatorial and cohomological Euler characteristic,
one has to define a cohomology. Because the name intersection cohomology is taken,
we called it {\bf interaction cohomology}. It turns out that this cohomology is
finer than simplicial cohomology. Its Betti numbers are combinatorial invariants
which allow one to distinguish spaces which simplicial cohomology can not, the prototype
example being the cylinder and the M\"obius strip
\cite{CaseStudy2016}. The structure
and proofs of the theorems however remain. The heat deformation proof of Lefschetz
is so simple that it extends to the ring and also from Euler characteristic to
Wu characteristic.
\paragraph{}
The definition of interaction cohomology involves
explicit matrices $d$ as exterior derivatives.
The {\bf quadratic interaction cohomology} for example is defined through
the {\bf exterior derivative}
$dF(x,y) = F(\delta x,y) + (-1)^{{\rm dim}(x)} F(x,\delta y)$ on functions $F$
on ordered pairs $(x,y)$ of intersecting simplices in $G$. This generalizes
the exterior derivative $dF(x)=F(\delta x)$ of simplicial cohomology.
This definition resembles the de-Rham type definition of exterior derivative
for the product of complexes, but there is a difference: in the interaction
cohomology we only look at pairs of simplices which interact (intersect).
In some sense, it restricts to the simplices in the ``diagonal" of the product.
\paragraph{}
It is obvious how to extend the definition of interaction cohomology to the product
of simplicial complexes and so to the strong ring. It is still important to
point out that at any stage we deal with finite dimensional matrices.
The {\bf quadratic interaction exterior derivative } $d$ is defined as
$dF(x,y) = d_1 F + (-1)^{\rm dim(x)} d_2 F$, where $d_1$ is the {\bf partial
exterior derivative} with respect to the first variable (represented by simplices
in $G$) and $d_2$ the partial exterior derivative with respect to the second variable
(represented by simplices in $H$). These partial derivatives are given by the
intersection exterior derivatives defined above.
\paragraph{}
Lets restrict for simplicity to quadratic interaction cohomology in which
the Wu characteristic $\omega=\omega_2$ plays the role of the Euler characteristic
$\chi = \omega_1$. Let $G$ first be a simplicial complex.
If $b_p(G)$ are the Betti numbers of these interaction cohomology groups of $G$,
then the {\bf Euler-Poincar\'e} formula $\omega(G)=\sum_p (-1)^p b_p(G)$ holds.
More generally, the {\bf Lefschetz formula}
$\chi_T(G)=\sum_{(x,y)=(T(x),T(y))} i_T(x,y)$ generalizes, where $\chi_T(G)$ is the {\bf Lefschetz number},
the super trace of the Koopman operator $U_T$ on cohomology and where
$i_T(x,y) = (-1)^{{\rm dim}(x) + {\rm dim}(y)} {\rm sign}(T|x) {\rm sign}(T|y)$ is
the {\bf Brouwer index}. The heat proof generalizes.
\paragraph{}
The interaction cohomology groups are defined similarly for the product of simplicial
complexes and so for general elements in the strong ring.
The K\"unneth formula holds too. We can define the Betti numbers $b_{k}(-G)$ as $-b_k(G)$.
The {\bf interaction Poincar\'e polynomial} $p_G(x)=\sum_{k=0}^{\infty} {\rm dim}(H^k(G)) x^k$ again
satisfies $p_{G \times H}(x) = p_G(x) p_H(x)$ so that
\begin{thm}
For any $k$, the interaction cohomology polynomial extends to a
ring homomorphism from the strong ring to $\ZZ[t]$.
\end{thm}
\paragraph{}
While we hope to be able to explore this more elsewhere, we note for now just that
all these higher order interaction cohomologies associated to the Wu characteristic
generalize from simplicial complexes to the strong ring. Being able to work on the
ring is practical as the interaction exterior derivatives are large matrices. So
far we had worked with the Barycentric refinements of the products, where the matrices
are bulky if we work with a full triangulation of the space. Similarly as de-Rham cohomology
significantly cuts the complexity of cohomology computations, this is also here the
case in the discrete. For a triangulation of the cylinder $G$, the full Hodge Laplacian $(d+d^*)^2$
of quadratic interaction cohomology is a $416 \times 416$ matrix as there were
$416 = \sum_{i,j} V_{ij}(G)$ pairs of intersecting simplices. When working in the ring, we
can compute a basis of the cohomology group from the basis of the circles and be done much faster.
\paragraph{}
For the M\"obius strip $G$ however, where the Hodge Laplacian of a triangulation leads to $364 \times 364$
matrices, we can not make that reduction as $G$ is not a product space. The smallest
ring element which represents $G$ is a simplicial complex. Unlike the cylinder
which is a ``composite particle", $G$ is an ``elementary particle".
In order to deal with concrete spaces, one would have to use patches of Euclidean
pieces and compute the cohomology using Mayer-Vietoris.
\paragraph{}
What could be more efficient is to
patch the space with contractible sets and look at the cohomology of a nerve graph which is
just \v{C}ech cohomology. But as in the M\"obius case, we already worked
with the smallest nerve at hand, this does not help in the computation in that case. It would
only reduce the complexity if we had started with a fine mesh representing the geometric object
$G$. We still have to explore what happens to the $k$-harmonic functions in the
kernel of the Hodge blocks $H_k$ and the spectra of $H_k$ in the interaction cohomology case,
if we cut a product space and glue it with reverse orientation as in $G$.
\paragraph{}
There is other geometry which can be pushed over from
complexes to graphs. See \cite{knillcalculus,KnillILAS,KnillBaltimore} for snapshots
for results formulated for Whitney complexes of graphs
from 2012, 2013 and 2014. The {\bf Jordan-Brouwer theorem} for
example, formulated in \cite{KnillJordan} for Whitney
complexes of graphs generalizes to simplicial complexes
and more generally to products as we anyway refer to the Barycentric
refinement there which is always a Whitney complex.
\paragraph{}
Also promising are subjects close to calculus like the discrete {\bf Sard theorem}
\cite{KnillSard} which allows one to define new spaces in given spaces by looking
at the zero locus of functions. Also this result was formulated in graph theory
but holds for any simplicial complex.
The {\bf zero locus} $\{ f=c \}$ of a function $f$ on a ring element $G$ can be defined
as the complex obtained from the cells where $f$ changes sign.
We have noticed that if $G$ is a discrete $d$-manifold in the sense that every
unit sphere in the Barycentric refinement is a $(d-1)$-sphere, then
for any locally injective function $f$ and any $c$ different from the range of $f$
the zero locus $f=c$ is a discrete manifold again.
\section{Two siblings: the Dirac and Connection operator}
\paragraph{}
To every $G$ in the strong ring belong two graphs $G_1$ and $G'$, the Barycentric refinement
and the connection graph. The addition of ring elements produces disjoint unions of graphs. The multiplication
naturally leads to products of the Barycentric refinements $A_1 \times B_1= (A \times B)_1$
as well as connection graphs $A' \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} B' = (A \times B)'$, where $\mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} $ is the
strong product.
\paragraph{}
An illustrative example is given in \cite{CountingAndCohomology}. Both the {\bf prime graph} $G_1$
as well as the {\bf prime connection graph} have as the vertex set the set of square free integers
in $\{2,3,\dots,n\}$. In $G_1$, two numbers are connected if one is a factor of the other.
It is part of the Barycentric refinement of the spectrum of the integers.
In the {\bf prime connection graph} $G'$ two integers are connected if they have a common factor
larger than $1$. It first appeared in \cite{Experiments}. This picture sees square free integers
as simplices in a simplicial complex. The number theoretical M\"obius function $\mu(k)$ has the
property that $-\mu(k)$ is the Poincar\'e-Hopf index of the counting function $f(x)=x$.
The Poincar\'e-Hopf theorem is then $\chi(G)=1-M(n)$, where $M(n)$ is the Mertens function.
As the Euler characteristic $\sum_x \omega(x)$ can also be expressed through Betti numbers, there is a relation
between the Mertens function and the kernels of Hodge operators $D^2$. On the other hand,
the Fermi characteristic $\prod_x \omega(x)$ is equal to the determinant of the connection Laplacian.
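As a small check, take $n=10$: the square free integers in $\{2,\dots,10\}$ are
$2,3,5,7$ with $\omega(x)=1$ and $6,10$ with $\omega(x)=-1$, so that
$\chi(G)=4-2=2$, while $M(10)=1-1-1+0-1+1-1+0+0+1=-1$,
confirming $\chi(G)=1-M(10)$.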
This was just an example; let us now look at the general picture.
\paragraph{}
To every $G \in R$ belong two operators $D$ and $L$, the Dirac operator and connection operator.
They are both symmetric matrices acting
on the same Hilbert space. While $D(-G)=-D(G)$ does not change anything in the spectrum of $G$,
a sign change of $G$ changes the spectrum as $L(-G)=-L(G)$ is a different operator due to lack
of symmetry between positive and negative spectrum. There are higher order Dirac operators $D$
which belong to the exterior derivative $d$ which is used in interaction cohomology.
The operator $D$ which belongs to the Wu characteristic for example does not seem to have any
obvious algebraic relation to the Dirac operator belonging to Euler characteristic. Indeed, as
we have seen, the nullity of the Wu Dirac operator is not homotopy invariant in general.
On the other hand, the interaction Laplacian $L$ belonging to pairs of interacting simplices
is nothing else than the tensor product $L \otimes L$, which is the interaction Laplacian of
$G \times G$.
\paragraph{}
There are various indications that the Dirac operator $D(G)$ is of {\bf additive nature} while
the connection operator $L(G)$ is of {\bf multiplicative nature}. One indication is that $L(G)$ is
always invertible and that the product of ring elements produces a tensor product of
connection operators. Let ${\rm str}$ denote the super trace
of a matrix $A$ defined as ${\rm str}(A) = \sum_x \omega(x) A_{xx}$, where
$\omega(x) = (-1)^{{\rm dim}(x)}$. The discrete version of the {\bf McKean-Singer formula} \cite{McKeanSinger}
can be formulated in a way which makes the additive and multiplicative nature of $D$ and $L$ clear:
\begin{thm}[McKean-Singer]
${\rm str}(e^{-D})=\chi(G)$ and ${\rm str}(L^{-1}) = \chi(G)$.
\end{thm}
\begin{proof}
The left identity follows from the fact that ${\rm str}(D^{2k})=0$ for every $k$
different from $0$, that for $k=0$ we get the definition of $\chi(G)$, and that
for odd exponents the diagonal entries of $D^{2k+1}$ are zero. The right equation is a Gauss-Bonnet formula:
the diagonal entries $L^{-1}_{xx}$ are equal to $1-\chi(S(x))$, where $S(x)$ is the unit
sphere of $x$ in the connection graph, so that $\omega(x) L^{-1}_{xx} = 1-\chi(S^-_f(x))$ is a Poincar\'e-Hopf index
for the Morse function $f(x)=-{\rm dim}(x)$.
\end{proof}
These identities generalize to a general ring element $G$ in the strong ring.
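As an illustration, for the smallest interesting example $G=K_2$ with the three cells $a,b,ab$,
$$ L = \left( \begin{array}{ccc} 1&0&1 \\ 0&1&1 \\ 1&1&1 \end{array} \right), \qquad
L^{-1} = \left( \begin{array}{ccc} 0&-1&1 \\ -1&0&1 \\ 1&1&-1 \end{array} \right) \; , $$
so that ${\rm str}(L^{-1}) = 1 \cdot 0 + 1 \cdot 0 + (-1) \cdot (-1) = 1 = \chi(K_2)$. The diagonal
entries $0,0,-1$ are indeed $1-\chi(S(x))$ for the unit spheres $S(a)=\{ab\}$,
$S(b)=\{ab\}$ and $S(ab)=\{a,b\}$ in the connection graph.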
\paragraph{}
A second indication for the multiplicative behavior of quantities related to the connection
graph is the Poincar\'e-Hopf formula. The multiplicative analogue of the Euler characteristic
$\chi(G) = \sum_x \omega(x)$ is the {\bf Fermi characteristic} $\phi(G) = \prod_x \omega(x)$.
Let us call a function $f$ on a simplicial complex a {\bf Morse function} if $S^-_f(x) = \{ y \in S(x) \; | \;
f(y)<f(x) \}$ is a discrete sphere. Since spheres have Euler characteristic in $\{0,2\}$ the
Euler-Poincar\'e index $i_f(x) = 1-\chi(S^-_f(x))$ is in $\{ -1,1\}$.
\begin{thm}[Poincar\'e-Hopf]
Let $f$ be a Morse function, then $\chi(G) = \sum_x i_f(x)$ and
$\phi(G) = \prod_x i_f(x)$.
\end{thm}
Since for any simplicial complex $G$, we can find a Morse function, the multiplicative part can be used
to show that $\psi(G) = {\rm det}(L(G))$ is equal to $\phi(G)$. This is the {\bf unimodularity theorem}.
As we have seen, this theorem generalizes to the strong ring.
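For example, $\phi(C_4)=(+1)^4 (-1)^4 = 1$ gives ${\rm det}(L(C_4))=1$, while the
triangle $K_3$ with $f$-vector $(3,3,1)$ has $\phi(K_3)=(+1)^3 (-1)^3 (+1) = -1$
and therefore ${\rm det}(L(K_3))=-1$.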
\paragraph{}
The most striking additive-multiplicative comparison of $D$ and $L$ comes through spectra.
The spectra of the Dirac operator $D$ behave additively, while the spectra of connection
operators multiply. The additive behavior of Hodge Laplacians happens also classically when looking at
Cartesian products of manifolds:
\begin{thm}[Spectral Pythagoras]
If $\lambda_i$ are eigenvalues of $D(G_i)$, there is an eigenvalue $\lambda_{ij}$ of $D(G_i \times G_j)$
such that $\lambda_i^2 + \lambda_j^2 = \lambda_{ij}^2$.
\end{thm}
\paragraph{}
We should compare that with the multiplicative behavior of the connection operators:
\begin{thm}[Spectral multiplicity]
If $\lambda_i$ are eigenvalues of $L(G_i)$, then there is an eigenvalue $\lambda_{ij}$ of $L(G_i \times G_j)$
such that $\lambda_i \lambda_j = \lambda_{ij}$.
\end{thm}
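For example, the connection Laplacian $L(K_2)$ written down above has the eigenvalues
$\{1,1+\sqrt{2},1-\sqrt{2}\}$, and $L(K_2 \times K_2)=L(K_2) \otimes L(K_2)$ has the nine
products $\lambda_i \lambda_j$ as eigenvalues. Note that $(1+\sqrt{2})(1-\sqrt{2})=-1$, in
accordance with ${\rm det}(L(K_2))=\phi(K_2)=-1$.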
\paragraph{}
In some sense, the operator $L$ describes energies which are multiplicative and so can become much larger
than the energies of the Hodge operator $H$. Whether this has any significance in physics is not clear.
So far this is pure geometry and spectral theory. The operator $L$ does not have the block structure
which $D$ has. When looking at Schr\"odinger evolutions $e^{iDt}$ or $e^{i Lt}$, we expect different
behavior. Evolving with $L$ mixes parts of the Hilbert space which are separated in the Hodge case.
\section{Dimension}
\paragraph{}
The {\bf maximal dimension} ${\rm dim}_{{\rm max}}(G)$ of a simplicial complex $G$ is defined as $|x|-1$, where $|x|$
is the {\bf cardinality} of a simplex $x$. The {\bf inductive dimension} of $G$ is defined as
the inductive dimension of its Barycentric refinement graph $G_1$ (or rather the Whitney complex
of that graph) \cite{elemente11}.
The definition of inductive dimension for graphs is recursive:
$$ {\rm dim}(G) = 1+\frac{1}{|V(G)|} \sum_{x \in V(G)} {\rm dim}(S(x)) \; , $$
where $V(G)$ is the vertex set, starting with the assumption that the dimension of the empty graph is $-1$.
The unit sphere graph $S(x)$ is the subgraph of $G$ generated by all vertices directly connected to $x$.
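To illustrate the recursion, let $G$ be the triangle $K_3$ with vertices $a,b,c$ and one
additional vertex $d$ attached to $a$. The unit sphere $S(a)$ consists of $K_2$ together
with an isolated point and has dimension $1+(0+0-1)/3=2/3$, while
${\rm dim}(S(b))={\rm dim}(S(c))=1$ and ${\rm dim}(S(d))=0$, so that
$$ {\rm dim}(G) = 1+\frac{1}{4} \left( \frac{2}{3}+1+1+0 \right) = \frac{5}{3} \; , $$
an example of a graph with fractional inductive dimension.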
\begin{propo}
Both the maximal and inductive dimension can be extended
so that additivity holds in full generality for non-zero elements in the strong ring.
\end{propo}
The zero graph $\emptyset$ has to be excluded, as multiplying a ring element with the
zero element gives the zero element: additivity would require
${\rm dim}(G) - 1 = {\rm dim}(G \times 0)={\rm dim}(0) = -1$
for any $G$, which fails. The property $0 \times G = 0$ has to hold if we want the ring axioms to hold.
We will be able to extend the clique number to the dual ring of the strong connection ring
but not the dimension.
\paragraph{}
Since the inductive dimension satisfies
$$ {\rm dim}( (G \times H)_1) = {\rm dim}(G_1) + {\rm dim}(H_1) \; , $$
(see \cite{KnillKuenneth}), the definition
$$ {\rm dim}(G \times H) = {\rm dim}(G) + {\rm dim}(H) $$
is natural. We have also shown ${\rm dim}(G_1) \geq {\rm dim}(G)$ so that
for nonzero elements $G$ and $H$:
${\rm dim}((G \times H)_1) \geq {\rm dim}(G) + {\rm dim}(H)$, an inequality which looks
like the one for Hausdorff dimension in the continuum. Again, also this does not
hold if one of the factors is the zero element.
\paragraph{}
In comparison, we have seen that the Zykov sum has the property that the
{\bf clique number} ${\rm dim}_{{\rm max}}(G)+1$ is additive,
and that the large Zykov product, which is the dual of the strong product, has
the property that the clique number is multiplicative.
If we define the clique number of $-G$ as minus the clique number of $G$, then we have
\begin{propo}
The clique number from the dual $R^*$ of the strong ring $R$ to the integers $\ZZ$
is a ring homomorphism.
\end{propo}
\begin{proof}
By taking complement, the identity $P_n \star P_m = P_n \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} P_m = P_{nm}$ gives
$K_n \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} K_m = K_m \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} K_n = K_{nm}$.
We have justified before how to extend the clique number to $c(-G)=-c(G)$.
\end{proof}
\paragraph{}
The clique number is a ring homomorphism; this makes letting the clique number
become negative for negative elements a natural choice.
For dimension, this does not work:
the dimension of the empty graph $0$ is $-1$, and since $0=-0$, a sign change is not possible.
If we took a one-dimensional graph $G$, we would have to define
${\rm dim}(-G)=-1$, but that clashes with the fact that the zero graph
has dimension $-1$. We want the inductive dimension to be the average
of the dimensions of the unit spheres plus $1$, which makes the choice
${\rm dim}(-G)= {\rm dim}(G)$ natural.
\paragraph{}
The {\bf maximal dimension} ${\rm dim}_{\rm max}(G)$ of a complex is the dimension
of the maximal simplex in $G$. The {\bf clique number} $c(G) = {\rm dim}_{\rm max}(G)+1$ is the
largest cardinality which appears for sets in $G$. For graphs, it is the largest
$n$ for which $K_n$ is a subgraph of $G$. The empty complex or empty graph has
clique number $0$. Extend the clique number to the entire ring by defining $c(-G)=-c(G)$.
The strong ring multiplication satisfies $c(G \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} H) = c(G) c(H)$ and
$c(G \oplus H) = {\rm max}(c(G),c(H))$.
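For example, $K_2 \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} K_3 = K_6$, so that
$c(K_2 \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} K_3) = 6 = c(K_2) c(K_3)$,
while $c(K_2 \oplus K_3) = {\rm max}(2,3) = 3$.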
\section{Dynamical systems}
\paragraph{}
There is an isospectral {\bf Lax deformation} $D'=[B,D]$ with $B=(d-d^*)$
of the Dirac operator $D=(d+d^*)$ of a simplicial complex. This works now also for any
ring elements. The deformed Dirac operator is then of the form $d+d^* + b$
meaning that $D$ has additionally to the geometric exterior derivative part also a diagonal
part which is geometrically not visible. What happens is that the off diagonal part $d(t)$
decreases, leading to an expansion of space if we use the exterior derivative to measure distances.
The deformation of exterior derivatives is also defined for Riemannian manifolds but the
deformed operators are pseudo differential operators.
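The isospectral nature of the deformation can be seen directly: for every $k$,
$$ \frac{d}{dt} {\rm tr}(D^k) = k \, {\rm tr}(D^{k-1} [B,D]) = k \, {\rm tr}(D^{k-1} B D - D^k B) = 0 $$
by the cyclicity of the trace, so that the eigenvalues of $D(t)$ are integrals of motion.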
\paragraph{}
A complex generalization of the system is obtained by defining
$B=d-d^*+i \beta b$, where $\beta$ is a parameter.
This is similar to \cite{Toda} who modified the Toda flow by adding $i \beta$ to
$B$. The case above was the situation $\beta=0$, where the flow leads to a scattering
situation. The case $\beta=1$ leads asymptotically to a linear
wave equation. The exterior derivative has become complex however. All this
dynamics is invisible classically for the Hodge operator $H$ as the Hodge operator
$D^2$ does not change under the evolution.
\paragraph{}
As for any Lax pair, the eigenvalues are integrals of motion. The equation $U'(t) = B(t) U(t)$
produces a unitary curve $U(t)$ which has the property that if $D_1=V D V^T$ corresponds to an other choice of
simplex orientation, then $D_1(t) = V D(t) V^T$ holds for all times.
The evolution does not depend on the gauge (the choice of signs used to make the simplicial
complexes signed) but the energy for $L(t) = U(t) L U(t)^*$ does.
\begin{thm}
For every $G$ in the strong ring, we have nonlinear integrable Hamiltonian systems
based on deformation of the Dirac operator.
\end{thm}
This is very concrete as for every ring element we can write down a concrete matrix
$D(G)$ and evolve the differential equation.
As for simplicial complexes, also when we deform a $D$ from the strong ring,
the Hodge Laplacian $H(G) = D(G)^2$ does not move under the deformation.
Most classical physics therefore is unaffected by the expansion.
For more details see \cite{KnillBarycentric,KnillBarycentric2}.
\paragraph{}
Any type of Laplacian produces Schr\"odinger type evolutions. Examples of Laplacians are
the {\bf Kirchhoff matrix} $H_0$ of the graph which has $G$ as the Whitney complex or
the Dirac matrix $D=(d+d^*)$ or the Hodge Laplacian $D^2$ or the connection Laplacian $L$.
One can also look at non-linear evolutions. An example is the nonlinear
Schr\"odinger equation obtained from a functional like $F(u) = \langle u,g u \rangle - V(|u|^2)$
on the Hilbert space. The Hamiltonian system is then $u' = i \, \partial_{\overline{u}} F(u)$.
This {\bf Helmholtz evolution} has both the energy $F(u)$ as well as $|u|^2$ as integrals of
motion. We can therefore restrict to {\bf states}, vectors of
length $1$. A natural choice compatible with the product structure is the
{\bf Shannon entropy} $V(|u|^2) = \beta S(u) = - \beta \sum_x p(x) \log(p(x))$ where $p(x)=|u(x)|^2$.
Summands for which $p(x)=0$ are assumed to be zero as $\lim_{p \to 0} p \log(p)=0$.
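The additivity of the entropy under products can be verified directly: for a product state
with $p(x,y)=p_1(x) p_2(y)$, using $\sum_x p_1(x)=\sum_y p_2(y)=1$,
$$ S = -\sum_{x,y} p_1(x) p_2(y) \left[ \log p_1(x) + \log p_2(y) \right] = S_1 + S_2 \; . $$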
\paragraph{}
We like the Helmholtz system because the Shannon entropy is essentially unique in the property that it
is additive with respect to products. As $\langle 1,g 1 \rangle = \chi(G)$ is the Euler characteristic,
which is compatible with the arithmetic, the {\bf Helmholtz free energy} $F(u)$ leads to a
natural Hamiltonian system. The {\bf inverse temperature} $\beta$ can be used as a perturbation parameter.
If $\beta=0$, we have a Schr\"odinger evolution. If $\beta>0$, it becomes a nonlinear Schr\"odinger
evolution. So far, this system is pretty much unexplored. The choice of both the energy and entropy
part was done due to arithmetic compatibility: Euler characteristic is known to be the unique valuation
up to scale which is compatible with the product and entropy is known to be a unique quantity
(again up to scale) compatible with the product by a theorem of Shannon. See \cite{Helmholtz}
where we look at bifurcations of the minima under temperature changes. There are {\bf catastrophes}
already for the simplest simplicial complexes.
\section{Barycentric central limit}
\paragraph{}
We can use a Barycentric central limit theorem to
show that in the {\bf van Hove limit} of the strong nearest neighbor
lattice $Z \times \cdots \times Z$, the spectral measure of the connection Laplacian $L$
has a {\bf mass gap}: not only $L$, but also the Green function, the inverse $g=L^{-1}$
has a bounded almost periodic infinite volume limit. The potential theory of the
Laplacian remains bounded and nonlinear time-dependent partial difference equations like
$Lu+ cV(u)=W$ which are Euler equations of {\bf Frenkel-Kontorova} type variational problems
have unique solutions $u$ for small $c$.
\paragraph{}
A {\bf Fock space} analogy is to see an additive prime in the ring (a connected space) as a {\bf particle state}.
The sum of spaces is then a collection of independent particles and the product is an entangled
{\bf multi-particle system}. Every independent particle has a unique decomposition into
{\bf multiplicative primes} which are the ``elementary particles". But the union of two particles can decay
into different types of elementary particles. For an entangled multi-particle system $G \times H$,
the spectrum of that configuration consists of all values $\lambda_i \mu_j$, where $\lambda_i$
are the eigenvalues of $L(G)$ and $\mu_j$ are the eigenvalues of $L(H)$.
\paragraph{}
While $G \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} H =H \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} G$, we don't have $L(G) \otimes L(H)
= L(H) \otimes L(G)$ as the tensor product of matrices is not commutative.
But these two products are unitary equivalent.
We can still define an {\bf anti-commutator} $L(H) \otimes L(G) - L(G) \otimes L(H)$.
We observed experimentally that the kernel of this anti-commutator is non-trivial only if both
complexes have an odd number of vertices. While $L(G)$ can have non-simple spectrum,
we have so far only seen simple spectrum for operators $L(G)\otimes L(H)-L(H) \otimes L(G)$.
\paragraph{}
When we take successive Barycentric refinements $G_n$ of a complex $G$, we see
a universal feature \cite{KnillBarycentric,KnillBarycentric2}. This generalizes
readily to the ring.
\begin{thm}[Barycentric central limit]
For any $G$ in the ring and for each of the operators $A=L$ or $A=H$ or $A=H_k$,
the density of states of $A(G_n)$ converges weakly to a measure which only
depends on the maximal dimension of $G$.
\end{thm}
\paragraph{}
The reason why this is true is that the $(k+1)$'th
Barycentric refinement of a maximal $d$-simplex consists of $(d+1)!$ copies of the $k$'th refinement, glued along
$(d-1)$-dimensional parts which have a cardinality growing slower.
The gluing process changes a negligible amount of matrix entries. This
can be estimated using a result of Lidskii-Last \cite{SimonTrace}.
Examples of Laplacians for which the result holds are the
Kirchhoff matrix of the graph which has $G$ as the Whitney complex or
the Dirac matrix $D=(d+d^*)$ or the Hodge Laplacian $D^2$
or the connection Laplacian $L$ of $G_n$.
\paragraph{}
In the one-dimensional case with Kirchhoff Laplacian,
the limiting measure is the derivative of the inverse of $F(x)=4 \sin^2(\pi x/2)$.
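Indeed, the Kirchhoff Laplacian of $C_n$ has the eigenvalues
$2-2\cos(2\pi k/n) = 4\sin^2(\pi k/n)$ for $k=0,\dots,n-1$, so that the integrated
density of states converges to the inverse of $F$.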
\paragraph{}
For $G=C_n$ we look at the eigenvalues of the $\nu$-torus $C_n^{\nu} = C_n \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} C_n \cdots \mbox{\ooalign{$\times$\cr\hidewidth$\square$\hidewidth\cr}} C_n$.
The spectrum defines a discrete measure $dk_n$ which has a weak limit $dk$.
\begin{thm}[Mass gap]
The weak limit $dk_n$ exists, is absolutely continuous and has support away
from $0$. The limiting operator is almost periodic, bounded and invertible.
\end{thm}
\begin{proof}
For $\nu=1$, the density of states has a support which contains two intervals.
The interval $[-1/5,1/5]$ is excluded as we can give a bound on $g=L^{-1}$.
For general $\nu$, the gap size estimate follows from the fact that under
products, the eigenvalues of $L$ multiply.
\end{proof}
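The gap estimate is just the observation that if every eigenvalue $\lambda$ of $L(C_n)$
satisfies $|\lambda| \geq 1/5$, then every eigenvalue of the $\nu$-dimensional torus, being
a product $\lambda_{i_1} \cdots \lambda_{i_\nu}$, satisfies
$|\lambda_{i_1} \cdots \lambda_{i_\nu}| \geq 5^{-\nu}$; for $\nu=2$ this gives the
gap $[-1/25,1/25]$ seen in the figures.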
\section{Stability in the infinite volume limit}
\paragraph{}
To the lattice $Z^\nu$ belongs the Hodge Laplacian
$H_0 u(n) = \sum_{i=1}^{\nu} [2u(n)-u(n+e_i)-u(n-e_i)]$. In the one-dimensional case, where
$H_0 u(n)=2u(n)-u(n+1)-u(n-1)$ corresponds after a Fourier transform to the multiplication
by $2-2\cos(x)$, we see that the spectrum of $H_0$ is $[0,4]$. The non-invertibility leads
to ``small divisor problems". Also, the trivial linear solutions $u(n) = \theta + \alpha n$
to $H_0 u=0$ are minima. It is this minimality which allows the problem to be continued to
nonlinear situations like $u(n+1)-2u(n)+u(n-1) = c \sin(u(n))$ for Diophantine $\alpha$.
For $\alpha=p/q$, Birkhoff periodic points and for general
$\alpha$, minimizers in the form of Aubry-Mather sets survive. For the connection Laplacian $L$
$0$ is in a gap of the spectrum.
\paragraph{}
Unlike for the Hodge Laplacians in $Z^d$ which naturally are expressed on
the Pontryagin dual of $T^d$, for which the Laplacian has spectrum containing $0$, we
deal with the dual of is $\DD_2^{\nu}$, where $\DD_2$ is the {\bf dyadic group}. The limiting operator
of $L$ is almost periodic operator and has a bounded inverse. Whereas in the Hodge case without mass gap, a
{\bf strong implicit function theorem} is required to continue solutions of nonlinear Frenkel-Kontorova
type Hamiltonian systems $Lu+ \epsilon V(u)=0$, the connection Laplacian is an invertible kinetic part
and perturbation theory requires only the weak implicit function theorem. Solutions of the Poisson equation $L u = \rho$
for example can be continued to nonlinear theories $(L+V) v = \rho$. Since Poisson equations have unique solutions,
also the discrete Dirichlet problem has unique solutions. The classical Standard map model $L_0 u + c \sin(u) = 0$
for example, which is the Chirikov map in 1D, requires KAM theory for solutions $u$ to exist. Weak
solutions also continue to exist by {\bf Aubry-Mather theory} \cite{MoserVariations}.
\paragraph{}
If the Hodge Laplacian is replaced by the connection Laplacian,
almost periodic solutions to driven systems continue to exist for small $c$ similarly as
the {\bf Aubry anti-integrable limit}
does classically for large $c$. But these continuations are in general
not interesting. Any Hamiltonian system with Hamiltonian $L+V$, where the kinetic energy is the connection Laplacian
remains simple. If we look at $L u = W$ where $W$ is obtained from a continuous function on the dyadic integers,
then the solution $u$ is given by a continuous function on the dyadic integers. This could be of some
interest but there is no interpretation of this solution as an orbit of a Hamiltonian system.
\begin{coro}
A nonlinear discrete difference equation $Lu+\epsilon V(u) = g$ has a unique almost periodic
solution if $g(n) = g(T^nx)$ is obtained from $Z^{\nu}$ action on $\DD_2^{\nu}$.
\end{coro}
\begin{proof}
For $\epsilon=0$, we have the Poisson equation $Lu=g$ which has the solution $u=L^{-1} g$,
where $u(n)=h(T^nx)$ is an almost periodic process with $h \in C(\DD_2^{\nu})$.
Since $L$ is invertible, the standard implicit function theorem allows a continuation for small
$\epsilon$.
\end{proof}
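Sketched more explicitly, the continuation is a fixed point problem: the equation is
equivalent to $u=T(u)$ with $T(u) = L^{-1}(g-\epsilon V(u))$, and $T$ is a contraction
as soon as $|\epsilon| \, \|L^{-1}\| \, {\rm Lip}(V)<1$, where ${\rm Lip}(V)$ is a Lipschitz
constant for $V$; the solution is then the limit of the iteration $u_{k+1}=T(u_k)$.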
\paragraph{}
It could be useful to continue a Hamiltonian system to the infinite volume limit.
An example is the Helmholtz Hamiltonian like $H(\psi) = (\psi, g \psi)$, where $g=L^{-1}$.
Since the Hessian $g$ is invertible, we can continue a minimal solution to $H(\psi) + \beta V(\psi)$
for small $\beta$ if $V$ is smooth.
We suspect that we can continue almost periodic solutions to the above defined Helmholtz system
$V(\psi) = \beta S(|\psi|^2)$ with entropy $S$ for small $\beta$ but there is a technical difficulty
as $p \to p \log|p|$ is not smooth at $p=0$, only continuous. One could apply the weak implicit
function theorem to continuous almost periodic functions on $X=\DD_2^{\nu}$ if
$f \to S(|f|)$ was Fr\'echet differentiable on the Banach space $X$ but we have not proved that.
It might require to smooth out $S_{\epsilon}$ first then show that the solution survives in the limit
$\epsilon \to 0$.
\section{A pseudo Riemannian case}
\paragraph{}
When implementing a pseudo Riemannian metric signature like $(+,+,+,-)$ on the lattice $\ZZ^4$, this
changes the sign of the corresponding Hodge dual $d^*$. The Dirac operator of that coordinate axis
is now $d_i - d_i^*$ and the corresponding Hodge operator $D^* D = -H$ has just changed sign.
The connection Laplacian $L$ is not affected by the change of Riemannian metric.
Having a Pseudo Riemannian metric on $\ZZ^4$, we can look
at kernel elements $Hu=0$ as solutions to a discrete wave equation. Global solutions on a
compact space like the product of circular graphs $C_n^4=C_n \times C_n \times C_n \times C_n$
are not that interesting. On an infinite lattice, we could prescribe
solutions on the space hypersurface $t=0$ and then continue it. Technically this leads to
a {\bf coupled map lattice}.
\paragraph{}
If $H$ is the operator of the $4$-torus $C_n^4$ with Lorentzian metric signature, then the
eigenvalues are all of the form
$\lambda_1 + \lambda_2+\lambda_3 - \lambda_4$, where $\lambda_i$ are eigenvalues of the Hodge Laplacian of $C_n$.
This just shifts the eigenvalues. This also completely answers the question of what the
limiting density of states is.
\begin{propo}
The spectrum of the Hodge operator of the $\nu$-torus $\TT_n^{\nu}$ with
Lorentz signature $(m,k)$ agrees with the spectrum of the Hodge operator for the
signature $(\nu,0)$ shifted by $-4k$.
\end{propo}
\begin{proof}
For $\nu=1$, replacing $H$ with $-H$ has the effect that the spectrum
changes sign. But this is $\sigma(H)-4$. When taking the product, we get
a convolution of spectra which commutes with the translation.
As for $C_n$, the spectrum satisfies $\sigma(-H)=\sigma(H)-4$, the switch
to a Lorentz metric goes over to the higher products.
\end{proof}
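To see the identity $\sigma(-H)=\sigma(H)-4$ used here for $C_n$ with even $n$: the
eigenvalues of $H$ are $2-2\cos(2\pi k/n)$ and
$$ -(2-2\cos(2\pi k/n)) = (2-2\cos(2\pi k/n+\pi))-4 \; , $$
where $2\pi k/n+\pi$ again belongs to the spectral lattice if $n$ is even.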
\paragraph{}
As the Lorentz space appears in physics, the geometry of the
$\ZZ^4$ lattice with Lorentz signature metric $(+,+,+,-)$ is of interest.
The Barycentric limit leads to more symmetry in this discrete lattice case.
The limiting operators $D$ and $L$ are almost periodic on $\DD_2^{4}$,
the compact group of dyadic integers. Besides of the group translations, there
are also {\bf scaling symmetries}. By allowing both scaling transformations
and group translations, we can implement symmetries which approximate Euclidean
symmetries in the continuum. The obstacle of poor symmetry properties
in a discrete lattice appears to disappear in the Barycentric limit.
\section{Illustrations}
\paragraph{}
Here is an illustration of an element $G=C_4-2 K_3 + (L_2 \times L_3)$
in the strong ring. In the Stanley-Reisner picture, we can write the ring
element as
\begin{eqnarray*}
f_G &=& a+ab+b+bc+c+cd+d+ad - 2(x+y+z+xy+xz+yz) \\
&+& (u+uv+v) (p+pq+q+qr+r) \; .
\end{eqnarray*}
The Euler characteristic is $\chi(G) = -f_G(-1,-1, \dots) = -1$.
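Explicitly, at $a=b=\cdots=-1$, the three parts contribute $(-4+4)=0$,
$-2 \cdot (-3+3)=0$ and $(-1) \cdot (-1)=1$, so that $f_G(-1,\dots,-1)=1$.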
\paragraph{}
The ring element $G=C_4-2 K_3 + (L_2 \times L_3)$ in the strong ring
is a sum of three parts, where the first is a Whitney complex of a graph,
the second is $-2$ times a Whitney complex and the third is a product of
two Whitney complexes $L_2$ and $L_3$.
\paragraph{}
In Figure~(\ref{example}), we drew the weak Cartesian product to visualize $L_2 \times L_3$.
In reality, $L_2 \times L_3$ is not a simplicial complex. There are 6 two-dimensional
{\bf square cells} present, one for each of the $6$ holes present in the weak product.
A convenient way to fill the hole is to look at the Barycentric refinement
$(L_2 \times L_3)_1$ as done in \cite{KnillKuenneth}. This is then a {\bf triangulation}
of $L_2 \times L_3$.
\paragraph{}
In Figure~(\ref{exampleconnection})
we see the connection graph $G'$ to the ring element $G$.
It is a disjoint union
of graphs, where the connection graphs to $K_3$ are counted negatively.
The connection Laplacian is
$L(G) = L(C_4) \oplus (-L(K_3)) \oplus (-L(K_3)) \oplus [ L(L_2) \otimes L(L_3) ]$.
Figure~(\ref{examplebarycentric}) shows the Barycentric refinement graph
$G_1$.
\begin{figure}[!htpb]
\scalebox{0.94}{\includegraphics{figures/strong.pdf}}
\caption{
$G=C_4-2 K_3 + (L_2 \times L_3)$ in the strong ring.
}
\label{example}
\end{figure}
\begin{figure}[!htpb]
\scalebox{0.94}{\includegraphics{figures/connection.pdf}}
\caption{
The connection graph of $G$.
}
\label{exampleconnection}
\end{figure}
\begin{figure}[!htpb]
\scalebox{0.94}{\includegraphics{figures/barycentric.pdf}}
\caption{
The Barycentric refinement $G_1$ of $G$.
}
\label{examplebarycentric}
\end{figure}
\paragraph{}
Figure~(\ref{S2S3}) shows $G=S^2 \times S^3$. The 2-sphere $S^2$ is implemented as the
Octahedron graph with $f$-vector $(6,12,8)$, the smallest 2-sphere, which already
Descartes has super-summed to $6-12+8=2$. The number of cells is $26$.
The complex $S^3$ is the suspension of $S^2$; it is a 3-sphere, a cross polytope, with $f$-vector $(8,24,32,16)$
which super-sums to $\chi(S^3)=8-24+32-16=0$ as for any 3-manifold. The number of cells
is $80$. The product has $26 \cdot 80=2080$ cells.
The Poincar\'e polynomial of $G$ is
$(1+x^2) (1+x^3) = 1+x^2+x^3+x^5$. Indeed, the Betti vector of this 5-manifold
is $(1,0,1,1,0,1)$.
\paragraph{}
We see how useful the ring is. We did not have to build a
triangulation of the 5-manifold as we had done in \cite{KnillKuenneth}
where we defined the product to be $G_1$ in order to have a Whitney complex of a graph
$G_1$ with 2080 vertices and 51232 edges.
The Dirac and Hodge operator for $G=S^2 \times S^3$ are seen in Figure~(\ref{S2S3}).
They are both $2080 \times 2080$ matrices. As for any
5-dimensional complex, the Hodge operator has 6 blocks.
Blocks $H_1,H_3,H_4,H_6$ have a one-dimensional kernel.
McKean-Singer super symmetry shows that the
union of the non-zero spectra of $H_1,H_3,H_5$ is the
union of the non-zero spectra of $H_2,H_4,H_6$.
\paragraph{}
Figure~(\ref{nonunique}) illustrates non-unique factorization in the strong ring.
It is adapted from \cite{HammackImrichKlavzar}, where it appears in the weak ring.
The two ring elements $G_1 \times G_2$ and $H_1 \times H_2$ are the
same. It is the decomposition
$(1+x+x^2)(1+x^3) =(1+x^2+x^4)(1+x)$, where $1$ is a point $K_1$ and
$x$ is an interval $K_2$. This gives the square $x^2 = K_2 \times K_2$,
the cube $x^3=K_2 \times K_2 \times K_2$ and hyper cube
$x^4=K_2 \times K_2 \times K_2 \times K_2$.
\begin{figure}[!htpb]
\scalebox{0.25}{\includegraphics{figures/s2xs3.pdf}}
\scalebox{0.25}{\includegraphics{figures/s2xs3H.pdf}}
\caption{
The Dirac and Hodge operator for $G=S^2 \times S^3$.
}
\label{S2S3}
\end{figure}
\begin{figure}[!htpb]
\scalebox{0.24}{\includegraphics{figures/example.pdf}}
\caption{
Non-unique prime factorization. The products $AB$
and $CD$ of the two ring elements produces the same
$G$. This is only possible if $G$ is not connected.
}
\label{nonunique}
\end{figure}
\begin{figure}[!htpb]
\scalebox{0.19}{\includegraphics{figures/napkin.pdf}}
\caption{
The connection graph of $G=L_5 \times L_5$ where $L_n$ is the linear
graph of length $n$ is part of the connection graph of the discrete
lattice $Z^1 \times Z^1$. The adjacency matrix $A$ of the graph $G$
seen here has the property that $L=1+A$ is the connection graph which is
invertible.
}
\end{figure}
\begin{figure}[!htpb]
\scalebox{0.24}{\includegraphics{figures/densityofstates1.pdf}}
\caption{
The density of states of the connection Laplacian of $\ZZ$ has a mass gap at $0$.
We actually computed the eigenvalues of the connection Laplacian of $C_{10000}$ which has
a density of states close to the density of states of the
connection Laplacian of $\ZZ$. The mass gap contains $[-1/5,1/5]$.
}
\end{figure}
\begin{figure}[!htpb]
\scalebox{0.24}{\includegraphics{figures/densityofstates2.pdf}}
\caption{
Part of the density of states of $\ZZ^2$ for finite dimensional approximations like
$G=C_n \times C_n = C_{1000} \times C_{1000}=G_1 \times G_2$.
The eigenvalues of $L(G)$ are the products $\lambda_i \lambda_j$
of the eigenvalues $\lambda_i$ of $L(G_i)$.
The mass gap contains $[-1/25,1/25]$ which is independent of $n$.
}
\end{figure}
\section*{Cartesian closed category}
\paragraph{}
The goal of this appendix is to see the strong ring of
simplicial complexes as a {\bf cartesian closed category} and to ask
whether it is a {\bf topos}. As we already have finite products, the first
requires showing the existence of exponentials.
Cartesian closed categories are important in computer science as they have
{\bf simply typed lambda calculus} as language. Also here, we are close to computer science as
we deal with a category of objects which (if they are small enough) can be realized
in a computer. The elements can be represented as polynomials in a
ring for example. We deal with a combinatorial category which can be explored without
the need of finite dimensional approximations. It is part of combinatorics as all objects
are finite.
\paragraph{}
In order to realize the ring as a category we need to define the
{\bf morphisms}, identify an initial and terminal object (here $0=\emptyset$ and $1=K_1$)
and show that {\bf currying} works: there is an exponential object $K^H$ in the ring
such that the set of morphisms $C(G \times H,K)$ from $G \times H$ to $K$
corresponds to the morphisms $C(G,K^H)$ via a {\bf Curry bijection} seeing a graph $z=f(x,y)$
of a function of two variables as a graph of the function
$x \to g_x(y)=f(x,y)$ from $G$ to functions from $H$ to $K$.
\paragraph{}
The existence of a product does not guarantee that a category is
cartesian closed. Topological spaces or smooth manifolds are not
cartesian closed but compactly generated Hausdorff spaces are.
In our case, we are close to the category of finite sets which is
cartesian closed. Like for finite sets we are close to computer science as
procedures in computer programming languages are using the Curry bijection.
Since the object $K^H$ is in general very large, it is as for now
more of theoretical interest.
\paragraph{}
The strong ring $R$ closely resembles the category of sets but there are negative
elements in $R$. We can look at the category of {\bf finite signed sets} which is
the subcategory of zero dimensional signed simplicial complexes. Also this
is a ring. It is isomorphic to $\ZZ$ as a ring but the set of morphisms produces
a category which has more structure than the ring $\ZZ$.
This category of signed $0$-dimensional simplicial complexes is cartesian closed in the
same way as the category of sets is. It is illustrative to see that the exponential
element $2^G$ is the set of all subsets of $G$. But this shows how exponential elements
can become large.
\paragraph{}
A simplicial complex $G$ is a finite set of non-empty sets which is
closed under the operation of taking non-empty subsets.
[The usual definition is to look at the {\bf base set} $V=\bigcup_{x \in G} x$
and to insist that $G$ is a set of subsets of $V$ with the property that if $y \subset x$
and $x \in G$ then $y \in G$ and also asking that $\{v\} \in G$ for every $v \in V$.
Obviously the first given {\bf point-free definition} is equivalent. ]
There is more structure than just the set of sets as the elements in $G$ are
partially ordered. A morphism between two simplicial complexes is not just a
map between the sets but an {\bf order preserving map}. This implies that the simplices
are mapped into each other. We could rephrase this by saying that a morphism induces a graph
homomorphism between the Barycentric refinements $G_1$ and $H_1$,
but there are in general more graph homomorphisms on $G_1$.
\paragraph{}
The class $\mathcal{C}$ of simplicial complexes is a category for which the
objects are the simplicial complexes and the morphisms are
{\bf simplicial functions}, functions which preserve simplices. In
the point-free definition this means to look at functions
from $G$ to $H$ which preserve the partial order.
In order to be close to the definition of continuous functions
(the morphisms in topological spaces) or measurable functions
(the morphisms in measure spaces) one could ask that $f^{-1}(A)$
is a simplicial complex for every simplicial complex. As $f^{-1}(A)$ can be
the empty complex, this is fine. [If for some set $y \in H$, the inverse $f^{-1}(y)$
were empty as a set, this would not be good, since simplicial complexes never contain the empty set.
This is fine for the empty complex, which does not contain the empty set either.
But if we look at $y$ as a complex by itself, then $f^{-1}(y)$ can be the empty complex.
It is in general important to distinguish the elements $x$ in the simplicial complex from the
subcomplex $x$ represents, even though this is often not done. ]
\paragraph{}
The category of simplicial complexes is close to the category of finite
sets as the morphisms are just a subclass of all functions. There is an other
essential difference: the product $G \times H$ of two simplicial complexes is
{\bf not} a simplicial complex any more in general, while the product $G \times H$
as sets is a set. This is also different from
{\bf geometric realizations} of simplicial complexes (called ``polyhedra"
in algebraic topology), where the product is a simplicial complex, the geometric
realization of the Barycentric refinement of $G \times H$ will do.
\paragraph{}
[To compare with topologies O, where continuous maps have the property that $f^{-1}(x) \in O$
for every $x \in O$, morphisms of simplicial complexes have the property $f(x) \in H$ for $x \in G$.
But only surjective morphisms also have the property that $f^{-1}(y) \in G$ for every $y \in H$, the
reason being that $f^{-1}(y)$ can be empty.
A continuous map on topological spaces which is also open has the property that $f$ and $f^{-1}$ preserve
the topology. A constant map for example is in general not open. We see that not only the object
of simplicial complex is simpler but also that the morphisms are simpler.]
It is therefore better to think of a morphism $f$ between simplicial complexes
as a map for which both $f$ and $f^{-1}$ preserve sub-simplicial complexes.
\paragraph{}
In order to work within the class of simplicial complexes (actually the special case of
Whitney complexes of graphs), we had looked in \cite{KnillKuenneth} at the Barycentric
refinements of Cartesian products and called this the Cartesian product.
We had to live however with the consequence that
the product $(G,H) \to (G \times H)_1$ is {\bf not associative}: already $(G \times K_1)_1 = G_1$
is the Barycentric refinement of $G$. While the geometric realization of the Barycentric
refinement $G_1$ is topologically equivalent to $G$, there is a problem with products
as in the topological realization $|K_2 \times K_2| = |K_4|$ meaning that the arithmetic
is not compatible. The geometric realization destroys the arithmetic. In the strong ring
$K_4$ {\bf is a multiplicative prime}, in the geometric realization, it is not; it
decays as $K_2 \times K_2$.
\paragraph{}
Having enlarged the category to the strong ring, we have not only to deal with morphisms for
simplicial complexes, we also have to say what the morphisms in the ring are.
The definition is recursive with respect to the {\bf degree} of a ring element,
where the degree is the degree in the Stanley-Reisner
polynomial representation. A map $G \to H$ is a morphism if it is a
morphism of simplicial complexes in the case where $G,H$ are simplicial complexes, and if
for every pair $G,H$ in the ring, there is a pair $A,B$ in the ring and morphisms
$g:G \to A, h:H \to B$ such that $f(G \times H)=g(G) \times h(H)$.
[By the way, the degree of a monomial in the Stanley-Reisner representation $f_G$ only relates
to the dimension if $G$ is prime, that is if $G$ is a simplicial complex. In a product $A \times B$,
the degree of a monomial is $c(A) + c(B)$, where $c$ is the clique number. It is the clique
number which is additive and not the dimension. The monomial $abcd$ in $(a+b+ab) (c+d+cd)$
for example belongs to a two-dimensional cell. ]
\paragraph{}
The strong ring $S$ is a ring and a category. But it is itself an element in the category
of rings. The image of the map $\phi: G \to G'$ is a subring $R$ of the Sabidussi ring of all
graphs. The map $\phi$ is a ring isomorphism. If we think of $S$ as a category, then $R$ can
be thought so too and $\phi$ is now a functor.
This is nothing strange. Category is a universal language where objects of categories
can be categories themselves. A directed graph for example is a category, where the objects
are the vertices and the morphisms are the directed edges.
\paragraph{}
The strong ring is also a {\bf cartesian monoidal category}, a category with a notion of
tensor product. The unit is the unit in the ring.
It is also a {\bf finitely complete category}, which is a category in which {\bf pullbacks} exist:
given any two ring elements $G,H$ and two morphisms $g:G \to K, h: H \to K$, there
is a sub-object $P$ of $G \times H$ such that for
all $(x,y) \in P$, the equation $g(x)=h(y)$ holds. The sub-object $P$ is called a
pullback.
\paragraph{}
The strong ring appears also to be a {\bf topos} but we have not yet checked that.
A topos is a cartesian closed category with a
sub-object classifier. Examples of topoi are sets or the G-dynamical
systems for a group G or the category of sheaves on a topological space.
Topoi enjoy stability properties: the fundamental theorem of
topos theory tells that a topos is stable under slicing, i.e that it is
locally cartesian closed.
\vfill
\pagebreak
\bibliographystyle{plain}
\section{Introduction}
The term Active Galactic Nucleus (AGN) refers to the existence of very
energetic phenomena occurring in the centers of some galaxies that cannot be
attributed to stars. In the standard model, AGNs consist of a black hole
surrounded by an accretion disk. There is good evidence that the emission of relativistic jets
in AGNs is nonthermal synchrotron radiation; i.e., radiation from a flow of
accelerated electrons embedded in a magnetic field, which dominates
the continua of electromagnetic radiation over a wide window, extending from
radio to soft X-ray frequencies.
It is natural to suppose that all these wavebands should
share common properties, and that variability should be correlated
across the spectrum. However, consideration of the physics of AGN
jets leaves it unclear whether this will, in fact, be the case. In some
models \cite{MarscherGear85}, higher-energy emission is generated closer
to the base of the jet, so that emission at different wavelengths arises
in different regions of the jet with different physical properties (e.g.
different magnetic-field configurations), while, in
others, the high-energy and radio emission can be cospatial, at least for certain
combinations of jet geometry and particle flow \cite{Ghisellini85}. In addition,
even if emission from various wavebands are intrinsically closely
related, there are a variety of wavelength-dependent extrinsic processes,
such as Faraday Rotation and scintillation, that can give rise to different
observed properties at different wavelengths. Thus, it is important to
correct for these extrinsic effects if we wish to test for correlated behaviour
in different wavebands.
There is recent evidence of a much closer link between the radio and
high-energy synchrotron radiation than was originally indicated by previous studies.
For example, \cite{Jorstad01} found a
tendency for gamma-ray flares to occur several months after the
births of new VLBI components, suggesting these flares occur in these
radio components, at appreciable distances down the jet.
Polarization information has played a key role in revealing evidence for
other correlations: \cite{Gabuzda06}
demonstrated a strong tendency for the simultaneously measured optical and
Faraday-rotation-corrected VLBI core polarization angles to be aligned in about
a dozen BL Lacs, and \cite{Jorstad07} observed similar behaviour
for the optical and high-frequency radio polarization angles in a sample of
highly polarized AGNs. Also, \cite{DArcangelo07} observed a rapid,
simultaneous rotation in the optical and 7mm VLBA-core polarization angles
in 0420-014. All these results support the idea that the radio to optical,
and even gamma-ray, emission may be more closely related than was previously
thought.
\section{Observations and Results}
With the aim of verifying the results of \cite{Gabuzda06} and searching for
further evidence for optical--VLBI polarization correlations, we observed
an additional $\sim 30$ AGNs, including BL Lac objects
and both high- and low-polarization quasars (here, the degree
of polarization refers to the optical waveband \cite{Moore84}),
thus providing us with a sample of 40 AGNs for our analysis. We
obtained 7mm+1.3cm+2cm VLBA polarization data and nearly simultaneous optical
polarization data with the Steward Observatory 2.3m telescope in three
24-hour sessions, on November 1, 2004, March 16, 2005 and September 26, 2005.
The data reduction and imaging for the radio data were done with the NRAO
Astronomical Image Processing System (AIPS) using standard techniques (see,
e.g. \cite{Gabuzda06}).
The optical polarization observations spanned the VLBA observation runs
(October 30--November 2, 2004; March 15--17, 2005; and September 25--29, 2005).
These data were acquired using the SPOL imaging/spectropolarimeter
\cite{Schmidt92}. On various nights, the instrument was configured for either
imaging polarimetry using a KPNO ``nearly Mould'' R filter (6000--7000\AA)
or spectropolarimetry using a 600 line/mm diffraction grating. The data
acquisition and reduction closely followed those described in \cite{Smith07}. The
spectropolarimetric observations were averaged over the R-filter bandpass for
direct comparison to the imaging polarimetry.
Since we have VLBA polarization data at three wavelengths, we expected to be
able to correct for Faraday rotation in the region of the compact core,
enabling a comparison between the ``zero-wavelength'' radio-core and optical
polarization angles. If the optical and radio emission is cospatial, and the
emitting region is optically thin at both
wavebands, the difference $\Delta\chi$ between the optical $\chi_{opt}$ and
Faraday-rotation-free radio-core $\chi_0$ polarization angles should
be close to zero. On the other hand, if
the VLBI-core emission is optically thick at the observed radio
wavelengths, this should give rise to an offset of $\Delta\chi=90^{\circ}$.
Thus, our ``null hypothesis'' was that the distribution of $\Delta\chi$
values would be dominated by two peaks: one near $0^{\circ}$ and one near
$90^{\circ}$. However, this was not the case for the new observations: although
the complete sample of BL Lac objects displayed
an overall clear peak near $0^{\circ}$,
the quasars display a flatter distribution, possibly with a weak
peak around $\sim50^{\circ}$\cite{Algaba08}. We discuss in section 3 the possible
origins of this difference in behaviour shown by the BL Lac objects and the
quasars in our sample.
\section{Discussion}
One obvious possibility is that there is indeed no correlation between the
optical and VLBI-core polarization angles for both the high-polarization and
low-polarization quasars. In other words, these results may provide evidence
that the optical and VLBI-core emission in quasars is usually not cospatial,
with the different emitting regions having different properties. If so, this
would seem to indicate a difference between the geometries or physical
conditions of the jets of quasars and BL Lac objects that is worthy of
further study.
However, the hint of a possible weak peak in the $\Delta\chi$ distribution
for the quasars suggests another possibility: that the VLBI cores are
subject to internal Faraday rotation. In this case, the rotation of the radio
polarization angle
initially obeys the $\lambda^2$ law characteristic of
external Faraday rotation, but then
saturates at rotations of about
$40-50^{\circ}$ with increase in the observing wavelength. If all
three of our wavelengths were in this regime, this could appear
as a small core Faraday rotation, leading to an inferred $\chi_{0}$ of
about $40-50^{\circ}$ away from its true value. Our past investigation
showed no evidence for
enhanced depolarization in the quasar cores compared to the BL Lac cores,
leading us to discard this idea as an explanation for the behaviour of the
quasars in our sample as a whole \cite{Algaba08}.
There is, however, another possibility: that the 7mm--2cm quasar cores are
subject to appreciably higher local but external Faraday rotations than the
BL Lac cores at these wavelengths. This would be consistent with the tendency
for quasar cores to have higher Faraday rotation measures than BL Lac cores
at 2cm--4cm \cite{ZandT04}. In fact, core rotation
measures as high as tens of thousands of rad/m$^2$ have been reported previously
\cite{Jorstad07,ZandT04}. In the
case of high core rotation measures, we encounter problems with
$n\pi$ ambiguities when fitting the rotation measures, leading to incorrect
determinations of both the rotation measure and $\chi_0$.
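To give a sense of the scale of the problem, assuming the external-rotation law
$\chi_{\rm obs}(\lambda) = \chi_{0} + {\rm RM} \, \lambda^{2}$: a rotation measure of
$47,000$~rad/m$^2$ rotates the polarization angle by ${\rm RM}\,\lambda^{2} \simeq 18.8$~rad
$\simeq 6\pi$ at 2cm, but only by $\simeq 2.3$~rad at 7mm, so that the observed angles alone
cannot distinguish such a fit from one with a much smaller rotation measure.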
Our data suggest that such ambiguities may be
present. For example, if we observe a roughly $90^{\circ}$ change in the observed
polarization angles between two radio wavelengths, it is natural to
suppose that this is due to a transition between the optically thick and
optically thin regimes. This can be verified by
examining the observed spectral indices and degrees of polarization of the
VLBI core. However, this supporting evidence is not found in most cases where we
observe polarization-angle rotations by roughly $90^{\circ}$ between neighbouring
wavelengths, suggesting that for these objects, high core Faraday
rotations may be relevant. In addition, some of the
core rotation measures indicated by our observations
are low compared to the typical values deduced by \cite{ZandT04},
despite taking
into account the fact that our 7mm--2cm observations probe somewhat smaller
scales, where we expect both the electron density and magnetic-field strength
to be higher.
\begin{figure}
\begin{center}
\includegraphics[width=.80\textwidth]{90what.eps}
\caption{Top: total intensity maps with superposed polarization vectors
for 2230+114 at 2cm (left), 1.3cm (middle)
and 7mm (right). Bottom: 2cm--1.3cm (left) and 1.3cm--7mm (right) spectral-index
distributions for this source; both core spectral indices are $\sim0.46$, giving
no evidence for a change in optical depth in the observed frequency range.}
\end{center}
\vspace{-0.8cm}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.70\textwidth]{badfitgoodfit.eps}
\caption{Two alternative rotation-measure fits for 2145+067. Left: the
rotation-measure fit obtained for the nominal observed radio polarization
angles. Right: another possible fit allowing for possible $n\pi$ ambiguities
in the observed radio polarization angles. The thick lines in the corner and
in the core show the optical and VLBI-core Faraday-corrected polarization
angles, respectively.}
\end{center}
\vspace{-0.8cm}
\end{figure}
Two examples are shown in Figs. 1 and 2. Fig. 1 shows our images for the quasar
2230+114. The nominal observed
values for the polarization angles are $70^{\circ}$ at 2cm,
$34^{\circ}$ at 1.3cm and $94^{\circ}$ at 7mm, which clearly do not yield a
reasonable $\lambda^2$ fit. A good fit is obtained if both the 2cm and 1.3cm
polarization angles are rotated by $90^{\circ}$, with the implied offset
between $\chi_0 - \chi_{opt} \simeq 60^{\circ}$. An inspection
of the core degrees of polarization at our three wavelengths and the 2cm--1.3cm
and 1.3cm--7mm core spectral indices ($\alpha\simeq 0.46$) provide no
evidence for an optical-depth transition in our wavelength range, but
suggest that the VLBA core is probably optically thick
at all three wavelengths. Alternatively, a comparably good fit is
obtained if the 1.3cm polarization angle is rotated by $\pi$ and the 2cm
polarization angle by $2\pi$; in this case, the inferred rotation measure is
about $16,700$~rad/m$^2$ and the inferred zero-wavelength polarization
angle is $\chi_0\simeq 45^{\circ}$, offset from $\chi_{opt}$ by roughly
$80^{\circ}$, close to the offset expected if the radio and
optical emission regions are cospatial, but the radio core emission is optically
thick.
Similarly, Fig. 2 shows two possible rotation-measure fits for 2145+067.
The left option shows the fit to the nominal observed radio-core polarization
angles, which agree with each other to within about $6^{\circ}$. This
fit is acceptable, and implies an offset $\Delta\chi\simeq
53^{\circ}$. However, if we add $2\pi$ to the 1.3cm and $5\pi$ to the
2cm polarization angles (right panels in Fig.~2), we obtain a comparably good
fit that corresponds to a much larger but still reasonable core rotation
measure of about +47,000 rad/m$^2$ and a zero-wavelength polarization angle
of about $-74^{\circ}$. This fit implies a near alignment between the optical and
VLBI-core polarization angles, $\Delta\chi\simeq 5^{\circ}$, which is consistent
with an optically thin VLBI core
($\alpha = -0.30$ between 2cm and 1.3cm and $\alpha = -0.60$ between
1.3cm and 7mm).
We can find at least one acceptable fit with a high core rotation measure for
essentially every source in our sample, usually implying rotation measures of
the order of several tens of thousands of rad/m$^2$. In a few cases, we can
identify an unambiguous best fit, but in others two or more
different possible core rotation measures are plausible.
Some of these fits imply intrinsic
radio core polarization angles that are either aligned with or orthogonal
to the optical polarization angle, consistent with the observed core spectral
indices.
It can be argued that, by indiscriminately adding or subtracting some
number of $\pi$ rotations to the observed VLBA polarization
angles, we will eventually obtain a good fit,
even if the inferred rotation measure has no physical basis.
For this reason, we held the 7mm polarization angle fixed at its
observed value, and considered rotations of no more than $\pm 5\pi$ for the 2cm
polarization angles. The corresponding rotation measures range up to roughly
$\pm50,000$~rad/m$^2$, and are plausible, since we are probing regions of
higher electron density and stronger magnetic fields. Similar rotation measures
were found previously \cite{Jorstad07,ZandT04}. In addition, even if allowing
for a high core rotation measure
yields a good fit to the rotated polarization angles, there is no guarantee
that the corresponding zero-wavelength core polarization angle will show some
correlation with the optical polarization angle.
Therefore, it is likely that some of our alternative high core rotation
measures are correct, but they must be confirmed. It remains possible that
internal Faraday rotation is influencing the
observed polarization angles in some cores. Unfortunately, it is impossible
to distinguish between the various scenarios with our current
three-wavelength data. For this reason, we have obtained new VLBA polarization
observations of 8 sources from our sample at 5 wavelengths from 7mm to 2cm.
The new data will improve
our ability to identify high external Faraday rotation and the signature of
internal Faraday rotation if present in the cores of these AGNs. Depending on
these results, we may propose analogous observations for a larger sample.
The possible presence of high core rotation measures has certain
interesting implications. First, it would suggest that the physical
conditions in the sub-parsec-scale jets in quasars are more
extreme than has previously been thought. Second, a friendly warning for
observers: we must be careful when observing polarization in quasar cores.
The possible presence of high core rotation measures should be taken into
consideration when planning polarization observations. For example, it is
clear from our results that three wavelengths may not be sufficient to
unambiguously derive reliable Faraday rotations, even at short VLBI
wavelengths. In addition, it seems likely that we should not assume that
the 7mm polarization angles are a good approximation to the intrinsic
polarization, as has usually been assumed. Furthermore,
many of the MOJAVE\cite{MOJAVE} 2cm core polarization angles are likely
subject to appreciable Faraday rotation.
\section{Introduction}
The excited states of the Calogero-Sutherland model \cite{rSu}
and its relativistic model
(the trigonometric limit of the Ruijsenaars model) \cite{rR}
are described by the Jack polynomials \cite{rSt}
and their $q$-analog (the Macdonald polynomials) \cite{rM}, respectively.
Since the Jack polynomials coincide with
certain correlation functions of $\cW_N$ algebra \cite{rMY,rAMOS},
it is natural to expect that the Macdonald polynomials
are also realized by those of a deformation of $\cW_N$ algebra.
In a previous paper \cite{rSKAO},
we derived a quantum Virasoro algebra whose singular vectors are
some special kinds of Macdonald polynomials.
On the other hand,
E.~Frenkel and N.~Reshetikhin succeeded in constructing
the Poisson $\cW_N$ algebra
and its quantum Miura transformation
in the analysis of the $U_q(\widehat{sl_N})$ algebra at the critical level
\cite{rFR}.
Like the classical case \cite{rFL},
these two works, $q$-Virasoro and $q$-Miura transformation,
are essential to find and study a quantum $\cW_N$ algebra.
In this article,
we present a {$q$-${\cal W}_N$} algebra
whose singular vectors realize the general Macdonald polynomials.
This paper is arranged as follows:
In section 2, we define a quantum deformation of $\cW_N$ algebras
and its quantum Miura transformation.
The screening currents and a vertex operator are derived
in section 3 and 4.
A relation with the Macdonald polynomials is obtained in section 5.
Section 6 is devoted to conclusion and discussion.
Finally we recapitulate the $q$-Virasoro algebra and
the integral formula for the Macdonald polynomials in appendices.
\section{Quantum deformation of $\cW_N$ algebra}
We start with defining
a new quantum deformation of the $\cW_N$ algebra
by quantum Miura transformation.
\subsection{Quantum Miura transformation}
\newcommand\frenkel{
We found this commutation relation by comparing
the Poisson bracket in Frenkel-Reshetikhin's work \cite{rFR}
and the commutator in ours \cite{rSKAO}.
The oscillator $a_n$ used in \cite{rSKAO} is given by
$a_n = -n h^1_n p^{-n/2}/(1-t^n)$ and
$a_{-n} = n h^1_{-n} p^{n/2}(1+p^n)/(1-t^{-n})$ for $n>0$.
}
First we define fundamental bosons ${\h in}$ and ${\Qh i}$
for $i=1,2,\cdots,N$ and $n\in\bZ$ such that\footnote{\frenkel}
\begin{eqnarray}
[{\h in},{\h jm}] &\!\!=\!\!& -{1\/n}(1-q^n)(1-t^{-n})
{1-p^{(\,\delta_{ij}N-1)n}\/1-p^{Nn}} p^{Nn\theta(i<j)}
\,\delta_{n+m,0},\cr
[{\h i0},{\Qh j}] &\!\!=\!\!& \,\delta_{ij} - {1\/N},\qquad\quad
\sum_{i=1}^N p^{in} {\h in} = 0,\qquad\quad
\sum_{i=1}^N {\Qh i} =0,
\end{eqnarray}
with $q$, $t\equiv q^\beta\in\bC$ and $p\equiv q/t$.
Here $\theta(P)\equiv 1$ or $0$
if the proposition $P$ is true or false, respectively.
These bosons correspond to the weights of the vector representation
$h_i$ whose inner-product is $(h_i\cdot h_j)=(\,\delta_{ij}N-1)/N$.
Let us define fundamental vertices $\La_i(z)$ and
{$q$-${\cal W}_N$} generators $W^i(z)$ for $i=1,2,\cdots,N$ as follows:
\begin{eqnarray}
\La_i(z) &\!\!\equiv\!\!& :\Exp{ \sum_{n\neq 0}{\h in} z^{-n} }:
q^{\rb{\h i0}} p^{{N+1\/2}-i},\cr
W^i(zp^{1-i\/2}) &\!\!\equiv\!\!& \sum_{1\leq j_1<\cdots<j_i\leq N}
:\La_{j_1}(z) \La_{j_2}(zp^{-1}) \cdots \La_{j_i}(zp^{1-i}):,
\end{eqnarray}
and $W^0(z)\equiv 1$.
Here $:*:$ stands for the usual bosonic normal ordering such that
the bosons ${\h in}$ with non-negative mode $n\geq 0$ are on the right.
Note that
\begin{equation}
W^N(zp^{1-N\/2}) =
\,:\!\La_1(z) \La_2(zp^{-1}) \cdots \La_N(zp^{1-N})\!:\, = 1.
\end{equation}
If we take the limit $t\rightarrow 1$ with $q$ fixed,
the above generators reduce to those of Ref.\ \cite{rFR}.
These generators are obtained by the following quantum Miura transformation:
\begin{equation}
:\!\(p^{D_z} - \La_1(z)\) \(p^{D_z} - \La_2(zp^{-1})\) \cdots
\(p^{D_z} - \La_N(zp^{1-N})\)\!:\,
= \sum_{i=0}^N (-1)^i W^i(zp^{1-i\/2}) p^{(N-i)D_z},
\label{e:qMiura}
\end{equation}
with $D_z \equiv z{\partial\/\partial z}$.
Remark that $p^{D_z}$ is the $p$-shift operator such that
$p^{D_z} f(z) = f(pz)$.
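For orientation, let us spell out the simplest case $N=2$
(an illustrative remark added here; it agrees with the $q$-Virasoro algebra
recapitulated in appendix A).
Since $W^0(z)=1$ and
$W^2(zp^{-{1\/2}})=\,:\!\La_1(z)\La_2(zp^{-1})\!:\,=1$,
the transformation \eq{e:qMiura} reduces to
$$
:\!\(p^{D_z} - \La_1(z)\)\(p^{D_z} - \La_2(zp^{-1})\)\!:\,
= p^{2D_z} - W^1(z)\,p^{D_z} + 1,
$$
so that $W^1(z)=\La_1(z)+\La_2(z)$ is the only independent generator.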
\subsection{Relations of {$q$-${\cal W}_N$} generators}
Next we give the algebra of the above {$q$-${\cal W}_N$} generators.
Let $W^i(z) = \sum_{n\in\bZ} W^i_n z^{-n}$.
Let us define a new normal ordering $\:*\:$
for the {$q$-${\cal W}_N$} generators as follows:
\begin{eqnarray}
&\!\!&\!\!\!\!\!\!
\: W^i(rw)W^j(w)\: \cr
&\!\!\equiv\!\!&
\oint{dz\/2\pi iz}\left\{
{ 1\/1-{rw/z}}f^{ij}\({w\/z}\)W^i(z)W^j(w)
+{{z/rw}\/1-{z/rw}} W^j(w)W^i(z)f^{ji}\({z\/w}\)
\right\}\cr
&\!\!=\!\!&
\sum_{n\in\bZ}\sum_{m\geq 0}\sum_{\ell=0}^m f^{ij}_\ell \left\{
r^{ m-\ell}\cdot W^i_{-m} W^j_{n+m}
+r^{\ell-m-1}\cdot W^j_{n-m-1}W^i_{m+1}
\right\}w^{-n},
\end{eqnarray}
with
\begin{eqnarray}
f^{ij}(x) &\!\!\equiv\!\!& \Exp{ \sum_{n>0}{1\/n}(1-q^n)(1-t^{-n})
{1-p^{in}\/1-p^n}{1-p^{(N-j)n}\/1-p^{Nn}} p^{{j-i\/2}n} x^n },\cr
f^{ji}(x) &\!\!\equiv\!\!& f^{ij}(x),\qquad (i\leq j),
\end{eqnarray}
and
$f^{ij}(x)\equiv\sum_{\ell\geq 0}f^{ij}_\ell x^\ell$.
Here $(1-x)^{-1}$ stands for $\sum_{n\geq 0}x^n$.
Remark that this normal ordering $\:*\:$ is a generalization
of the following usual one $(*)$ used in conformal field theory:
\begin{eqnarray}
\(AB\)(w) &\!\!\equiv\!\!& \oint_w {dz\/2\pi i} {1\/z-w} A(z) B(w)\cr
&\!\!\equiv\!\!& \oint_0 {dz\/2\pi iz}
\left\{{1\/1-{w/z}}A(z)B(w) + {{z/w}\/1-{z/w}}B(w)A(z)\right\}.
\end{eqnarray}
The relation of the {$q$-${\cal W}_N$} generators should be
written in this normal ordering.
Here we present some examples of them.
The relation of $W^1(z)$ and $W^j(z)$ for $j\geq 1$ is
\begin{eqnarray}
&\!\!\!\!\!\!&\!\!\!\!\!\!
f^{1j}\({w\/z}\)W^1(z)W^j(w) - W^j(w)W^1(z)f^{j1}\({z\/w}\) \\
&\!\!=\!\!&
-{(1-q)(1-t^{-1})\/1-p}\left\{
\,\delta\(p^{j+1\/2}{w\/z}\)W^{j+1}\(p^{1\/2}w\)
-\,\delta\(p^{-{j+1\/2}}{w\/z}\)W^{j+1}\(p^{-{1\/2}}w\)
\right\},\nonumber
\end{eqnarray}
with $\,\delta(x)\equiv \sum_{n\in\bZ} x^n$;
and that of $W^2(z)$ and $W^j(z)$ for $j\geq 2$ is
\begin{eqnarray}
&\!\!\!\!\!\!&\!\!\!\!\!\!
f^{2j}\({w\/z}\)W^2(z)W^j(w) - W^j(w)W^2(z)f^{j2}\({z\/w}\) \\
&\!\!=\!\!&
-{(1-q)(1-t^{-1})\/1-p}{(1-qp)(1-t^{-1}p)\/(1-p)(1-p^2)}
\left\{\,\delta\(p^{ {j\/2}+1}{w\/z}\)W^{j+2}(p w) \right. \cr
&\!\!\!\!\!\!&\hspace{66mm}
\left.-\,\delta\(p^{-{j\/2}-1}{w\/z}\)W^{j+2}(p^{-1}w) \right\}\cr
&\!\!\!\!\!\!&
-{(1-q)(1-t^{-1})\/1-p}
\left\{\,\delta\(p^{{j\/2}}{w\/z}\)
\: W^1(p^{-{1\/2}}z) W^{j+1}(p^{ {1\/2}} w)\: \right. \cr
&\!\!\!\!\!\!&\hspace{32mm}
\left.-\,\delta\(p^{-{j\/2}}{w\/z}\)
\: W^1(p^{ {1\/2}}z) W^{j+1}(p^{-{1\/2}} w)\: \right\}\cr
&\!\!\!\!\!\!&
+{(1-q)^2(1-t^{-1})^2\/(1-p)^2}
\left\{\,\delta\(p^{{j\/2}}{w\/z}\)
\({p^2\/1-p^2}W^{j+2}(pw)+{1\/1-p^j} W^{j+2}( w)\)\right.\cr
&\!\!\!\!\!\!&\hspace{35mm}
\left.-\,\delta\(p^{-{j\/2}}{w\/z}\)
\({p^j\/1-p^j}W^{j+2}( w)+{1\/1-p^2} W^{j+2}(p^{-1}w)\)\right\},
\nonumber
\end{eqnarray}
with $W^i(z) \equiv 0$ for $i>N$.
The main terms of
$$
f^{ij}\({w\/z}\)W^i(z)W^j(w) - W^j(w)W^i(z)f^{ji}\({z\/w}\)\qquad
(i\leq j)
$$
are
\begin{eqnarray}
&\!\!\!\!\!\!&
-{(1-q)(1-t^{-1})\/1-p}\sum_{k=1}^{{\rm min} \(i,N-j\)}
\prod_{\ell=1}^{k-1}{(1-qp^\ell)(1-t^{-1}p^\ell)\/(1-p^\ell)(1-p^{\ell+1})}\cr
&\!\!\!\!\!\!&\!\!\!\!\!\! \times
\left\{
\,\delta\(p^{{j-i\/2}+k}{w\/z}\)\:W^{i-k}(p^{-{k\/2}}z)W^{j+k}(p^{ {k\/2}}w)\:
-\,\delta\(p^{{i-j\/2}-k}{w\/z}\)\:W^{i-k}(p^{ {k\/2}}z)W^{j+k}(p^{-{k\/2}}w)\:
\right\}.
\nonumber
\end{eqnarray}
\newcommand\AC{
In these kinds of formulae we use
$ \Exp{-\sum_{n>0}x^n /n} = 1-x =
-x\Exp{-\sum_{n>0}x^{-n}/n}$.}
To obtain the above relations, the fundamental formula is
\begin{eqnarray}
f^{11}\({w\/z}\)\La_i(z)\La_i(w) &\!\!-\!\!& \La_i(w)\La_i(z)f^{11}\({z\/w}\)
= 0,\cr
f^{11}\({w\/z}\)\La_i(z)\La_j(w) &\!\!-\!\!& \La_j(w)\La_i(z)f^{11}\({z\/w}\)
\cr
&\!\!=\!\!&{(1-q)(1-t^{-1})\/1-p}\(\,\delta\({w\/z}\)-\,\delta\(p{w\/z}\)\)
:\La_i(z)\La_j(w):,
\nonumber
\end{eqnarray}
for $i<j$, here we use\footnote{\AC}
\begin{eqnarray}
&\!\!\!\!\!\!&\!\!\!\!\!\!
\Exp{ \sum_{n>0}{1\/n}(1-q^{ n})(1-t^{-n})x^{ n} }-
\Exp{ \sum_{n>0}{1\/n}(1-q^{-n})(1-t^{ n})x^{-n} }\cr
&\!\!\!\!\!\!&\!\!\!\!\!\! \hspace{55mm}
= {(1-q)(1-t^{-1})\/1-p}\(\,\delta(x)-\,\delta(px)\).
\end{eqnarray}
To calculate the general relations, the following formulae are useful:
\begin{eqnarray}
&\!\!\!\!\!\!&\!\!\!\!\!\!
\Exp{ \sum_{n>0}{1\/n}(1-q^{ n})(1-t^{-n})(1+r^{ n})x^{ n} }-
\Exp{ \sum_{n>0}{1\/n}(1-q^{-n})(1-t^{ n})(1+r^{-n})x^{-n} }\cr
&\!\!\!\!\!\!&\!\!\!\!\!\! \hspace{30mm}
={(1-q)(1-t^{-1})\/(1-p)(1-r)}
\left\{
(1-qr)(1-t^{-1}r) {\,\delta( x)-\,\delta(prx)\/1-pr} \right.\cr
&\!\!\!\!\!\!&\!\!\!\!\!\! \hspace{70mm}\left.
-(r-q )(r-t^{-1} ) {\,\delta(rx)-\,\delta(p x)\/r-p }
\right\},
\label{e:formula.3}
\end{eqnarray}
with $r\neq 0$;
For $r=1$ or $p^{\pm1}$,
the right hand side of \eq{e:formula.3} should be understood as the limit
$r\rightarrow 1$ or $p^{\pm1}$, respectively;
And
$f^{ij}(x) = \prod_{k=1}^i f^{1j}(p^{{i+1\/2}-k}x)$
for $i\leq j$.
\subsection{Example of $q$-$\cW_3$}
The $N=2$ case is {${\cal V}ir\hspace{-.03in}_{q,t}\,$} studied in Ref. \cite{rSKAO} (see appendix A).\\
Here we give an example when $N=3$.
The generators are
\begin{eqnarray}
W^1(z) &\!\!=\!\!& \La_1(z) + \La_2(z) + \La_3(z),\cr
W^2(z) &\!\!=\!\!&
\La_1(zp^{1\/2})\La_2(zp^{-{1\/2}}) +
\La_1(zp^{1\/2})\La_3(zp^{-{1\/2}}) +
\La_2(zp^{1\/2})\La_3(zp^{-{1\/2}}).
\end{eqnarray}
The relation of these generators is
\begin{eqnarray}
\&\hspace{-5mm}
f^{11}\({w\/z}\) W^1(z) W^1(w) - W^1(w) W^1(z) f^{11}\({z\/w}\) \cr
\&\hspace{10mm}
= -{(1-q)(1-t^{-1})\/1-p}\left\{
\,\delta\({w\/z}p \) W^2\(wp^{ 1\/2 }\)-
\,\delta\({w\/z}p^{-1}\) W^2\(wp^{-{1\/2}}\)\right\},\cr
\&\hspace{-5mm}
f^{12}\({w\/z}\) W^1(z) W^2(w) - W^2(w) W^1(z) f^{21}\({z\/w}\) \cr
\&\hspace{10mm}
= -{(1-q)(1-t^{-1})\/1-p}\left\{
\,\delta\({w\/z}p^{ {3\/2}}\)-
\,\delta\({w\/z}p^{-{3\/2}}\)\right\},\cr
\&\hspace{-5mm}
f^{22}\({w\/z}\) W^2(z) W^2(w) - W^2(w) W^2(z) f^{22}\({z\/w}\) \cr
\&\hspace{10mm}
= -{(1-q)(1-t^{-1})\/1-p}\left\{
\,\delta\({w\/z}p \) W^1\(zp^{-{1\/2}}\)-
\,\delta\({w\/z}p^{-1}\) W^1\(zp^{ 1\/2 }\)\right\},\nonumber
\end{eqnarray}
with
\begin{eqnarray}
f^{11}(x) &\!\!=\!\!&
\Exp{ \sum{1\/n}(1-q^n)(1-t^{-n}){1-p^{2n}\/1-p^{3n}}x^n }
=f^{22}(x),\cr
f^{12}(x) &\!\!=\!\!&
\Exp{ \sum{1\/n}(1-q^n)(1-t^{-n}){1-p^n\/1-p^{3n}}p^{n\/2}x^n }
=f^{21}(x).\nonumber
\end{eqnarray}
Note that there is no algebraic difference between $W^1$ and $W^2$.
\subsection{Highest weight module of {$q$-${\cal W}_N$} algebra}
Here we refer to the representation of the {$q$-${\cal W}_N$} algebra.
Let $|\la\>$ be the highest weight vector of the {$q$-${\cal W}_N$} algebra
which satisfies
$W^i_n|\la\>=0$ for $n>0$ and $i=1,2,\cdots,N-1$ and
$W^i_0|\la\>= \la^i |\la\>$ with $\la^i\in\bC$.
Let $M_\la$ be the Verma module over the {$q$-${\cal W}_N$} algebra
generated by $|\la\>$.
The dual module $M_\la^*$ is generated by $\<\la|$ such that
$\<\la|W^i_n=0$ for $n<0$ and $\<\la|W^i_0= \la^i\<\la|$.
The bilinear form $M_\la^*\otimes M_\la\rightarrow\bC$
is uniquely defined by $\<\la|\la\>=1$.
A singular vector $|\chi\>\in M_\la$ is defined by
$W^i_n|\chi\>=0$ for $n>0$ and
$W^i_0|\chi\>= (\la^i+N^i) |\chi\>$
with $N^i\in\bC$.
\section{Screening currents and singular vectors}
Next we turn to the screening currents, which generate a commutant of
the {$q$-${\cal W}_N$} algebra and construct the singular vectors.
\subsection{Screening currents}
Let us introduce root bosons
${\al in} \equiv {\h in}-{\h {i+1}n}$ and
${\Qal i} \equiv {\Qh i}-{\Qh {i+1}}$ for $i=1,2,\cdots,N-1$.
Then they satisfy
\begin{eqnarray}
[{\al in},{\al jm}] &\!\!=\!\!& -{1\/n}(1-q^n)(1-t^{-n})
\left\{(1+p^{-n})\,\delta_{i,j} - \,\delta_{i+1,j} -
p^{-n}\,\delta_{i-1,j}\right\}
\,\delta_{n+m,0},\cr
[{\al i0},{\Qal j}] &\!\!=\!\!& 2\,\delta_{i,j}-\,\delta_{i+1,j}-\,\delta_{i-1,j},
\end{eqnarray}
and
\begin{eqnarray}
[{\h in},{\al jm}] &\!\!=\!\!& {1\/n}(1-q^{-n})(1-t^{-n})
\left\{ q^n\,\delta_{i,j} - t^n\,\delta_{i,j+1}\right\}
\,\delta_{n+m,0},\cr
[{\h i0},{\Qal j}] &\!\!=\!\!& \,\delta_{i,j}-\,\delta_{i,j+1},\qquad
[{\al i0},{\Qh j}] = \,\delta_{i,j}-\,\delta_{i+1,j}.
\end{eqnarray}
Note that
$[{\h in} + p^n {\h {i+1}n}, {\al im}] = 0$.
By using these root bosons, we define screening currents as follows:
\begin{eqnarray}
S^i_+(z) &\!\!\equiv\!\!&
:\Exp{ \sum_{n\neq0}{{\al in}\/1-q^n} z^{-n} }:
e^{\rb{\Qal i}} z^{\rb{\al i0}},\cr
S^i_-(z) &\!\!\equiv\!\!&
:\Exp{ -\sum_{n\neq0}{{\al in}\/1-t^n} z^{-n} }:
e^{\rbi{\Qal i}} z^{\rbi{\al i0}}.
\end{eqnarray}
Then we have
\proclaim Proposition.
The screening currents satisfy
\begin{eqnarray}
\&\hspace{-7mm}
\[\,:\!\(p^{D_z} - \La_1(z)\)\(p^{D_z} - \La_2(zp^{-1})\)\cdots
\(p^{D_z} - \La_N(zp^{1-N})\)\!:\,,
S^i_\pm(w)\]\cr
\&
= (1-q^{\pm1})(1-t^{\mp1}){d\/d_{q\atop t}w}
\,:\!\(p^{D_z} - \La_1(z)\)\cdots\(p^{D_z} - \La_{i-1}(zp^{2-i})\) \cr
\&\hspace{5mm}\times
w \,\delta\({w\/z}p^{i-1}\) A^i_\pm(w) p^{D_z}
\(p^{D_z} - \La_{i+2}(zp^{-1-i})\)\cdots\(p^{D_z} - \La_N(zp^{1-N})\) \!:\,,
\nonumber
\end{eqnarray}
with
\begin{eqnarray}
A^i_+(w) &\!\!=\!\!&
:\Exp{ \sum_{n\neq0}{{\h in}-q^n{\h {i+1}n}\/1-q^n}w^{-n} }:\,
e^{\rb{\Qal i}} w^{\rb{\al i0}} q^{\rb{\h{i+1}0}} p^{{N+1\/2}-i-1},\cr
A^i_-(w) &\!\!=\!\!&
:\Exp{ -\sum_{n\neq0}{t^n{\h in}-{\h {i+1}n}\/1-t^n}w^{-n} }:\,
e^{\rbi{\Qal i}} w^{\rbi{\al i0}} q^{\rb{\h i0}} p^{{N+1\/2}-i}.\nonumber
\end{eqnarray}
\noindent
Here ${d\/d_\xi w} f(w) \equiv (f(w)-f(\xi w))/((1-\xi)w)$.
\noindent{\it Proof.\quad}
First, we have
\begin{eqnarray}
[\La_i(z),S^j_+(w)] &\!\!=\!\!&
(t-1)\,\delta_{i,j}\,\delta\({w\/z}q\):\La_j(z) S^j_+(w):\cr
&&+
(t^{-1}-1)\,\delta_{i,j+1}\,\delta\({w\/z}\):\La_{j+1}(z) S^j_+(w):,\cr
[\La_i(z),S^j_-(w)] &\!\!=\!\!&
(q^{-1}-1)\,\delta_{i,j}\,\delta\({w\/z}\):\La_j(z) S^j_-(w):\cr
&&+
(q-1)\,\delta_{i,j+1}\,\delta\({w\/z}t\):\La_{j+1}(z) S^j_-(w):.
\end{eqnarray}
Here we use the following formula:
\begin{equation}
q^{\mp1}
\Exp{ \pm\sum_{n>0}{1\/n}(1-q^n )x^n } -
\Exp{ \pm\sum_{n>0}{1\/n}(1-q^{-n})x^{-n} } =
(q^{\mp1}-1)\,\delta\(xq^{1\mp1\/2}\).
\end{equation}
The operator parts are
\begin{eqnarray}
:\La_j(wq) S^j_+(w): &\!\!=\!\!& A^j_+(wq) p,\qquad
:\La_{j+1}(w) S^j_+(w):\,= A^j_+(w),\cr
:\La_j(w) S^j_-(w): &\!\!=\!\!& A^j_-(w),\qquad
:\La_{j+1}(wt) S^j_-(w):\,= A^j_-(wt) p^{-1}.
\end{eqnarray}
Next,
\begin{eqnarray}
[\La_i(z)+\La_{i+1}(z), S^i_\pm(w)] &\!\!=\!\!& -(1-q^{\pm1})(1-t^{\mp1})
{d\/d_{q\atop t}w}\left\{w\,\delta\({w\/z}\)A^i_\pm(w)\right\},\cr
[:\La_i(z)\La_{i+1}(zp^{-1}):, S^i_\pm(w)] &\!\!=\!\!& 0.
\end{eqnarray}
Hence,
\begin{eqnarray}
\&
\[\,:\!\(p^{D_z} - \La_i(z)\)\(p^{D_z} - \La_{i+1}(zp^{-1})\)\!:\,,
S^i_\pm(w)\]\cr
\&\hspace{35mm}
= (1-q^{\pm1})(1-t^{\mp1})
{d\/d_{q\atop t}w}\left\{w\,\delta\({w\/z}\)A^i_\pm(w)\right\} p^{D_z}.
\end{eqnarray}
This gives us the proposition. \hfill\fbox{}
Therefore,
the screening currents $S^i_\pm(z)$
commute with any {$q$-${\cal W}_N$} generators up to total difference.
Thus we obtain
\proclaim Theorem.
Screening charges $\oint dz S^i_\pm(z)$
commute with any {$q$-${\cal W}_N$} generators.
\subsection{Singular vectors}
Let $\cF_\a$ be the boson Fock space
generated by the highest weight state $|\a\>$ such that
${\al in} |0\> = 0$ for $n\geq0$ and
$|\a\> \equiv \exp\{\sum_{i=1}^{N-1}\a^i{\QLa i}\} |0\>$
with ${\QLa i}\equiv\sum_{j=1}^i {\Qh j}$.
Note that ${\al i0}|\a\> = \a^i|\a\>$.
And this state $|\a\>$ is also the highest weight state of the {$q$-${\cal W}_N$} algebra.
We denote the negative mode part of $S^i_+(z)$ as
$(S^i_+(z))_- \equiv \Exp{ \sum_{n<0}{{\al in}\/1-q^n} z^{-n} }$.
Then we have
\proclaim Proposition.
For a set of non-negative integers $s_a$ and $r_a\geq r_{a+1}\geq0$,
($a=1,\cdots,N-1$),
let
\begin{eqnarray}
\a_{r,s}^{a} &\!\!=\!\!&\rb(1+r_a-r_{a-1})\rbi(1+s_a),\qquad
r_0 = 0,\cr
\widetilde\a_{r,s}^{a} &\!\!=\!\!&\rb(1-r_a+r_{a+1})\rbi(1+s_a),\qquad
r_N = 0.
\label{e:weightalpha}
\end{eqnarray}
Then the singular vectors $|\chi_{r,s}\>\in\cF_{\a_{r,s}}$
are realized by the screening currents as follows:
\begin{eqnarray}
\&\hspace{-5mm}
|\chi_{r,s}\> =
\oint\prod_{a=1}^{N-1}\prod_{j=1}^{r_a} {dx^a_j}\cdot
S^1_+(x^1_1)\cdots S^1_+(x^1_{r_1}) \cdots
S^{N-1}_+(x^{N-1}_1) \cdots S^{N-1}_+(x^{N-1}_{r_{N-1}})
|\widetilde\a_{r,s}\>\cr
&\!\!=\!\!&
\oint\prod_{a=1}^{N-1} \prod_{j=1}^{r_a} {dx^a_j\/x^a_j}\cdot
\prod_{a=1}^{N-1} \Pi\(\overline{x^a},px^{a+1}\) \Delta(x^a) C(x^a)
\prod_{j=1}^{r_a} (x^a_j)^{-s_a} (S^a_+(x^a_j))_-\cdot|\a_{r,s}\>\cr
&&\label{e:singular}
\end{eqnarray}
with $x^N=0$, $\overline x = 1/x$ and
\begin{eqnarray}
\Pi(x,y) &\!\!=\!\!& \prod_{ij}
\Exp{ \sum_{n>0}{1\/n}{1-t^n\/1-q^n} x_i^n y_j^n },\qquad
\Delta(x) =
\prod_{i\neq j}^r\Exp{ -\sum_{n>0}{1\/n}{1-t^n\/1-q^n}{x_j^n\/x_i^n}},\cr
C(x) &\!\!=\!\!& \prod_{i<j}^r
\Exp{\sum_{n>0}{1\/n}{1-t^n\/1-q^n}\({x_i^n\/x_j^n}-p^n{x_j^n\/x_i^n}\)}
\prod_{i=1}^r x_i^{(r+1-2i)\beta}
\label{e:Delta}
\end{eqnarray}
\noindent{\it Proof.\quad}
The operator product expansion of the screening currents is
\begin{eqnarray}
S^a_+(x) S^a_+(y) &\!\!=\!\!&
\Exp{-\sum_{n>0}{1\/n}{1-t^n\/1-q^n}(1+p^n){y^n\/x^n}}x^{2\beta}
:S^a_+(x) S^a_+(y):,\cr
S^a_+(x) S^{a\pm1}_+(y) &\!\!=\!\!&
\Exp{\sum_{n>0}{1\/n}{1-t^n\/1-q^n}p^{{1\pm1\/2}n}{y^n\/x^n}}x^{-\beta}
:S^a_+(x) S^{a\pm1}_+(y):.
\end{eqnarray}
Since
\begin{eqnarray}
S^a_+(x_1)\cdots S^a_+(x_r)
&\!\!=\!\!&
\prod_{i<j}
\Exp{-\sum_{n>0}{1\/n}{1-t^n\/1-q^n}(1+p^n){x_j^n\/x_i^n}}
\prod_{i=1}^r x_a^{2\beta(r-i)}
:\prod_{i=1}^r S^a_+(x_i):\cr
&\!\!=\!\!&
\Delta(x) C(x)\prod_{i=1}^r x_i^{(r-1)\beta}:\prod_{i=1}^r S^a_+(x_i):,
\end{eqnarray}
and
\begin{equation}
:\prod_{a=1}^{N-1}\prod_{i=1}^{r_a} S^a_+(x_i):|\widetilde\a_{r,s}\>
=\prod_{a=1}^{N-1}\prod_{i=1}^{r_a}
(x^a_i)^{(1-r_a+r_{a+1})\beta-(1+s_a)}(S^a_+(x_i))_- \cdot|\a_{r,s}\>,
\end{equation}
we obtain the proposition. \hfill\fbox{}
Note that $C(x)$ is a pseudo-constant under the $q$-shift, {\it i.e.,}
$q^{D_{x_i}}C(x)=C(x)$.
The expression in \eq{e:weightalpha}
is the same as that of $q=1$ case \cite{rAMOS}.
Remark that the singular vectors are also realized
by using the other screening currents $S_-^i(x)$
by replacing $t$ with $q^{-1}$ and $\rb$ with $-1/\rb$ in
\eq{e:singular}, that is to say:
\begin{eqnarray}
\&\hspace{-5mm}
|\chi_{r,s}^-\> =
\oint\prod_{a=1}^{N-1}\prod_{j=1}^{r_a} {dx^a_j}\cdot
S^1_-(x^1_1)\cdots S^1_-(x^1_{r_1}) \cdots
S^{N-1}_-(x^{N-1}_1) \cdots S^{N-1}_-(x^{N-1}_{r_{N-1}})
|\widetilde\a_{r,s}^-\>\cr
&\!\!=\!\!&
\oint\prod_{a=1}^{N-1} \prod_{j=1}^{r_a} {dx^a_j\/x^a_j}\cdot
\prod_{a=1}^{N-1} \Pi_-\(\overline{x^a},x^{a+1}\) \Delta_-(x^a) C_-(x^a)
\prod_{j=1}^{r_a} (x^a_j)^{-s_a} (S^a_-(x^a_j))_-\cdot|\a_{r,s}^-\>,\cr
&&\label{e:singularMinus}
\end{eqnarray}
where $\widetilde\a_{r,s}^-$, $\a_{r,s}^-$, $\Pi_-$, $\Delta_-$ and $C_-$
are obtained from those without $-$ suffix
by replacing $t$ with $q$ and $\rb$ with $-1/\rb$.
And $(S^a_-(z))_-$ is the negative mode part of $S^a_-(z)$.
\section{Vertex operator of fundamental representation}
Now we introduce a vertex operator.
Let $V(z)$ be the vertex operator defined as
\begin{equation}
V(z) \equiv
\,:\!\Exp{ -\sum_{n\neq0}{{\h 1n}\/1-q^n} p^{-{n\/2}}z^{-n} }\!:\,
e^{-\rb{\Qh 1}} z^{-\rb{\h 10}}.
\label{e:vertex}
\end{equation}
When $q=1$,
this $V(z)$ coincides with the vertex operator of the fundamental representation.
Note that the fundamental vertex $\La_1(z)$ can be realized
by $V(z)$ as
\begin{equation}
\La_1(zp^{1\/2}) = \,:\!V(zq^{-1}) V^{-1}(z)\!:\, p^{N-1\/2}.
\end{equation}
Hence, this vertex operator $V(z)$ can be considered as
one of the building blocks of the {$q$-${\cal W}_N$} generators.
We have
\proclaim Proposition.
The vertex operator $V(w)$ enjoys the following Miura-like relation:
\begin{eqnarray}
\&\hspace{-5mm}
:\!\(p^{D_z} - g^L\({w\/z }\) \La_1(z )\)\cdots
\(p^{D_z} - g^L\({w\/zp^{1-N}}\) \La_N(zp^{1-N})\)\!:
V(w)\cr
\&
- V(w)
:\!\(p^{D_z} - \La_1(z ) g^R\({z \/w}\)\)\cdots
\(p^{D_z} - \La_N(zp^{1-N}) g^R\({zp^{1-N}\/w}\)\)\!:
\cr
\& \hspace{5mm}
= p^{N-1\/2}(1-t^{-1})\,\delta\({w\/z}p^{1\/2}\)
:V(wq^{-1}) \(p^{D_z} - \La_2(zp^{-1})\)\cdots\(p^{D_z} - \La_N(zp^{1-N})\):,
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
g^L(x) &\!\!=\!\!&
\Exp{\sum_{n>0} {1\/n}(1-t^{ n}){1-p^n \/1-p^{ Nn}}p^{ {n\/2}}x^n }
t^{-{1\/N}},\cr
g^R(x) &\!\!=\!\!&
\Exp{\sum_{n>0} {1\/n}(1-t^{-n}){1-p^{-n}\/1-p^{-Nn}}p^{-{n\/2}}x^n }.
\end{eqnarray}
\noindent{\it Proof.\quad}
The fundamental relation is
\begin{equation}
g^L\({w\/z}\) \La_i(z) V(w) - V(w) \La_i(z) g^R\({z\/w}\) =
p^{{N-1\/2}} (t^{-1}-1) \,\delta_{i,1} \,\delta\({w\/z}p^{1\/2}\) V(wq^{-1}),
\end{equation}
{\it i.e.,}
\begin{eqnarray}
\(p^{D_z} - g^L\({w\/z}\) \La_i(z)\)V(w)
&\!\!=\!\!&
V(w)\(p^{D_z} - \La_i(z) g^R\({z\/w}\)\)\cr
&\!\!+\!\!&
p^{N-1\/2}(1-t^{-1})\,\delta_{i,1}\,\delta\({w\/z}p^{1\/2}\) V(wq^{-1}),
\label{e:LaV}
\end{eqnarray}
here we use
$:\!\La_1(wp^{1\/2}) V(w)\!:\, = V(wq^{-1}) p^{{N-1\/2}}$.
By using this relation \eq{e:LaV} and
$V(w) \La_i(z) g^R\({z/w}\) = :\!V(w)\La_i(z)\!:$,
we obtain the proposition.
\hfill\fbox{}
For example, when $N=3$,
the relation between the vertex operator $V(w)$ and the {$q$-${\cal W}_N$} generators is
\begin{eqnarray}
\&
g^L\({w\/z}\) W^1(z) V(w) - V(w) W^1(z) g^R\({z\/w}\)
=p (t^{-1}-1)\,\delta\({w\/z}p^{1\/2}\) V(wq^{-1}),\cr
\&
g^L\({w\/z}\) g^L\({w\/z}p\) W^2(zp^{-{1\/2}}) V(w) -
V(w) W^2(zp^{-{1\/2}}) g^R\({z\/w}\) g^R\({z\/w}p^{-1}\) \cr
\&\hspace{5mm}
=p (t^{-1}-1)\,\delta\({w\/z}p^{1\/2}\)
\(\,:\!V(wq^{-1})\La_2(wp^{-{1\/2}})\!:\,+
\,:\!V(wq^{-1})\La_3(wp^{-{1\/2}})\!:\,\).
\end{eqnarray}
\section{Macdonald polynomials}
Finally we present a relation with the Macdonald polynomials.
The excited states of the trigonometric Ruijsenaars model are
called Macdonald symmetric functions $P_\la(z)$;
they are defined as follows:
\begin{eqnarray}
&&\qquad
H P_\la(z_1,\cdots,z_M) =\varepsilon_\la P_\la(z_1,\cdots,z_M),\cr
&&
H = \sum_{i=1}^M \prod_{j\neq i}
{t z_i - z_j \/ z_i - z_j}
\cdot q^{D_{z_i}},\qquad
\varepsilon_\la = \sum_{i=1}^M t^{M-i} q^{\la_i},
\label{e:macDef}
\end{eqnarray}
where
$\la = (\la_1\geq\la_2\geq\cdots\geq\la_M\geq0)$ is a partition.
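For the reader's convenience, we quote the lowest examples,
which are standard results of Ref.\ \cite{rM} and are given here only
for illustration:
$P_{(1)} = m_{(1)}$ and, more generally, $P_{(1^r)} = e_r$
(the elementary symmetric function) for any $q$ and $t$, while
$$
P_{(2)}(z;q,t) = m_{(2)} + {(1+q)(1-t)\/1-qt}\, m_{(1,1)},
$$
with $m_\la$ the monomial symmetric functions.
For $M=2$ variables, one checks directly that
$H\,P_{(2)} = (tq^2+1)\,P_{(2)}$,
in accordance with $\varepsilon_{(2)} = t^{M-1}q^2+\sum_{i=2}^M t^{M-i}$.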
The Macdonald polynomials with general Young diagram $\la$
are realized as some kinds of correlation functions of
the screening currents and vertex operators of the {$q$-${\cal W}_N$} algebra
as follows:
\proclaim Theorem.
Macdonald polynomial $P_\la(z)$ with the Young diagram
$\la = \sum_{i=1}^{N-1} (s_i^{r_i})$, $r_i\geq r_{i+1}$
is written as
\begin{equation}
P_\la\(z_1,\cdots,z_M\)\propto
\<\a_{r,s}|\Exp{-\sum_{n>0}{{\h 1n}\/1-q^n}\sum_{i=1}^Mz_i^n}|\chi_{r,s}\>.
\end{equation}
Here $|\chi_{r,s}\>$ is a singular vector in \eq{e:singular}.
Note that the operator part of the above equation
is the positive mode part of the product of the vertex operators \eq{e:vertex}.
The Young diagram is as follows:
\generalYoung
\noindent{\it Proof.\quad}
First we have
\begin{equation}
\Exp{ -\sum_{n>0}{{\h 1n}\/1-q^n}\sum_{i=1}^M z_i^n} S^a_+(w) =
\Pi\(z,px^1\)^{\,\delta_{a,1}}
S^a_+(w) \Exp{ -\sum_{n>0}{{\h 1n}\/1-q^n}\sum_{i=1}^M z_i^n}.
\end{equation}
By \eq{e:singular},
the right hand side of the equation of this theorem is
\begin{equation}
\oint\prod_{a=1}^{N-1} \prod_{j=1}^{r_a} {dx^a_j\/x^a_j}\cdot
\Pi\(z,px^1\)
\prod_{a=1}^{N-1} \Pi\(\overline{x^a},px^{a+1}\) \Delta(x^a) C(x^a)
\prod_{j=1}^{r_a} (x^a_j)^{-s_a}.
\label{e:macOPE}
\end{equation}
If we replace $x^a$ with $(p^a x^a)^{-1}$ in \eq{e:macOPE},
then the integrand coincides with that of the integral formula for
Macdonald polynomials in Ref. \cite{rAOS}
except for the $C(x)$ parts.
For the integral representation of the Macdonald polynomial,
we need only the property with respect to a $q$-shift.
Since this $C(x)$ is a pseudo-constant under it, {\it i.e.,}
$q^{D_{x_i}}C(x)=C(x)$,
they are integral representations of the Macdonald polynomial
(see appendix B).
\hfill\fbox{}
Remark that the Macdonald polynomials with the dual Young diagram
$\la'= \(r_1^{s_1},r_2^{s_2},\cdots,r_{N-1}^{s_{N-1}}\)$
are realized by using the other screening currents $S_-^i(x)$
with $|\chi_{r,s}^-\>$ in \eq{e:singularMinus} as
\begin{equation}
P_{\la'}\(-z\)\propto
\<\a_{r,s}^-|\Exp{-\sum_{n>0}{{\h 1n}\/1-q^n}\sum_{i=1}^Mz_i^n}|\chi_{r,s}^-\>.
\end{equation}
\section{Conclusion and discussion}
\def\FeiginFrenkel{
After finishing this work,
we received the preprint
{\it ``Quantum $\cW$-algebras and elliptic algebras''}
by B.~Feigin and E.~Frenkel (q-alg/9508009).
They discuss topics similar to ours.
Although the algebra of screening currents is considered there,
the normal ordering of $q$-$\cW$ generators and
the relation with the Macdonald polynomial are not given.
}
We have derived a quantum $\cW_N$ algebra,
certain correlation functions of which are the Macdonald polynomials.
\footnote\FeiginFrenkel
Jack polynomials are realized in the following two ways
(see also \cite{rLV}):
one is as certain correlation functions of the $\cW_N$ algebra
\cite{rMY,rAMOS},
the other is as suitable combinations of
correlation functions of the $\widehat{sl_N}$ algebra \cite{rMC}.
The relations between Macdonald polynomials,
the {$q$-${\cal W}_N$} algebra and the $U_q(\widehat{sl_N})$ algebra
are interesting.
In the classical limit $\hbar\rightarrow 0$ with $q\equiv e^\hbar$,
the $q$-Miura transformation \eq{e:qMiura} reduces to
the classical one.
Since its right hand side is of order $\hbar^N$,
the left hand side must be of the same order.
For this to happen, the $\hbar$ expansion of the {$q$-${\cal W}_N$} generators must be nontrivial.
Moreover, the classical generators are obtained as a linear
combination of the {$q$-${\cal W}_N$} generators.
\vskip5mm
\noindent{\bf Acknowledgments:}
\noindent
We would like to thank
B.~Feigin, E.~Frenkel and Y.~Matsuo
for valuable discussions.
S.O. would like to thank members of YITP for their hospitality.
This work is supported in part by Grant-in-Aid for Scientific
Research from Ministry of Science and Culture.
\section*{Appendix A: Quantum Virasoro algebra}
\def\Lukyanov{
The same operator with $S^1_+(z)$ was considered in \cite{rPL}.
}
In this appendix, we give an example when $N=2$,
{\it i.e.,} {${\cal V}ir\hspace{-.03in}_{q,t}\,$} in \cite{rSKAO}.
The fundamental bosons ${\h 1n}$ and ${\Qh 1}$ satisfy
\begin{equation}
[{\h 1n},{\h 1m}] = -{1\/n}{(1-q^n)(1-t^{-n})\/1+p^n}\,\delta_{n+m,0},\qquad
[{\h 10},{\Qh 1}] = {1\/2}.
\end{equation}
The root bosons are
${\al 1n} = (1+p^{-n}) {\h 1n}$ and ${\Qal 1} = 2 {\Qh 1}$.
The $q$-Virasoro generator $W^1(z)$,
the screening currents $S^1_\pm(z)$ and
the vertex operator $V(z)$ are now\footnote\Lukyanov
\begin{eqnarray}
W^1(z) &\!\!=\!\!&
:\Exp{ \sum_{n\neq 0}{\h 1n} z^{-n}}:q^{ \rb{\h 10}} p^{ 1\/2 }+
:\Exp{-\sum_{n\neq 0}{\h 1n}p^{-n}z^{-n}}:q^{-\rb{\h 10}} p^{-{1\/2}},\cr
S^1_\pm(z) &\!\!=\!\!&
:\Exp{\pm\sum_{n\neq0}{1+p^{-n}\/1-r_\pm^n}{\h 1n} z^{-n}}:
e^{\pm2\rb^{\pm1}{\Qh 1}} z^{\pm2\rb^{\pm1}{\h 10}},\quad
r_+ = q,\quad r_- = t,\cr
V(z) &\!\!=\!\!&
\,:\Exp{ -\sum_{n\neq0}{{\h 1n}\/1-q^n} p^{-{n\/2}}z^{-n} }:\,
e^{-\rb{\Qh 1}} z^{-\rb{\h 10}}.
\end{eqnarray}
The relations of them are
\begin{eqnarray}
f^{11}\({w\/z}\) W^1(z) W^1(w) &\!\!-\!\!& W^1(w) W^1(z) f^{11}\({z\/w}\) \cr
&\!\!=\!\!& -{(1-q)(1-t^{-1})\/1-p}
\left\{\,\delta\({w\/z}p\)-\,\delta\({w\/z}p^{-1}\)\right\},\cr
f^{11}(x) &\!\!=\!\!& \Exp{\sum_{n>0}{1\/n}{(1-q^n)(1-t^{-n})\/1+p^n}x^n}
\end{eqnarray}
\begin{eqnarray}
\[\,W^1(z), S^1_\pm(w)\]
&\!\!=\!\!& -(1-q^{\pm1})(1-t^{\mp1})
{d\/d_{r_\pm}w}
\left\{w\,\delta\({w\/z}\)A^1_\pm(w)\right\},\cr
A^1_\pm(w) &\!\!=\!\!&
:\Exp{\sum_{n\neq0}{1+r_\mp^{\pm n}\/1-r_\pm^{\pm n}}{\h 1n}w^{-n} }:\,
e^{\pm2\rb^{\pm1}{\Qh 1}}w^{\pm2\rb^{\pm1}{\h 10}}
q^{\mp\rb{\h 10}}p^{\mp{1\/2}},\nonumber
\end{eqnarray}
\begin{eqnarray}
g^L\({w\/z}\) W^1(z) V(w) &\!\!-\!\!& V(w) W^1(z) g^R\({z\/w}\)
= p^{1\/2}(t^{-1}-1) \,\delta\({w\/z}p^{1\/2}\) V(wq^{-1}),\cr
g^{L\atop R}(x) &\!\!=\!\!&
\Exp{\sum_{n>0} {1\/n}{1-t^{\pm n}\/1+p^{\pm n}}p^{\pm{n\/2}}x^n }
t^{-{1\pm1\/4}}.
\end{eqnarray}
For non-negative integers $s$ and $r\geq0$,
the singular vectors $|\chi_{rs}\>\in\cF_{\a_{rs}}$ are
\begin{eqnarray}
|\chi_{r,s}\> &\!\!=\!\!&
\oint\prod_{j=1}^{r} {dx_j}\cdot
S^1_+(x_1)\cdots S^1_+(x_{r}) |\a_{-r,s}\>\cr
&\!\!=\!\!&
\oint\prod_{j=1}^{r} {dx_j\/x_j}\cdot
\Delta(x) C(x)\prod_{j=1}^{r} (x_j)^{-s} (S_+(x_j))_-\cdot|\a_{r,s}\>
\end{eqnarray}
with
$\a_{r,s}^1 =\rb(1+r)\rbi(1+s)$.
$\Delta(x)$ and $C(x)$ are the same as in \eq{e:Delta}.
\section*{Appendix B: Integral formula for the Macdonald polynomials}
Finally, we recapitulate the integral representation
of the Macdonald polynomials \cite{rAOS}
(\cite{rMY2,rAMOS} in the $q=1$ case).
Let us denote the Macdonald polynomial defined by \eq{e:macDef} as
$P_\la(z;q,t)$ or $P_\la(z_1,\cdots,z_M;q,t)$.
\proclaim Proposition.
The Macdonald polynomials with the Young diagram
$\la = \sum_{i=1}^{N-1} \(s_i^{r_i}\)$ or with its dual
$\la'= \(r_1^{s_1},r_2^{s_2},\cdots,r_{N-1}^{s_{N-1}}\)$ are
realized as follows:
\begin{eqnarray}
P_\la(z;q,t)&\!\!\propto\!\!&
\oint\prod_{a=1}^{N-1} \prod_{j=1}^{r_a} {dx^a_j\/x^a_j}\cdot
\Pi\(z,\overline{x^1}\)
\prod_{a=1}^{N-1} \Pi\(x^a,\overline{x^{a+1}}\) \Delta(x^a) C(x^a)
\prod_{j=1}^{r_a} (x^a_j)^{s_a},\cr
P_{\la'}(z;t,q)&\!\!\propto\!\!&
\oint\prod_{a=1}^{N-1} \prod_{j=1}^{r_a} {dx^a_j\/x^a_j}\cdot
\widetilde\Pi\(z,\overline{x^1}\)
\prod_{a=1}^{N-1} \Pi\(x^a,\overline{x^{a+1}}\) \Delta(x^a) C(x^a)
\prod_{j=1}^{r_a} (x^a_j)^{s_a},\nonumber
\end{eqnarray}
with an arbitrary pseudo-constant $C(x)$ such that $q^{D_{x_i}}C(x)=C(x)$.
Here $\widetilde\Pi(x,y)\equiv \prod_{ij}(1+x_i y_j)$.
$\Pi$ and $\Delta$ are in \eq{e:Delta}.
\noindent{\it Proof.\quad}
This proposition is proved by using
two transformations in the following lemmas iteratively.
The first transformation adds a rectangle to the Young diagram
and the second one increases the number of variables.
\hfill\fbox{}
\proclaim Lemma 1. Galilean transformation.
$($eq.\ $(VI.4.17)$ in $\cite{rM})$
\begin{equation}
P_{\la+(s^r)}(x_1,\cdots,x_r) = P_\la(x_1,\cdots,x_r)\prod_{i=1}^r x_i^s.
\end{equation}
This transformation adds a rectangle Young diagram to the original one:
$$
\Galilei
$$
\proclaim Lemma 2. Particle number changing transformation.
\begin{eqnarray}
P_\la(x_1,\cdots,x_N;q,t) &\!\!\propto\!\!&
\oint \prod_{j=1}^M {dy_j\/y_j}
\Pi(x,\overline y)\Delta(y) C(y) P_\la(y_1,\cdots,y_M;q,t),\cr
P_{\la'}(x_1,\cdots,x_N;t,q) &\!\!\propto\!\!&
\oint \prod_{j=1}^M {dy_j\/y_j}
\widetilde\Pi(x,\overline y)\Delta(y) C(y) P_\la(y_1,\cdots,y_M;q,t),\nonumber
\end{eqnarray}
here $C(y)$ is an arbitrary pseudo-constant $q^{D_{y_i}}C(y)=C(y)$
and $\la'$ is a dual Young diagram of $\la$.
\noindent{\it Proof.\quad}
Let us define scalar products $\<*,*\>$ and another one $\<*,*\>'_N$
as follows:
\begin{eqnarray}
\<f,g\> &\!\!\equiv\!\!&
\oint\prod_{n>0} {dp_n\/2\pi i p_n}\,{f(\overline p)}\,g(p),\cr
\<f,g\>'_N &\!\!\equiv\!\!& {1\/N!}
\oint\prod_{j=1}^N {dx_j\/2\pi i x_j}\Delta(x)\,{f(\overline x)}\,g(x),
\end{eqnarray}
for the symmetric functions $f$ and $g$
with $p_n\equiv\sum_{i=1}^N x_i^n$,
$\overline{p_n}\equiv n{1-q^n\/1-t^n}{\partial\/\partial p_n}$ and
$\overline{x_j}\equiv{1/x_j}$.
Here we must treat the power-sums $p_n$ as formally independent variables,
{\it i.e.},
${\partial\/\partial p_n}\, p_m = \delta_{n,m}$ for all $n,m>0$.
Then (eq.\ (VI.4.13) and (VI.5.4) in \cite{rM})
\begin{eqnarray}
\Pi(x,y) &\!\!=\!\!&
\sum_\la P_\la(x;q,t) P_\la(y;q,t) \<P_\la,P_\la\>^{-1},\cr
\widetilde\Pi(x,y) &\!\!=\!\!&
\sum_\la P_\la(x;q,t) P_{\la'}(y;t,q).
\label{e:completeness}\end{eqnarray}
Since the Macdonald operator is self-adjoint
with respect to the other scalar product $\<*,*\>'_N$, that is to say
$\<H\,f,g\>'_N = \<f,H\,g\>'_N$ (eq.\ (VI.9.4) in \cite{rM}),
the Macdonald polynomials are orthogonal for this product
$\<P_\la,C\,P_\mu\>'_N \propto \delta_{\la,\mu}$
with an arbitrary pseudo-constant $C$.
The proposition follows from the completeness \eq{e:completeness} and
the orthogonality of $P_\la$'s.
\hfill\fbox{}
Remark that the above lemma 2 is also proved directly by using
the power-sum representation of the Macdonald operator \cite{rAMOS}.
Since this representation is also important for analyzing the algebraic properties of
the Macdonald polynomials, we review it here.
\proclaim Proposition.
The Macdonald operator $H(x_1,\cdots,x_N)$ can be written
in terms of the power sums $p_n \equiv \sum_{i=1}^N x_i^n$ as follows:
\begin{equation}
H = {t^N\/t-1}\oint{d\xi\/2\pi i \xi}
\Exp{\sum_{n>0}{1-t^{-n}\/n}p_n \xi^n}
\Exp{\sum_{n>0}(q^n-1){\partial\/\partial p_n} \xi^{-n}}
-{1\/t-1}.
\end{equation}
\noindent{\it Proof.\quad}
Since $q^{D_{x_i}} p_n = \((q^n-1)x_i^n+p_n\) q^{D_{x_i}}$,
we have
\begin{equation}
q^{D_{x_i}} =\,:\!\Exp{\sum_{n>0}(q^n-1)x_i^n{\partial\/\partial p_n}}\!:\,
= \oint{d\xi\/2\pi i \xi} \sum_{n\geq0} x_i^n \xi^n \cdot
\Exp{\sum_{n>0}(q^n-1){\partial\/\partial p_n} \xi^{-n}},
\end{equation}
here $:*:$ stands for the normal ordering such that
the differential operators ${\partial\/\partial p_n}$ are on the right.
It follows from eq.\ (III.2.9) and (III.2.10) in \cite{rM} that
\begin{equation}
\sum_i\prod_{j\neq i} {tx_i-x_j\/x_i-x_j}
\sum_{n\geq 0}x_i^n \xi^n
=
{t^N\/t-1}\Exp{\sum_{n>0}{1-t^{-n}\/n}p_n \xi^n}
-{1\/t-1}.
\end{equation}
This gives us the proposition. \hfill\fbox{}
Let
$\widetilde H_N(x_1,\cdots,x_N) \equiv t^{-N}\((t-1)H(x_1,\cdots,x_N)+1\)$,
then
\begin{equation}
\widetilde H_N(x_1,\cdots,x_N) \Pi(x,y) =
\widetilde H_M(y_1,\cdots,y_M) \Pi(x,y).
\end{equation}
Combined with the self-adjointness of $H$ with respect to the other scalar product,
this yields lemma 2 again.
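The eigenvalue problem \eq{e:macDef} can also be checked symbolically in
small cases. The following sketch, added here purely as an illustration
(it is not part of the derivation and assumes the Python library SymPy),
verifies that $P_{(2)}=m_{(2)}+{(1+q)(1-t)\/1-qt}\,m_{(1,1)}$ satisfies
$H\,P_{(2)}=(tq^2+1)\,P_{(2)}$ for $M=2$ variables:
\begin{verbatim}
import sympy as sp

q, t, z1, z2 = sp.symbols('q t z1 z2')
z, M = [z1, z2], 2

def macdonald_H(f):
    # H = sum_i prod_{j != i} (t z_i - z_j)/(z_i - z_j) * q^{D_{z_i}},
    # where q^{D_{z_i}} is the q-shift z_i -> q z_i.
    out = 0
    for i in range(M):
        coeff = sp.prod([(t*z[i] - z[j])/(z[i] - z[j])
                         for j in range(M) if j != i])
        out += coeff * f.subs(z[i], q*z[i])
    return out

# P_{(2)} = m_{(2)} + (1+q)(1-t)/(1-q t) m_{(1,1)}
P2 = z1**2 + z2**2 + (1 + q)*(1 - t)/(1 - q*t)*z1*z2
eps = t*q**2 + 1   # eigenvalue sum_i t^{M-i} q^{lambda_i}, lambda = (2,0)
assert sp.cancel(macdonald_H(P2) - eps*P2) == 0
\end{verbatim}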
\section*{Results}
\subsection*{Restricted Boltzmann Machines}
\subsubsection{Definition}
A Restricted Boltzmann Machine (RBM) is a joint probabilistic model for sequences and representations, see Fig.~1C. It is formally defined on a bipartite, two-layer graph (Fig.~1B). Protein sequences ${\bf v} = (v_1,v_2,...,v_N)$ are displayed on the Visible layer, and representations ${\bf h} = (h_1,h_2,...,h_M)$ on the Hidden layer. Each visible unit takes one out of $21$ values (20 amino acids + 1 alignment gap). Hidden-layer unit values $h_\mu$ are real. The joint probability distribution of ${\bf v}, {\bf h}$ is
\begin{equation}\label{Energy}
P({\bf v},{\bf h}) \propto \exp \bigg( \sum_{i=1}^N g_i(v_i) - \sum_{\mu =1}^M \mathcal{U}_\mu(h_\mu) + \sum_{i,\mu} h_\mu\, w_{i\mu} (v_i) \bigg) \ ,
\end{equation}
up to a normalization constant. Here, the weight matrix $w_{i\mu}$ couples the visible and the hidden layers, and $g_i(v_i)$ and $\mathcal{U}_\mu(h_\mu)$ are local potentials biasing the values of, respectively, the visible and the hidden variables (Figs.~1B,D).
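As a concrete illustration of Eqn.~(1) (a minimal sketch added for clarity; the array layout and names are ours, and quadratic potentials $\mathcal{U}_\mu$ are used for simplicity), the unnormalized log-probability of a joint configuration $({\bf v},{\bf h})$ can be evaluated as:
\begin{verbatim}
import numpy as np

def log_p_unnormalized(v, h, g, w, gamma, theta):
    # log P(v,h) up to the normalization constant, Eqn. (1).
    # v: (N,) integers in 0..20; h: (M,) real activities;
    # g: (N,21) local potentials; w: (N,M,21) weights;
    # quadratic U_mu(h) = gamma_mu*h**2/2 + theta_mu*h.
    N = len(v)
    fields = g[np.arange(N), v].sum()        # sum_i g_i(v_i)
    I = w[np.arange(N), :, v].sum(axis=0)    # inputs I_mu(v), cf. Eqn. (2)
    U = 0.5*gamma*h**2 + theta*h             # U_mu(h_mu)
    return fields - U.sum() + (h*I).sum()
\end{verbatim}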
\subsubsection{From sequence to representation, and back}
Given a sequence $\bf v$ on the visible layer, the hidden unit $\mu$ receives the input
\begin{equation}\label{inputmu}
I_\mu ({\bf v}) = \sum_{i} w_{i\mu} (v_i) \ .
\end{equation}
This expression is analogous to the score of a sequence with a position-specific weight matrix. \rev{Large positive or negative} $I_\mu$ \rev{values signal a good match between the sequence and, respectively, the positive and the negative components of the weights attached to unit} $\mu$, \rev{whereas small} $|I_\mu|$ \rev{correspond to a bad match.}
The input $I_\mu$ determines, in turn, the conditional probability of the activity $h_\mu$ of the hidden unit,
\begin{equation}
\label{condmu}
P( h_\mu | {\bf v}) \propto \exp \big(-\mathcal{U}_\mu(h_\mu) + h_\mu\, I_\mu({\bf v}) \big) \ ,
\end{equation}
up to a normalization constant. The nature of the potential ${\cal U}$ is crucial to determine how the average activity $h$ varies with the input $I$, see Fig.~1E and below.
In turn, given a representation (set of activities) $\bf h$ on the hidden layer, the residues on site $i$ are distributed according to
\begin{equation}
\label{condv}
P( v_i | {\bf h}) \propto \exp \bigg( g_i(v_i) + \sum _\mu h_\mu\, w_{i\mu}(v_i) \bigg)\ .
\end{equation}
Hidden units with large activities $h_\mu$ strongly bias this probability, and favor values of $v_i$ corresponding to large weights $w_{i\mu}(v_i)$.
Use of Eqn.~(3) allows us to sample the representation space given a sequence, while Eqn.~(4) defines the sampling of sequences given a representation, see both directions in Fig.~1C. Iterating this process generates high-probability representations, which, in turn, produce very likely sequences, and so on.
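This back-and-forth translates into a simple block-Gibbs sampler. Below is a minimal sketch (same array conventions as the previous snippet; hidden units are taken Gaussian for concreteness, so that Eqn.~(3) is a normal distribution, whereas dReLU units would be sampled from a truncated-Gaussian mixture):
\begin{verbatim}
import numpy as np

def sample_h_given_v(v, w, gamma, theta, rng):
    # Eqn. (3), quadratic potentials: h_mu | v is Gaussian with
    # mean (I_mu - theta_mu)/gamma_mu and variance 1/gamma_mu.
    I = w[np.arange(len(v)), :, v].sum(axis=0)
    return rng.normal((I - theta)/gamma, 1.0/np.sqrt(gamma))

def sample_v_given_h(h, g, w, rng):
    # Eqn. (4): sites are conditionally independent categoricals.
    logits = g + np.einsum('m,ima->ia', h, w)              # (N, 21)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(21, p=pi) for pi in p])

# One iteration of the loop of Fig. 1C:
#   h = sample_h_given_v(v, w, gamma, theta, rng)
#   v = sample_v_given_h(h, g, w, rng)
\end{verbatim}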
\subsubsection{Probability of a sequence}
\rev{The probability of a sequence}, $P({\bf v})$, \rev{is obtained by summing (integrating)} $P({\bf v},{\bf h})$ \rev{over all its possible representations} $\bf h$.
\begin{equation} \label{marginal}
P({\bf v}) = \int \prod_{\mu=1}^M dh_\mu P({\bf v}, {\bf h}) \propto \exp \bigg[ \sum_{i=1}^N g_i(v_i) + \sum_{\mu=1}^M \Gamma_\mu\big (I_\mu ({\bf v})\big) \bigg] \ ,
\end{equation}
\rev{where} $\Gamma_\mu(I) = \log \int dh \, e^{-{\cal U}_\mu(h) + h \,I} $ \rev{is the cumulant-generating function associated to the potential} ${\cal U}_\mu$ \rev{and is a function of the input to hidden unit} $\mu$, see Eqn.~(\ref{inputmu}).
\rev{For quadratic potentials} ${\cal U}_\mu(h)=\frac {\gamma_\mu}{ 2} h^2 + \theta_\mu h$ (Fig.~1E), \rev{the conditional probability $P( h_\mu | {\bf v})$ is Gaussian, and the RBM is said to be Gaussian. The cumulant-generating functions }$\Gamma_\mu(I) = \frac{1}{2 \gamma_\mu}(I-\theta_\mu)^2$ \rev{are quadratic, and their sum in }Eqn.~(\ref{marginal}) \rev{gives rise to effective pairwise couplings between the visible units,} $J_{ij} (v_i,v_j) =\sum _\mu \frac{1}{\gamma_\mu} w_{i\mu} (v_i) w_{j\mu}(v_j)$. \rev{Hence, a Gaussian RBM is equivalent to a Hopfield-Potts model} \cite{cocco2013principal}, \rev{where the number} $M$ \rev{of hidden units plays the role of the number of Hopfield-Potts `patterns'. }
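In terms of the arrays of the snippets above, these effective Hopfield-Potts couplings follow from a single contraction (illustrative):
\begin{verbatim}
import numpy as np
# J[i,j,a,b] = sum_mu w[i,mu,a]*w[j,mu,b]/gamma[mu]
J = np.einsum('ima,jmb,m->ijab', w, w, 1.0/gamma)
\end{verbatim}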
\rev{Non-quadratic potentials }${\cal U}_\mu$, \rev{and, hence, non-quadratic} $\Gamma(I)$, \rev{introduce couplings to} \textit{all orders} \rev{between the visible units, all generated from the weights} $w_{i\mu}$. \rev{RBM thus offer a practical way to go beyond pairwise models, and express complex, high-order dependencies between residues, based on the inference of a limited number of interaction parameters (controlled by }$M$\rev{). In practice, for each hidden unit, we consider the class of 4-parameter potentials, }
\begin{equation}
\mathcal{U}_\mu(h) = \frac12 \gamma_{\mu,+} h_+^2 + \frac12 \gamma_{\mu,-} h_-^2 + \theta_{\mu,+} h_+ + \theta_{\mu,-} h_-\ ,\quad \text{where} \quad h_+ = \max(h,0)\ , \quad h_- = \min(h,0) \ ,
\end{equation}
\rev{hereafter called double Rectified Linear Units (dReLU) potentials (Fig.~1E). Varying the parameters allows us to span a wide class of behaviors, including quadratic potentials, double-well potentials (leading to bimodal distributions for }$h_\mu$\rev{) and hard constraints (e.g. preventing }$h_\mu$ \rev{from being negative).}
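For these dReLU potentials, the integral defining the cumulant-generating function splits into two truncated Gaussians and admits a closed form in terms of the standard normal cumulative distribution $\Phi$. A numerically naive transcription (added for illustration; a practical implementation must protect the exponentials against overflow):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def Gamma_dReLU(I, gp, gm, tp, tm):
    # Gamma(I) = log int dh exp(-U(h) + h*I) for the dReLU potential,
    # with (gp, gm, tp, tm) = (gamma_+, gamma_-, theta_+, theta_-).
    Zp = (np.sqrt(2*np.pi/gp)*np.exp((I - tp)**2/(2*gp))
          * norm.cdf((I - tp)/np.sqrt(gp)))     # h > 0 branch
    Zm = (np.sqrt(2*np.pi/gm)*np.exp((I - tm)**2/(2*gm))
          * norm.cdf(-(I - tm)/np.sqrt(gm)))    # h < 0 branch
    return np.log(Zp + Zm)
\end{verbatim}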
\rev{RBM can thus be thought of both as a framework to extract representations from sequences through Eqn.~(3), and as a way to model complex interactions between residues in sequences through Eqn.~(5). They constitute a natural candidate to unify (and improve) PCA-based and Direct-Coupling-based approaches to protein modeling.}
\subsubsection{Learning}
The weights $w_{i\mu}$ and the defining parameters of the potentials $g_i$ and ${\cal U}_\mu$ are learned by maximizing the average log-probability $\esp{\log P(\bf v) }_{MSA}$ of the sequences $\bf v$ in the Multiple Sequence Alignment (MSA). In practice, estimating the gradients of the average log-probability with respect to these parameters requires sampling from the model distribution $P({\bf v})$, which is done through Monte Carlo simulation of the RBM, see Methods.
We also introduce penalty terms over the weights $w_{i\mu}(v)$ (and the local potentials $g_i(v)$ on visible units) to avoid overfitting, and to promote sparse weights. Sparsity facilitates the biological interpretation of weights and thus emphasizes the correspondence between representation and phenotypic spaces (Fig.~1C). Crucially, imposing sparsity also forces the RBM to learn a so-called compositional representation, in which each sequence is characterized by a subset of strongly activated hidden units, of size large compared to 1 but small compared to $M$ \cite{tubiana2017emergence}. All technical details about the learning procedure are reported in Methods.
In the next sections, we present results for selected values of the number of hidden units and of the regularization penalty. The values of these (hyper-)parameters are justified afterwards.
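Schematically, one persistent-contrastive-divergence update of the weights takes the following form (a sketch of the procedure outlined above, with hypothetical helper names: the batched samplers, their conditional means and one_hot extend the single-sequence snippets given earlier; the exact protocol, including the updates of the potential parameters, is described in Methods):
\begin{verbatim}
import numpy as np

def pcd_step(batch, v_chain, w, g, gamma, theta, rng,
             lr=1e-3, l1=0.01, k=10):
    # One ascent step on <log P(v)>_MSA with an L1 penalty on w (sketch).
    # batch, v_chain: (B,N) integer arrays; one_hot(.) returns (B,N,21).
    for _ in range(k):                     # advance the persistent chains
        h = sample_h_batch(v_chain, w, gamma, theta, rng)
        v_chain = sample_v_batch(h, g, w, rng)
    h_data = mean_h_batch(batch, w, gamma, theta)    # <h_mu|v>, Eqn. (3)
    h_model = mean_h_batch(v_chain, w, gamma, theta)
    grad_w = (np.einsum('bm,bia->ima', h_data, one_hot(batch))/len(batch)
            - np.einsum('bm,bia->ima', h_model, one_hot(v_chain))/len(v_chain))
    w += lr*(grad_w - l1*np.sign(w))       # sparsifying L1 penalty
    return v_chain
\end{verbatim}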
\subsection*{Kunitz domain}
\noindent
\subsubsection*{Description}
The majority of natural proteins are obtained by concatenating functional building blocks, called protein domains. The Kunitz domain, with a length of about 50-60 residues (protein family PF00014 \cite{finn2013pfam}), is present in several genes and its main function is to inhibit serine proteases such as trypsin. Kunitz domains play a key role in the regulation of many important processes in the body such as tissue growth and remodeling, inflammation, blood coagulation and fibrinolysis. They are implicated in several diseases such as tumor growth, Alzheimer's disease, cardiovascular and inflammatory diseases and, therefore, have been largely studied and shown to have a large potential in drug design \cite{shigetomi2010anti,bajaj2001structure}.
Some examples of Kunitz domain-containing proteins include the Basic Pancreatic Trypsin Inhibitor (BPTI, 1 Kunitz domain), the Bikunin (2 domains) \cite{fries2000bikunin}, Hepatocyte growth factor activator inhibitor (HAI, 2 domains) and tissue factor pathway inhibitor (TFPI, 3 domains) \cite{shigetomi2010anti,bajaj2001structure}.
Figure~2A shows the MSA sequence logo and the secondary structure of the Kunitz domain. It is characterized by two $\alpha$ helices and two $\beta$ strands; Cysteine-Cysteine disulfide bridges largely contribute to the thermodynamic stability of the domain, as frequently observed in small proteins. The BPTI structure was the first one ever resolved \cite{ascenzi2003bovine}, and is often used to benchmark folding predictions based on simulations \cite{levitt1975computer} and coevolutionary approaches \cite{morcos2011direct,hopf2012three,kamisetty2013assessing,cocco2013principal,haldane2018coevolutionary}. We train an RBM with $M=100$ dReLU hidden units on the MSA of PF00014, constituted by $B=8062$ sequences with $N=53$ consensus sites.
\subsubsection*{Inferred weights and interpretations.}
Figure~2B shows the weights $w_{i\mu}(v)$ attached to 5 selected hidden units. Each logo identifies the amino-acid motifs in the sequences $\bf v$ giving rise to large (positive or negative) inputs $I$ onto the associated hidden unit, see Eqn.~(2).
Weight 1 in Fig.~2B has large components on sites 45 and 49, in contact in the final $\alpha_2$ helix (Figs.~2A\&D). The distribution of the inputs $I_1$ partitions the MSA into three subfamilies (Fig.~2C, top panel, dark blue histogram). The two peaks at $I_1\simeq -2.5$ and $I_1\simeq 1.5$ identify sequences in which the contact is due to an electrostatic interaction with, respectively, $(+,-)$ and $(-,+)$ charged amino acids on sites 45 and 49; the other peak at $I_1\simeq 0$ identifies sequences realizing the contact differently, e.g. with an aromatic amino acid on site 45. Weight 1 also shows a weaker electrostatic component on site 53 in Fig.~2B; the 4-site separation between sites 45--49--53 fits well with the average helix turn of 3.6 amino acids (Fig.~2D).
Weight 2 focuses on the contact between residues 11-35, realized in most sequences by a C-C disulfide bridge (Fig.~2B and negative $I_2$ peak in Fig.~2C, top). A minority of sequences in the MSA, corresponding to $I_2>0$ and mostly coming from nematode organisms (Appendix 1, Fig.~19), do not show the C-C bridge. A subset of these sequences strongly and positively activates hidden unit~3 (Appendix 1, Fig.~19 and $I_3>0$ peak in Fig.~2C). Positive components in the weight~3 logo suggest that these proteins stabilize their structure through electrostatic interactions between sites 10 ($-$ charge) and 33-36 ($+$ charges both), see Figs.~2B\&D, to compensate for the absence of a C-C bridge on the neighbouring sites 11-35.
Weight 4 describes a feature mostly localized on the loop preceding the $\beta_1$-$\beta_2$ strands (sites 7 to 16), see Figs.~2B\&D. Structural studies of the trypsin-trypsin inhibitor complex have shown that this loop binds to proteases \cite{marquart1983geometry}; site 12 is in contact with the active site of the protease and is therefore key to the inhibitory activity of the Kunitz domain. The two amino acids (R, K) having a large positive contribution to weight 4 in position 12 are basic and bind to negatively charged residues (D, E) on the active site of trypsin-like serine proteases. While several Kunitz domains with known trypsin inhibitory activity, such as BPTI, TFPI, TFPI-2,... give rise to large and positive inputs $I_4$, Kunitz domains with no trypsin/chymotrypsin inhibition activity, e.g. associated to COL7A1 and COL6A3 genes \cite{chen2001carboxyl,kohfeldt1996conversion}, correspond to negative or vanishing values of $I_4$. Hence, hidden unit 4 possibly separates the Kunitz domains having trypsin-like protease inhibitory activity from the others.
This interpretation is also in agreement with mutagenesis experiments carried out on sites 7 to 16 to test the inhibitory effects of Kunitz domains BPTI, HAI-1, and TFPI against trypsin-like proteases \cite{bajaj2001structure,kirchhofer2003tissue,shigetomi2010anti,grzesiak2000inhibition,chand2004structure}. In \cite{kirchhofer2003tissue} it was shown that mutation R12A on the first domain (out of two) of HAI-1 destroyed its inhibitory activity; a similar effect was observed with R12X, with X a non-basic residue, in the first two domains (out of three) of TFPI as discussed in \cite{bajaj2001structure}. The affinity between human serine proteases and the mutants G9F, G9S, G9P of bovine BPTI was shown to decrease in \cite{grzesiak2000inhibition}. Conversely, in \cite{kohfeldt1996conversion} it was shown that the set of mutations P10R, D13A, F14R could convert the COL6A3 domain into a Trypsin inhibitor. All these results are in agreement with the above interpretation and the logo of weight 4. Note that, though several sequences have large $I_4$ (top histogram in Fig.~2C), many correspond to small or negative values. This may be explained by the facts that (i) many of the Kunitz domains analyzed are present in two or more copies, and as such, are not all required to strongly bind trypsin \cite{bajaj2001structure} and (ii) Kunitz domains may have other specificities encoded by other hidden units. In particular, weight 34 in Supporting Information displays on site 12 large components associated to medium to large size hydrophobic residues (L, M, Y), and is possibly related to other serine protease specificity classes such as chymotrypsin \cite{appel1986chymotrypsin}.
Weight~5 codes for a complex extended mode. To interpret this feature, we display in Fig.~2C (bottom histogram) the distributions of Hamming distances between all pairs of sequences in the MSA (gray histograms) and between the 100 sequences $\bf v$ in the MSA with largest inputs $|I_\mu({\bf v})|$ to the corresponding hidden unit (light blue histograms). For hidden unit 5, the distances between those top-input sequences are smaller than between random sequences in the MSA, suggesting that weight 5 is characteristic of a cluster of closely related sequences. Here, these sequences correspond to the protein Bikunin present in most mammals and some other vertebrates \cite{shigetomi2010anti}. Conversely, for other hidden units (e.g. 1,2), both histograms are quite similar, showing that the corresponding weight motifs are found in evolutionary distant sequences.
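This diagnostic is straightforward to reproduce from the inputs alone; a sketch (array names ours; for large alignments the all-pairs set would be subsampled):
\begin{verbatim}
import numpy as np

def hamming_diagnostic(msa, w, mu, k=100):
    # Compare Hamming distances within the whole MSA with those among
    # the k sequences of largest |I_mu|. msa: (B,N) integer array.
    B, N = msa.shape
    I = w[np.arange(N), mu, msa].sum(axis=1)     # I_mu(v) per sequence
    top = msa[np.argsort(-np.abs(I))[:k]]
    d_all = [(msa[i] != msa[j]).mean()
             for i in range(B) for j in range(i + 1, B)]
    d_top = [(top[i] != top[j]).mean()
             for i in range(k) for j in range(i + 1, k)]
    return d_all, d_top
\end{verbatim}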
\rev{The five weights above were chosen based on several criteria: (i) Weight norm, which is a proxy for the relevance of the hidden unit. Hidden units with larger weight norms contribute more to the likelihood, whereas weight with low norms may arise from noise/overfitting. (ii) Weight sparsity. Hidden units with sparse weights are more easily interpretable in terms of structural/functional constraints. (iii) Shape of input distributions. Hidden units with multimodal input distributions separate the family in subfamilies, and are therefore potentially interesting. (iv) Comparison with available literature. (v) Diversity.} The remaining 95 inferred weights are shown in Supporting Information. We find a variety of structural features, \textit{e.g.} pairwise contacts as in weights 1 and 2, \rev{also reminiscent of the localized, low-eigenvalue modes of the Hopfield-Potts model }\cite{cocco2013principal}, and phylogenetic features (activated by evolutionary related sequences as hidden unit $5$); the latter include in particular stretches of gaps, mostly located at the extremities of the sequence \cite{cocco2013principal}. Several weights have strong components on the same sites as weight 4, showing the complex pattern of amino acids controlling binding affinity.
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=0.95,angle=90]{figure2.pdf}
\caption{ {\bf Modeling Kunitz Domain with RBM.} {\bf A.} Sequence logo and secondary structure of the Kunitz domain (PF00014), showing two $\alpha$-helices and two $\beta$-strands. Note the presence of the three C-C disulfide bridges between 11-35, 2-52, 27-48. {\bf B.} Weight logos for five hidden units, see text. Positive and negative weights are shown by letters located, respectively, above and below the zero axis. Values of the norms $\|W_\mu\|_2 = \sqrt{\sum _{i,v} w_{i\mu}(v)^2}$ are given. Same color code for the amino acids as in Fig.~1A. {\bf C.} Top: Distribution of inputs $I_\mu({\bf v})$ over the sequences $\bf v$ in the MSA (dark blue), and average activity vs. input function (full line, left scale); red points correspond to activity levels used for design in Fig.~5. Bottom: Histograms of Hamming distances between sequences in the MSA (grey) and between the 20 sequences (light blue) with largest (for unit 2,3,4) or smallest (1,5) $I_\mu$. {\bf D.} 3D visualization of the weights, shown on PDB structure 2knt \cite{merigeau19981} using VMD \cite{humphrey1996vmd}. White spheres denote the positions of the 3 disulfide bridges in the wild type sequence. Green spheres locate residues $i$ such that $\sum_v |w_{i\mu}(v) |>S$, with $S= 1.5$ for hidden units $\mu=1,2,3$, $S=1.25$ for $\mu=4$, and $S=0.5$ for $\mu=5$. }
\label{fig2}
\end{fullwidth}
\end{figure}
\clearpage
\subsection*{WW domain}
\subsubsection{Description}
WW is a protein-protein interaction domain found in many eukaryotes and human signalling proteins, involved in essential cellular processes such as transcription, RNA processing, protein trafficking, receptor signalling. WW is a short domain of length 30-40 amino acids (Fig.~3A, PFAM PF00397, $B=7503$ sequences, $N=31$ consensus sites), which folds into a three-stranded antiparallel $\beta$-sheet. The domain name stems from the two conserved tryptophans (W) at positions 5-28 (Fig.~3A), which serve as anchoring sites for the ligands. WW domains bind to a variety of proline (P)-rich peptide ligands, and can be divided into four groups, based on their preferential binding affinity \cite{sudol2000new}. Group I binds specifically to the PPXY motif - where X is any amino acid; Group II to PPLP motifs; Group III to proline-arginine containing sequences (PR); Group IV to phosphorylated serine/threonine-proline sites [p(S/T)P]. Modulation of binding properties allows hundreds of WW domains to interact specifically with hundreds of putative ligands in mammalian proteomes.
\subsubsection{Inferred weights and interpretation}
Four weight logos of the inferred RBM are shown in Fig.~3B; the remaining 96 weights are given in Supporting Information. Weight 1 codes for a contact between sites 4-22 realized either by two amino acids with opposite charges ($I_1<0$), or by one \rev{small} and one negatively charged amino acid ($I_1>0$). Weight 2 shows a $\beta$-sheet--related feature, with large entries defining a set of mostly hydrophobic ($I_2>0$) or hydrophilic ($I_2<0$) residues localized on the $\beta_1$ and $\beta_2$ strands (Fig.~3B) and in contact on the 3D fold, see Fig.~3D. The activation histogram in Fig.~3C, with a large peak on negative $I_2$, suggests that this part of the WW domain is exposed to the solvent in most, but not all, natural sequences.
Weights 3 and 4 are supported by sites on the $\beta_2$-$\beta_3$ binding pocket and on the $\beta_1$-$\beta_2$ loop of the WW domain. The distributions of activities in Fig.~3C highlight different groups of sequences in the MSA that strongly correlate with experimental ligand-type identification, see Fig.~3E. We find that
(i) Type I domains are characterized by $I_3<0$ and $I_4>0$;
(ii) Type II/III domains are characterized by $I_3 >0 $ and $I_4>0$;
(iii) There is no clear distinction between Type II and Type III domains;
(iv) Type IV domains are characterized by $I_3>0$ and $I_4<0$.
These findings are in good agreement with various studies:
(i) Mutagenesis experiments have shown the importance of sites 19, 21, 24, 26 for binding specificity \cite{espanel1999single,fowler2010high}. For the YAP1 WW domain, as confirmed by various studies (see Table~2 in \cite{fowler2010high}), the mutations H21X and T26X reduce the binding affinity to Type I ligands, while Q24R increases it and S12X has no effect. This is in agreement with the negative components of weight 3 (Fig.~3B): $I_3$ increases upon mutations H21X and T26X, decreases upon Q24R and is unaffected by S12X. Moreover, the mutation L19W, alone or combined with
H21[D/G/K/R/S], could switch the specificity from Type I to Type II/III \cite{espanel1999single}. These results are consistent with Fig.~3E: YAP1 (blue cross) is of Type I but one or two mutations move it to the right side, closer to the other cluster (orange crosses). Espanel and Sudol \cite{espanel1999single} also proposed that Type II/III specificity required the presence of an aromatic amino acid (W/F/Y) on site 19, in good agreement with weight 3.
(ii) The distinction between Types II and III is unclear in the literature, because WW domains often have high affinity with both ligand types.
(iii) Several studies \cite{russ2005natural,kato2002determinants,jager2006structure} have demonstrated the importance of the $\beta_1$-$\beta_2$ loop for achieving Type IV specificity, which requires a longer, more flexible loop, as opposed to the short rigid loop of the other types. The length of the loop is encoded in weight 4 through the gap symbol on site 13: short and long loops correspond to, respectively, positive and negative $I_4$. The importance of residues R11 and R13 was shown in \cite{kato2002determinants} and \cite{russ2005natural}, where removing R13 of the Type IV hPin1 WW domain reduced its binding affinity to [p(S/T)P] ligands. These observations agree with the logo of weight 4, which allows substitutions between K and R on sites 11 and 13.
(iv) A specificity-related sector of eight sites was identified in \cite{russ2005natural}, five of which carry the top entries of weight~3 (green balls in Fig.~3D). Our approach not only provides another specificity-related feature (weight 4) but also the motifs of amino acids affecting Type I \& IV specificity, in good agreement with the experimental findings of \cite{russ2005natural}.
\newpage
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=0.95,angle=90]{figure3.pdf}
\caption{{\bf Modeling WW Domain with RBM.} {\bf A.} Sequence logo and secondary structure of the WW domain (PF00397), with three $\beta$-strands. Note the two conserved W in positions 5 and 28. {\bf B.} Weight logos for four representative hidden units, same as Fig.~2B. {\bf C.} Corresponding inputs, average activities and distances between top-20 feature activating sequences, same as Fig.~2C. {\bf D.} 3D visualization of the features, shown on the PDB structure 1e0m \cite{macias2000structural}. White spheres locate the two W. Green spheres locate residues $i$ such that $\sum_v |w_{i\mu}(v)|>0.7$ for each hidden unit $\mu$. {\bf E.} Scatter plot of inputs $I_3$ vs. $I_4$. Gray dots represent the sequences in the MSA; they cluster into three main groups. Colored dots show artificial or natural sequences whose specificities, given in the legend, were tested experimentally. Upper triangle: natural, from \cite{russ2005natural}. Lower triangle: artificial, from \cite{russ2005natural}. Diamond: natural, from \cite{otte2003ww}. Crosses: YAP1 (0) and variants (1 and 2 mutations from YAP1), from \cite{espanel1999single}. The three clusters match the standard ligand type classification. }
\label{fig3}
\end{fullwidth}
\end{figure}
\clearpage
\subsection*{Hsp70 Protein}
\subsubsection{Description}
70-kDa heat shock proteins (Hsp70) form a highly-conserved family represented in essentially all organisms. Hsp70, together with other chaperone proteins, perform a variety of essential functions in the cell: they can assist folding and assembly of newly synthesized proteins, trigger refolding cycles of misfolded proteins, transport unfolded proteins through organelle membranes, and when necessary, deliver non-functional proteins to the proteasome, endosome or lysosome for recycling \cite{bukau1998hsp70,young2004pathways,zuiderweg2017remarkable}. There are 13 Hsp70 protein-encoding genes in humans, differing by where (nucleus/cytoplasm, mitochondria, endoplasmic reticulum) and when they are expressed. Some, such as HSPA8 (Hsc70), are constitutively expressed whereas others such as HSPA1 and HSPA5 are stress-induced (respectively by heat shock and glucose deprivation). Notably, Hsc70 can make up to 3\% of the total mass of proteins within the cell, and is thus one of its most important housekeeping genes.
Structurally, Hsp70 are multi-domain proteins of length 600-670 sites (631 for the E.~coli DnaK gene). They consist of
\begin{itemize}
\item A Nucleotide Binding Domain (NBD, $\sim$400 sites) that can bind and hydrolyse ATP.
\item A Substrate Binding Domain (SBD), folded in a $\beta$-sandwich structure, which binds to the target peptide or protein.
\item A flexible, hydrophobic interdomain linker connecting the NBD and the SBD.
\item A LID domain, composed of several (up to 5) $\alpha$ helices, which encapsulates the target protein and blocks its release.
\item An unstructured C-terminal tail of variable length, important for detection and interaction with other co-chaperones, such as Hop proteins \cite{scheufler2000structure}.
\end{itemize}
Hsp70 functions by adopting two different conformations, see Figs.~4A\&B. When the NBD is bound to ATP, the NBD and the SBD are held together and the LID is open, such that the protein has low binding affinity to substrate peptides. After hydrolysis of ATP to ADP, the NBD and the SBD detach from one another, and the LID is closed, yielding high binding affinity and effectively trapping the peptides between the SBD and the LID. By cycling between both conformations, Hsp70 can bind to misfolded proteins, unfold them by stretching (e.g. with two Hsp70 bound at two ends of the protein) and release them for refolding cycles. Since Hsp70 alone have low ATPase activity, this cycle requires another type of co-chaperone, J-protein, which simultaneously binds to the target protein and the Hsp70 to stimulate its ATPase activity, as well as a Nucleotide Exchange Factor (NEF) that favors the exchange of ADP back to ATP and hence the release of the target protein, see Fig.~1 in \cite{zuiderweg2017remarkable}.
We have constructed a multiple sequence alignment for Hsp70 with $N=675$ consensus sites and $B=32,170$ sequences, starting from the seeds of \cite{malinverni2015large}, and queried the SwissProt and TrEMBL UniProtKB databases using HMMER3 \cite{eddy2011accelerated}. Annotated sequences were grouped based on their phylogenetic origin and functional role. Prokaryotes mainly express two Hsp70 proteins: DnaK ($B=17,118$ sequences in the alignment), the prototypical Hsp70, and HscA ($B=3,897$), which is specialized in the chaperoning of Iron-Sulfur cluster containing proteins. Eukaryotic Hsp70 proteins were grouped by location of expression (Mitochondria: $B=851$, Chloroplast: $B=416$, Endoplasmic reticulum: $B=433$, Nucleus/Cytoplasm and others: $B=1,452$). We also singled out Hsp110 sequences, which, despite their high homology with Hsp70, correspond to non-allosteric proteins ($B=294$). We have then trained a dReLU RBM over the full MSA with $M=200$ hidden units. We show below the weight logos, structures and input distributions for ten selected hidden units, see Fig.~4 and Appendix 1, Figs.~21-26.
\subsubsection{Inferred weights and interpretation}
Weight 1 encodes a variability of the length of the loop within the IIB subdomain of the NBD, see the stretch of gaps from sites 301 to 306. As shown in Fig.~4D (projection along the x axis), it separates prokaryotic DnaK proteins - for which the loop is 4-5 sites longer - from most Eukaryotic Hsp70 proteins and prokaryotic HscA. An additional hidden unit (Weight 6 in Appendix 1, Fig.~21) further separates Eukaryotic Hsp70 from HscA proteins, whose loops are 4-5 sites shorter (distribution of inputs $I_6$ in Appendix 1, Fig.~26). This structural difference between the three families was previously reported and is of high functional importance to the NBD \cite{buchberger1994conserved,brehmer2001tuning}. Shorter loops increase the nucleotide exchange rates (and thus the release of the target protein) in the absence of NEF, and the loop size controls interactions with NEF proteins \cite{brehmer2001tuning,briknarova2001structural,sondermann2001structure}. Hsp70 proteins with long and intermediate loop sizes interact specifically with, respectively, GrpE and Bag-1 NEF proteins, whereas short, HscA-like loops do not interact with either of them. This cochaperone specificity allows for functional diversification within the cell; for instance, Eukaryotic Hsp70 proteins expressed within mitochondria and chloroplasts, such as the human gene HSPA9 and the \textit{Chlamydomonas reinhardtii} HSP70B, share the long loop with prokaryotic DnaK proteins, and therefore do not interact with Bag proteins. Within the DnaK subfamily, two main variants of the loop can be isolated as well (Weight 7 in Appendix 1, Fig.~22), hinting at further NEF-protein specificities.
Weight 2 encodes a small collective mode localized on the $\beta_4-\beta_5$ strands, at the edge of the $\beta$ sandwich within the SBD. Weights are quite large ($w\sim2$), and the input distribution is bimodal, notably separating HscA and chloroplastal Hsp70 ($I_2>0$) from mitochondrial Hsp70 and the other Eukaryotic Hsp70 ($I_2<0$). We also note a similarity in structural location and amino-acid content with weight 3 of the WW domain, which controls binding specificity (Fig.~3B). Though we have not found any trace of this motif in the literature, this suggests that it could be important for substrate-binding specificity. Endoplasmic reticulum-specific Hsp70 proteins can also be separated from the other Eukaryotic proteins by looking at appropriate hidden units, see Weight 8 in Appendix 1, Fig.~22 and the distribution of input $I_8$ in Appendix 1, Fig.~26.
RBM can also extract collective modes of coevolution spanning multiple domains, as shown by Weight 3 (Appendix 1, Fig.~21). The residues supporting Weight 3 (green spheres in Figs.~4A\&B) are physically contiguous in the ADP conformation, but not in the ATP conformation. Hence, Weight 3 captures inter-domain coevolution between the SBD and the LID domains.
Weight 4 (sequence logo in Appendix 1, Fig.~21) also codes for a wide, inter-domain collective mode, localized at the interface between the SBD and the NBD domains. When the Hsp70 protein is in the ATP conformation, the sites carrying weight 4 are physically contiguous, whereas in the ADP state they are far apart, see yellow spheres in Figs.~4A\&B. Moreover, its input distribution, shown in Fig.~4E, separates the non-allosteric Hsp110 subfamily ($I_4 \sim 0$) from the other subfamilies ($I_4 \sim 40$), suggesting that this motif is important for allostery. Several mutational studies have highlighted 21 important sites for allostery within \textit{E. coli} DnaK \cite{smock2010interdomain}; 7 of these positions carry the top entries of Weight 3, 4 appear in another Hsp110-specific hidden unit (Weight 9 in Appendix 1, Fig.~22), and several others are highly conserved and do not coevolve at all.
Lastly, Weight 5 (Fig.~4C) codes for a collective mode located mainly on the unstructured C-terminal tail, with a few sites on the LID domain. Its amino-acid content is strikingly similar across all sites: positive weights for hydrophilic residues (in particular, lysine), and negative weights for tiny, hydrophobic residues. Both hydrophobic-rich and hydrophilic-rich sequences are found in the MSA, see Appendix 1, Fig.~28. This motif is consistent with the role of the tail in cochaperone interaction: hydrophobic residues are important for the formation of Hsp70-Hsp110 complexes via the Hop protein \cite{scheufler2000structure}. High charge content is also frequently encountered and is at the basis of recognition mechanisms in intrinsically disordered protein regions \cite{oldfield2014intrinsically}, which could suggest the existence of different protein partners.
Some of the results presented here were previously obtained with other coevolutionary methods. In \cite{malinverni2015large}, the authors showed that Direct Coupling Analysis could detect conformation-specific contacts; this is similar to hidden units 3 and 4 presented here, located on contiguous sites in, respectively, the ADP-bound and ATP-bound conformations. In \cite{smock2010interdomain}, an inter-domain sector of sites discriminating between allosteric and non-allosteric sequences was found. This sector shares many sites with our weight 4, and is also localized at the SBD/NBD interface. However, only one sector could be retrieved with sector analysis, whereas many other meaningful collective modes could be extracted using RBM.
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=0.9,angle=90]{figure4.pdf}
\caption{{\bf Modeling HSP70 with RBM.} {\bf A, B.} 3D structures of the \textit{E. coli} DnaK Hsp70 protein in the ADP-bound ({\bf A}: PDB: 2kho \cite{bertelsen2009solution}) and ATP-bound ({\bf B}: PDB: 4jne \cite{qi2013allosteric}) conformations. The colored spheres show the sites carrying the largest entries in the weights in panel C. {\bf C.} Weight logos for hidden units $\mu=1$, 2 and 5; see Appendix 1, Fig.~21 for the other hidden units. Due to the large protein length, we show only weights for positions $i$ with large weights ($\sum_v |w_{i\mu}(v)| > 0.4\times \max_i \sum_v |w_{i\mu}(v)|$), with surrounding positions up to $\pm 5$ sites away; vertical dashed lines locate the left edges of the intervals. Protein backbone colors: Blue=NBD, Cyan=Linker, Red=SBD, Gray=LID. Colors: Orange=Unit 1 (NBD loop), black = Unit 2 (SBD $\beta$ strand), green= Unit 3 (SBD/LID), yellow = Unit 4 (Allosteric). {\bf D.} Scatter plot of inputs $I_1$ vs. $I_2$. Gray dots represent the sequences in the MSA, and cluster into four main groups. Colored dots represent the main sequence categories based on gene phylogeny, function and expression. {\bf E.} Histogram of input $I_4$, showing the separation between allosteric and non-allosteric protein sequences in the MSA.}
\label{fig4}
\end{fullwidth}
\end{figure}
\clearpage
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[width = 1.2\columnwidth,angle=0]{figure5.pdf}
\vskip -4cm
\caption{{\bf Sequence design with RBM.} {\bf A.} Conditional sampling of the WW domain-modeling RBM. Sequences are drawn according to Eqn.~(3), with activities $(h_3,h_4)$ fixed to $(h_3^-,h_4^+)$, $(h_3^+,h_4^-)$, $(h_3^+,h_4^+)$ and $(3 h_3^-,h_4^-)$, see red points indicating the values of $h_3^\pm,h_4^\pm$ in Fig.~3C. Natural sequences in the MSA are shown with gray dots, and generated sequences with colored dots. Four clusters of sequences are obtained; the first three are putatively associated to, respectively, ligand-specific groups I, II/III and IV. The sequences in the bottom left cluster, obtained through very strong conditioning, do not resemble any of the natural sequences in the MSA; their binding specificity is unknown. {\bf B.} Sequence logo of the red sequences in panel {\bf A}, with `long $\beta_1$-$\beta_2$ loop' and `type I' features. {\bf C.} Conditional sampling of the Kunitz domain-modeling RBM, with activities $(h_2,h_5)$ fixed to $(h_2^\pm,h_5^\pm)$, see red dots indicating $h_2^\pm,h_5^\pm$ in Fig.~2C. Red sequences combine the absence of the 11-35 disulfide bridge and a strong activation of the Bikunin-AMBP feature, though these two phenotypes are never found together in natural sequences. {\bf D.} Sequence logo of the red sequences in panel {\bf C}, with `no disulfide bridge' and `bikunin' features. {\bf E.} Scatter plot of the number of mutations to the closest natural sequence vs log-probability, for natural (gray) and artificial (colored) WW domain sequences. Same color code as panel {\bf A}; dark dots were generated with the high-probability trick, based on the duplicated RBM (Methods). Note the existence of many high-probability artificial sequences far away from the natural ones. {\bf F.} Same scatter plot as in panel {\bf E} for natural and artificial Kunitz domain sequences. }
\label{fig6}
\end{fullwidth}
\end{figure}
\clearpage
\subsection*{Sequence Design}
The biological interpretation of the features inferred by the RBM guides us to sample new sequences $\bf v$ with putative functionalities. In practice, we sample from the conditional distribution $P( {\bf v} | {\bf h} )$, Eqn.~(3), where a few hidden-unit activities in the representation $\bf h$ are fixed to desired values, while the others are sampled from Eqn.~(4).
For WW domains, we condition on the activities of hidden units 3 and 4, related to binding specificity. Fixing $h_3$ and $h_4$ to levels corresponding to the peaks in the histograms of inputs in Fig.~3C allows us to generate sequences belonging specifically to each one of the three ligand-specificity clusters, see Fig.~5A.
In addition, sequences with combinations of activities that are not encountered in the natural MSA can be engineered. As an illustration, we generate by conditional sampling hybrid WW-domain sequences with strongly negative values of $h_3$ and $h_4$, corresponding to a Type I-like $\beta_2$-$\beta_3$ binding pocket and a long, Type IV-like $\beta_1$-$\beta_2$ loop, see Fig.~5A\&B.
For Kunitz domains, the property `no 11-35 disulfide bond' holds only for some sequences of nematode organisms, whereas the Bikunin-AMBP gene is present only in vertebrates; they are thus never observed simultaneously in natural sequences. Sampling our RBM conditioned to appropriate levels of $h_2$ and $h_5$ allows us to generate sequences with both features activated, see Figs.~5C\&D.
The sequences designed by RBM are far away from all natural sequences in the MSA, but have comparable probabilities, see Figs.~5E (WW) and 5F (Kunitz). Their probabilities estimated with pairwise direct-coupling models (trained on the same data), whose ability to identify functional and artificial sequences has already been tested \cite{balakrishnan2011learning,cocco2018inverse}, are also large, see Appendix 1, Fig.~7.
Our RBM framework can also be modified to design sequences with very high probabilities, even larger than in the MSA, by appropriate duplication of the hidden units (Methods). This trick can be combined with conditional sampling, see Fig.~5E\&F.
\subsection*{Contact Predictions}
As illustrated above, co-occurrence of large weight components in highly sparse features often corresponds to nearby sites on the 3D fold. To extract structural information in a systematic way, we use our RBM to derive effective pairwise interactions between sites, which can then serve as estimators for contacts as in direct-coupling based approaches \cite{cocco2018inverse}. The derivation is sketched in Fig.~6A. We consider a sequence ${\bf v}^{a,b}$ with residues $a$ and $b$ on, respectively, sites $i$ and $j$. Single mutations $a\to a'$ or $b\to b'$ on, respectively, site $i$ or $j$ are accompanied by changes in the log probability of the sequence indicated by the full arrows in Fig.~6A. Comparing the change resulting from the double mutation with the sum of the changes resulting from the two single mutations provides our RBM-based estimate of the epistatic interaction, see Eqns.~(10,11) in Methods. These interactions are well correlated with the outcomes of the Direct-Coupling Analysis, see Appendix 1, Fig.~9.
Figure~6 shows that the quality of prediction of the contact maps of the Kunitz (panel B) and the WW (panel C) domains with RBM is comparable to state-of-the-art methods based on direct couplings \cite{morcos2011direct}; predictions for long-range contacts are reported in Appendix 1, Fig.~10. The quality of contact prediction with RBM
\begin{itemize}
\item does not seem to depend much on the choice of the hidden-unit potential, compare the Gaussian and dReLU PPV performances in Figs.~6B,C\&D, though the latter have better performance in terms of sequence scoring than the former, see Appendix 1, Figs.~1, 2\&5.
\item strongly increases with the number of hidden units, see Appendix 1, Figs.~11\&12. This dependence is not surprising, as the number $M$ of hidden units acts in practice as a regularizer of the effective coupling matrix between residues. In the case of Gaussian RBM, the value of $M$ fixes the maximal rank of the matrix $J_{ij}(v_i,v_j)$, see Methods. The value $M=100$ of the number of hidden units is small compared to the maximal ranks $R=20\times N$ of the coupling matrices of the Kunitz ($R=1060$) and WW ($R=620$) domains, which explains why Direct-Coupling Analysis gives slightly better performance than RBM in the contact predictions of Figs.~6B\&C.
\item worsens with stronger weight-sparsifying regularizations, see Appendix 1, Fig.~12, as expected.
\end{itemize}
\rev{We further tested RBM distant contact predictions in a fully blind setting on the 17 protein families (the Kunitz domain plus 16 other domains) that were used for benchmarking plmDCA} \cite{ekeberg2014fast}\rev{, a state-of-the-art procedure for inferring pairwise couplings in Direct-Coupling Analysis. The number of hidden units was fixed to} $M = 0.3\, R$, \textit{i.e.} \rev{proportionally to the domain length, and the regularization strength was fixed to} $\lambda_1^2=0.1$. \rev{Contact predictions averaged over all families are reported in Fig.~6D for different choices of the hidden-unit potentials (Gaussian and dReLU). We find that performances are comparable to those of plmDCA, but the computational cost of training RBM is substantially higher.}
\clearpage
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=.95,angle=0]{figure6.pdf}
\caption{{\bf Contact predictions using RBM. }
{\bf A.} Sketch of the derivation of effective epistatic interactions between residues with RBM. The change in log probability resulting from a double mutation (purple arrow) is compared to the sum of the changes accompanying the single mutations (blue and red arrows), see text and Methods, Eqns.~(10,11).
{\bf B}. Positive Predictive Value (PPV) vs. pairs $(i,j)$ of residues, ranked according to their scores for the Kunitz domain. RBM predictions with quadratic (Gaussian RBM) and dReLU potentials are compared to direct coupling-based methods -- Pseudo-Likelihood Method (plmDCA) \cite{ekeberg2014fast}, and Boltzmann Machine (BM) learning \cite{Sutto13567}.
{\bf C.} Same as panel {\bf B} for the WW domain.
{\bf D.} \rev{Distant contact predictions for the 17 protein domains used for benchmarking plmDCA in \cite{ekeberg2014fast} obtained using fixed regularization $\lambda_1^2=0.1$ and $M = 0.3 \times N \times 20$. Positive Predictive Value for contacts between residues separated by at least 5 sites along the protein backbone vs. ranks of the corresponding couplings, expressed as fractions of the protein length $N$; solid lines indicate the median PPV and colored areas the corresponding 1/3 to 2/3 quantiles.} }
\label{fig5}
\end{fullwidth}
\end{figure}
\clearpage
\subsection*{Benchmarking on Lattice Proteins}
Lattice protein (LP) models were introduced in the 1990s to study protein folding and design \cite{mirny2001}. In one of those models \cite{shakhnovich1990enumeration}, a `protein' of $N=27$ amino acids may fold into $\sim 10^5$ distinct structures on a $3\times 3\times 3$ cubic lattice, with probabilities depending on its sequence (Methods and Figs.~7A\&B). LP sequence data were used to benchmark the Direct-Coupling Analysis in \cite{jacquin2016benchmarking}, and we here follow the same approach to assess the performances of RBM in a case where the ground truth is known. We first generate a MSA containing sequences having large probabilities ($p_{nat}>0.99$) of folding into the structure shown in Fig.~7A \cite{jacquin2016benchmarking}. A RBM with $M=100$ dReLU hidden units is then learned, see Appendix 1 for details about regularization and cross-validation.
Various structural LP features are encoded by the weights as in real proteins, including complex negative-design-related modes, see Figs.~7C\&D and the remaining weights in Supporting Information. Performances in terms of contact prediction are comparable to state-of-the-art methods on LP, see Appendix 1, Fig.~11.
The capability of RBM to design new sequences with desired features and high values of fitness, exactly computable in LP as the probability of folding into the native structure in Fig.~7A, can be quantitatively assessed. Conditional sampling allows us to design sequences with specific hidden-unit activity levels, or combinations of features not found in the MSA (Fig.~7E). These designed sequences are diverse and have large fitnesses, comparable to those of the MSA sequences and even higher when generated by the duplicated RBM (Fig.~7F), and well correlated with the RBM probabilities $P({\bf v})$ (Appendix 1, Fig.~6).
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=0.8,angle=90]{figure7.pdf}
\caption{{\bf Benchmarking RBM with lattice proteins.} {\bf A.} $S_A$, one of the $103,406$ distinct structures that a 27-mer can adopt on the cubic lattice \cite{shakhnovich1990enumeration}. Circled sites are related to the features shown in panel C. {\bf B.} $S_G$, another fold with a contact map (set of neighbouring sites) close to that of $S_A$ \cite{jacquin2016benchmarking}. {\bf C.} Four weight logos for a RBM inferred from sequences folding into $S_A$, see Supporting Information for the remaining 96 weights. Weight 1 corresponds to the contact between sites 3 and 26, see black dashed contour in panel A; the contact can be realized by amino acids of opposite (-+) charges ($I_1>0$), or by hydrophobic residues ($I_1<0$). Weights 2 and 3 are related to, respectively, the triplets of amino acids 8-15-27 and 2-16-25, each realizing two overlapping contacts on $S_A$ (blue dashed contours). Weight 4 codes for electrostatic contacts between 3-26, 1-18 and 1-20, and imposes that the charges of amino acids 1 and 26 have the same sign. The latter constraint is not due to the native fold (1 and 26 are `far away' on $S_A$) but impedes folding into the `competing' structure, $S_G$ (Fig.~7B and Methods), in which sites 1 and 26 are neighbours \cite{jacquin2016benchmarking}. {\bf D.} Distributions of inputs $I$ and average activities (full line, left scale). All features are activated across the entire sequence space (not shown). {\bf E.} Conditional sampling with activities $(h_2,h_3)$ fixed to $(h_2^\pm, h_3^\pm)$, see red dots in panel D. Designed sequences occupy specific clusters in the sequence space, corresponding to different realizations of the overlapping contacts encoded by weights 2 and 3 (panel C). Conditioning to $(h_2^-, h_3^+)$ makes it possible to generate sequences combining features not found together in the MSA, see bottom left corner, even with very high probabilities (Methods). {\bf F.} Scatter plot of the number of mutations to the closest natural sequence vs. the probability $p_{nat}$ of folding into structure $S_A$ (see \cite{jacquin2016benchmarking} for a precise definition) for natural (gray) and artificial (colored) sequences. Note the large diversity and the existence of sequences with higher $p_{nat}$ than in the training sample. }
\label{fig7}
\end{fullwidth}
\end{figure}
\clearpage
\subsection*{Cross-validation of the model and interpretability of the representations}
Each RBM was trained on a randomly chosen subset of 80\% of the sequences in the MSA, while the remaining 20\% (called test set) were used for validation of its predictive power. In practice, we compute the average log-probability of the test set to assess the performances of the RBM for various values of the number $M$ of hidden units, of the regularization strength $\lambda_1^2$ and for different hidden-unit potentials. Results for the WW and Kunitz domains and for Lattice Proteins are reported in Fig.~8 and in Appendix~2 (Model Selection). The dReLU potential, which includes quadratic and Bernoulli (another popular choice for RBM) potentials as special cases, consistently outperforms both. As expected, increasing $M$ allows the RBM to capture more features of the data distribution and, therefore, improves performance up to some level, after which overfitting starts to occur.
The impact of the regularization strength $\lambda_1^2$ favoring weight sparsity (see definition in Methods, Eqn.~(8)) is twofold, see Fig.~8A for the WW domain. In the absence of regularization ($\lambda_1^2=0$), weights have components on all sites and residues, and the RBM overfits the data, as seen from the large difference between the log-probabilities of the training and test sets. \rev{Overfitting notably results in generated sequences that are close to the natural ones and not very diverse, as seen from the entropy of the sequence distribution (Appendix 1, Fig.~8)}. Imposing mild regularization allows the RBM to avoid overfitting and to maximize the log-probability of the test set ($\lambda_1^2=0.03$ in Fig.~8A), but most sites and residues still carry non-zero weights. Interestingly, imposing stronger regularization has little impact on the generalization abilities of the RBM (weak decrease of the test-set log-probability), while making the weights much sparser ($\lambda_1^2=0.25$ in Fig.~3). For overly strong regularization, too few non-zero weights remain available and the RBM is not powerful enough to adequately model the data (drop in the log-probability of the test set).
Favoring sparser weights in exchange for a small loss in log-probability has a deep impact on the nature of the representation of the sequence space by the RBM, see Fig.~8B. Good representations are expected to capture invariant properties of sequences across evolutionary divergent organisms, rather than idiosyncratic features attached to a limited set of sequences (mixture model in Fig.~8C). For sparse enough weights, the RBM is driven into the compositional representation regime (see \cite{tubiana2017emergence}) of Fig.~8E, in which each hidden unit encodes a limited portion of a sequence and the representation of a sequence is defined by the set of hidden units with strong inputs. Hence, the same hidden unit (e.g. weights 1 and 2 coding for the realizations of contacts in the Kunitz domain in Fig.~2B) can be recruited in many parts of the sequence space corresponding to very diverse organisms (see bottom histograms attached to weights 1 and 2 in Fig.~2C, showing that the sequences corresponding to strong inputs are scattered all over the sequence space). In addition, silencing or activating one hidden unit affects only a limited number of residues (contrary to the entangled regime of Fig.~8D), and a large diversity of sequences can be generated through combinatorial choices of the activity states of the hidden units, which guarantees efficient sequence design.
\rev{In addition, inferring sparse weights makes their comparison across many different protein families easier. We show in Figs.~9\&10 some representative weights obtained after training RBMs with the MSAs of the 16 families considered in }\cite{ekeberg2014fast} \rev{(the 17th family, the Kunitz domain, is shown in Fig.~2), chosen to illustrate the broad classes of encountered motifs; see Supporting Information for the other top weights of the 16 families. We find that weights may code for a variety of structural properties:}
\begin{itemize}
\item \rev{Pairwise contacts on the corresponding structures, realized by various types of residue-residue physico-chemical interactions, see Figs.~9A\&B. These motifs are similar to weights 2 of the Kunitz domain (Fig.~2B) and weight 1 of the WW domain (Fig.~3B).}
\item \rev{Structural triplets, carrying residues in proximity either on the tertiary or on the secondary structure, see Figs.~9C,D,E\&F. Many such triplets arise from electrostatic interactions and carry amino acids with alternating charges (Figs.~9C,D\&E); they are often found in} $\alpha$\rev{-helices and reflect their} $\sim 4$\rev{-site periodicity (Fig.~9E and last two sites in Fig.~9D), in agreement with weight 1 of the Kunitz domain (Fig.~2B). Triplets may also involve residues with non-electrostatic interactions (Fig.~9F). }
\item \rev{Other structural motifs involving four or more residues, \textit{e.g.} between} $\beta$\rev{-strands, see Fig.~9G. Such motifs were also found in the WW domain, see weight 2 in Fig.~3B.}
\end{itemize}
\rev{In addition, weights may also reflect non-structural properties, such as:}
\begin{itemize}
\item \rev{Stretches of gaps at the extremities of the sequences, indicating the presence of subfamilies containing shorter proteins, see Fig.~10A\&B.}
\item \rev{Stretches of gaps in regions corresponding to internal loops of the proteins, see Figs.~10C\&D. These motifs control the length of these loops, similarly to weight 1 of HSP70, see Fig.~4C.}
\item \rev{Contiguous residue motifs on loops (Figs.~10E\&F) and }$\beta-$\rev{strands (Fig.~10G). These motifs could be involved in binding specificity, as found in the Kunitz and WW domains (weights 4 in Fig.~2B\&3B).}
\item \rev{Phylogenetic properties shared by a subset of evolutionarily close sequences, see bottom histograms in Figs.~10H\&I, contrary to the motifs listed above. These motifs are generally less sparse and scattered over the protein sequence, like weight 5 of the Kunitz domain in Fig.~2B.}
\end{itemize}
\rev{For all those motifs, the top histograms of the inputs on the corresponding hidden units indicate how the protein families cluster into distinct subfamilies with respect to the features.}
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=0.7,angle=0]{figure8.pdf}
\caption{ {\bf Nature of representations built by RBM and interpretability of weights.}
{\bf A.} \textit{Effect of sparsifying regularization.} Left: log-probability (Methods, Eqn.~(8)) as a function of the regularization strength $\lambda_1^2$ (square root scale) for RBM with $M=100$ hidden units trained on WW domain sequence data. Right: Weights attached to three representative hidden units are shown for $\lambda_1^2=0$ (no regularization) and 0.03 (optimal log-likelihood for the test set, see left panel); weights shown in Fig.~3 were obtained at higher regularization $\lambda_1^2=0.25$. For larger regularization, too many weights vanish, and the log-likelihood diminishes.
{\bf B.} Sequences (purple dots) in the MSA attached to a protein family define a highly sparse subset of the sequence space (symbolized by the blue square), from which a RBM model is inferred. The RBM then defines a distribution over the entire sequence space, with high scores for natural sequences and over many more other sequences putatively belonging to the protein family. The representations of the sequence space by RBM can be of different types, three examples of which are sketched in the following panels.
{\bf C.} \textit{Mixture model:} each hidden unit focuses on a specific region in sequence space (color ellipses, different colors correspond to different units), and the attached weights form a template for this region. The representation of a sequence thus involves one (or a few) strongly activated hidden units, while all remaining units are inactive.
{\bf D.} \textit{Entangled model:} all hidden units are moderately active across the sequence space. The pattern of activities varies from one sequence to another in a complex manner.
{\bf E.} \textit{Compositional model:} a moderate number of hidden units are activated for each protein sequence, each recognizing one of the motifs (shown by colors) in the sequence and controlling one of the protein's biological properties. Composing the different motifs in various ways (right circled compositions) generates a large diversity of sequences.
}
\label{fig8}
\end{fullwidth}
\end{figure}
\clearpage
\begin{figure}
\begin{fullwidth}
\centering
\includegraphics[scale=0.15,angle=0]{figure9.png}
\caption{ {\bf Representative weights of the protein families selected in} \cite{ekeberg2014fast}{\bf , coding for structural properties.} \rev{RBM parameters: $\lambda_1^2=0.25$, $M=0.05 \times N \times 20$. Same format as Figs.~2B, 3B and 4C. Weights are ordered by similarity, from top to bottom: Sushi domain (PF00084), Heat shock protein Hsp20 (PF00011), SH3 Domain (PF00018), Homeodomain protein (PF00046), Zinc finger--C4 type (PF00105), Cyclic nucleotide-binding domain (PF00027), RNA recognition motif (PF00076). Green spheres show the sites carrying the largest weights on the 3D folds (in order, PDB: 1elv,2bol,2hda,2vi6,1gdc,3fhi,1g2e). The ten weights with largest norms in each family are shown in Supporting Files 5-6.}
}
\label{fig9}
\end{fullwidth}
\end{figure}
\clearpage
\begin{figure}
\vspace*{-2.0cm}
\begin{fullwidth}
\centering
\includegraphics[scale=0.15,angle=0]{figure10.png}
\caption{ {\bf Representative weights of the protein families selected in} \cite{ekeberg2014fast}{\bf , coding for non-structural properties.} \rev{RBM parameters: $\lambda_1^2=0.25$, $M=0.05 \times N \times 20$. Same format as Figs.~2B, 3B and 4C. Weights are ordered by similarity, from top to bottom: SH2 domain (PF00017), Superoxide dismutase (PF00081), K Homology domain (PF00013), Fibronectin type III domain (PF00041), Double-stranded RNA binding motif (PF00035), Zinc-binding dehydrogenase (PF00107), Cadherin (PF00028), Glutathione S-transferase, C-terminal domain (PF00043), 2Fe-2S iron-sulfur cluster binding domain (PF00111). Green spheres show the sites carrying the largest weights on the 3D folds (in order, PDB: 1o47,3bfr,1wvn,1bqu,1o0w,1a71,2o72,6gsu,1a70). The ten weights with largest norms in each family are shown in Supporting Files 5-6.}
}
\label{fig10}
\end{fullwidth}
\end{figure}
\clearpage
\section*{Discussion}
In summary, we have shown that RBM are a promising, versatile, and unifying method for modeling and generating protein sequences. RBM, when trained on protein sequence data, reveal a wealth of structural, functional and evolutionary features. To our knowledge, no other method has so far been able to extract such detailed information within a single framework. In addition, RBM can be used to design new sequences: hidden units can be seen as representation-controlling knobs, tunable at will to sample specific portions of the sequence space corresponding to desired functionalities. A major and appealing advantage of RBM is that the two-layer architecture of the model embodies the very concept of genotype-phenotype mapping (Fig.~1C). Code for learning and visualizing RBM is attached to this publication (Methods).
From a machine-learning point of view, the values of the parameters defining the RBM (class of potentials, number $M$ of hidden units, regularization penalties) were selected based on the log-probability of a test set of natural sequences not used for training, and on the interpretability of the model. The dReLU potentials we have introduced in this work (Eqn.~(6)) consistently outperform other potentials for generative purposes. As expected, increasing $M$ improves the likelihood up to some level, after which overfitting starts to occur. Adding a sparsifying regularization not only prevents overfitting but also facilitates the biological interpretation of the weights (Fig.~8A). It is thus an effective way to enhance the correspondence between representation and phenotypic spaces (Fig.~1C). It also allows us to drive the RBM into the operation regime in which most features can be activated across many regions of the sequence space (Fig.~8E); examples are provided by hidden units 1 and 2 for the Kunitz domain in Figs.~2B\&C and hidden unit 3 for the WW domain in Figs.~3B\&C. Combining these features allows us to generate a variety of new sequences with high probabilities, such as those shown in Fig.~5.
Note that some inferred features, such as hidden unit 5 in Figs.~2C\&D and, to a lesser extent, hidden unit 2 in Figs.~3B\&C, are, on the contrary, activated by evolutionarily close sequences. Our inferred RBM thus share partial similarity with the mixture models of Fig.~8C. Interestingly, the identification of specific sequence motifs with structural, functional or evolutionary meaning does not seem to be restricted to a few protein domains or proteins, but could be a generic property, as suggested by our study of 16 more families (Figs.~9\&10).
Despite the algorithmic improvements developed in the present work (Methods), training RBM is challenging as it requires intensive sampling. Generative models alternative to RBM and not requiring Markov Chain sampling exist in machine learning, such as Generative Adversarial Networks \cite{goodfellow2014generative} and Variational Auto-Encoders (VAE) \cite{kingma2013auto}. VAE were recently applied to protein sequence data for fitness prediction \cite{novak,marks}. Our work differs in several important points: our RBM is an extension of direct-coupling-based approaches, requires many fewer hidden units (about 10 to 50 times fewer than \cite{novak} and \cite{marks}), has a simple architecture with two layers carrying sequences and representations, infers interpretable weights with biological relevance, and can be easily tweaked to design sequences with desired statistical properties. We have shown that RBM can successfully model small domains (with a few tens of amino acids) as well as much longer proteins (with several hundred residues). The reason is that, even for very large proteins, the computational effort can be controlled through the number $M$ of hidden units, see Methods for a discussion of the running time of our learning algorithm. Choosing moderate values of $M$ keeps the number of parameters to be learned reasonable and avoids overfitting, while still allowing for the discovery of important functional and structural features. It is, however, unclear how $M$ should scale with $N$ to unveil `all' the functional features of very complex and rich proteins (such as Hsp70).
From a computational biology point of view, RBM unifies and extends previous approaches to protein coevolutionary analysis. On the one hand, the features extracted by RBM identify `collective modes' controlling the biological functionalities of the protein, in a similar way to the so-called sectors extracted by statistical coupling analysis \cite{halabi2009protein}. However, contrary to sectors, the collective modes are not disjoint: a site may participate in different features, depending on the value of the residue it carries. On the other hand, RBM coincide with direct-coupling analysis \cite{morcos2011direct} when the potential ${\cal U}(h)$ is quadratic in $h$. For non-quadratic potentials ${\cal U}$, couplings to all orders between the visible units are present. The presence of high-order interactions allows for a significantly better description of gap modes \cite{feinauer2014improving}, of multiple long-range couplings due to ligand binding, and of outlier sequences (Appendix 1, Fig.~5). Our dReLU RBM model offers an efficient way to go beyond pairwise coupling models, without an explosion in the number of interaction parameters to be inferred, as all high-order interactions (whose number, $\sim q^N$, is exponentially large in $N$) are effectively generated from the same $M\times N\times q$ weights $w_{i\mu}(v)$. \rev{RBM also outperforms the Hopfield-Potts framework} \cite{cocco2013principal}, \rev{an approach previously introduced to capture both collective and localized structural modes. Hopfield-Potts `patterns' were derived with no sparsity regularization and within the mean-field approximation, which made the Hopfield-Potts model not sufficiently accurate for sequence design, see Appendix 1, Figs.~14-18. }
\rev{The weights shown in Figs.~2B, 3B and 4C are stable with respect to subsampling (Appendix 1, Fig.~13)} and could be unambiguously interpreted and related to the existing literature. However, the biological significance of some of the inferred features remains unclear, and would require experimental investigation. Similarly, the capability of RBM to design new functional sequences needs experimental validation beyond the comparison with past design experiments (Fig.~5E) and the benchmarking on \textit{in silico} proteins (Fig.~7). While recombining different parts of natural protein sequences from different organisms is a well-recognized procedure for protein design \cite{stemmer1994rapid,khersonsky2016reinvent}, RBM innovates in a crucial aspect. Traditional approaches cut sequences into fragments at fixed positions based on secondary-structure considerations, whereas in RBM models such parts are learned and need not be contiguous along the primary sequence. We believe protein design with detailed computational modeling methods, such as Rosetta \cite{simons1997assembly,khersonsky2016reinvent}, could be efficiently guided by our RBM-based approach, in much the same way protein folding greatly benefited from the inclusion of long-range contacts found by direct-coupling analysis \cite{marks2011protein,hopf2012three}.
Future projects include developing systematic methods for identifying function-determining sites, and analyzing more protein families. \rev{As suggested by the analysis of the 16 families shown in Figs.~9\&10, such a study could help establish a general classification of motifs into broad classes with structural or functional relevance, shared by distinct proteins}. In addition, it would be very interesting to use RBM to determine evolutionary paths between two, or more, protein sequences in the same family, but with distinct phenotypes. In principle, RBM could reveal how functionalities continuously change along the paths, and provide a measure of viability of intermediary sequences.
\section{Materials and Methods}
\subsection{Data preprocessing}
We use the PFAM sequence alignments of the V31.0 release (March 2017) for both Kunitz (PF00014) and WW (PF00397) domains. All columns with insertions are discarded, after which duplicate sequences are removed. We are left with, respectively, $N=53$ sites and $B= 8062$ unique sequences for Kunitz, and $N=31$ and $B=7503$ for WW; each site can carry $q=21$ different symbols. To correct for the heterogeneous sampling of the sequence space, a reweighting procedure is applied: each sequence ${\bf v}^\ell$ with $ \ell=1,...,B$ is assigned a weight $w_\ell$ equal to the inverse of the number of sequences with more than $90\%$ amino-acid identity (including itself). In all that follows, the average over the sequence data of a function $f$ is defined as
\begin{equation}
\langle f({\bf v}) \rangle_{MSA}=\left(\sum_{\ell=1}^B w_\ell \; f({\bf v^\ell})\right)\bigg/\left(\sum_{\ell=1}^B w_\ell \right)\ .
\end{equation}
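For concreteness, this reweighting can be sketched in a few lines of Python/NumPy; this is our own illustrative transcription (the function and variable names are not those of the released package), and the loop over sequences keeps the memory cost linear in $B$:
\begin{verbatim}
import numpy as np

def sequence_weights(msa, theta=0.9):
    """w_l = 1 / #{m : fraction of identical sites between l and m >= theta}.

    msa: (B, N) integer array, one row per aligned sequence (values 0..20).
    Each sequence counts itself as a neighbor, so 0 < w_l <= 1.
    """
    B, N = msa.shape
    neighbors = np.zeros(B)
    for l in range(B):
        identity = (msa == msa[l]).mean(axis=1)  # identity of all rows to row l
        neighbors[l] = np.sum(identity >= theta)
    return 1.0 / neighbors

def msa_average(values, weights):
    """Reweighted MSA average of per-sequence values (shape (B, ...))."""
    w = weights / weights.sum()
    return np.tensordot(w, values, axes=(0, 0))
\end{verbatim}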
\subsection{Learning procedure}
\subsubsection{Objective function and gradients}
Training is performed by maximizing, through stochastic gradient ascent, the difference between the log-probability of the sequences in the MSA and the regularization costs,
\begin{equation} \label{cost_total}
\langle \log P({\bf v})\rangle_{MSA} - \frac{\lambda_f}{2} \sum_{i,v} g_i(v)^2 - \frac{\lambda_1^2}{2 q N} \sum_\mu \left( \sum_{i,v} | w_{i\mu}(v)| \right)^2\ .
\end{equation}
The regularization terms include a standard $L_2$ penalty for the potentials acting on the visible units, and a custom $L_2/L_1$ penalty for the weights. The latter penalty corresponds to an effective $L_1$ regularization with an adaptive strength increasing with the weights, thus promoting homogeneity among hidden units \footnote{This can be seen from the gradient of the regularization term, which reads $\lambda_1^2 \left(\sum_{i,v'} | w_{i\mu}(v')|/qN \right) \text{sign}(w_{i\mu}(v) )$}.
In addition, this penalty prevents hidden units from ending up entirely disconnected ($w_{i\mu}(v) = 0 \; \forall i, v$), and makes the determination of the penalty strength $\lambda_1^2$ more robust, see Appendix 1, Fig.~2.
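As an illustration, the custom penalty and the adaptive-$L_1$ gradient discussed in the footnote can be written as follows; this is a minimal sketch, and the $(M,N,q)$ tensor layout is our own convention:
\begin{verbatim}
import numpy as np

def l2l1_penalty(w, lambda1_sq):
    """(lambda_1^2 / (2 q N)) * sum_mu ( sum_{i,v} |w_{i mu}(v)| )^2."""
    M, N, q = w.shape
    per_unit_l1 = np.abs(w).sum(axis=(1, 2))         # one number per hidden unit
    return lambda1_sq / (2.0 * q * N) * np.sum(per_unit_l1 ** 2)

def l2l1_gradient(w, lambda1_sq):
    """lambda_1^2 * (sum_{i,v}|w|/(q N)) * sign(w): an L1 term whose strength
    adapts to the current L1 norm of each hidden unit, cf. the footnote."""
    M, N, q = w.shape
    strength = np.abs(w).sum(axis=(1, 2)) / (q * N)  # shape (M,)
    return lambda1_sq * strength[:, None, None] * np.sign(w)
\end{verbatim}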
According to Eqn.~(5), the probability of a sequence $\bf v$ can be written as,
\begin{equation}\label{pdev}
P ({\bf v}) = e^{-E_\text{eff}({\bf v})} \bigg/ \bigg(\sum _{{\bf v}'}e^{-E_\text{eff}({\bf v}')}\bigg)\ , \quad \text{where}\quad E_\text{eff}({\bf v}) = - \sum_{i=1}^N g_i(v_i) - \sum _{\mu=1}^M \Gamma \big( I_\mu({\bf v})\big)\end{equation}
is the effective `energy' of the sequence, which depends on all the model parameters. The gradient of $\langle \log P({\bf v})\rangle_{MSA}$ over one of these parameters, denoted generically by $\psi$, is therefore
\begin{equation}\label{grad56}
\frac{\partial }{\partial\psi} \langle \log P({\bf v})\rangle_{MSA} =\sum_{\bf v} P({\bf v}) \frac{\partial E_\text{eff}}{\partial\psi} ({\bf v})- \bigg\langle \frac{\partial E_\text{eff}}{\partial\psi} ({\bf v})\bigg\rangle_{MSA} \ .
\end{equation}
Hence, the gradient is the difference between the average values of the derivative of $E_\text{eff}$ with respect to $\psi$ over the model and the data distributions.
\subsubsection{Moment evaluation}
\rev{Several methods have been developed to evaluate the model average in the gradient, see Eqn.~}(\ref{grad56}) \cite{fischer2012introduction}. \rev{The naive approach is to run, for each gradient iteration, a full Markov Chain Monte Carlo (MCMC) simulation of the RBM until the samples reach equilibrium, then use these samples to compute the model average }\cite{ackley1987learning}. \rev{A more efficient approach is Persistent Contrastive Divergence (PCD)} \cite{tieleman2008training}\rev{: the samples obtained from the previous simulation are used to initialize the next MCMC simulation, and only a small number of Gibbs updates }($N_{MC} \sim 10$) \rev{is performed between each gradient evaluation. If the model parameters evolve slowly, the samples remain at equilibrium, and we obtain the same accuracy as the naive approach at a fraction of the computational cost. In practice, PCD succeeds if the mixing rate of the Markov Chain - which depends on the nature and dimension of the data, and on the model parameters - is fast enough. In our training sessions, PCD proved sufficient to learn relevant features and good generative models for small proteins and regularized RBM. For larger proteins, to speed up mixing, we use Parallel Tempering techniques }\cite{parallel_tempering,future_algo}.
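To make the procedure concrete, here is a minimal PCD sketch for the special case of quadratic (Gaussian) hidden-unit potentials with $\theta_\mu=0$, for which $\Gamma'(I)=I/\gamma$; it updates only the weights and omits regularization. All names are ours (this is not the released package's API), and the dReLU case would replace the Gaussian draw by the truncated-Gaussian mixture given further below:
\begin{verbatim}
import numpy as np

def inputs(v, w):
    """Hidden-unit inputs I_mu(v); v one-hot (B, N, q), w (M, N, q) -> (B, M)."""
    return np.tensordot(v, w, axes=([1, 2], [1, 2]))

def sample_h_given_v(v, w, gamma, rng):
    """Gaussian hidden units (theta = 0): h | v ~ N(I / gamma, 1 / gamma)."""
    I = inputs(v, w)
    return I / gamma + rng.standard_normal(I.shape) / np.sqrt(gamma)

def categorical_sample(logits, rng):
    """Sample one-hot site configurations from per-site logits (B, N, q)."""
    p = np.exp(logits - logits.max(axis=2, keepdims=True))
    p /= p.sum(axis=2, keepdims=True)
    out = np.zeros_like(p)
    for n in range(p.shape[1]):  # inverse-CDF draw, one site at a time
        c = (p[:, n].cumsum(axis=1) > rng.random((len(p), 1))).argmax(axis=1)
        out[np.arange(len(p)), n, c] = 1.0
    return out

def sample_v_given_h(h, w, g, rng):
    """P(v|h) factorizes over sites, cf. Eqn. (4)."""
    return categorical_sample(g[None] + np.einsum('bm,mnq->bnq', h, w), rng)

def pcd_step(batch, chains, w, g, gamma, rng, lr=0.1, n_mc=10):
    """One PCD step on the weights (no regularization, for brevity).

    For Gaussian potentials dE_eff/dw_{i mu}(v) = -<h_mu | v> delta_{v_i, v},
    so the gradient is <h v>_data - <h v>_model, cf. the gradient formula above.
    """
    for _ in range(n_mc):  # advance the persistent chains
        h = sample_h_given_v(chains, w, gamma, rng)
        chains = sample_v_given_h(h, w, g, rng)
    pos = np.einsum('bm,bnq->mnq', inputs(batch, w) / gamma, batch) / len(batch)
    neg = np.einsum('bm,bnq->mnq', inputs(chains, w) / gamma, chains) / len(chains)
    return w + lr * (pos - neg), chains
\end{verbatim}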
\subsubsection{Stochastic Gradient Ascent}
\rev{The optimization is carried out by Stochastic Gradient Ascent. At each step, the gradient is evaluated using a mini-batch of the data, as well as a small number of Markov Chain Monte Carlo configurations. In most of our training sessions, we used the same batch size }($=100$) \rev{for both sets. The model is initialized as follows:}
\begin{itemize}
\item Weights $w_{i\mu}(v)$, are randomly and independently drawn from a Gaussian distribution with zero mean and variance equal to $ \frac{0.1}{N}$. The scaling factor $\frac{1}{N}$ ensures that the initial input distribution has variance of the order of $1$.
\item The potentials $g_i(v)$ are given their values in the independent-site model: $g_i(v) = \log \esp{\delta_{v_i,v}}_{\text{MSA}}$, where $\delta$ denotes the Kronecker function.
\item For all hidden-unit potentials, we set $\gamma_+=\gamma_-=1$, $\theta_+=\theta_-=0$.
\end{itemize}
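This initialization can be transcribed literally as follows; the small pseudocount avoiding $\log 0$ is our own addition, and \texttt{data} denotes the one-hot encoded MSA:
\begin{verbatim}
# Literal sketch of the initialization above; data has shape (B, N, q).
M = 100
B, N, q = data.shape
rng = np.random.default_rng(0)
w = rng.standard_normal((M, N, q)) * np.sqrt(0.1 / N)  # Var[w] = 0.1 / N
g = np.log(data.mean(axis=0) + 1e-6)                   # independent-site fields
gamma_plus = gamma_minus = np.ones(M)                  # dReLU slopes
theta_plus = theta_minus = np.zeros(M)                 # dReLU thresholds
\end{verbatim}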
\rev{The learning rate is initially set to }$0.1$\rev{, and decays exponentially after a fraction of the total training time (e.g. 50\%) until it reaches a final, small value, e.g. }$10^{-4}$.
\subsubsection{Dynamic reparametrization}
\rev{For Gaussian and dReLU potentials, there is a redundancy between the slope of the hidden-unit average activity and the global amplitude of the weight vector. Indeed, for the Gaussian potential, the model distribution is invariant under the rescaling transformations }$\gamma_\mu \rightarrow \lambda^2 \gamma_\mu$, $w_{i\mu} \rightarrow \lambda w_{i\mu}$, $\theta_\mu \rightarrow \lambda \theta_\mu$ \rev{and the offset transformation }$\theta_\mu \rightarrow \theta_\mu + K_\mu$, $g_i \rightarrow g_i - \sum_\mu w_{i\mu} \frac{K_\mu}{\gamma_\mu}$. \rev{Though we can set }$\gamma_\mu =1, \; \theta_\mu=0 \; \forall \mu$ \rev{without loss of generality, this can lead either to numerical instability (at high learning rates) or to slow learning (at low learning rates). A significantly better choice is to dynamically adjust the slope and offset so that }$\esp{h_\mu} \sim 0$ and $\text{Var}(h_\mu) \sim 1$ \rev{at all times. This new approach, reminiscent of batch normalization for deep networks, is implemented in the training algorithm released with this work and is benchmarked in} \cite{future_algo}.
\subsubsection{Gauge choice}
\rev{Since the conditional probability Eqn.~(\ref{condv}) is normalized, the transformations }$g_i(v) \rightarrow g_i(v) + \lambda_i$ and $w_{i\mu}(v) \rightarrow w_{i\mu}(v) + K_{i\mu}$ \rev{leave the conditional probability invariant. We choose the zero-sum gauge, defined by }$\sum_v g_i(v) = 0$, $\sum_v w_{i\mu}(v) = 0$. \rev{Since the regularization penalties over the fields and weights depend on the gauge choice, the gauge must be enforced throughout training and not only at the end. The updates on the fields leave the gauge invariant, so the transformation }$g_i(v) \rightarrow g_i(v) - \frac{1}{q} \sum_{v'} g_i(v')$ \rev{can be applied only once, after initialization. This is not the case for the updates on the weights, so the transformation }$w_{i\mu}(v) \rightarrow w_{i\mu}(v) - \frac{1}{q} \sum_{v'} w_{i\mu}(v')$ \rev{must be applied after each gradient update.}
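Both projections amount to subtracting per-site means over the $q$ symbols (a one-line sketch in the conventions of the PCD code above):
\begin{verbatim}
def zero_sum_gauge(g, w):
    """Enforce sum_v g_i(v) = 0 and sum_v w_{i mu}(v) = 0. The field
    projection is needed once after initialization; the weight projection
    must be re-applied after every gradient update, as noted above."""
    return g - g.mean(axis=1, keepdims=True), w - w.mean(axis=2, keepdims=True)
\end{verbatim}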
\subsubsection{Evaluating the partition function}\label{evaluate_Z}
\rev{Evaluating }$P({\bf v})$ \rev{requires knowledge of the partition function} $Z = \displaystyle{\sum_{\bf v} \exp \left( -E_\text{eff}({\bf v}) \right)}$, \rev{see the denominator in }Eqn.~(\ref{pdev}). \rev{The latter expression, which involves summing over }$q^N$ \rev{terms, is not tractable. Instead, we estimate} $Z$ \rev{using the Annealed Importance Sampling (AIS) algorithm} \cite{neal2001annealed,salakhutdinov2008quantitative}. \rev{Briefly, the idea is to estimate ratios of partition functions. Let }$P_1({\bf v}) = \frac{P_1^*({\bf v})}{Z_1}$, $P_0 = \frac{P_0^*({\bf v})}{Z_0}$ \rev{be two probability distributions with partition functions }$Z_1$, $Z_0$\rev{. Then:}
\begin{equation}
\esp{\frac{P_1^*({\bf v})}{P_0^*({\bf v})} }_{ {\bf v} \sim P_0 } = \sum_{ {\bf v}} \frac{P_1^*({\bf v})}{P_0^*({\bf v})} \frac{P_0^*({\bf v})}{Z_0} = \frac{1}{Z_0} \sum_{ {\bf v}} P_1^*({\bf v}) = \frac{Z_1}{Z_0}
\end{equation}
\rev{Therefore, provided that }$Z_0$ \rev{is known (e.g. if }$P_0$\rev{ is an independent model with no couplings), one can in principle estimate }$Z_1$ \rev{through Monte Carlo sampling. The difficulty lies in the variance of the estimator: if }$P_1$ and $P_0$ \rev{are very different from one another, some configurations can be very likely under} $P_1$ \rev{yet have very low probability under} $P_0$\rev{; these configurations almost never appear in the Monte Carlo estimate of }$\esp{.}$\rev{, but their probability ratio can be exponentially large. In Annealed Importance Sampling, this problem is addressed by constructing a continuous path of interpolating distributions} $P_\beta ({\bf v})= P_1({\bf v})^\beta\; P_0({\bf v})^{1-\beta}$, \rev{and estimating} $Z_1$ \rev{as a product of ratios of partition functions:}
\begin{equation}
Z_1 = \frac{Z_1}{Z_{\beta_{l_{max}}}} \frac{Z_{\beta_{l_{max}-1}}}{Z_{\beta_{l_{max}-2}}} ... \frac{Z_{\beta_1}}{Z_0} \times Z_0\ ,
\end{equation}
\rev{where we choose a linear set of interpolating inverse temperatures of the form }$\beta_l = \frac{l}{l_{\text{max}}}$. \rev{To evaluate the successive expectations, we use a fixed number }$C$ \rev{of samples initially drawn from} $P_0$\rev{, and gradually anneal them from }$P_0$ \rev{to }$P_1$\rev{ by successive applications of Gibbs sampling at }$P_\beta$\rev{. Moreover, all computations are done in logarithmic scale for numerical stability purposes: we estimate each ratio as }$\log \frac{Z_{\beta_{l+1}}}{Z_{\beta_l}} \approx \esp{ \log \frac{P_{\beta_{l+1}}^*({\bf v})}{P_{\beta_l}^*({\bf v})} }$\rev{, which is justified because consecutive interpolating distributions are close. In practice, we used }$C=20$ \rev{chains and }$l_{\text{max}} = 5\times 10^4$ \rev{steps. For the initial distribution }$P_0$\rev{, we take the closest (in terms of KL divergence) independent model to the data distribution} $P_{MSA}$. \rev{The visible layer fields are the ones of the independent model inferred from the MSA, and the weights are} ${\bf w}^{\beta =0} = 0$. \rev{For the hidden potential values, we infer the parameters from the statistics of the hidden layer activity conditioned on the data.}
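A schematic AIS estimator, written against two hypothetical callables for the unnormalized log-probabilities and a Gibbs sweep at inverse temperature $\beta$ (names ours), reads as follows; it uses the standard log-mean-exp estimator, of which the log-average used above is the small-step approximation:
\begin{verbatim}
import numpy as np

def ais_log_Z1(log_p0, log_p1, gibbs_at_beta, x0, log_Z0, l_max=50000):
    """Schematic AIS estimate of log Z_1.

    log_p0, log_p1: unnormalized log-probabilities of P_0 and P_1 (callables,
    vectorized over a batch of configurations); gibbs_at_beta(x, beta): one
    MCMC sweep leaving P_beta invariant, with
    log P_beta^* = (1 - beta) * log_p0 + beta * log_p1;
    x0: C samples drawn from the tractable base distribution P_0.
    """
    x, log_w = x0, np.zeros(len(x0))
    betas = np.linspace(0.0, 1.0, l_max + 1)
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        # accumulate log [P_{b_next}^*(x) / P_{b_prev}^*(x)] at the current x
        log_w += (b_next - b_prev) * (log_p1(x) - log_p0(x))
        x = gibbs_at_beta(x, b_next)
    m = log_w.max()  # log-mean-exp of the importance weights, stable
    return log_Z0 + m + np.log(np.mean(np.exp(log_w - m)))
\end{verbatim}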
\subsubsection{Explicit formula for sampling and training RBM}
\rev{Training, sampling and computing the probability of sequences with RBM requires: (1) Sampling from }$P({\bf v}|{\bf h})$, \rev{(2) Sampling from} $P({\bf h}|{\bf v})$\rev{, and (3) Evaluating the effective energy} $E_{\text{eff}}({\bf v})$ \rev{and its derivatives. This is done as follows:}
\begin{enumerate}
\item \rev{Each sequence site} $i$ \rev{is encoded as a categorical variable taking integer values }$v_i \in [0,20]$\rev{, with each integer corresponding to one of the 20 amino acids or the gap symbol. Similarly, the fields and weights are encoded as, respectively, an }$N \times 21$ \rev{matrix and an }$M \times N \times 21$ \rev{tensor. Owing to the bipartite structure of the graph, }$P({\bf v} | {\bf h}) = \prod_i P({\bf v_i} | {\bf h})$\rev{, see Eqn.~(4). Therefore, sampling from} $P({\bf v} | {\bf h})$ \rev{is done in three steps: compute the inputs received from the hidden layer, then the conditional probabilities} $P(v_i |{\bf h})$ \rev{given the inputs, and finally sample each visible unit independently of the others from the corresponding conditional distribution.}
\item \rev{The conditional probability} $P({\bf h} | {\bf v} )$ \rev{factorizes. Given a visible configuration} $\bf v$\rev{, each hidden unit is sampled independently from the others via }$P(h_\mu | {\bf v})$\rev{, see Eqn.~(3). For a quadratic potential} ${\cal U}(h)=\frac12 \gamma h^2 + \theta h$, \rev{this conditional distribution is Gaussian. For the dReLU potential} $\mathcal{U}(h)$ \rev{in Eqn.~(6), we first introduce} $\Phi(x) = \exp(\frac{x^2}{2}) \left[1- \text{erf}(\frac{x}{\sqrt{2}} ) \right] \sqrt{\frac{\pi}{2}}$.
\rev{Some useful properties of }$\Phi$ \rev{are:}
\begin{itemize}
\item $\Phi(x) \sim_{x \rightarrow -\infty} \exp(\frac{x^2}{2}) \sqrt{2\pi} $
\item $\Phi(x) \sim_{x \rightarrow \infty} \frac{1}{x} - \frac{1}{x^3} + \frac{3}{x^5} + \mathcal{O}(\frac{1}{x^7})$
\item $\Phi'(x) = x \Phi(x) - 1$
\end{itemize}
\rev{To avoid numerical issues, }$\Phi$ \rev{is computed in practice with its definition for }$x<5$ \rev{and with its asymptotic expansion otherwise. We also write }$\mathcal{TN}(\mu,\sigma^2,a,b)$ \rev{the truncated Gaussian distribution of mode} $\mu$\rev{, width }$\sigma$ \rev{and support }$[a,b]$.
\rev{Then,} $P(h|I)$ \rev{is given by a mixture of two truncated Gaussians:}
\begin{equation}
P(h| I) = p_+ \, \mathcal{TN} \left(\frac{I - \theta_+}{\gamma_+},\frac{1}{\gamma_+}, 0,+\infty \right) + p_- \, \mathcal{TN} \left(\frac{I - \theta_-}{\gamma_-},\frac{1}{\gamma_-},- \infty,0 \right)
\end{equation}
where $Z_\pm = \Phi \left( \frac{\mp (I - \theta_\pm)}{\sqrt{\gamma_\pm}} \right) \frac{1}{\sqrt{\gamma_\pm}}$, and $p_\pm = \frac{Z_\pm}{Z_+ + Z_-}$.
\item \rev{Evaluating} $E_{\text{eff}}$ \rev{and its derivatives requires an explicit expression for the cumulant-generating function} $\Gamma(I)$. \rev{For quadratic potentials,} $\Gamma(I)$ \rev{is quadratic too. For dReLU potentials, we have }$\Gamma(I) = \log (Z_+ + Z_-)$ \rev{with }$Z_\pm$ \rev{defined above; a minimal numerical sketch of these formulas is given after this list.}
\end{enumerate}
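The following sketch (ours, assuming SciPy) implements items 2 and 3 above; the identity $\exp(z^2)\,\text{erfc}(z)=\text{erfcx}(z)$ covers both numerical branches of $\Phi$ at once:
\begin{verbatim}
import numpy as np
from scipy.special import erfcx
from scipy.stats import truncnorm

def Phi(x):
    """Phi(x) = exp(x^2/2) * (1 - erf(x/sqrt(2))) * sqrt(pi/2).

    Using erfcx (the scaled complementary error function) is numerically
    stable for all x, matching both asymptotic branches listed above.
    """
    return np.sqrt(np.pi / 2.0) * erfcx(x / np.sqrt(2.0))

def dReLU_Zpm(I, gp, gm, tp, tm):
    """Z_+ and Z_- as defined above, vectorized over the inputs I."""
    Zp = Phi(-(I - tp) / np.sqrt(gp)) / np.sqrt(gp)
    Zm = Phi(+(I - tm) / np.sqrt(gm)) / np.sqrt(gm)
    return Zp, Zm

def Gamma(I, gp, gm, tp, tm):
    """Cumulant-generating function Gamma(I) = log(Z_+ + Z_-)."""
    Zp, Zm = dReLU_Zpm(I, gp, gm, tp, tm)
    return np.log(Zp + Zm)

def sample_h_dReLU(I, gp, gm, tp, tm):
    """Draw h ~ P(h|I): the mixture of two truncated Gaussians given above."""
    Zp, Zm = dReLU_Zpm(I, gp, gm, tp, tm)
    pick_plus = np.random.rand(*np.shape(I)) < Zp / (Zp + Zm)
    mup, sp = (I - tp) / gp, 1.0 / np.sqrt(gp)
    mum, sm = (I - tm) / gm, 1.0 / np.sqrt(gm)
    # scipy's truncnorm takes the support bounds in standardized units
    hp = truncnorm.rvs(-mup / sp, np.inf, loc=mup, scale=sp)
    hm = truncnorm.rvs(-np.inf, -mum / sm, loc=mum, scale=sm)
    return np.where(pick_plus, hp, hm)
\end{verbatim}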
\subsubsection{Computational complexity}
The computational complexity of training is of the order of $M \times N \times B$, with the more accurate sampling variants taking more time. The algorithm scales reasonably to large protein sizes, and was tested successfully for $N$ up to $\sim 700$, taking of the order of 1-2 days on an Intel Xeon Phi processor with $2 \times 28$ cores.
\subsection{Sampling procedure}
\label{sectraining}
Sampling from $P$ in Eqn.~(5) is done with Markov Chain Monte Carlo methods, with the standard alternate Gibbs sampler described in the main text and in \cite{fischer2012introduction}.
Conditional sampling, \textit{i.e.} sampling from $P({\bf v} | h_\mu = h_\mu^c)$ is straightforward with RBM: it is achieved by the same Gibbs sampler while keeping $h_\mu$ fixed.
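Reusing the helper functions of the PCD sketch given earlier (again, our own names, not the released package's API), conditional sampling amounts to overwriting the clamped activities at every Gibbs sweep:
\begin{verbatim}
def conditional_gibbs(v, w, g, gamma, clamped, rng, n_steps=200):
    """Gibbs sampling from P(v | h_mu = h_mu^c for mu in `clamped`).

    `clamped` maps hidden-unit indices to the fixed activity values h_mu^c;
    all other hidden units are resampled as usual.
    """
    for _ in range(n_steps):
        h = sample_h_given_v(v, w, gamma, rng)
        for mu, h_c in clamped.items():
            h[:, mu] = h_c               # keep the conditioned units fixed
        v = sample_v_given_h(h, w, g, rng)
    return v
\end{verbatim}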
The RBM architecture can be modified to generate sequences with high probabilities, such as in Figs.~5E\&F. The trick is to duplicate the hidden units, the weights, and the local potentials acting on the visible units, as sketched in the figure at the end of this subsection. By doing so, the sequences $\bf v$ are distributed according to
\begin{equation}
P_2({\bf v}) \propto \int \prod_{\mu} dh_{\mu1}\,dh_{\mu2} \; P({\bf v} | {\bf h}_1)\, P({\bf v} | {\bf h}_2) = P({\bf v})^2 \ .
\end{equation}
Hence, with the duplicated RBM, sequences with high probabilities in the original RBM model are given a boost compared to low-probability sequences.
Note that more subtle biases can be introduced by duplicating some (but not all) of the hidden units in order to give more importance in the sampling to the associated statistical features.
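In the alternate Gibbs sampler, the only change is the visible update, which receives doubled fields and the inputs from both hidden copies (a sketch reusing the helpers above):
\begin{verbatim}
def sample_v_duplicated(h1, h2, w, g, rng):
    """Visible update for the duplicated RBM: duplicated fields (2g) and
    inputs from both hidden copies, so that the stationary distribution of
    alternate Gibbs sampling is P_2(v) proportional to P(v)^2."""
    logits = 2.0 * g[None] + np.einsum('bm,mnq->bnq', h1 + h2, w)
    return categorical_sample(logits, rng)
\end{verbatim}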
\begin{center}
\includegraphics[width=.6\columnwidth,angle=0]{figure11.png}
\justify
\captionof{figure}{Duplicated RBM for biasing sampling toward high-probability sequences. Visible-unit configurations $\bf v$ are sampled from $P_2({\bf v})\propto P({\bf v})^2$.}
\label{figMethods2}
\end{center}
\subsection{Contact map estimation}
RBM can be used for contact prediction in a manner similar to pairwise coupling models, after derivation of an effective coupling matrix $J_{ij}^{\text{eff}}(a,b)$. Consider a sequence $\bf v$, and two sites $i,j$. Define the set of mutated sequences ${\bf v}^{a,b}$ with amino acid content: $v^{a,b}_k = v_k$ if $k \neq i,j$, $a$ if $k = i$, $b$ if $k=j$ (Fig.~6A).
The differential likelihood ratio
\begin{equation} \label{epistasis}
\Delta \Delta R_{ij}({\bf v}; a,a',b,b') \equiv \log \left[ \frac{P( {\bf v}^{a,b}) \, P({\bf v}^{a',b'})}{P({\bf v}^{a',b}) \, P({\bf v}^{a,b'})} \right] \ ,
\end{equation}
where $P$ is the marginal distribution in Eqn.~(\ref{marginal}), measures epistatic contributions to the double mutation $a \rightarrow a'$ and $b \rightarrow b'$ on, respectively, sites $i$ and $j$ in the background defined by sequence $\bf v$, see Fig.~6A. The effective coupling matrix is then defined as
\begin{equation} \label{couplings_from_epistasis}
J_{ij}^{\text{eff}}(a,b) = \esp{\frac{1}{q^2}\sum_{a',b'} \Delta \Delta R_{ij}({\bf v}; a,a',b,b')}_{MSA} \ ,
\end{equation}
where the average is taken over the sequences $\bf v$ in the MSA.
For a pairwise model, $\Delta \Delta R_{ij}$ does not depend on the background sequence ${\bf v}$, and Eqn.~(\ref{couplings_from_epistasis}) coincides with the true coupling in the zero-sum gauge. Contact estimators are based on the Frobenius norms of $J^{\text{eff}}$, with the Average Product Correction, see \cite{cocco2018inverse}.
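As a sketch, for a given background sequence the average over $a',b'$ in Eqn.~(\ref{couplings_from_epistasis}) reduces to a double-centering of a $q\times q$ energy matrix, since $\log P = -E_\text{eff} + \text{const}$ and the partition function cancels in the ratios; \texttt{e\_eff} below stands for a hypothetical callable returning the effective energy of an integer-encoded sequence, and the outer average over MSA backgrounds is left as a loop for the reader:
\begin{verbatim}
import numpy as np

def pair_energy_matrix(v, i, j, e_eff, q=21):
    """e[a, b] = E_eff(v^{a,b}) for all q^2 joint mutations of sites i, j."""
    e, vv = np.empty((q, q)), v.copy()
    for a in range(q):
        vv[i] = a
        for b in range(q):
            vv[j] = b
            e[a, b] = e_eff(vv)
    return e

def j_eff_one_background(v, i, j, e_eff, q=21):
    """(1/q^2) sum over a', b' of Delta-Delta-R, i.e. double-centering of -e."""
    e = pair_energy_matrix(v, i, j, e_eff, q)
    return -(e - e.mean(axis=0) - e.mean(axis=1, keepdims=True) + e.mean())

def apc_scores(F):
    """Average Product Correction of the Frobenius norms F[i, j]."""
    fi = F.mean(axis=1, keepdims=True)
    return F - fi * fi.T / F.mean()
\end{verbatim}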
\subsection{Code availability}
The Python 2.7 package for training and visualizing RBM, used to obtain the results reported in this work, is available at \url{https://github.com/jertubiana/ProteinMotifRBM}. In addition, Jupyter notebooks are provided for reproducing most figures of the article.
\section{Acknowledgments}
We thank D. Chatenay for useful comments on the manuscript and L. Posani for his help on lattice proteins. This work was partly funded by the ANR project RBMPro CE30-0021-01.
\section{Supporting files}
\begin{itemize}
\item Supporting file 1 (pdf): Weight logo for all hidden units inferred from the Kunitz domain MSA.
\item Supporting file 2: Weight logo for all hidden units inferred from the WW domain MSA.
\item Supporting file 3: Weight logo for all hidden units inferred from the LP MSA.
\item Supporting file 4: Weight logo of 12 Hopfield-Potts patterns inferred from the Hsp70 protein MSA. Same format as Appendix 1 Figures 14-16.
\item Supporting file 5: Weight logo and associated structures of the 10 weights with highest norms, excluding the gap modes for each of the 16 additional domains shown in Fig.~9.
\item Supporting file 6: Weight logo and associated structures of the 10 sparse (i.e. within the 30\% most sparse weights of the RBM) weights with highest norms, excluding the gap modes for each of the 16 additional domains shown in Fig.~9.
\end{itemize}
\section{Introduction}
Here is a 3-ball:
\begin{center}
\includegraphics[scale=0.65]{figures/3ball}
\end{center}
\noindent and here is a 3-ball with a handle attached:
\begin{center}
\includegraphics[scale=0.65]{figures/3ballhandle}
\end{center}
\noindent This is the Conway knot:
\begin{center}
\includegraphics[scale=0.8]{figures/conway}
\end{center}
Our knots will live in the 3-sphere $S^3$, which is the boundary of the 4-ball $B^4$. A knot is \emph{slice} if it bounds a smooth disk in the 4-ball. The term slice comes from the fact that such knots are cross sections (i.e., slices) of higher dimensional knots.
\begin{namedthm*}{Main Theorem}[Piccirillo \cite{Piccirillo}]
The Conway knot is not slice.
\end{namedthm*}
This knot is not the Conway knot:
\begin{center}
\includegraphics[scale=0.8]{figures/KT}
\end{center}
It is called the Kinoshita-Terasaka knot, and it is related to the Conway knot by \emph{mutation}, that is, we cut out a ball containing part of the knot, rotate it $180^\circ$, and glue it back in.
\begin{center}
\labellist
\pinlabel {$180^\circ$} at 166 148
\endlabellist
\includegraphics[scale=0.8]{figures/ConwayKTmutation}
\end{center}
The Kinoshita-Terasaka knot is slice. Here is a slightly different diagram of the Kinoshita-Terasaka knot. As we can see, it bounds an immersed disk in $S^3$:
\begin{center}
\includegraphics[scale=0.8]{figures/KT_ribbon}
\end{center}
Thinking of this immersed disk as sitting in the $S^3$ boundary of the 4-ball, we can push the surface into the 4-ball and eliminate the arcs of self-intersection by pushing one sheet of the surface near the arc deeper into the 4-ball, giving us an embedded disk in the 4-ball.
One way to study knots is to use a knot invariant. A knot invariant is a mathematical object (like a number, a polynomial, or a group) that we assign to a knot. Knot invariants can be used to distinguish knots. Certain knot invariants obstruct a knot from being slice. One such invariant is Rasmussen's $s$-invariant, which to a knot $K$ assigns an integer $s(K)$. If $s(K) \neq 0$, then $K$ is not slice.
Since the Conway knot and the Kinoshita-Terasaka knot are mutants, they have a lot in common. For example, the $s$-invariant of both knots is zero. In fact, all known knot invariants that obstruct sliceness vanish for the Conway knot. That leads one to wonder: how did Piccirillo show that the Conway knot is not slice? Her key idea was to find some other knot $K'$ such that the Conway knot is slice if and only if $K'$ is slice, and to obstruct $K'$ from being slice. The goal of this article is to give some context for her result and sketch the main ideas of her proof.
\section{Telling knots apart}\label{sec:tellingapart}
The fundamental group is one of the first algebraic invariants encountered in a topology class. A knot is homeomorphic to $S^1$, so its fundamental group is always isomorphic to the integers. However, instead of studying the knot, we can study the space around the knot. That is, we consider the \emph{knot complement}, consisting of the 3-sphere minus a neighborhood of the knot. The \emph{knot group} is the fundamental group of the knot complement.
Typically, one studies knots up to \emph{ambient isotopy}. Intuitively, this means that we can wiggle and stretch our knot, but we cannot cut it nor let it pass through itself. Since isotopic knots have homeomorphic complements and homeomorphic spaces have isomorphic fundamental groups, the knot group is an invariant of the isotopy class of a knot.
Here are two knots, the unknot and the trefoil:
\begin{center}
\includegraphics[scale=0.8]{figures/unknot}
\includegraphics[scale=0.8]{figures/trefoil}
\end{center}
\begin{example}
The knot group of the unknot is $\mathbb{Z}$.
\end{example}
\begin{example}
The knot group of the trefoil is $\langle x, y \mid x^2=y^3 \rangle$. This group is non-abelian, since it surjects onto the symmetric group $S_3$. Therefore, the trefoil and the unknot are different.
\end{example}
\noindent Riley \cite{Riley} distinguished the Kinoshita-Terasaka knot and the Conway knot up to isotopy, using a delicate analysis of their fundamental groups.
Since it can often be difficult to tell if two group presentations describe isomorphic groups, it can be convenient to pass to more tractable invariants. One example is the Alexander polynomial, denoted $\Delta(t)$, which Fox \cite{FoxI} showed can be algorithmically computed from a group presentation for the knot complement.
\begin{example}
The Alexander polynomial of the unknot is $1$.
\end{example}
\begin{example}
The Alexander polynomial of the trefoil is $t^2-t+1$.
\end{example}
\begin{example}
The Conway knot and the Kinoshita-Terasaka knot both have Alexander polynomial $1$.
\end{example}
The Alexander polynomial is invariant under mutation, which explains why the Conway knot and the Kinoshita-Terasaka knot have the same Alexander polynomial. There are several other polynomial knots invariants, such as the Jones, HOMFLY-PT, and Kauffman polynomials, all of which are also invariant under mutation. Knot Floer homology \cite{OSknots} and Khovanov homology \cite{Khovanov} categorify the Alexander and Jones polynomials; that is, to a knot, they assign a graded vector space whose graded Euler characteristic is the desired polynomial. A certain version of knot Floer homology is invariant under mutation \cite{Zibrowius}, as are versions of Khovanov homology \cite{Bloom, Wehrli}. Moreover, Rasmussen's $s$-invariant is invariant under mutation \cite{KWZ}; this gives a quick way to determine that the $s$-invariant of the Conway knot is zero, since it is the mutant of a slice knot.
As we already observed, isotopic knots have homeomorphic complements. What about the converse? If two knots have
homeomorphic complements, then are they isotopic? This question was answered in the affirmative in 1989 by Cameron Gordon and John Luecke \cite{GordonLuecke}, who proved that knots are determined by their complements.
This is in contrast to links.
For example, the two links below have homeomorphic complements, but are not isotopic, since in the first, both components are unknots, while in the second, one component is the trefoil.
\begin{center}
\includegraphics[scale=0.8]{figures/links}
\end{center}
\section{Measuring the complexity of a knot}
How can we measure the complexity of a knot $K$? One such measure is the \emph{unknotting number}, denoted $u(K)$, which is the minimal number of times a knot must be passed through itself to untie it. Both the Conway knot and the Kinoshita-Terasaka knot can be unknotted by changing a single crossing, hence the unknotting number is one for both of them. Note that a knot has unknotting number zero if and only if it is the unknot.
There is a natural way to add together two knots $K_1$ and $K_2$, called the \emph{connected sum}, denoted $K_1 \# K_2$. Here is the connected sum of the trefoil and the Conway knot:
\begin{center}
\includegraphics[scale=0.8]{figures/connectedsum}
\end{center}
What is the unknotting number of $K_1 \# K_2$? A natural guess is that $u(K_1 \# K_2) = u(K_1) + u(K_2)$. One can readily check that $u(K_1 \# K_2) \leq u(K_1) + u(K_2)$. However, whether or not the reverse inequality holds remains an open question!
Here is another measure of complexity. Every knot in the 3-sphere bounds a compact, oriented, connected surface. Such a surface is called a \emph{Seifert surface} for the knot. Recall that compact, oriented surfaces with connected boundary are characterized up to homeomorphism by their genus. The surfaces below all have genus one:
\begin{center}
\includegraphics[scale=0.8]{figures/surfaces}
\end{center}
The boundary of each of the first two surfaces is the unknot. The boundary of the last surface is the trefoil.
The \emph{genus} of a knot $K$ is the minimal genus of a Seifert surface for $K$. The unknot is the only knot that bounds a disk. In other words, a knot has genus zero if and only if it is the unknot. In contrast to unknotting number, we know how genus behaves under connected sum; Schubert \cite{Schubert1949} showed that genus is additive under connected sum, that is, $g(K_1 \# K_2) = g(K_1) + g(K_2)$.
The Alexander polynomial gives a lower bound on the genus of a knot:
\[ \frac{1}{2} \deg \Delta_K(t) \leq g(K). \]
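For instance, for the trefoil this bound reads
\[
g(\text{trefoil}) \geq \frac{1}{2} \deg (t^2-t+1) = 1,
\]
and since the genus-one surface pictured above has the trefoil as its boundary, the trefoil has genus exactly one.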
Since the Kinoshita-Terasaka knot and the Conway knot both have Alexander polynomial one, this bound does not provide any useful information about their genera; for that, we turn to a result of Gabai, using foliations:
\begin{example}[\cite{Gabai}]
The Conway knot has genus three. The Kinoshita-Terasaka knot has genus two.
\end{example}
The unknot is the only knot with unknotting number zero, and it's also the only knot with genus zero. What about a measure of complexity where there are nontrivial knots that are also simple? Enter the \emph{slice genus}.
Recall that $S^3$ is the boundary of the 4-ball, and that a knot $K$ in $S^3$ is \emph{slice} if it bounds a smooth disk in the 4-ball. Such a disk is called a \emph{slice disk} for $K$. Not every knot $K$ bounds a smooth disk in the 4-ball, but every knot does bound a smooth compact, oriented, connected surface in the 4-ball. (One way to obtain such a surface is by pushing a Seifert surface for $K$ into the 4-ball.) The minimal genus of such a surface is called the \emph{slice genus} of $K$. Slice knots are precisely those knots with slice genus zero. Of course the unknot is slice, but there are also infinitely many nontrivial knots which are slice. For example, the Kinoshita-Terasaka knot is slice. Unlike the ordinary genus of a knot, slice genus is not additive under connected sum.
The Alexander polynomial can obstruct sliceness: if $K$ is slice, then $\Delta_K(t)$ is of the form $t^n f(t) f(t^{-1})$ for some polynomial $f$ and some natural number $n$.
\begin{example}
The trefoil is not slice: its Alexander polynomial $t^2-t+1$ is irreducible, while a degree-two polynomial of the form $t^n f(t) f(t^{-1})$ would factor into two linear terms.
\end{example}
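For the Conway knot, by contrast, this criterion is vacuous: its Alexander polynomial is
\[
1 = t^0 \cdot f(t) \cdot f(t^{-1}), \quad \text{with } f = 1,
\]
so the factoring condition holds and gives no obstruction to sliceness.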
Closely related to the notion of sliceness is the following equivalence relation: two knots $K_0$ and $K_1$ are \emph{concordant} if they cobound an annulus $A$ in $S^3 \times [0,1]$, where the boundary of $A$ is $K_0 \subset S^3 \times \{ 0\}$ and $K_1 \subset S^3 \times \{ 1\}$. One can check that a knot is slice if and only if it is concordant to the unknot.
Note that we required our surfaces to be smoothly embedded. What would happen if we just asked for topologically embedded disks in $B^4$? It turns out that every knot bounds a topologically embedded disk in $B^4$. Recall that the \emph{cone} of a space $X$ is $\Cone(X) = (X \times [0,1])/(X \times \{0\})$. Since $\Cone (S^3, K) = (B^4, B^2)$, every knot $K$ in $S^3$ bounds a topological disk in $B^4$, but the disk is not smoothly embedded, because of the cone point. Rather than requiring smoothness, one can instead require that the disk be locally flat; a knot that bounds a locally flat disk is called \emph{topologically slice}. Freedman \cite{Freedmandisk} proved that any knot with Alexander polynomial one is topologically slice; in particular, the Conway knot is topologically slice. Work of Donaldson \cite{Donaldson} implies that there are topologically slice knots that are not slice. Many slice obstructions actually obstruct topological sliceness, which is part of the reason why showing the Conway knot is not slice is so difficult.
\section{An equivalent condition for sliceness}
There are many invariants that obstruct sliceness, such as the aforementioned factoring of the Alexander polynomial, integer-valued invariants $\tau$ and $\nu$ coming from knot Floer homology \cite{OS4ball, OSrational}, and Rasmussen's integer-valued invariant $s$ coming from Lee's perturbation of Khovanov homology \cite{Rasmussen4ball, Lee}. These invariants (and many more!) all vanish for the Conway knot.
(In my PhD thesis, I defined a new slice obstruction. One of the first questions people asked me was what its value was on the Conway knot; sadly, the obstruction vanishes for the Conway knot.)
Recall that in Section \ref{sec:tellingapart}, starting from a knot $K$ in $S^3$, we built a 3-manifold, the knot complement. Piccirillo's strategy for showing that the Conway knot is not slice relies on building a 4-manifold, called the \emph{knot trace}, from a knot $K$ in $S^3$. We will denote the trace of $K$ by $X(K)$. The following folklore result (see \cite{FoxMilnor}) is a key ingredient in Piccirillo's proof:
\begin{namedthm*}{Trace Embedding Lemma}
A knot $K$ is slice if and only if its trace $X(K)$ smoothly embeds in $S^4$.
\end{namedthm*}
In contrast to the fact that knots are determined by their complements, knots are not determined by their traces. That is, there exist non-isotopic knots $K$ and $K'$ with the same (i.e., diffeomorphic) traces \cite{Akbulut2dim}.
Allison Miller and Lisa Piccirillo \cite{MillerPiccirillo} proved something even stronger: they showed that there exist knots $K$ and $K'$ with the same trace such that $K$ and $K'$ are not even concordant. This disproved a conjecture of Abe \cite{Abe}. Miller and Piccirillo's result implies that it's possible to have knots $K$ and $K'$ with the same trace, but for, say, $s(K)$ to be zero while $s(K')$ is nonzero.
We are slowly uncovering Piccirillo's strategy for proving the Conway knot is not slice: find a knot $K'$ with the same trace as the Conway knot, and show that $K'$ is not slice. Then the Trace Embedding Lemma implies that the Conway knot is not slice either.
\section{Handles and traces}
Let $B^n$ denote the $n$-ball.
Recall the 3-ball with a handle attached from the beginning of these notes. More specifically, the handle consists of $B^1 \times B^2$ attached to $S^2 = \partial B^3$ along $S^0 \times B^2 = \partial B^1 \times B^2$. This handle is called a $3$-dimensional $1$-handle.
\begin{center}
\labellist
\pinlabel {attaching region} at 133 93
\pinlabel {core} at 96 140
\endlabellist
\includegraphics[scale=1]{figures/3ballhandle_attachingregion}
\end{center}
More generally, we consider an $n$-dimensional $k$-handle $B^k \times B^{n-k}$. Such a handle can be attached to an $n$-manifold $M$ with boundary by identifying a submanifold $S^{k-1} \times B^{n-k} \subset \partial M$ with $S^{k-1} \times B^{n-k} = \partial B^k \times B^{n-k}$. The submanifold $S^{k-1} \times B^{n-k} \subset \partial M$ is called the \emph{attaching region} of the handle. The \emph{core} of the handle is $B^k \times \{0\}$, where we think of $B^k$ as the unit ball in $\mathbb{R}^k$.
To build the knot trace, we will consider a $4$-dimensional $2$-handle $B^2 \times B^2$ attached to $S^3 = \partial B^4$. We need to specify the attaching region $S^1 \times B^2 \subset S^3$. This is just a tubular neighborhood of a knot. (The careful reader will note that we need to specify a parametrization of the neighborhood with $S^1 \times B^2$; this is called the \emph{framing} of the knot. For ease of exposition, we will largely suppress this key point from our discussion.) The \emph{trace} of a knot $K$ is the result of attaching a ($0$-framed) $2$-handle to $S^3 = \partial B^4$ along $K$. This is just a higher dimensional analog of the 1-handle attached to the 3-ball above.
\section{Knots with the same trace}
In order to understand Piccirillo's construction of a knot with the same trace as the Conway knot, it will be helpful to consider an analogy one dimension lower, in three dimensions, where we can more easily visualize things.
Consider the 3-ball with a 1-handle attached. Recall that a (3-dimensional) 2-handle is just a thickened disk $B^2 \times B^1$, which we attach along an annulus $S^1 \times B^1$. Suppose we attach a 2-handle along the grey annulus:
\begin{center}
\includegraphics[scale=1]{figures/3ballhandle_cancelinghandles}
\end{center}
Observe that the resulting manifold $M_1$ is homeomorphic (in fact, diffeomorphic, after smoothing corners) to $B^3$!
We could instead attach a 2-handle along the following grey thickened curve:
\begin{center}
\includegraphics[scale=1]{figures/3ballhandle_cancelinghandles2}
\end{center}
This would yield a manifold, $M_2$, which is again homeomorphic to $B^3$.
If we attached 2-handles to both of the grey curves, we obtain a manifold $M$ that is homeomorphic to $B^3$ with a 2-handle attached. Note that $M$ is built from a 3-ball, one 1-handle, and two 2-handles. We could view $M$ as $M_1 \cong B^3$ with a 2-handle attached
or we could view $M$ as $M_2 \cong B^3$ with a 2-handle attached.
Notice that the attaching regions for these 2-handles are just (thickened) embedded circles in $S^2 = \partial B^3$. Of course, embedded circles in $S^2$ are not especially interesting. But what happens when we bump things up a dimension?
Now consider the trace $X(C)$ of the Conway knot $C$. Piccirillo found a clever way to build $X(C)$ as a 4-ball, a 1-handle, and two 2-handles. (All of the handles here are 4-dimensional.) If you take the 4-ball, the 1-handle, and the first 2-handle, you get a 4-ball, and the second 2-handle is attached along the Conway knot $C$ in $S^3$ (the boundary of the 4-ball). On the other hand, if you take the 4-ball, the 1-handle, and the second 2-handle, you still get a 4-ball, and the remaining 2-handle is attached along some different knot $K'$. This means that $C$ and $K'$ have the same trace! Here is Piccirillo's knot $K'$ that has the same trace as the Conway knot:
\begin{center}
\includegraphics[scale=1]{figures/Kprime}
\end{center}
\section{Proof of the Trace Embedding Lemma}
Now that we have seen handles and traces, we will sketch the proof of the Trace Embedding Lemma.
Suppose that $K$ is slice. This means that $K$ bounds a smooth disk in the 4-ball. Recall that $S^4$ is the union of two 4-balls, say $B^4_1$ and $B^4_2$. Think of $K$ as sitting in the common $S^3$ boundary of these two 4-balls. Since $K$ is slice, it bounds a slice disk $D$ in say $B^4_2$. Recall that a 4-dimensional 2-handle is just $B^2 \times B^2$. Then $B^4_1$ together with a closed neighborhood of $D$ is the trace of $K$, smoothly embedded in $S^4$. A schematic of $S^4$ as the union of two 4-balls is shown below:
\begin{center}
\labellist
\pinlabel {$B_1^4$} at 76 40
\pinlabel {$B_2^4$} at 76 120
\pinlabel {$K$} at 72.5 82
\pinlabel $S^3$ at 25 70
\endlabellist
\includegraphics[scale=1]{figures/trace1}
\end{center}
The slice disk is represented by the thick grey curve. The trace of $K$ consists of $B_1^4$ together with a neighborhood of the slice disk for $K$.
Now suppose that $X(K)$ embeds in $S^4$. Consider the piecewise linear embedded $S^2$ in $X(K)$ consisting of the core of the $2$-handle together with the cone of $K$. Smoothly embed $X(K)$ in $S^4$; composition gives a piecewise linear embedding of $S^2$ in $S^4$, which is smooth away from the cone point $p$. Now take a small neighborhood around $p$ in $S^4$. The complement of this neighborhood is a 4-ball $B$. Consider the piecewise linear embedding of $S^2$ intersected with $B$; we've cut out the cone point, so this gives a slice disk in $B$ for $K$ in $\partial B$. A schematic of the trace embedded in $S^4$ is shown below:
\begin{center}
\labellist
\pinlabel {\small{$p$}} at 68 50
\pinlabel {$S^3$} at 85 45
\pinlabel {slice disk for $K$} at -10 50
\pinlabel {$K$} at 75 80
\endlabellist
\includegraphics[scale=1]{figures/trace}
\end{center}
The 4-ball $B$ is everything outside of the $S^3$ dotted circle, and the thick grey curve shows the slice disk for $K$.
\section{Showing that $K'$ is not slice}
The goal is now to find a way to show that $K'$, the knot that shares a trace with the Conway knot, is not slice. It turns out that some slice obstructions, such as the invariant $\nu$ coming from knot Floer homology, are actually trace invariants: if two knots $K_1$ and $K_2$ have the same trace, then $\nu(K_1) = \nu(K_2)$ \cite{HMP}.
Luckily, the same is not true for Rasmussen's $s$-invariant. Using a computer program and some simple algebraic observations, Piccirillo shows that $s(K')=2$, implying that $K'$ is not slice. Since $K'$ and the Conway knot have the same trace, the Trace Embedding Lemma implies that the Conway knot is not slice.
\section{What's next?}
Now that we know exactly which knots with fewer than 13 crossings are slice, what's next? Of course, one could try to determine exactly which knots with fewer than 14 or 15 crossings are slice. But why not try to apply some of our tools to other open problems?
The smooth 4-dimensional Poincar\'e conjecture posits that a smooth 4-manifold
that is homeomorphic to $S^4$ is actually diffeomorphic to $S^4$. To disprove the conjecture, one wants to find an \emph{exotic} $S^4$, that is, a smooth 4-manifold that is homeomorphic but not diffeomorphic to $S^4$.
One possible approach (outlined in
\cite{FGMW}) to disprove the smooth 4-dimensional Poincar\'e conjecture relies on Rasmussen's $s$-invariant, as follows.
There are many constructions of potentially exotic 4-spheres $\Sigma$ (see, for example \cite{CS}; note that certain infinite subfamilies of these are known to be standard by \cite{Akbulut, GompfCS, MeierZupan}).
By removing a neighborhood of a point in $\Sigma$, one can instead study potentially exotic 4-balls $\beta$. The difficult part is now determining whether or not $\beta$ is exotic, or if it is in fact just the standard $B^4$.
While slice obstructions like $\nu$ actually obstruct a knot from being slice in an exotic $4$-ball, it remains possible that the $s$-invariant only obstructs a knot from being slice in the standard 4-ball. The game is then to try to find a knot $K$ that is slice in a potentially exotic 4-ball $\beta$. If $s(K)$ is non-zero, then $K$ is not slice in the standard 4-ball, thereby implying that $\beta$ must be exotic.
Both of the key steps in this approach (constructing the potentially exotic 4-ball and computing $s$) seem difficult. But maybe there is some other way to get a handle on the problem in order to trace a solution. I look forward to reading an article about such a result!
\section*{Acknowledgements}
I would like to thank JungHwan Park and Lisa Piccirillo for helpful comments on an earlier draft.
\bibliographystyle{amsalpha}
\def\MR#1{}
\section{Chacon infinite transformation}
\subsection{Introduction}
The classical Chacon transformation, which is a particular case of a finite measure preserving rank-one transformation, is considered one of the jewels of ergodic theory~\cite{kt2006}. It has been formally described in~\cite{Friedman}, following ideas introduced by Chacon in 1966. Among other properties, it has been proved to have no nontrivial factor, and to commute only with its powers~\cite{DJ1978}. More generally, it has minimal self-joinings~\cite{DJRS1980}. For a symbolic version of this transformation, Del~Junco and Keane~\cite{DJK1985} have also shown that if $x$ and $y$ are not on the same orbit, and at least one of them is outside a countable set of exceptional points, then $(x,y)$ is generic for the product measure.
Adams, Friedman and Silva introduced in 1997 (\cite{AFS1997}, Section~2) an infinite measure preserving rank-one transformation which can be seen as the analog of the classical Chacon transformation in infinite measure. They proved that all its Cartesian powers are conservative and ergodic.
This transformation, denoted by $T$ throughout the paper, is the main object of the present work. We recall its construction on $\mathbb{R}_+$ by cutting and stacking in the next section. In particular, we are interested in lifting known results about self-joinings of Chacon transformation to the infinite-measure case. This leads us to study all ergodic measures on $(\mathbb{R}_+)^d$ which are boundedly finite and $T^{\times d}$-invariant: we prove in Theorem~\ref{thm:msj} that all such measures are products of so-called \emph{diagonal measures}, which are measures generalizing in some way the measures supported on a graph (see Definition~\ref{def:diagonal}). These diagonal measures are studied in details in Section~\ref{sec:diagonal}. Surprisingly, besides measures supported on a graph arising from powers of $T$, we prove the existence of some weird invariant measures whose marginals are singular with respect to the Lebesgue measure. (It may happen that these marginals take only the values 0 or $\infty$, which is for example the
case for a product measure. But even in such a case, it makes sense to consider their absolute continuity.) However, we prove in Theorem~\ref{thm:product_of_graphs} that, if a $T^{\times d}$-invariant boundedly
finite measure has all its marginals absolutely continuous with respect to the Lebesgue measure, then its ergodic components are products of graph joinings arising from powers of $T$. We derive from these results in Section~\ref{sec:consequences} that the infinite Chacon transformation has trivial centralizer, and has no nontrivial factor. At the end of the paper, we prove in Annex~A a result used in the proof of Theorem~\ref{thm:msj} which can be of independent interest: Theorem~\ref{thm:product} provides sufficient conditions for an infinite measure preserving dynamical system defined on a Cartesian product to decompose into a direct product of two dynamical systems.
Another important motivation for the present work comes from the study of $T$-point processes, which we briefly introduce now. Given an infinite measure preserving dynamical system $(X,\mathscr{A},\mu,T)$ where $X$ is a complete separable metric space, we consider the space $X^*$ of boundedly finite, simple counting measures on $(X,\mathscr{A})$, which are measures of the form
\[
\xi=\sum_{i\in I}\delta_{x_i}
\]
where $I$ is at most countable, $x_i\neq x_j$ whenever $i\neq j$, and
\[ \xi(A) = \left|\left\{ i\in I: x_i\in A\right\}\right| <\infty
\] for all bounded measurable $A\subset X$.
For such a $\xi$, we define\footnote{In general, if $\varphi$ is any measurable map from $(X,\mathscr{A})$ to $(Y,\mathscr{B})$, and if $m$ is a measure on $(X,\mathscr{A})$, we denote by $\varphi_*(m)$ the pushforward image of $m$ by $\varphi$.}
\[
T_{*}\left(\xi\right):=\sum_{i\in I}\delta_{T\left(x_{i}\right)}.
\]
It is not true that for any $\xi\in X^*$, $T_{*}\left(\xi\right)\in X^*$. However, we can consider probability measures on $X^*$ which are $T_*$-invariant.
We define a \emph{$T$-point process} as a $T_*$-invariant probability measure on $X^*$ which satisfies $\mathbb{E}[\xi(A)] = \mu(A)$ for each $A\in\mathscr{A}$. The canonical example of a $T$-point process is given by the Poisson process of intensity $\mu$, providing the so-called \emph{Poisson suspension} associated to $(X,\mathscr{A},\mu,T)$.
We show in~\cite{sushis} that, if $T$ satisfies the properties proved in Theorem~\ref{thm:product_of_graphs}, then any $T$-point process satisfying some integrability condition is a superposition of Poisson processes.
\subsection{Construction of Chacon infinite transformation}
We define the transformation on $X:=\mathbb{R}_+$: In the first step we consider the interval $[0,1)$, which is cut into three subintervals of equal length. We take the extra interval $[1,4/3)$ and stack it above the middle piece, and 4 other extra intervals of length $1/3$ which we stack above the rightmost piece. Then we stack all intervals left under right, getting a tower of height $h_1=8$. The transformation $T$ maps each point to the point exactly above it in the tower. At this step $T$ is not yet defined on the top level of the tower.
After step $n$ we have a tower of height $h_n$, called tower~$n$, made of intervals of length $1/3^n$ which are closed to the left and open to the right. At step~$(n+1)$, tower~$n$ is cut into three subcolumns of equal width. We add an extra interval of length $1/3^{n+1}$ above the middle subcolumn and $3h_n+1$ other extra intervals above the third one. We pick the extra intervals successively by taking the leftmost interval of desired length in the unused part of $\mathbb{R}_+$. Then we stack the three subcolumns left under right and get tower~$n+1$ of height $h_{n+1}=2(3h_n+1)$.
Extra intervals used at step $n+1$ are called \emph{$(n+1)$-spacers}, so that tower~$(n+1)$ is the union of tower~$n$ with $3h_{n}+2$ such $(n+1)$-spacers. The total measure of the added spacers being infinite, we get at the end a transformation $T$ defined on $\mathbb{R}_+$, which preserves the Lebesgue measure $\mu$.
For each $n\ge1$, we define $C_n$ as the bottom half of tower~$n$: $C_n$ is the union of $h_n/2$ intervals of width $1/3^n$, which contains the whole tower~$(n-1)$. Notice that $C_n\subset C_{n+1}$, and that $X=\bigcup_n C_n$.
We also define a function $t_n$ on tower~$n$, taking values in $\{1,2,3\}$, which indicates for each point whether it belongs to the first, the second, or the third subcolumn of tower~$n$.
\begin{figure}[htp]
\centering
\includegraphics{construction.pdf}
\caption{Construction of Chacon infinite measure preserving transformation by cutting and stacking}
\label{fig:construction}
\end{figure}
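The construction is easy to simulate. Here is a minimal Python sketch (not part of the paper) that builds the levels of tower~$n$ with exact rational arithmetic and applies $T$ one step at a time; since $h_{n+1}=2(3h_n+1)$ makes the heights grow like $6^n$, it is only practical for small $n$:
\begin{verbatim}
from fractions import Fraction

def build_tower(n):
    """Levels of tower n, from bottom to top, as left endpoints of
    intervals of width 3**(-n); T maps level j onto level j+1."""
    width, levels, next_free = Fraction(1), [Fraction(0)], Fraction(1)
    for _ in range(n):
        width /= 3
        h = len(levels)
        left  = list(levels)                    # cut each level in three
        mid   = [l + width for l in levels]
        right = [l + 2 * width for l in levels]
        mid.append(next_free); next_free += width    # spacer above the middle
        for _ in range(3 * h + 1):                   # 3 h_n + 1 spacers above the right
            right.append(next_free); next_free += width
        levels = left + mid + right                  # stack left under right
    return levels, width

def T(x, levels, width):
    """One application of the transformation, for x off the top level."""
    j = next(k for k, l in enumerate(levels[:-1]) if l <= x < l + width)
    return levels[j + 1] + (x - levels[j])

# Heights satisfy h_{n+1} = 2(3 h_n + 1): 1, 8, 50, 302, 1814, ...
assert [len(build_tower(n)[0]) for n in range(5)] == [1, 8, 50, 302, 1814]
\end{verbatim}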
\section{Ergodic invariant measures for Cartesian powers of the infinite Chacon transformation}
Let $d\ge1$ be an integer. We consider the $d$-th Cartesian power of the transformation $T$:
\[T^{\times d}:X^d\ni(x_1,\ldots,x_d)\mapsto(Tx_1,\ldots,Tx_d).\]
\begin{definition}
\label{def:locally-finite}
A measure $\sigma$ on $X^d$ is said to be \emph{boundedly finite} if $\sigma(A)<\infty$ for every bounded measurable subset $A\subset X^d$.
\end{definition}
Equivalently, $\sigma$ is boundedly finite if $\sigma(C^{d}_n)<\infty$ for each $n$. Obviously, boundedly finite implies $\sigma$-finite.
\subsection{Products of diagonal measures}
Our purpose in this section is to describe, for each $d\ge1$, all boundedly finite measures on $X^d$ which are ergodic for the action of $T^{\times d}$.
Examples of such measures are given by so-called \emph{graph joinings}:
A measure $\sigma$ on $X^d$ is called a graph joining if there exist some real $\alpha>0$ and $(d-1)$ $\mu$-preserving transformations $S_2,\ldots,S_d$, commuting with $T$, and such that
\begin{equation}
\label{eq:graph_joining}
\sigma(A_1\times\cdots\times A_d) = \alpha \mu (A_1\cap S_2^{-1}(A_2)\cap\cdots\cap S_d^{-1}(A_d)).
\end{equation}
In other words, $\sigma$ is the pushforward measure of $\mu$ by the map $x\mapsto(x,S_2x,\ldots,S_dx)$. In the case where the transformations $S_j$ are powers of $T$, such a graph joining is a particular case of what we call a \emph{diagonal measure}, which we define now.
From the properties of the sets $C_n$, it follows that $C^{d}_n\subset C^{d}_{n+1}$, and that $X^d=\bigcup_n C^{d}_n$. We call \emph{$n$-box} a subset of $X^d$ which is a Cartesian product $I_1\times \cdots\times I_d$, where each $I_j$ is a level of $C_n$. We call \emph{$n$-diagonal} a finite family of $n$-boxes of the form
\[
B, T^{\times d} B,\ldots, (T^{\times d})^{\ell}B,
\]
which is maximal in the following sense: $ (T^{\times d})^{-1}B\not\subset C^{d}_n$ and $ (T^{\times d})^{\ell+1}B\not\subset C^{d}_n$.
\begin{definition}
\label{def:diagonal}
A boundedly finite, $T^{\times d}$-invariant measure $\sigma$ on $X^d$ is said to be a \emph{diagonal measure} if there exists an integer $n_0$ such that, for all $n\ge n_0$, $\sigma|_{C^{d}_n}$ is concentrated on a single $n$-diagonal.
\end{definition}
Note that, for $d=1$, there is only one $n$-diagonal for any $n$, therefore $\mu$ is itself a 1-dimensional diagonal measure. A detailed study of diagonal measures will be presented in Section~\ref{sec:diagonal}.
\begin{theo}
\label{thm:msj}
Let $d\ge1$, and let $\sigma$ be a nonzero, $T^{\times d}$-invariant, boundedly finite measure on $X^d$, such that the system $(X^d,\sigma,T^{\times d})$ is ergodic. Then there exists a partition of $\{1,\ldots,d\}$ into $r$ subsets $I_1,\ldots,I_r$, such that $\sigma=\sigma^{I_1}\otimes \cdots\otimes \sigma^{I_r}$, where $\sigma^{I_j}$ is a diagonal measure on $X^{I_j}$.
If the system $(X^d,\sigma,T^{\times d})$ is totally dissipative, $\sigma$ is a diagonal measure supported on a single orbit.
\end{theo}
We will prove the theorem by induction on $d$. The following proposition deals with the case $d=1$.
\begin{prop}
\label{prop:d=1}
The Lebesgue measure $\mu$ is, up to a multiplicative constant, the only $T$-invariant, boundedly finite measure on $X$.
\end{prop}
\begin{proof}
Let $\sigma$ be a $T$-invariant $\sigma$-finite measure. Then for each $n$, the intervals which are levels of tower~$n$ have the same measure. Since the successive towers exhaust $\mathbb{R}_+$, we get that for each $n$, all intervals of the form $[j/3^n,(j+1)/3^n)$ for integers $j\ge 0$ have the same measure $\sigma_n$. Obviously $\sigma_{n+1}=\sigma_n/3$. Since $\sigma$ is boundedly finite, $\sigma_0<\infty$. Hence $\sigma_n<\infty$ and $\sigma$ is, up to the multiplicative constant $\sigma_0$, equal to the Lebesgue measure.
\end{proof}
Observe that assuming only $\sigma$-finiteness for the measure $\sigma$ is not enough: The counting measure on rational points is $\sigma$-finite, $T$-invariant, but singular with respect to Lebesgue measure. Can we have a counterexample where $\sigma$ is conservative?
\subsection{Technical lemmas}
In the following, $d$ is an integer, $d\ge2$.
\begin{lemma}
\label{lemma:BtoSB}
Let $G_1\sqcup G_2=\{1,\ldots,d\}$ be a partition of $\{1,\ldots,d\}$ into two disjoint sets, one of which is possibly empty. Let us define a transformation $S:X^d\to X^d$ by
\[
S(y_1,\ldots, y_d) := (z_1,\ldots,z_d),\text{ where } z_i:=\begin{cases}
T y_i &\text{ if }i\in G_1, \\
y_i &\text{ if }i\in G_2.
\end{cases}
\]
Let $n\ge1$, let $B$ be an $n$-box,
and let $x=(x_1,\ldots,x_d)\in C^{d}_n$. If $t_n(x_i)=1$ for $i\in G_1$ and $t_n(x_i)=2$ for $i\in G_2$, then
\[
x\in B\Longleftrightarrow (T^{\times d})^{h_n+1}x\in SB.
\]
Similarly, if $t_n(x_i)=2$ for $i\in G_1$ and $t_n(x_i)=3$ for $i\in G_2$, then
\[
x\in SB\Longleftrightarrow (T^{\times d})^{-h_n-1}x\in B.
\]
\end{lemma}
\begin{proof}
Let $x=(x_1,\ldots,x_d)\in C^{d}_n$ such that $t_n(x_i)=1$ for $i\in G_1$ and $t_n(x_i)=2$ for $i\in G_2$.
For each $1\le i\le d$, let $L_i$ be the level of $C_n$ containing $x_i$.
If $i\in G_1$, $T^j x_i$, $j$ ranging from $1$ to $h_n+1$, never goes through an $(n+1)$-spacer, hence $T^{h_n+1}x_i\in T L_i$ (see Figure~\ref{fig:construction}). If $i\in G_2$, $T^j x_i$, $j$ ranging from $1$ to $h_n+1$, goes through exactly one $(n+1)$-spacer, hence $T^{h_n+1}x_i\in L_i$. Hence, $(T^{\times d})^{h_n+1}x\in S(L_1\times\cdots\times L_d)$. Observe that, since $B$ is an $n$-box, $B\subset C^{d}_n$, thus both $B$ and $SB$ are Cartesian products of levels of tower~$n$.
We then get
\begin{align*}
x\in B & \Longleftrightarrow B=L_1\times\cdots\times L_d\\
& \Longleftrightarrow SB= S(L_1\times\cdots\times L_d)\\
& \Longleftrightarrow (T^{\times d})^{h_n+1}x\in SB.
\end{align*}
The case $t_n(x_i)=2$ for $i\in G_1$ and $t_n(x_i)=3$ for $i\in G_2$ is handled in the same way.
\end{proof}
\begin{lemma}
\label{lemma:tn}
Let $n\ge2$, $x=(x_1,\ldots,x_d)\in C^{d}_{n-1}$ and $\ell\ge n$. If $t_\ell(x_i)\in\{1,2\}$ for each $1\le i\le d$, then $(T^{\times d})^{h_\ell+1}x\in C^{d}_n$.
\end{lemma}
\begin{proof}
Let $B_\ell$ (respectively $B_n$) be the $\ell$-box (respectively the $n$-box) containing $x$. Observe that $B_\ell\subset B_n\subset C^{d}_{n-1}$ because $x\in C^{d}_{n-1}$. Applying Lemma~\ref{lemma:BtoSB}, we get $(T^{\times d})^{h_\ell+1}x\in SB_\ell\subset SB_n$, where $S$ is the transformation of $X^d$ acting as $T$ on coordinates $i$ such that $t_\ell(x_i)=1$ and acting as $\Id$ on other coordinates. Since $B_n\subset C^{d}_{n-1}$, $SB_n\subset C^{d}_n$, which ends the proof.
\end{proof}
\begin{definition}
Let $x=(x_1,\ldots,x_d)\in X^d$. For each integer $n\ge1$, we call \emph{$n$-crossing for $x$} a maximal finite set of consecutive integers $j\in\mathbb{Z}$ such that $(T^{\times d})^j x\in C^{d}_n$.
\end{definition}
Note that, when $j$ ranges over an $n$-crossing for $x$, $(T^{\times d})^j\ x$ successively belongs to the $n$-boxes constituting an $n$-diagonal, and that for each $1\le i\le d$, $t_n(T^jx_i)$ remains constant.
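With the simulation sketch from the construction section, $n$-crossings can be inspected numerically; the helper below (again illustrative, exact but slow, for small $n$ and short orbit segments only) lists the $n$-crossings for a $d$-tuple of points that are contained in $\{0,\ldots,j_{\max}\}$:
\begin{verbatim}
def crossings(x_tuple, n, j_max, big=5):
    """Maximal runs of consecutive j in {0,...,j_max} with the orbit point
    (T x ... x T)^j x in C_n^d; runs touching the ends may be truncated.
    Orbits are computed inside tower `big`, where the points must remain."""
    levels_n, w_n = build_tower(n)
    bottom = levels_n[:len(levels_n) // 2]    # levels of C_n (bottom half)
    levels, width = build_tower(big)

    def in_Cn(x):
        return any(l <= x < l + w_n for l in bottom)

    runs, cur = [], []
    for j in range(j_max + 1):
        if all(in_Cn(x) for x in x_tuple):
            cur.append(j)
        elif cur:
            runs.append(cur); cur = []
        x_tuple = [T(x, levels, width) for x in x_tuple]
    if cur:
        runs.append(cur)
    return runs

# Example: two points in the bottom level of tower 5.
print(crossings([Fraction(0), Fraction(1, 2 * 3**5)], n=2, j_max=300))
\end{verbatim}
On the output one can check, in accordance with Lemma~\ref{lemma:separated} below, that the runs have length at most $h_2/2=25$ and that distinct runs are separated by at least $25$ integers.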
\begin{lemma}
\label{lemma:separated}
An $n$-crossing contains at most $h_n/2$ elements.
Two distinct $n$-crossings for the same $x$ are separated by at least $h_n/2$ integers.
\end{lemma}
\begin{proof}
The first assertion is obvious since $C_n$ is a tower of height $h_n/2$. Consider the maximum element $j$ of an $n$-crossing for $x=(x_1,\ldots,x_d)$. Then there exists $1\le i\le d$ such that $T^{j}(x_i)\in C_n$, but $T^{j+1}(x_i)\notin C_n$. By construction, $T^{j+\ell}(x_i)\notin C_n$ for all $1\le \ell\le h_n/2$, hence $(T^{\times d})^{j+\ell}x\notin C^{d}_n$.
\end{proof}
\begin{lemma}
\label{lemma:long-crossing}
Let $j\ge 0$ and $n\ge 2$ such that $(T^{\times d})^{j}x\in C^{d}_{n-1}$. Then $j,j+1,\ldots,j+h_{n-1}/2$ belong to the same $n$-crossing.
\end{lemma}
\begin{proof}
For all $1\le i\le d$, $T^{j}(x_i)\in C_{n-1}$, hence for all $1\le \ell\le h_{n-1}/2$, $T^{j+\ell}(x_i)$ belongs to tower~$(n-1)$, hence to $C_n$.
\end{proof}
For $x\in X^d$, let us define $n(x)$ as the smallest integer $n\ge1$ such that $x\in C^{d}_{n}$. Observe that $x\in C^{d}_n$ for each $n\ge n(x)$. In particular, for each $n\ge n(x)$, 0 belongs to an $n$-crossing for $x$, which we call the \emph{first $n$-crossing for $x$}. Observe also that the first $(n+1)$-crossing for $x$ contains the first $n$-crossing for $x$. Since $n$-crossings for $x$ are naturally ordered, we refer to the next $n$-crossing for $x$ after the first one (if it exists) as the \emph{second $n$-crossing for $x$}.
\begin{lemma}
\label{lemma:special_n}
Let $x=(x_1,\ldots,x_d)\in X^d$ such that, for any $n\ge n(x)$, there exist infinitely many $n$-crossings for $x$ contained in $\mathbb{Z}_+$. Then there exist infinitely many integers $n\ge n(x)+1$ such that the first $(n+1)$-crossing for $x$ also contains the second $n$-crossing for $x$. Moreover, for such an integer $n$, $t_n(x_i)\in\{1,2\}$ for each $i\in\{1,\ldots,d\}$, and for $j$ in the second $n$-crossing, we have $t_n(T^jx_i)=t_n(x_i)+1$.
\end{lemma}
\begin{proof}
Let $m\ge n(x)+1$, and let $\{s,s+1,\ldots,s+r\}$ be the second $m$-crossing for $x$. Define $n\ge m$ as the smallest integer such that $(T^{\times d})^{j}x\in C^{d}_{n+1}$ for each $0\le j\le s+r$. Then the $n$-crossing for $x$ containing zero is distinct from the $n$-crossing for $x$ containing $s$, and these two $n$-crossings are contained in the same $(n+1)$-crossing for $x$. Therefore the first $(n+1)$-crossing for $x$ contains both the first and the second $n$-crossings for $x$.
By Lemma~\ref{lemma:separated}, the first and the second $n$-crossings are separated by at least $h_n/2$, hence each coordinate has to leave $C_n$ between them.
If we had $t_n(x_i)=3$ for some $i$, then $T^j(x_i)$ would also leave $C_{n+1}$ before coming back to $C_n$, which contradicts the fact that both $n$-crossings are in the same $(n+1)$-crossing. Hence $t_n(x_i)\in\{1,2\}$ for each $i$.
Moreover, recall that $n\ge m \ge n(x)+1$, thus $x\in C^{d}_{n-1}$. Hence $x$ satisfies the assumptions of Lemma~\ref{lemma:tn}, with $\ell=n$. Therefore, $(T^{\times d})^{h_n+1}x\in C^{d}_n$, which proves that $h_n+1$ belongs to the second $n$-crossing. At time $h_n+1$, each coordinate has jumped to the following subcolumn: $t_n(T^{h_n+1}x_i)=t_n(x_i)+1$. The conclusion follows as $t_n$ is constant over an $n$-crossing.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:msj}, conservative case}
Now we consider an integer $d\ge2$ such that the statement of Theorem~\ref{thm:msj} (in the conservative case) is valid up to $d-1$. Let $\sigma$ be a nonzero measure on $X^d$, which is boundedly finite, $T^{\times d}$-invariant, and such that the system $(X^d,\sigma,T^{\times d})$ is ergodic and conservative. By Hopf's ergodic theorem, if $A\subset B\subset X^d$ with $0<\sigma(B)<\infty$, we have for $\sigma$-almost every point $x=(x_1,\ldots,x_d)\in X^d$
\begin{equation}
\label{eq:Hopf}
\dfrac{\sum_{j\in I}\ind{A}((T^{\times d})^jx)}{\sum_{j\in I}\ind{B}((T^{\times d})^jx)}
\tend{|I|}{\infty} \dfrac{\sigma(A)}{\sigma(B)},
\end{equation}
where the sums in the above expression range over an interval $I$ containing 0.
Recall that $C^{d}_n\subset C^{d}_{n+1}$, and that $X^d=\bigcup_n C^{d}_n$. In particular, for $n$ large enough, $\sigma(C^{d}_n)>0$ (and $\sigma(C^{d}_n)<\infty$ because $\sigma$ is boundedly finite). By conservativity, this implies that almost every $x\in X^d$ returns infinitely often in $C^{d}_n$.
We say that $x\in X^d$ is \emph{typical} if, for all $n$ large enough so that $\sigma(C^{d}_n)>0$,
\begin{itemize}
\item[(i)] Property~\eqref{eq:Hopf} holds whenever $A$ is an $n$-box and $B$ is $C^{d}_n$,
\item[(ii)] $(T^{\times d})^jx\in C^{d}_n$ for infinitely many integers $j\ge0$.
\end{itemize}
We know that $\sigma$-almost every $x\in X^d$ is typical. From now on, we consider a fixed typical point $\overline{x}=(\overline{x}_1,\ldots,\overline{x}_d)$ and we will estimate the measure $\sigma$ along its orbit.
By (ii), $\overline{x}$ satisfies the assumption of Lemma~\ref{lemma:special_n}. Hence we are in exactly one of the following two complementary cases.
\smallskip
\noindent {\bf Case 1:} There exists $n_1$ such that, for each $n\ge n_1$ satisfying the condition given in Lemma~\ref{lemma:special_n}, and for each $1\le i\le d$, $t_n(\overline{x}_i)=t_n(\overline{x}_1)$.
\smallskip
\noindent {\bf Case 2:} There exists a partition of $\{1,\ldots,d\}$ into two disjoint nonempty sets \[
\{1,\ldots,d\}=G_1\sqcup G_2,
\]
and infinitely many integers $n$ satisfying the condition given in Lemma~\ref{lemma:special_n} such that, for each $i\in G_1$, $t_n(\overline{x}_i)=1$, and for each $i\in G_2$, $t_n(\overline{x}_i)=2$.
\smallskip
Theorem~\ref{thm:msj} will be proved by induction on $d$ once we will have shown the following proposition.
\begin{prop}
If Case 1 holds, then the measure $\sigma$ is a diagonal measure.
If Case 2 holds, then $\sigma$ is a product measure of the form
\[
\sigma = \sigma_{G_1}\otimes \sigma_{G_2},
\]
where, for $i=1,2$, $\sigma_{G_i}$ is a measure on $X^{G_i}$ which is boundedly finite, $T^{\times |G_i|}$-invariant, and such that the system $(X^{G_i},\sigma_{G_i},T^{\times |G_i|})$ is ergodic and conservative.
\end{prop}
\begin{proof}
All $n$-crossings used in this proof are $n$-crossings for the fixed typical point $\overline{x}$.
First consider Case~1. Let $m\ge n_1$. We claim that every $m$-crossing passes through the same $m$-diagonal as the first $m$-crossing. Let $J\subset \mathbb{N}$ be an arbitrary $m$-crossing. Define $n$ as the smallest integer $n\ge m$ such that all integers $j\in\{0,\ldots,\sup J\}$ are contained in the same $(n+1)$-crossing. Then $n$ satisfies the conditions of Lemma~\ref{lemma:special_n}: The $(n+1)$-crossing containing 0 contains (at least) two different $n$-crossings, the one containing 0 and the one containing the $m$-crossing $J$. Since we are in Case~1, all coordinates have met the same number of $(n+1)$-spacers between the $n$-crossing containing 0 and the $n$-crossing containing $J$. Hence the $n$-diagonal where $\overline{x}$ lies is the same as the $n$-diagonal containing $(T^{\times d})^{j}\overline{x}$ for $j\in J$. Now we prove the claim by induction on $n-m$. If $n-m=0$ we have the result. Let $k\ge0$ such that the claim is true if $n-m\le k$, and assume that $n-m=k+1$. We consider the $n$-crossing containing
0: It
may contain several $m$-crossings, but by the induction hypothesis, all these $m$-crossings correspond to the same $m$-diagonal.
Now, we know that the $n$-crossing containing $J$ corresponds to the same $n$-diagonal as the $n$-crossing containing 0, thus all the $m$-crossings it contains correspond to the same $m$-diagonal as the $m$-crossing containing 0. Now, since we have chosen $\overline{x}$ typical, it follows that the $m$-diagonal containing $\overline{x}$ is the only one which is charged by $\sigma$. But this is true for all $m$ large enough, hence $\sigma$ is a diagonal measure.
\smallskip
Let us turn now to Case~2. Consider the transformation $S:X^d\to X^d$ defined as in Lemma~\ref{lemma:BtoSB} by
\[
S(y_1,\ldots, y_d) = (z_1,\ldots,z_d),\text{ where } z_i:=\begin{cases}
T y_i \text{ if }i\in G_1, \\
y_i \text{ if }i\in G_2.
\end{cases}
\]
Let us fix $m$ large enough so that $\sigma(C^{d}_{m-1})>0$. For each $m$-box $B$, denote by $n_B$ (respectively $n'_B$) the number of times the orbit of $\overline{x}$ falls into $B$ along the first $n$-crossing (respectively the second). We claim that there exists an $m$-box $B$ such that $SB$ is still an $m$-box, and $\sigma(B)>0$. Indeed, it is enough to take any $m$-box in $C^{d}_{m-1}$ with positive measure. For such an $m$-box $B$, we want now to compare $\sigma(B)$ and $\sigma(SB)$.
Let $n>m$ be a large integer satisfying the condition stated in Case~2.
Partition the $m$-box $B$ into $n$-boxes: since $SB$ is also an $m$-box, for each $n$-box $B'\subset B$, $SB'$ is an $n$-box contained in $SB$, and we get in this way all $n$-boxes contained in $SB$. Let us fix such an $n$-box, and apply Lemma~\ref{lemma:BtoSB}: For each $j$ in the first $n$-crossing, we have
\[
(T^{\times d})^j\overline{x}\in B' \Longleftrightarrow (T^{\times d})^{j+h_n+1}\overline{x}\in SB',
\]
and in this case, by Lemma~\ref{lemma:separated}, $j+h_n+1$ belongs to the second $n$-crossing. In the same way, for each $j$ in the second $n$-crossing, we have
\[
(T^{\times d})^j\overline{x}\in SB' \Longleftrightarrow (T^{\times d})^{j-h_n-1}\overline{x}\in B',
\]
and in this case, by Lemma~\ref{lemma:separated}, $j-h_n-1$ belongs to the first $n$-crossing. Summing over all $n$-boxes $B'$ contained in $B$, it follows that
\begin{equation}
\label{eq:n_et_n'}
\text{if both $B$ and $SB$ are $m$-boxes, }n'_{SB}=n_B.
\end{equation}
Set
\[
N:= \sum_{B} n_B,\quad\text{and}\quad N':= \sum_{B} n'_B,
\]
where the two sums range over all $m$-boxes $B$. Since we have chosen $\overline{x}$ typical, and since the length of the first $n$-crossing goes to $\infty$ as $n\to\infty$, we can apply~\eqref{eq:Hopf} and get, for any $m$-box $B$, as $n\to\infty$
\begin{equation}
\label{eq:Hopf-bis}
\frac{n_B}{N}=\frac{\sigma(B)}{\sigma(C^{d}_{m})} + o(1),\quad\text{and}\quad\frac{n_B+n'_B}{N+N'}=\frac{\sigma(B)}{\sigma(C^{d}_{m})} + o(1).
\end{equation}
Since $N'\ge\sum n'_{SB}$ where the sum ranges over the set $\mathscr{B}_m$ of all $m$-boxes $B$ such that $SB$ is still an $m$-box, we get by~\eqref{eq:n_et_n'}
\[
N' \ge \sum_{B\in\mathscr{B}_m} n_B.
\]
Then, applying the left equality in~\eqref{eq:Hopf-bis} for all $B\in\mathscr{B}_m$, we obtain
\[
\frac{N'}{N} \ge \frac{\sum_{B\in\mathscr{B}_m} \sigma(B)}{\sigma(C^{d}_{m})}+o(1).
\]
As we know that $\sum_{B\in\mathscr{B}_m} \sigma(B)>0$, it follows that $N'/N$ is larger than some positive constant for $n$ large enough, and we can deduce from~\eqref{eq:Hopf-bis} that, for all $m$-box $B$, we also have as $n\to\infty$
\[
\frac{n'_B}{N'}=\frac{\sigma(B)}{\sigma(C^{d}_{m})} + o(1).
\]
Let $B\in\mathscr{B}_m$. Applying the above equation for $SB$ and the left equality in~\eqref{eq:Hopf-bis} for $B$, and using~\eqref{eq:n_et_n'}, we get, if $\sigma(B)>0$,
\[
\frac{N}{N'}=\frac{\sigma(SB)}{\sigma(B)}+o(1).
\]
It follows that the ratio $\sigma(SB)/\sigma(B)$ does not depend on $B$. We denote it by $c_m$.
Moreover, observe that if $\sigma(B)=0$, we get $n_B/N\to 0$, hence also $n_B/N'=n'_{SB}/N'\to 0$, and $\sigma(SB)=0$. Finally, for all $B\in\mathscr{B}_m$, we have $\sigma(SB)=c_m\sigma(B)$.
Note that any box $B\in\mathscr{B}_m$ is a finite disjoint union of $(m+1)$-boxes in $\mathscr{B}_{m+1}$. This implies that $c_m=c_{m+1}$. Therefore, there exists $c>0$ such that, for all $m$ large enough and all $B\in\mathscr{B}_m$,
\[
\sigma(SB)=c\sigma(B).
\]
But, as $m\to\infty$, the finite partition of $X^d$ defined by all $m$-boxes in $\mathscr{B}_m$ increases to the Borel $\sigma$-algebra of $X^d$. Hence, for any measurable subset $B\subset X^d$, the previous equality holds.
A direct application of Theorem~\ref{thm:product} proves that $\sigma$ has the product form announced in the statement of the proposition. And since $\sigma$ is boundedly finite, the measures $\sigma_{G_1}$ and $\sigma_{G_2}$ are also boundedly finite.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:msj}, dissipative case}
We consider now a nonzero measure $\sigma$ on $X^d$, which is boundedly finite, $T^{\times d}$-invariant, and such that the system $(X^d,\sigma,T^{\times d})$ is ergodic and totally dissipative. Up to a multiplicative constant, this measure is therefore of the form
\[ \sigma = \sum_{k\in\mathbb{Z}} \delta_{(T^{\times d})^k x} \]
for some $x\in X^d$. And since we assume that $\sigma$ is boundedly finite, for each $n$ there exist only finitely many $n$-crossings for $x$. Now we claim that for $n$ large enough, there is only one $n$-crossing for $x$, which will show that $\sigma$ is a diagonal measure.
Let $n$ be large enough so that $x\in C^{d}_{n-1}$, and let $m$ be large enough so that all $n$-crossings for $x$ are contained in a single $m$-crossing. Assume that there is a second $m$-crossing for $x$. Then we consider the smallest integer $\ell$ such that the first and the second $m$-crossings are contained in a single $(\ell+1)$-crossing. As in the proof of Lemma~\ref{lemma:special_n}, we have $t_\ell(x_i)\in\{1,2\}$, so we can apply Lemma~\ref{lemma:tn}.
We get $(T^{\times d})^{h_\ell+1}x\in C^{d}_n$, but $h_\ell+1$ is necessarily in the second $m$-crossing. This contradicts the fact that all $n$-crossings for $x$ are contained in a single $m$-crossing. A similar argument proves that there is no other $m$-crossing contained in $\mathbb{Z}_-$, and this ends the proof of the theorem.
\section{Diagonal measures}
\label{sec:diagonal}
The purpose of this section is to provide more information on $d$-dimensional diagonal measures introduced in Definition~\ref{def:diagonal}, and which play an important role in our analysis. We are going to prove that there exist exactly two classes of ergodic diagonal measures:
\begin{itemize}
\item graph joinings arising from powers of $T$, as defined by~\eqref{eq:graph_joining};
\item \emph{weird} diagonal measures, whose marginals are singular with respect to $\mu$.
\end{itemize}
Moreover, we will provide a parametrization of the family of ergodic diagonal measures, and a simple criterion on the parameter to decide to which class a specific measure belongs.
\subsection{Construction of diagonal measures}
Let $d\ge2$, and let $\sigma$ be a diagonal measure on $X^d$. We define $n_0(\sigma)$ as the smallest integer $n_0$ for which $\sigma(C_{n_0-1}^d)>0$, and such that, for any $n\ge n_0$, $\sigma$ gives positive measure to a single $n$-diagonal, denoted by $D_n(\sigma)$.
\begin{definition}
\label{def:consistent}
Let $n_0\ge 1$, and for each $n\ge n_0$, let $D_n$ be an $n$-diagonal. We say that the family $(D_n)_{n\ge n_0}$ is \emph{consistent} if
\begin{itemize}
\item $C^{d}_{n_0-1}\cap\bigcap_{n\ge n_0}D_n\neq \emptyset$,
\item $D_{n+1}\cap C^{d}_n\subset D_n$ for each $n\ge n_0$.
\end{itemize}
\end{definition}
Obviously, the family $(D_n(\sigma))_{n\ge n_0(\sigma)}$ is consistent.
\begin{definition}
\label{def:seen}
We say that $x\in X^d$ is \emph{seen} by the consistent family of diagonals $(D_n)_{n\ge n_0}$ if, for each $n\ge n_0$, either $x\notin C_n^d$ (which happens only for finitely many integers $n$), or $x\in D_n$. We say that $x\in X^d$ is \emph{seen} by the diagonal measure $\sigma$ if it is seen by the family $(D_n(\sigma))_{n\ge n_0(\sigma)}$.
\end{definition}
Observe that, thanks to the first condition in the definition of a consistent family of diagonals, there always exists some $x\in C^{d}_{n_0-1}$ which is seen by the family. Moreover, if $\sigma$ is a diagonal measure, then
\begin{equation}
\label{eq:seen}\sigma\Bigl(\left\{x\in X^d:\ x\text{ is not seen by }\sigma\right\}\Bigr)=0.
\end{equation}
\begin{lemma}
\label{lemma:seen}
If $x$ is seen by the consistent family of diagonals $(D_n)_{n\ge n_0}$, then for each $j\in\mathbb{Z}$,
$(T^{\times d})^jx$ is also seen by $(D_n)_{n\ge n_0}$.
\end{lemma}
\begin{proof}
Let $n\ge n_0$. Let $m\ge n$ be large enough so that $(T^{\times d})^ix$ belong to $C^{d}_m$ for each $0\le i\le j$ (or each $j\le i\le 0$). Consider the $m$-box $B$ containing $x$: Since $x$ is seen by $(D_n)_{n\ge n_0}$, $B\subset D_m$ and $(T^{\times d})^jB\subset D_m$. Now, observe that an $m$-box is either contained in an $n$-box, or it is contained in $X^d\setminus C^{d}_n$. Hence, either $(T^{\times d})^j x\in (T^{\times d})^jB \subset C^{d}_n$, or $(T^{\times d})^j x\in (T^{\times d})^jB \subset X^d\setminus C^{d}_n$. In the former case, $(T^{\times d})^j x\in (T^{\times d})^jB \subset D_n$ because $D_m\cap C^{d}_n\subset D_n$. This proves that $(T^{\times d})^jx$ is also seen by $(D_n)_{n\ge n_0}$.
\end{proof}
Let $(D_n)_{n\ge n_0}$ be a consistent family of diagonals. We want to describe the relationship between $D_n$ and $D_{n+1}$ for $n\ge n_0$.
Let us consider an $n$-box $B$. For each $d$-tuple $\tau=(\tau(1),\ldots,\tau(d))\in\{1,2,3\}^d$,
\begin{equation}
\label{eq:defB}B(\tau) := \{x\in B:\ t_n(x_i)=\tau(i)\ \forall 1\le i \le d\}
\end{equation}
is an $(n+1)$-box. Moreover, $B$ is the disjoint union of the $3^d$ $(n+1)$-boxes $B(\tau)$. Notice that if $B$ and $B'$ are two $n$-boxes included in the same $n$-diagonal, then $B(\tau)$ and $B'(\tau)$ are included in the same $(n+1)$-diagonal. Therefore, for each $n$-diagonal $D$ and each $d$-tuple $\tau\in\{1,2,3\}^d$, we can define the $(n+1)$-diagonal $D(\tau)$ as the unique $(n+1)$-diagonal containing $B(\tau)$ for any $n$-box $B$ included in $D$.
Let us fix $x\in C^{d}_{n_0-1}$ which is seen by $(D_n)_{n\ge n_0}$. For each $n\ge n_0$, since $x\in D_n\cap D_{n+1}$, we get
\[
D_{n+1}=D_n(t_n(x_1),\ldots,t_n(x_d)).
\]
Moreover, we will see that some values for the $d$-tuple $(t_n(x_1),\ldots,t_n(x_d))$ are forbidden (see Figure~\ref{fig:diagonal}). As a matter of fact, assume $\{1,2\}=\{t_n(x_i):\ 1\le i\le d\}$. We can apply Lemma~\ref{lemma:BtoSB}, and observe that the transformation $S$ used in this lemma acts as $T$ on some coordinates and as $\Id$ on others. Therefore, $x$ and $(T^{\times d})^{h_n+1}x$ belong to two different $n$-diagonals, which is impossible by Lemma~\ref{lemma:seen}. By a similar argument, we prove that the case $\{2,3\}=\{t_n(x_i):\ 1\le i\le d\}$ is also impossible. Eventually, only two cases can arise:
\begin{description} \item[Corner case] $\{1,3\}\subset \{t_n(x_i):\ 1\le i\le d\}$; then the first $(n+1)$-crossing for $x$ contains only one $n$-crossing for $x$.
\item[Central case] $t_n(x_1)=t_n(x_2)=\cdots = t_n(x_d)$; then the first $(n+1)$-crossing for $x$ contains three consecutive $n$-crossings for $x$, and $D_{n+1}=D_n(1,\ldots,1)=D_n(2,\ldots,2)=D_n(3,\ldots,3)$.
\end{description}
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{diagonale.pdf}
\caption{Relationship between $D_n$ and $D_{n+1}$ in the case $d=2$. The 4 positions marked with $\ast$ are impossible because the corresponding $(n+1)$-diagonal meets another $n$-diagonal.}
\label{fig:diagonal}
\end{figure}
It follows from the above analysis that the diagonals $D_n$, $n\ge n_0$, are completely determined by the knowledge of $D_{n_0}$ and a family of parameters $(\tau_n)_{n\ge n_0}$, where each $\tau_n=(\tau_n(i), 1\le i\le d)$ is a $d$-tuple in $\{1,2,3\}^d$, satisfying either $\{1,3\}\subset \{\tau_n(i):\ 1\le i\le d\}$ (corner case), or $\tau_n(1)=\cdots = \tau_n(d)$ (central case).
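As a quick sanity check of this parametrization (an illustrative snippet, not part of the paper), one can enumerate the admissible $d$-tuples; for $d=2$ one recovers $5$ allowed positions out of $9$, matching the $4$ impossible positions of Figure~\ref{fig:diagonal}:
\begin{verbatim}
from itertools import product

def admissible(tau):
    """Corner case: both 1 and 3 occur among the coordinates;
    central case: all coordinates are equal."""
    s = set(tau)
    return {1, 3} <= s or len(s) == 1

for d in (2, 3):
    count = sum(admissible(t) for t in product((1, 2, 3), repeat=d))
    print(d, count)   # d=2 -> 5 of 9; d=3 -> 15 of 27
\end{verbatim}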
\begin{lemma}
\label{lemma:central_case}
If $\sigma$ is a diagonal measure, and if $(X^d,T^{\times d},\sigma)$ is conservative, then there are infinitely many integers $n$ such that the transition from $D_{n}(\sigma)$ to $D_{n+1}(\sigma)$ corresponds to the central case:
\[
D_{n+1}(\sigma)=D_{n}(\sigma)(1,\ldots,1).
\]
\end{lemma}
\begin{proof}
Since $(X^d,T^{\times d},\sigma)$ is conservative, for $\sigma$-almost all $x$, for any $n\ge n(x)$, there exist infinitely many $n$-crossings for $x$ in $\mathbb{Z}_+$. Moreover, $\sigma$-almost every $x$ is seen by $\sigma$. Applying Lemma~\ref{lemma:special_n} to such an $x$, we get that there are infinitely many integers $n$ for which the corner case does not occur, hence such that the transition from $D_{n}(\sigma)$ to $D_{n+1}(\sigma)$ corresponds to the central case.
\end{proof}
\begin{lemma}
\label{lemma:intersection_of_n_boxes}
Let $(\tau_m)_{m\ge m_0}$ be a sequence of $d$-tuples in $\{1,2,3\}^d$. We define a decreasing sequence of $m$-boxes by choosing an arbitrary $m_0$-box $B_{m_0}$ and setting inductively $B_{m+1}:= B_m(\tau_m)$. Then
\[
\bigcap_{m\ge m_0}B_m \neq \emptyset
\]
if and only if
\begin{equation}
\label{eq:condition_tau}
\text{for all }1\le i\le d, \text{ there exist infinitely many integers }m\text{ with } \tau_m(i)\in\{1,2\}.
\end{equation}
\end{lemma}
\begin{proof}
Recall that the levels of each tower in the construction of $T$ are intervals which are closed to the left and open to the right. If we have a decreasing sequence $(I_m)$ of intervals, where $I_m$ is a level of tower~$m$, then
\[ \bigcap_m I_m=\begin{cases}
\emptyset, \text{ if $I_{m+1}$ is the rightmost subinterval of $I_m$ for each large enough $m$,}\\
\text{a singleton, otherwise.}
\end{cases}
\]
Since $\tau_m(i)$ indicates the subinterval chosen at step $m$ for the coordinate $i$, the conclusion follows.
\end{proof}
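For instance, the constant choice $\tau_m(i)=3$ for every $m\ge m_0$ selects, at coordinate $i$, the rightmost subinterval at each step, so the corresponding intersection is empty; whereas if $\tau_m(i)\in\{1,2\}$ for infinitely many $m$, the intersection of the chosen levels at coordinate $i$ is a singleton.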
\begin{lemma}
\label{lemma:consistent}
Let $n_0\ge 2$. Let $D_{n_0}$ be an $n_0$-diagonal such that $D_{n_0}\cap C^{d}_{n_0-1}\neq \emptyset$.
Let $(\tau_n)_{n\ge n_0}$ be a sequence of $d$-tuples in $\{1,2,3\}^d$ satisfying either $\{1,3\}\subset \{\tau_n(i):\ 1\le i\le d\}$, or $\tau_n(1)=\cdots = \tau_n(d)$.
Then the inductive relation $D_{n+1}:= D_n(\tau_n)$, $n\ge n_0$ defines a consistent family of diagonals if and only if Property~\eqref{eq:condition_tau} holds.
\end{lemma}
\begin{proof}
Applying Lemma~\ref{lemma:intersection_of_n_boxes}, the first condition in the definition of a consistent family of diagonals is equivalent to Property~\eqref{eq:condition_tau}. The second condition comes from the restrictions made on the $d$-tuples.
\end{proof}
\begin{prop}
\label{prop:construction_sigma}
Let $n_0\ge2$. Let $(D_n)_{n\ge n_0}$ be a consistent family of diagonals.
Then there exists a diagonal measure $\sigma$, unique up to a multiplicative constant, with $n_0(\sigma)\le n_0$, and for each $n\ge n_0$, $D_n(\sigma)=D_n$.
This measure satisfies $\sigma(X^d)=\infty$.
If the transition from $D_n$ to $D_{n+1}$ corresponds infinitely often to the central case, then the system $(X^d, T^{\times d}, \sigma)$ is conservative ergodic. Otherwise, it is ergodic and totally dissipative.
\end{prop}
\begin{proof}
We first define $\sigma$ on the ring
\[
\mathscr{R}:=\{B\subset X^d:\ \exists n\ge1,\ B\text{ is a finite union of $n$-boxes}\}.
\]
Since we want to determine $\sigma$ up to a multiplicative constant, we can arbitrarily set $\sigma(C^{d}_{n_0})=\sigma(D_{n_0}):= 1$. As we want $\sigma$ to be invariant under the action of $T^{\times d}$, this fixes the measure of each $n_0$-box: For each $n_0$-box $B$,
\[
\sigma(B):=\begin{cases}
\dfrac{1}{\text{number of $n_0$-boxes in $D_{n_0}$}} &\text{ if }B\subset D_{n_0},\\
0 &\text{otherwise.}
\end{cases}
\]
Now assume that we have already defined $\sigma(B)$ for each $n$-box, for some $n\ge n_0$, and that we have some constant $\alpha_n>0$ such that, for any $n$-box $B$,
\[
\sigma(B)=\begin{cases}
\alpha_n &\text{ if }B\subset D_{n},\\
0 &\text{otherwise.}
\end{cases}
\]
We set $\sigma(B'):= 0$ for any $(n+1)$-box $B'\not\subset D_{n+1}$, and it remains to define the measure of $(n+1)$-boxes included in $D_{n+1}$. These boxes must have the same measure, which we denote by $\alpha_{n+1}$.
\begin{itemize}
\item Either the transition from $D_n$ to $D_{n+1}$ corresponds to the corner case. Then each $n$-box contained in $D_n$ meets only one $(n+1)$-box contained in $D_{n+1}$, and we set $\alpha_{n+1}:=\alpha_n$.
\item Or the transition from $D_n$ to $D_{n+1}$ corresponds to the central case. Then each $n$-box contained in $D_n$ meets three $(n+1)$-boxes contained in $D_{n+1}$, and we set $\alpha_{n+1}:=\alpha_n/3$.
\end{itemize}
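Unwinding this recursion, if $c(n)$ denotes the number of indices $n_0\le m<n$ for which the transition from $D_m$ to $D_{m+1}$ corresponds to the central case, we get the closed form
\[
\alpha_n = \frac{3^{-c(n)}}{\bigl|\{\text{$n_0$-boxes in } D_{n_0}\}\bigr|},
\]
so that $\alpha_n$ decreases to $0$ precisely when the central case occurs infinitely often.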
For any $R\in\mathscr{R}$ which is a finite union of $n$-boxes, we can now define $\sigma(R)$ as the sum of the measures of the $n$-boxes included in $R$. At this point, $\sigma$ is now defined as a finitely additive set function on $\mathscr{R}$.
It remains now to prove that $\sigma$ can be extended to a measure on the Borel $\sigma$-algebra of $X^d$, which is the $\sigma$-algebra generated by $\mathscr{R}$. Using Theorems~F p.~39 and A p.~54 (Carathéodory's extension theorem) in~\cite{Halmos}, we only have to prove the following.
\begin{claim*}
If $(R_k)_{k\ge 1}$ is a decreasing sequence in $\mathscr{R}$ such that
$ \lim_{k\to\infty}\downarrow \sigma(R_k) > 0$,
then $\bigcap_k R_k\neq \emptyset$.
\end{claim*}
Having fixed such a sequence $(R_k)$, we say that an $m$-box $B$ is \emph{persistent} if
\[
\lim_{k\to\infty}\downarrow \sigma(R_k\cap B) > 0.
\]
We are going to construct inductively a decreasing family $(B_m)_{m\ge m_0}$ where $B_m$ is a persistent $m$-box and
\[
\emptyset\neq \bigcap_{m\ge m_0}B_m\subset \bigcap_{k} R_k.
\]
We first consider the case where the transition from $D_n$ to $D_{n+1}$ corresponds infinitely often to the central case. Choose $k_0$ large enough so that
\[
\sigma(R_{k_0}) < \frac{3}{2}\lim_{k\to\infty}\downarrow \sigma(R_k).
\]
Then there exists $m_0$ such that $R_{k_0}$ is a finite union of $m_0$-boxes, and (choosing a larger $m_0$ if necessary), the transition from $D_{m_0}$ to $D_{m_0+1}$ corresponds to the central case. Let $B$ be a persistent $m_0$-box. Then $\sigma$ on $B$ is concentrated on the $(m_0+1)$-boxes $B(1,\ldots,1)$, $B(2,\ldots,2)$ and $B(3,\ldots,3)$. If $B(1,\ldots,1)$ is not persistent, we get
\begin{align*}
0 < \lim_{k\to\infty}\downarrow \sigma(R_k \cap B) & = \lim_{k\to\infty}\downarrow \sigma(R_k \cap B(2,\ldots,2)) + \lim_{k\to\infty}\downarrow \sigma(R_k \cap B(3,\ldots,3))\\
& \le \sigma(R_{k_0} \cap B(2,\ldots,2)) + \sigma(R_{k_0} \cap B(3,\ldots,3)) \\
& \le \sigma(B(2,\ldots,2)) + \sigma(B(3,\ldots,3)) \\
& = \frac{2}{3} \sigma (B) = \frac{2}{3} \sigma (R_{k_0}\cap B).
\end{align*}
Therefore, there exists some persistent $m_0$-box $B_{m_0}$ such that $B_{m_0+1}:= B_{m_0}(1,\ldots,1)$ is also persistent.
Indeed, otherwise we would have
\begin{align*}
\sigma(R_{k_0}) &\ge \sum_{B\ \text{persistent $m_0$-box}} \sigma(R_{k_0}\cap B)\\
& \ge \frac{3}{2} \sum_{B\ \text{persistent $m_0$-box}}\lim_{k\to\infty}\downarrow \sigma(R_k\cap B)\\
& = \frac{3}{2}\lim_{k\to\infty}\downarrow \sigma(R_k),
\end{align*}
which would contradict the definition of $k_0$.
Assume that we have already defined $B_{m_i}$ and $B_{m_i+1}=B_{m_i}(1,\ldots,1)$ for some $i\ge 0$.
Then we choose $k_{i+1}$ large enough so that
\[
\sigma(R_{k_{i+1}}\cap B_{m_i+1}) < \frac{3}{2}\lim_{k\to\infty}\downarrow \sigma(R_k\cap B_{m_i+1}).
\]
We choose $m_{i+1}>m_i+1$ such that $R_{k_{i+1}}$ is a finite union of $m_{i+1}$-boxes, and the transition from $D_{m_{i+1}}$ to $D_{m_{i+1}+1}$ corresponds to the central case. Then the same argument as above, replacing $R_k$ by $R_k\cap B_{m_i+1}$, proves that there exists a persistent $m_{i+1}$-box $B_{m_{i+1}}\subset B_{m_i+1}$ such that $B_{m_{i+1}+1}:= B_{m_{i+1}}(1,\ldots,1)$ is also persistent.
Now we can complete in a unique way our sequence to get a decreasing sequence $(B_m)_{m\ge m_0}$ of persistent boxes. Since we have $B_{m_i+1}=B_{m_i}(1,\ldots,1)$ for each $i\ge0$, Lemma~\ref{lemma:intersection_of_n_boxes} ensures that
\[
\bigcap_m B_m\neq\emptyset.
\]
It only remains to prove that $\bigcap_m B_m \subset\bigcap_k R_k$. Indeed, let us fix $k$ and let $\overline{m}$ be such that $R_k$ is a finite union of $\overline{m}$-boxes. In particular, $R_k$ contains all persistent $\overline{m}$-boxes, which implies
\[
\bigcap_m B_m \subset B_{\overline{m}} \subset R_k.
\]
Now we consider the case where there exists $m_0\ge n_0$ such that, for $n\ge m_0$, the transition from $D_n$ to $D_{n+1}$ always corresponds to the corner case. That is, there exists a family $(\tau_n)_{n\ge m_0}$ of $d$-tuples in $\{1,2,3\}^d$, with $\{1,3\}\subset\{\tau_n(i), 1\le i\le d\}$ for each $n\ge m_0$, such that $D_{n+1}=D_n(\tau_n)$. By Lemma~\ref{lemma:consistent}, property~\eqref{eq:condition_tau} holds for $(\tau_n)_{n\ge m_0}$. We will now construct the family $(B_m)_{m\ge m_0}$ of $m$-boxes satisfying the required conditions. Start with $B_{m_0}$ which is a persistent $m_0$-box (such a box always exists). Since the transition from $D_{m_0}$ to $D_{m_0+1}$ corresponds to the corner case, there is only one $(m_0+1)$-box contained in $D_{m_0+1}\cap B_{m_0}$, and this box is precisely $B_{m_0}(\tau_{m_0})$. Therefore this box is itself persistent, and defining inductively $B_{m+1}:= B_m(\tau_m)$ gives a decreasing family of persistent boxes. By Lemma~\ref{lemma:intersection_of_n_boxes}, $\bigcap_{m\ge m_0} B_m\neq\emptyset$. We prove as in the preceding case that $\bigcap_m B_m \subset\bigcap_k R_k$. This ends the proof of the claim.
\medskip
This proves that $\sigma$ can be extended to a $T^{\times d}$-invariant measure, whose restriction to each $C^{d}_n$, $n\ge n_0$, is by construction concentrated on the single diagonal $D_n$. And since $C^{d}_{n_0-1}\cap D_{n_0}\neq\emptyset$, we get $n_0(\sigma)\le n_0$. If $B$ is an $n$-box, then $(T^{\times d})^{h_n/2}B\subset C^{d}_{n+1}$. Moreover, by Lemma~\ref{lemma:separated}, $(T^{\times d})^{h_n/2}B\not\subset C^{d}_{n}$.
It follows that $(T^{\times d})^{h_n/2}D_n\subset C^{d}_{n+1}\setminus C^{d}_n$. But $\sigma\bigl((T^{\times d})^{h_n/2}D_n\bigr)=\sigma(D_n)$, hence $\sigma(C^{d}_{n+1})\ge 2\sigma(C^{d}_n)$. Iterating this inequality gives $\sigma(C^{d}_n)\ge 2^{n-n_0}\sigma(C^{d}_{n_0})$ for every $n\ge n_0$, and we conclude that $\sigma(X^d)=\infty$.
Now we want to show the ergodicity of the system
$(X^d,T^{\times d},\sigma)$. Let $A\subset X^d$ be a $T^{\times d}$-invariant measurable set, with $\sigma(A)\neq 0$. Let $n$ be such that $\sigma(A\cap C^{d}_n)>0$. Given $\varepsilon>0$, we can find $m>n$ large enough such that there exists $\tilde A$, a finite union of $m$-boxes, with
\[
\sigma \left( (A\vartriangle \tilde A)\cap C^{d}_n\right) < \varepsilon\,\sigma(A\cap C^{d}_n).
\]
Let $B$ be an $m$-box in $D_m$, and set $s_m:= \sigma(A\cap B)$: By invariance of $A$ under the action of $T^{\times d}$, $s_m$ does not depend on the choice of $B$.
We have
\[
\sigma(A\cap C^{d}_n) = \sum_{\stack{B\ m\text{-box in }D_m}{B\subset C^{d}_n}}\sigma(A\cap B) = s_m \cdot \left|
\{B\ m\text{-box}:\ B\subset D_m\cap C^{d}_n\}\right|.
\]
On the other hand, we can write
\[
s_m \cdot \left|
\{B\ m\text{-box}:\ B\subset D_m\cap C^{d}_n\setminus\tilde A\}\right| \le \sigma \left( (A\vartriangle \tilde A)\cap C^{d}_n\right) < \varepsilon\,\sigma(A\cap C^{d}_n).
\]
It follows that
\[
\dfrac{\Bigl|\{B\ m\text{-box}:\ B\subset D_m\cap C^{d}_n\setminus\tilde A\}\Bigr|}{\Bigl|
\{B\ m\text{-box}:\ B\subset D_m\cap C^{d}_n\}\Bigr|} < \varepsilon,
\]
hence
\[
\sigma(\tilde A\cap C^{d}_n) > (1-\varepsilon) \sigma(C^{d}_n),
\]
and finally
\[
\sigma(A\cap C^{d}_n) > (1-2\varepsilon) \sigma(C^{d}_n).
\]
But this holds for any $\varepsilon>0$, which proves that $\sigma(A\cap C^{d}_n) = \sigma(C^{d}_n)$. Again, this holds for any large enough $n$, thus $\sigma(X^d\setminus A)=0$, and the system is ergodic.
We can observe that, if the central case occurs infinitely often, the measure $\alpha_n$ of each $n$-box on $D_n$ decreases to 0 as $n$ goes to infinity, which ensures that $\sigma$ is continuous. Therefore the conservativity of $(X^d,T^{\times d},\sigma)$ is a consequence of the ergodicity of this system. On the other hand, if the central case occurs only finitely many times, there exists $m_0$ such that for each $m\ge m_0$, $\alpha_m=\alpha_{m_0}>0$. It follows that $\sigma$ is purely atomic, and by ergodicity of $(X^d,T^{\times d},\sigma)$, $\sigma$ is concentrated on a single orbit.
\end{proof}
\subsection{A parametrization of the family of diagonal measures}
\label{sec:parametrization}
If $\sigma$ is a diagonal measure, by definition of $n_0(\sigma)$, the diagonal $D_{n_0(\sigma)}(\sigma)$ is initial in the sense given by the following definition.
\begin{definition}
Let $n_0\ge1$, and $D$ an $n_0$-diagonal. We say that $D$ is an \emph{initial} diagonal if
\begin{itemize}
\item Either there exist at least two $(n_0-1)$-diagonals which have non-empty intersection with $D$;
\item Or $D$ has non-empty intersection with exactly one $(n_0-1)$-diagonal, but does not intersect $C^{d}_{n_0-2}$ (with the convention that $C^{d}_0=\emptyset$).
\end{itemize}
\end{definition}
In Proposition~\ref{prop:construction_sigma}, it is clear that $n_0=n_0(\sigma)$ if and only if $D_{n_0}$ is initial.
Now we are able to provide a canonical parametrization of the family of diagonal measures: we consider the set of parameters
\[
\mathscr{D} :=\Bigl\{(n_0, D,\tau)\Bigr\},
\]
where
\begin{itemize}
\item $n_0\ge 1$,
\item $D$ is an initial $n_0$-diagonal;
\item $\tau= (\tau_n)_{n\ge n_0}$, where for each $n\ge n_0$, $\tau_n\in\{1,2,3\}^d$ and satisfies either $\{1,3\}\subset\{\tau_n(i),\ 1\le i\le d\}$ (corner case), or $\tau_n(i)=1$ for each $1\le i\le d$ (central case);
\item Property~\eqref{eq:condition_tau} holds for $(\tau_n)$.
\end{itemize}
To each $(n_0,D,\tau)\in\mathscr{D} $, by Proposition~\ref{prop:construction_sigma} we can canonically associate an ergodic diagonal measure $\sigma_{(n_0,D,\tau)}$, setting $\sigma_{(n_0,D,\tau)}(C^{d}_{n_0}):= 1$. Conversely, any ergodic diagonal measure $\sigma$ can be written as
\[
\sigma = \lambda\,\sigma_{(n_0,D,\tau)}
\]
for some $(n_0,D,\tau)\in\mathscr{D} $, where $\lambda:= \sigma(C^{d}_{n_0(\sigma)})$, $n_0:= n_0(\sigma)$, and $D:= D_{n_0(\sigma)}(\sigma)$.
Note that, by construction, for each $n\ge1$, each $(n_0,D,\tau)\in\mathscr{D}$, and each $n$-box $B$, we have $\sigma_{(n_0,D,\tau)}(B)\le1$. Thus,
\begin{equation}
\label{eq:borne_universelle}
\forall n\ge1,\ \forall (n_0,D,\tau)\in\mathscr{D},\ \sigma_{(n_0,D,\tau)}(C^{d}_n)\le \left(\dfrac{h_n}{2}\right)^d.
\end{equation}
\subsection{Identification of graph joinings}
\begin{prop}
\label{prop:graph}
Graph joinings of the form
\begin{equation}
\label{eq:graph_joining}\sigma(A_1\times\cdots\times A_d) = \alpha\, \mu (A_1\cap T^{-k_2}(A_2)\cap\cdots\cap T^{-k_d}(A_d))
\end{equation}
for some real $\alpha>0$ and some integers $k_2,\ldots,k_d$, are the diagonal measures $\sigma_{(n_0,D,\tau)}$ for which there exists $n_1\ge n_0$ such that, for $n\ge n_1$, $\tau_n(i)=1$ for each $1\le i\le d$.
\end{prop}
\begin{proof}
Let $\sigma:=\sigma_{(n_0,D,\tau)}$, and assume that for $n\ge n_1$, $\tau_n(i)=1$ for each $1\le i\le d$.
Consider $n\ge n_1$, and let $B$ be an $n$-box in $D_{n}(\sigma)$. Then $B$ is of the form $B_1\times T^{k_2}B_1\times\cdots\times T^{k_d}B_1$ for some level $B_1$ of tower~$n$ and some integers $k_2,\ldots,k_d$. Moreover, $k_2,\ldots,k_d$ do not depend on the choice of $B$ in $D_{n}(\sigma)$. Let us also write $B$ as $T^{\ell_1}F_{n}\times \cdots\times T^{\ell_d}F_{n}$, where $F_n$ is the bottom level of tower~$n$. Then $k_i=\ell_i-\ell_1$ for each $2\le i\le d$. Now, recalling notation~\eqref{eq:defB}, consider $B(1,\ldots,1)$, which is an $(n+1)$-box in $D_{n+1}(\sigma)$. Then $B(1,\ldots,1)=T^{\ell_1}F_{n+1}\times \cdots\times T^{\ell_d}F_{n+1}$, thus this $(n+1)$-box is of the form $B'_1\times T^{k_2}B'_1\times\cdots\times T^{k_d}B'_1$, for some level $B'_1$ in tower~$(n+1)$, and the \emph{same} integers $k_2,\ldots,k_d$ as above. By induction, this is true for any $n$-box in $D_n(\sigma)$ for any $n\ge n_1$.
As in the proof of Proposition~\ref{prop:construction_sigma}, let us denote by $\alpha_n$ the measure of each $n$-box in $D_{n}(\sigma)$. By hypothesis, all transitions from $n_1$ onwards correspond to the central case, hence for each $n\ge n_1$, $\alpha_n=\alpha_{n_1}/3^{n-n_1}$.
Fix $n\ge n_1$ and consider some $n$-box $B$, of the form $B=A_1\times A_2\times\cdots\times A_d$ for sets $A_i$ which are levels of tower~$n$. We have
\[
\sigma(B) = \begin{cases}
\alpha_{n_1}/3^{n-n_1} &\text{ if } A_1\cap T^{-k_2}(A_2)\cap\cdots\cap T^{-k_d}(A_d)=A_1\\
0 &\text{ otherwise, that is if } A_1\cap T^{-k_2}(A_2)\cap\cdots\cap T^{-k_d}(A_d)=\emptyset.
\end{cases}
\]
Observing that $\mu(A_1)=\mu(F_{n_1})/3^{n-n_1}$, we get
\[
\sigma(A_1\times A_2\times\cdots\times A_d) = \alpha \mu (A_1\cap T^{-k_2}(A_2)\cap\cdots\cap T^{-k_d}(A_d)),
\]
with $\alpha:= \alpha_{n_1}/\mu(F_{n_1})$. Finally, the above formula remains valid if the sets $A_i$ are finite unions of levels of tower~$n$, and hence for any choice of these sets.
Conversely, assume that $\sigma$ is a graph joining of the form given by~\eqref{eq:graph_joining}. Observe that if $A$ is a level of $C_n$, and if $|k|\le h_n$, then
\[A\cap T^k A=
\begin{cases}
A &\text{ if } k=0,\\
\emptyset&\text{ otherwise.}
\end{cases}
\]
Take $n$ large enough so that for all $1\le i\le d$, $h_n/2 > |k_i|$. Let $B$ be an $n$-box, which can always be written as $B=A\times T^{k'_2}A_2\times\cdots\times T^{k'_d}A_d$ for some level $A$ of $C_n$ and some integers $k'_2,\ldots,k'_d$ satisfying $|k'_i|\le h_n/2$. Then
\[
\sigma(B) = \alpha \mu(A\cap T^{k'_2-k_2}(A)\cap\cdots\cap T^{k'_d-k_d}(A)),
\]
which is positive if and only if for each $2\le i\le d$, $k_i=k'_i$. Hence $\sigma|_{C^{d}_n}$ is concentrated on a single diagonal, which consists of $n$-boxes of the form $A\times T^{k_2}(A)\times\cdots\times T^{k_d}(A)$. This already proves that $\sigma$ is a diagonal measure. Moreover, if $B$ is such an $n$-box, then $B(1,\ldots,1)$ is an $(n+1)$-box of the same form, hence the transition from $n$ to $n+1$ corresponds to the central case.
\end{proof}
\begin{definition}
We say that $x_1\in X$ is \emph{compatible} with the diagonal measure $\sigma_{(n_0,D,\tau)}$ if there exists $(x_2,\ldots,x_d)\in X^{d-1}$ such that $(x_1,\ldots,x_d)$ is seen by $\sigma_{(n_0,D,\tau)}$.
\end{definition}
\begin{prop}
\label{prop:compatible}
Let $\sigma_{(n_0,D,\tau)}$ be a diagonal measure. If the set of $x_1\in X$ which are compatible with $\sigma_{(n_0,D,\tau)}$ has positive $\mu$-measure, then $\sigma_{(n_0,D,\tau)}$ is a graph joining arising from powers of $T$, as defined by~\eqref{eq:graph_joining}.
\end{prop}
\begin{proof}
Let $x_1$ be compatible with the diagonal measure $\sigma:=\sigma_{(n_0,D,\tau)}$, and let $(x_2,\ldots,x_d)\in X^{d-1}$ be such that $(x_1,\ldots,x_d)$ is seen by $\sigma$. Let $n\ge n_0$ be large enough so that $(x_1,\ldots,x_d)\in C^{d}_n$. Then
\[(x_1,\ldots,x_d)\in D_{n+1}(\sigma)=D_n(\sigma)(\tau_n(1),\ldots,\tau_n(d)).
\]
If we further assume that $(\tau_n(1),\ldots,\tau_n(d))\neq(1,\ldots,1)$, then the transition from $D_n(\sigma)$ to $D_{n+1}(\sigma)$ corresponds to the corner case, and there is only one occurrence of $D_n(\sigma)$ inside $D_{n+1}(\sigma)$. Since also $(x_1,\ldots,x_d)\in D_{n}(\sigma)$, it follows that $t_n(x_1)=\tau_n(1)$.
Therefore, if there exist infinitely many integers $n$ such that $(\tau_n(1),\ldots,\tau_n(d))\neq(1,\ldots,1)$, then the compatibility of $x_1$ with the diagonal measure $\sigma$ forces the value of $t_n(x_1)$ for infinitely many integers $n$. This implies that $x_1$ belongs to a fixed set which is $\mu$-negligible.
To conclude the proof, it is enough to apply Proposition~\ref{prop:graph}.
\end{proof}
\begin{remark}
Taking $(n_0,D,\tau)\in\mathscr{D}$ for which the corner case occurs infinitely often, and considering the corresponding diagonal measure $\sigma_{(n_0,D,\tau)}$, we see that there exist ergodic diagonal measures which are not graph joinings. By Proposition~\ref{prop:compatible}, these measures are concentrated on sets $N_1\times N_2\times \cdots\times N_d$, where each $N_i$, $1\le i\le d$, is a $\mu$-negligible set. We call such a measure a \emph{weird} measure.
It is conservative whenever the central case occurs infinitely often.
\end{remark}
\section{Ergodic decomposition with absolute continuity of the marginals}
Let $\sigma$ be a boundedly finite measure on $X^d$ which is $T^{\times d}$-invariant.
We recall that by Hopf's decomposition, $\sigma$ can be written as $\sigma=\sigma_{\!\text{diss}}+\sigma_{\!\text{cons}}$, where $\sigma_{\!\text{diss}}$ and $\sigma_{\!\text{cons}}$ are mutually singular, boundedly finite, $T^{\times d}$-invariant, the system $(X^d,\sigma_{\!\text{diss}},T^{\times d})$ is totally dissipative, and $(X^d,\sigma_{\!\text{cons}},T^{\times d})$ is conservative.
The conservative part $\sigma_{\!\text{cons}}$ admits an \emph{ergodic decomposition} (see \cite{Aaronson}, Section 2.2.9). By Theorem~\ref{thm:msj}, its ergodic components are all products of diagonal measures. Note that the dissipative part $\sigma_{\!\text{diss}}$ can also be written as
\[
\sigma_{\!\text{diss}} = \int_W \omega_x\, d\sigma_{\!\text{diss}}(x),
\]
where $W$ is a wandering set satisfying $\bigsqcup_{k\in\mathbb{Z}} (T^{\times d})^k W = X^d\mod \sigma_{\!\text{diss}}$, and $\omega_x$ is defined by
\[
\omega_x := \sum_{k\in\mathbb{Z}} \delta_{(T^{\times d})^kx}.
\]
By Theorem~\ref{thm:msj}, these measures $\omega_x$ are (weird) $d$-dimensional diagonal measures.
Observe that, even if the $\sigma$-algebra generated by one coordinate is not $\sigma$-finite, we can always define the marginals of $\sigma$ as the respective pushforward measures of $\sigma$ by the projections on each coordinate. Note that these marginal measures may take only the values 0 or $\infty$, which is for example the case when $\sigma$ is the product measure $\mu^{\otimes d}$ with $d\ge 2$. But even in such a case, it makes sense to consider the absolute continuity of the marginal with respect to $\mu$.
The purpose of the present section is to show that, with an assumption of absolute continuity of the marginals of $\sigma$, no weird measure can appear in the decomposition of $\sigma$. More precisely, we will prove the following theorem.
\begin{theo}
\label{thm:product_of_graphs}
Let $\sigma$ be a boundedly finite, $T^{\times d}$-invariant measure on $X^d$. Assume that all its marginals are absolutely continuous with respect to $\mu$. Then the system $(X^d,\sigma,T^{\times d})$ is conservative, and the ergodic components of $\sigma$ are products of graph joinings arising from powers of $T$.
\end{theo}
\subsection{Contribution of $d$-dimensional diagonal measures}
In this section we consider the contribution of $d$-dimensional diagonal measures to the decomposition of $\sigma$, which takes into account all $d$-dimensional diagonal measures appearing as ergodic components of $\sigma_{\!\text{cons}}$, and all $\omega_x$ appearing in the decomposition of $\sigma_{\!\text{diss}}$. More precisely, using the parametrization of ergodic diagonal measures presented in Section~\ref{sec:parametrization}, this contribution takes the form
\begin{equation}
\label{eq:decomposition_sigma_Delta}
\sigma_\Delta =\int_{\mathscr{D} } \sigma_{(n_0,D,\tau)}\, dm(n_0,D,\tau),
\end{equation}
where $m$ is a $\sigma$-finite measure on $\mathscr{D}$ (this measure takes into account the multiplicative constants up to which the diagonal measures are defined).
\begin{prop}
\label{prop:diagonal}
Let $\sigma$ be a boundedly finite, $T^{\times d}$-invariant measure on $X^d$, whose first marginal is absolutely continuous with respect to $\mu$. Then the system $(X^d,\sigma,T^{\times d})$ is conservative, and the ergodic components of $\sigma$ which are $d$-dimensional diagonal measures are graph joinings arising from powers of $T$.
\end{prop}
Before proving the proposition, we need some additional technical tools.
\begin{definition}
We say that $(x_2,\ldots,x_d)\in X^{d-1}$ is \emph{compatible} with $x_1\in X$ if there exists a diagonal measure $\sigma_{(n_0,D,\tau)}$ such that $(x_1,\ldots,x_d)$ is seen by $\sigma_{(n_0,D,\tau)}$.
We set
\[
\overline{x_1}:=\{(x_2,\ldots,x_d)\in X^{d-1}: (x_2,\ldots,x_d)\text{ is compatible with }x_1\}.
\]
\end{definition}
\begin{remark}
\label{rem:classe}
It follows from the definition of $\mathscr{D} $ and from Proposition~\ref{prop:construction_sigma} that $\overline{x_1}$ is the set of $(x_2,\ldots,x_d)\in X^{d-1}$ satisfying, for all large enough $n$:
\begin{itemize}
\item either $t_n(x_i)=t_n(x_1)$ for each $2\le i\le d$,
\item or $\{1,3\}\subset\{t_n(x_i):1\le i\le d\}$.
\end{itemize}
\end{remark}
\begin{lemma}
\label{lemma:unique}
For each $x_1\in X$ and each $(x_2,\ldots,x_d)\in \overline{x_1}$, there exists a \emph{unique} $(n_0,D,\tau)\in \mathscr{D} $ such that $(x_1,\ldots,x_d)$ is seen by $\sigma_{(n_0,D,\tau)}$.
\end{lemma}
\begin{proof}
If $(x_1,\ldots,x_d)$ is seen by two diagonal measures $\sigma$ and $\sigma'$, then for all $n$ large enough, $(x_1,\ldots,x_d)\in D_n(\sigma)$ and $(x_1,\ldots,x_d)\in D_n(\sigma')$. It follows that $D_n(\sigma)=D_n(\sigma')$ for all large enough $n$, hence $\sigma$ and $\sigma'$ are proportional.
\end{proof}
The preceding lemma enables us to define, for any $x_1\in X$, the measurable function $\varphi_{x_1}: \overline{x_1}\to \mathscr{D}$, by
\begin{multline*}
\varphi_{x_1}(x_2,\ldots,x_d) := \text{ the unique }(n_0,D,\tau)\in \mathscr{D} \\
\text{ such that $(x_1,\ldots,x_d)$ is seen by $\sigma_{(n_0,D,\tau)}$.}
\end{multline*}
Obviously, for any $(x_2,\ldots,x_d)\in \overline{x_1}$,
\begin{equation}
\label{eq:compatible}
\text{$x_1$ is compatible with the diagonal measure $\sigma_{\varphi_{x_1}(x_2,\ldots,x_d)}$.}
\end{equation}
\begin{lemma}
\label{lemma:invariance_phi}
For each $x_1\in X$, we have $\overline{Tx_1}=T^{\times (d-1)}(\overline{x_1})$. Moreover, \[\varphi_{Tx_1}\circ T^{\times (d-1)} = \varphi_{x_1}.\]
\end{lemma}
\begin{proof}
This follows from the fact that $(x_1,\ldots,x_d)$ is seen by the diagonal measure $\sigma_{(n_0,D,\tau)}$ if and only if $(Tx_1,\ldots,Tx_d)$ is seen by $\sigma_{(n_0,D,\tau)}$.
\end{proof}
\begin{remark}
Using Remark~\ref{rem:classe} and the fact that $t_n(x_1)=t_n(Tx_1)$ if $n$ is large enough, we also get $\overline{Tx_1}=\overline{x_1}$.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:diagonal}]
Since $\sigma_\Delta$ is absolutely continuous with respect to $\sigma$, its first marginal is absolutely continuous with respect to $\mu$. Therefore, we can disintegrate $\sigma_\Delta$ with respect to $\mu$ (see e.g. \cite{ChangPollard}, Theorem~1): There exists a family ${(\nu_{x_1})}_{x_1\in X}$ of $\sigma$-finite measures on $X^{d-1}$, such that for each measurable $B\subset X^{d-1}$, $x_1\mapsto \nu_{x_1}(B)$ is measurable, and for each measurable $A\subset X$,
\begin{equation}
\label{eq:disintegration_of_sigma} \sigma_\Delta(A\times B)= \int_A \nu_{x_1}(B) d\mu(x_1).
\end{equation}
Let us consider the following measurable subset of $X^d$:
\[
C:=\{(x_1,\ldots,x_d):\ (x_2,\ldots,x_d)\text{ is compatible with }x_1\} = \bigcup_{x_1\in X}\{x_1\}\times \overline{x_1}.
\]
Recalling~\eqref{eq:seen}, for any diagonal measure $\sigma_{(n_0,D,\tau)}$, we have
\[
\sigma_{(n_0,D,\tau)}(X^d\setminus C)=0.
\]
Thus, by~\eqref{eq:decomposition_sigma_Delta}, $\sigma_\Delta(X^d\setminus C)=0$.
It follows that, for $\mu$-almost all $x_1\in X$, $\nu_{x_1}$ is concentrated on $\overline{x_1}$. This enables us to define, for $\mu$-almost all $x_1\in X$, the measure $\gamma_{x_1}$ on $\mathscr{D}$ as the pushforward of $\nu_{x_1}$ by the map $\varphi_{x_1}$ introduced after Lemma~\ref{lemma:unique}. By~\eqref{eq:compatible}, $\gamma_{x_1}$ is concentrated on the set of $(n_0,D,\tau)\in\mathscr{D}$ such that $x_1$ is compatible with $\sigma_{(n_0,D,\tau)}$.
According to Lemma~\ref{lemma:invariance_phi}, the following diagram commutes:
\begin{center}
\includegraphics{diagramme.pdf}
\end{center}
Using the invariance of $\sigma_\Delta$ by $T^{\times d}$ and the invariance of $\mu$ by $T$, we get that
$(T^{\times(d-1)})_*(\nu_{x_1})=\nu_{Tx_1}$ for $\mu$-almost all $x_1\in X$. Therefore, for $\mu$-almost all $x_1\in X$, we obtain
\begin{align*}
\gamma_{x_1} &= (\varphi_{x_1})_*(\nu_{x_1})\\
&= (\varphi_{Tx_1})_*(T^{\times(d-1)})_*(\nu_{x_1})\\
&= (\varphi_{Tx_1})_*(\nu_{Tx_1}) \\
&= \gamma_{Tx_1}.
\end{align*}
By ergodicity of $T$, it follows that there exists some measure $\gamma$ on $\mathscr{D}$ such that $\gamma_{x_1}=\gamma$ for $\mu$-almost all $x_1\in X$. Moreover, $\gamma$ is concentrated on the set of parameters $(n_0,D,\tau)$ such that $\mu$-almost every $x_1$ is compatible with $\sigma_{(n_0,D,\tau)}$.
From Proposition~\ref{prop:compatible}, it follows that $\gamma$ is concentrated on the set of parameters corresponding to graph joinings arising from powers of $T$. For $k_2,\ldots,k_d\in\mathbb{Z}$, let us denote by $\pi(k_2,\ldots,k_d)\in\mathscr{D}$ the parameter corresponding to the graph joining given by~\eqref{eq:graph_joining}. Then there exist non-negative coefficients $c_{k_2,\ldots,k_d}$, $k_2,\ldots,k_d\in\mathbb{Z}$, such that
\[
\gamma = \sum_{k_2,\ldots,k_d\in\mathbb{Z}}c_{k_2,\ldots,k_d} \delta_{\pi(k_2,\ldots,k_d)}.
\]
Observe now that, for any $x_1\in X$, the only point $(x_2,\ldots,x_d)\in\overline{x_1}$ such that $(x_1,x_2,\ldots,x_d)$ is seen by the graph joining $\sigma_{\pi(k_2,\ldots,k_d)}$ is given by $x_i=T^{k_i}(x_1)$, $2\le i\le d$.
Therefore, for $\mu$-almost every $x_1\in X$,
\[
\nu_{x_1} = \sum_{k_2,\ldots,k_d\in\mathbb{Z}}c_{k_2,\ldots,k_d} \delta_{(T^{k_2}x_1,\ldots,T^{k_d}x_1)}.
\]
Coming back to formula~\eqref{eq:disintegration_of_sigma}, we obtain
\[
\sigma_\Delta(A\times B)=\sum_{k_2,\ldots,k_d\in\mathbb{Z}}c_{k_2,\ldots,k_d}\, \mu\Bigl(A\cap (T^{k_2}\times\cdots\times T^{k_d})^{-1}(B)\Bigr).
\]
In particular, we see that no measure of the form $\omega_x$ appears in the decomposition of $\sigma$, hence $\sigma_{\!\text{diss}}=0$ and the system $(X^d,\sigma,T^{\times d})$ is conservative.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:product_of_graphs}}
We already know by Proposition~\ref{prop:diagonal} that, under the assumptions of the theorem, the system $(X^d,\sigma,T^{\times d})$ is conservative. We can therefore consider the ergodic decomposition of $\sigma$, which by Theorem~\ref{thm:msj} and the parametrization of the set of boundedly finite diagonal measures can be described as follows.
For each nonempty $I\subset\{1,\ldots,d\}$, let $\mathscr{D}^I$ be the set of parameters for the boundedly finite, $T^{\times |I|}$-invariant, diagonal measures on $X^I$ ($\mathscr{D}^I$ is the exact analog of $\mathscr{D}$, which corresponds precisely to $I=\{1,\ldots,d\}$). For each $\omega\in\mathscr{D}^I$, we thus have a canonical diagonal measure $\sigma_\omega^I$ on $X^I$, and each diagonal measure on $X^I$ is of the form $c\, \sigma_\omega^I$ for some $c>0$ and some $\omega\in\mathscr{D}^I$.
Let $\P_d$ be the set of all partitions of $\{1,\ldots,d\}$.
For any $\pi=\{I_1,\ldots,I_r\}\in\P_d$, let
\[
\mathscr{D}^\pi:= \mathscr{D}^{I_1}\times\cdots\times \mathscr{D}^{I_r}.
\]
$\mathscr{D}^\pi$ can be viewed as a natural set of parameters for boundedly finite, $T^{\times d}$-invariant measures, which are of the form $\sigma^{I_1}\otimes\cdots\otimes\sigma^{I_r}$, where each $\sigma^{I_j}$ is a diagonal measure on $X^{I_j}$.
From Theorem~\ref{thm:msj}, it follows that the ergodic decomposition of $\sigma$ can be written as
\[
\sigma=\sum_{\pi=\{I_1,\ldots,I_r\}\in\P_d} p_\pi
\int_{\omega=(\omega_1,\ldots,\omega_r)\in \mathscr{D}^\pi}
c(\omega) \sigma_{\omega_1}^{I_1}\otimes\cdots\otimes\sigma_{\omega_r}^{I_r} \, dm_\pi(\omega),
\]
where $p_\pi\ge0$, $\sum_\pi p_\pi=1$, $m_\pi$ is a probability measure on $\mathscr{D}^\pi$, and $c(\omega)>0$ $m_\pi$-a.s.
Let us fix $\pi=\{I_1,\ldots,I_r\}\in\P_d$ such that $p_\pi>0$, and assume that $1\in I_1$. Set
\[
\sigma_1:= \int_{\mathscr{D}^\pi} \sigma_{\omega_1}^{I_1} \, dm_\pi(\omega).
\]
This is a measure on $X^{I_1}$, which is $T^{\times|I_1|}$-invariant, and boundedly finite by~\eqref{eq:borne_universelle}. We want to show that Proposition~\ref{prop:diagonal} can be applied
to $\sigma_1$, and for this we only need to check that its first marginal is absolutely continuous with respect to $\mu$. Let $N\subset X$ be a $\mu$-negligible set. We know that
$\sigma(N\times X^{d-1})=0$, thus
\[
\int_{\omega=(\omega_1,\ldots,\omega_r)\in \mathscr{D}^\pi}
c(\omega) \sigma_{\omega_1}^{I_1}(N\times X^{I_1\setminus\{1\}})\sigma_{\omega_2}^{I_2}(X^{I_2}) \cdots\sigma_{\omega_r}^{I_r}(X^{I_r}) \, dm_\pi(\omega)=0.
\]
By Proposition~\ref{prop:construction_sigma}, we know that
\[\sigma_{\omega_2}^{I_2}(X^{I_2}) \cdots\sigma_{\omega_r}^{I_r}(X^{I_r}) =\infty,
\]
and since $c(\omega)>0$ $m_\pi$-a.s., we deduce that
\[ c(\omega)\sigma_{\omega_2}^{I_2}(X^{I_2}) \cdots\sigma_{\omega_r}^{I_r}(X^{I_r}) =\infty \quad m_\pi\text{-a.s.}
\]
It follows that
\[\sigma_1(N\times X^{I_1\setminus\{1\}}) =\int_{\omega=(\omega_1,\ldots,\omega_r)\in \mathscr{D}^\pi}
\sigma_{\omega_1}^{I_1}(N\times X^{I_1\setminus\{1\}})\, dm_\pi(\omega)=0.
\]
Then Proposition~\ref{prop:diagonal} gives that, for $m_\pi$-almost all $\omega\in\mathscr{D}^\pi$, $\sigma_{\omega_1}^{I_1}$ is a graph joining arising from powers of $T$.
Since we assumed that all marginals of $\sigma$ are absolutely continuous with respect to $\mu$, the same argument applies for each $\sigma_{\omega_j}^{I_j}$, $1\le j\le r$, and this ends the proof.
\section{Consequences for Chacon infinite transformation}
\label{sec:consequences}
\subsection{Commutant of $T$}
\begin{prop}
The centralizer of $T$ is reduced to the powers of $T$.
\end{prop}
\begin{proof}
Let $S$ be a $\mu$-preserving transformation commuting with $T$. Then the graph joining defined on $X\times X$ by
\[ \sigma_S(A\times B):= \mu(A\cap S^{-1}(B)) \]
is a conservative ergodic $T\times T$-invariant measure which is supported on the graph of $S$. This measure is also boundedly finite. Since it is not proportional to the product measure, by Theorem~\ref{thm:msj} it has to be a 2-dimensional diagonal measure. Moreover, its marginals are absolutely continuous with respect to $\mu$, hence by Proposition~\ref{prop:diagonal}, $\sigma_S$ is supported by the graph of a power $T^k$ of $T$. Since $\sigma_S$ is also supported on the graph of $S$, we conclude that $S=T^k$ $\mu$-almost everywhere.
\end{proof}
\subsection{Joinings and factors}
Let $(Y_i,\mathscr{B}_i,\nu_i,S_i)$, $i=1,2$, be two infinite measure preserving dynamical systems.
We recall that a \emph{joining} between them is any $S_1\times S_2$-invariant measure $m$ on the Cartesian product $Y_1\times Y_2$, whose marginals are respectively $\nu_1$ and $\nu_2$. In particular,
in the dynamical system $(Y_1\times Y_2, \mathscr{B}_1\otimes\mathscr{B}_2, m)$, the sub-$\sigma$-algebra generated by the projection on coordinate $i$ is $\sigma$-finite if and only if $(Y_i,\mathscr{B}_i,\nu_i)$ is a $\sigma$-finite measure space.
\begin{prop}
\label{prop:alphamu}
For any $\alpha\in(0,\infty]$, $\alpha\neq1$, there is no joining between $\left(X,\mathscr{A},\mu,T\right)$ and $\left(X,\mathscr{A},\alpha\mu,T\right)$.
\end{prop}
\begin{proof}
Assume that there exists a joining $m$ between $\left(X,\mathscr{A},\mu,T\right)$ and $\left(X,\mathscr{A},\alpha\mu,T\right)$. Then $m$ is $T\times T$-invariant, and its marginals are absolutely continuous with respect to $\mu$. Moreover, since the first marginal is $\mu$, $m$ is boundedly finite. By Proposition~\ref{prop:diagonal}, the system it defines is conservative, and
any ergodic component of $m$ which is a diagonal measure is the graph joining supported on the graph of $T^k$ for some $k\in\mathbb{Z}$. But $\mu\otimes\mu$ cannot appear
as an ergodic component of $m$ (otherwise the $\sigma$-algebra generated by the first coordinate would not be $\sigma$-finite).
Therefore, there exist nonnegative numbers $a_{k}$, $k\in\mathbb{Z}$, with $\sum_{k\in\mathbb{Z}}a_{k}=1$ such that the ergodic decomposition of $m$ can be written as
\[
m\left(A_{1}\times A_{2}\right)=\sum_{k\in\mathbb{Z}}a_{k} \, \mu\left(A_{1}\cap T^{-k}A_{2}\right).
\]
But then, both marginals of $m$ are equal to $\mu$, thus $\alpha=1$.
\end{proof}
This proposition leads to a nice corollary, for which we need to recall from~\cite{Aaronson} the following definition.
\begin{definition}
A \emph{law of large numbers} for a conservative, ergodic, measure preserving dynamical system $(Y,\mathscr{B},\nu,S)$ is a function $L:\{0,1\}^\mathbb{N}\to[0,\infty]$ such that
for all $B\in\mathscr{B}$, for $\nu$-almost every $y\in Y$,
\[
L\left(\ind{B}(y),\ind{B}(Sy),\ldots\right) = \nu(B).
\]
\end{definition}
Theorem 3.2.5 in~\cite{Aaronson} provides a sufficient condition for $S$ to have a law of large numbers, which is exactly the conclusion of Proposition~\ref{prop:alphamu}.
\begin{corollary}
\label{cor:law_of_large_numbers}
The dynamical system $\left(X,\mathscr{A},\mu,T\right)$ has a law of large numbers.
\end{corollary}
\begin{prop}
\label{prop:joining with Chacon}
Let $\left(Z,\mathcal{Z},\rho,R\right)$ be any dynamical system, and assume that there exists a joining $\left(X\times Z,\mathcal{A}\otimes\mathcal{Z},m,T\times R\right)$.
Then $\left(X,\mathscr{A},\mu,T\right)$ is a factor of $\left(Z,\mathcal{Z},\rho,R\right)$.
\end{prop}
\begin{proof}
Since the marginal of $m$ on the second coordinate is $\rho$, there exists a family $(\mu_z)_{z\in Z}$ of probability measures on $X$ (defined $\rho$-almost everywhere), such
that we have the following disintegration of $m$: for all $A\in\mathscr{A}$ and all $B\in\mathcal{Z}$,
\[
m(A\times B) = \int_B \mu_z(A)\, d\rho(z).
\]
Since $m$ is $T\times R$-invariant, we have $\rho$-almost everywhere
\begin{equation}
\label{eq:mu_Rz}
\mu_{Rz}=T_*(\mu_z).
\end{equation}
We can then form the relatively independent joining of $\left(X\times Z,\mathcal{A}\otimes\mathcal{Z},m,T\times R\right)$
over $\left(Z,\mathcal{Z},\rho,R\right)$, that is:
\[
\left(X\times Z\times X,\mathcal{A}\otimes\mathcal{Z}\otimes\mathcal{A},m\otimes_{\mathcal{Z}}m,T\times R\times T\right),
\]
where
\[
m\otimes_{\mathcal{Z}}m\left(A_{1}\times B\times A_{2}\right)=\int_{B}\mu_{z}\otimes\mu_{z}\left(A_{1}\times A_{2}\right)\rho\left(dz\right),
\]
and extract from it a self-joining $\left(X\times X,\mathcal{A}\otimes\mathcal{A},\widetilde{m},T\times T\right)$
where
\[
\widetilde{m}\left(A_{1}\times A_{2}\right)=\int_{Z}\mu_{z}\otimes\mu_{z}\left(A_{1}\times A_{2}\right)\rho\left(dz\right).
\]
Then $\widetilde{m}$ is $T\times T$-invariant, and its marginals are both equal to $\mu$.
As in the proof of Proposition~\ref{prop:alphamu}, we deduce that there exist nonnegative numbers $a_{k}$, $k\in\mathbb{Z}$, with $\sum_{k\in\mathbb{Z}}a_{k}=1$ such that the ergodic decomposition of $\widetilde{m}$ can be written as
\[
\int_{Z}\mu_{z}\otimes\mu_{z}\left(A_{1}\times A_{2}\right)\rho\left(dz\right)=\sum_{k\in\mathbb{Z}}a_{k} \, \mu\left(A_{1}\cap T^{-k}A_{2}\right).
\]
For $\rho$-a.e. $z\in Z$, the probability measure $\mu_{z}\otimes\mu_{z}$ is therefore supported
by the graphs of $T^{k}$, $k\in\mathbb{Z}$. In particular, $\mu_{z}$
is a discrete probability measure, and its support is necessarily contained in
a single $T$-orbit. This support can be totally ordered according
to the place on the orbit, thus
we can measurably choose one point $\varphi(z)$ on the support of $\mu_z$ by looking at the
point with the highest weight and the lowest place in the orbit (this
is well defined as the number of such points is finite). Since $\mu_z$ is supported by the $T$-orbit of $\varphi(z)$,
we have a family $(w_i)_{i\in\mathbb{Z}}$ of measurable functions from $Z$ to $[0,1]$ such that, for $\rho$-almost every $z$,
\[
\mu_z = \sum_{i\in\mathbb{Z}} w_i(z) \, \delta_{T^i\varphi(z)}.
\]
Then, the disintegration of $m$ becomes
\begin{equation}
\label{eq:disintegration2}
m(A\times B) = \sum_{i\in\mathbb{Z}} \int_B w_i(z) \ind{A}\bigl(T^i\varphi(z)\bigr)\,d\rho(z).
\end{equation}
Of course, since $\mu_z$ is a probability, we have $\sum_{i\in\mathbb{Z}} w_i(z) =1$, $\rho$-almost everywhere.
Moreover, from~\eqref{eq:mu_Rz}, we deduce that $\varphi\circ R=T\circ \varphi$, and that each function $w_i$ is $R$-invariant.
To show that $\varphi$ is a homomorphism between the dynamical systems $\left(Z,\mathcal{Z},\rho,R\right)$ and $\left(X,\mathscr{A},\mu,T\right)$, it only remains to check
that $\varphi_*(\rho)=\mu$. But this comes from the following computation: for each $A\in\mathscr{A}$, we have
\begin{align*}
\rho\bigl(\varphi^{-1}(A)\bigr) & = \int_Z \ind{A}\bigl(\varphi(z)\bigr)\,d\rho(z) \\
& = \int_Z \sum_{i\in\mathbb{Z}} w_i(z) \ind{A}\bigl(\varphi(z)\bigr)\,d\rho(z)\\
& = \sum_{i\in\mathbb{Z}} \int_Z w_i(R^i z) \ind{A}\bigl(\varphi(R^i z)\bigr)\,d\rho(z) \quad\text{(by $R$-invariance of $\rho$)}\\
& = \sum_{i\in\mathbb{Z}} \int_Z w_i(z) \ind{A}\bigl(T^i\varphi(z)\bigr)\,d\rho(z) \\
& = m(A\times Z) \quad\text{(by~\eqref{eq:disintegration2})}\\
& = \mu(A).
\end{align*}
\end{proof}
\begin{prop}[$T$ has no non-trivial factor]
\label{prop:sans_facteur}
Assume that $\left(Z,\mathcal{Z},\rho,R\right)$ is a factor of $\left(X,\mathscr{A},\mu,T\right)$.
Then any homomorphism $\pi:X\to Z$ between the two systems is in fact an isomorphism.
\end{prop}
\begin{proof}
To any homomorphism $\pi:X\to Z$, we can associate the joining $\Delta_\pi$ of the two systems defined by
\[
\Delta_\pi(A\times B):= \mu(A\cap \pi^{-1}B)
\]
for any $A\in\mathscr{A}$, $B\in\mathcal{Z}$. Let us repeat the construction made in the proof of Proposition~\ref{prop:joining with Chacon} with $m=\Delta_\pi$, and use the same notations as in this proof.
Since $T$ is ergodic, $R$ is also ergodic, hence the weights $w_i(z)$, $i\in\mathbb{Z}$, which are $R$-invariant, are $\rho$-almost everywhere constant. By construction, $w_0>0$, and we claim that
for $i\neq0$, $w_i=0$. Indeed, otherwise we would have, for $\rho$-almost all $z$,
$z=\pi(\varphi(z))=\pi(T^i \varphi(z))$. This would imply that, for $\mu$-almost all $x$, $\pi(x)=\pi(T^ix)$, hence $\pi$ would be constant as $T^i$ is ergodic. This is impossible because $\left(Z,\mathcal{Z},\rho,R\right)$ cannot be reduced to a single point system (since $\rho$ is $\sigma$-finite).
We conclude that the conditional measure $\mu_z$ is $\rho$-almost everywhere the Dirac mass at $\varphi(z)$. Therefore, $\pi$ is invertible, and its inverse is $\varphi$.
\end{proof}
\begin{remark}
It is easily seen that all the results proved in Section~\ref{sec:consequences} are valid for
any dynamical system $\left(X,\mathscr{A},\mu,T\right)$ for which the conclusion of Theorem~\ref{thm:product_of_graphs} holds.
Concerning Corollary~\ref{cor:law_of_large_numbers}, it is known in fact that Chacon infinite transformation admits a \emph{measurable} law of large numbers: this is a consequence
of Theorem~3.3.1 in~\cite{Aaronson}, and the fact that Chacon infinite transformation is rationally ergodic~\cite{Silva_et_al2015}.
We do not know whether the conclusion of Theorem~\ref{thm:product_of_graphs} alone implies the existence of a measurable law of large numbers.
\end{remark}
\begin{appendix}
\section{Product theorem}
\begin{theo}
\label{thm:product}
Let $X$ and $Y$ be two standard Borel measurable spaces. Let $T: X\to X$ and $S: Y\to Y$ be invertible, bi-measurable transformations.
Let $\sigma$ be a $\sigma$-finite measure on $X\times Y$ satisfying
\begin{itemize}
\item there exist $X_0\subset X$ and $Y_0\subset Y$ with $0<\sigma(X_0\times Y_0)<\infty$,
\item $\sigma$ is $T\times S$-invariant,
\item the dynamical system $(X\times Y,T\times S,\sigma)$ is conservative and ergodic,
\item $\Id\times S$ is non-singular with respect to $\sigma$.
\end{itemize}
Then, $\sigma$ is in fact $\Id\times S$-invariant, and there exist two measures $\mu$ and $\nu$ respectively on $X$ and $Y$, invariant by $T$ and $S$, such that $\sigma=\mu\otimes\nu$. Moreover, the dynamical systems $(X,\mu, T)$ and $(Y,\nu,S)$ are conservative and ergodic.
\end{theo}
\begin{proof}
Since $\Id\times S$ commutes with $T\times S$, the density
\[
\frac{d (\Id\times S)_*\sigma}{d\sigma}(x,y)
\]
is $T\times S$-invariant. Hence, by ergodicity, it is $\sigma$-almost everywhere equal to some constant $c$, $0<c<\infty$.
Set, for each $n\in\mathbb{Z}$, $X_{n}:= T^{n}X_{0}$, and $Y_{n}:= S^{n}Y_{0}$, where $X_0$ and $Y_0$ are given in the assumptions of the theorem.
As $\sigma$ is invariant by $T\times S$, we deduce that, for all $\left(m,n\right)\in\mathbb{Z}^{2}$,
\[ \sigma\left(X_{n}\times Y_{m}\right) = \sigma\left(X_{0}\times Y_{m-n}\right)
=c^{n-m}\sigma\left(X_{0}\times Y_{0}\right).
\]
Choose two sequences of positive numbers $\left(k_{n}\right)_{n\in\mathbb{Z}}$
and $\left(\ell_{n}\right)_{n\in\mathbb{Z}}$ such that
\[
\sum_{\left(n,m\right)\in\mathbb{Z}^{2}}k_{n}\ell_{m}c^{n-m}
=\left(\sum_{n\in\mathbb{Z}}k_{n}c^{n}\right)\left(\sum_{m\in\mathbb{Z}}\ell_{m}c^{-m}\right)<\infty.
\]
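(For instance, since $0<c<\infty$, the choice $k_n:= 2^{-|n|}c^{-n}$ and $\ell_m:= 2^{-|m|}c^{m}$ is admissible, each of the two factors above being equal to $\sum_{n\in\mathbb{Z}}2^{-|n|}=3$.)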
Define $f:=\sum_{n\in\mathbb{Z}}k_{n}\ind{X_{n}}$ and $g:=\sum_{n\in\mathbb{Z}}\ell_{n}\ind{Y_{n}}$.
As $f\otimes g$ is supported on $\cup_{\left(n,m\right)\in\mathbb{Z}^{2}}\left(X_{n}\times Y_{m}\right)$
which contains $\cup_{n\in\mathbb{Z}}\left(X_{n}\times Y_{n}\right)=X\times Y$
mod $\sigma$ (by ergodicity of $T\times S$), we deduce that $f\otimes g>0$
$\sigma$-a.e. Moreover,
\[
\int_{X\times Y}f\otimes g\,d\sigma=\sigma\left(X_{0}\times Y_{0}\right)\left(\sum_{n\in\mathbb{Z}}k_{n}c^{n}\right)\left(\sum_{m\in\mathbb{Z}}\ell_{m}c^{-m}\right)<\infty.
\]
So we can assume that $\int_{X\times Y}f\otimes g\,d\sigma=1$, and we can
define the probability measure $\rho$ whose density with respect to $\sigma$ is equal to $f\otimes g$. We denote its respective projections on $X$ and $Y$ by $\rho_X$ and $\rho_Y$.
Let us compute the density of $(\Id\times S)_*(\rho)$ with respect to $\rho$. For any measurable non-negative functions $h$ on $X$ and $k$ on $Y$, we have
\begin{align*}
&\int_{X\times Y}h\otimes k\circ\left(\Id\times S\right)\left(x,y\right)d\rho(x,y)\\
& =\int_{X\times Y}h\left(x\right)k\left(Sy\right)f\left(x\right)g\left(y\right)d\sigma(x,y)\\
& =c \int_{X\times Y}h\left(x\right)k\left(y\right)f\left(x\right)g\left(S^{-1}y\right)d\sigma(x,y)\\
& =c \int_{X\times Y}h\left(x\right)k\left(y\right)\frac{g\left(S^{-1}y\right)}{g\left(y\right)}d\rho (x,y).
\end{align*}
This proves that the sought-after density is equal to $c\frac{g\left(S^{-1}y\right)}{g\left(y\right)}$. In particular, it only depends on $y$, and by taking $h=1$ in the above computation, we get that $S$ is non-singular with respect to $\rho_Y$, with the same density.
Now we wish to prove that the non-singular dynamical system $(Y, \rho_Y, S)$ is ergodic and conservative. Indeed, if $A$ is an $S$-invariant set with $\rho_Y(A)>0$, then $X\times A$ is $T\times S$-invariant with $\rho(X\times A)>0$. By ergodicity of $T\times S$, $\rho(X\times A)=1$ and $\rho_Y(A)=1$.
In the same vein, if $W$ is a wandering set for $S$, then $X\times W$
is a wandering set for $T\times S$, therefore $\rho_Y\left(W\right)=\rho\left(X\times W\right)=0$,
by conservativity of $T\times S$.
Consider the measure $\nu$ on $Y$ whose density with respect to $\rho_Y$ is equal to $1/g(y)$. It is straightforward to check that the density of $S_*(\nu)$ with respect to $\nu$ is constant equal to $c$.
We claim that $c=1$.
Indeed, we consider the Maharam extension of $S$ defined on $(Y\times \mathbb{R}_+^*,\nu\otimes dt)$ by
\[
\tilde S(y,t):= (Sy, t/ c) \in Y\times \mathbb{R}_+^*.
\]
Observe that if $c\neq 1$, $\tilde S$ is totally dissipative. But we know that $(Y,S,\nu)$ is conservative, hence $\tilde S$ is also conservative by Theorem~2 in~\cite{Maharam1964}, and we conclude that $c=1$.
This proves that $\sigma$ is in fact invariant by $\Id\times S$.
The same arguments applied on the first coordinate lead to similar results: If $\mu$ is the measure on $X$ whose density with respect to $\rho_X$ is equal to $1/f(x)$, then $\mu$ is invariant by $T$, and the measure-preserving dynamical system $(X,\mu,T)$ is conservative and ergodic.
The end of the proof is an application of Lemma~3.1.1 in~\cite{RudolphSilva} to the measure $\rho$: This lemma proves that $\rho$ is the product of its marginals $\rho_X$ and $\rho_Y$, thus $\sigma=\mu\otimes\nu$.
\end{proof}
\end{appendix}
\section{INTRODUCTION}
\label{sec:intro}
Type Ia Supernovae (SNe Ia) are one of the most energetic events
in the universe, now known to be originated by thermonuclear
detonations of carbon-oxygen (CO) white dwarfs
(\citealt{Hoyle1960}). Several possible scenarios leading to a SN
Ia outburst are currently envisaged, although there might be some
overlap between them. All scenarios have advantages and drawbacks
(e.g., \citealt{TsebrenkoSoker2015b}), and there is not yet a
general consensus on the leading scenario for SN Ia. In fact, it
is well possible that all of them contribute to the total SN Ia
rate in some unknown fraction.
These scenarios can be listed as follows, according to alphabetical order.
(a){\it The core-degenerate (CD) scenario} (e.g.,
\citealt{Livio2003, Kashi2011, Soker2011, Ilkov2012, Ilkov2013,
Soker2013, TsebrenkoSoker2015a}). Within this scenario the WD
merges with the hot core of a massive asymptotic giant branch
(AGB) star. In this case the explosion might occur shortly or a
long time after the merger. In a recent paper, \cite{TsebrenkoSoker2015b} argue
that at least $20 \%$, and likely many more, of all
SNe Ia come from the CD scenario. (b){\it The double degenerate
(DD) scenario} (e.g., \citealt{Webbink1984, Iben1984}). This
scenario is based on the merger of two WDs. However, this scenario
does not specify the subsequent evolution of the merger product, namely,
how long after
the merger the explosion of the remnant takes place (e.g.,
\citealt{vanKerkwijk2010}). Recent papers, for example, discuss
violent mergers (e.g., \citealt{Loren2010, Pakmoretal2013})
as possible channels of the DD scenario, while others
consider very long delays from merger to explosion, e.g., because
rapid rotation keeps the structure overstable
\citep{TornambPiersanti2013}. \cite{Levanonetal2015} argue that the
delay between merger and explosion in the DD scenario should be
$\gg 10 ~\rm{yr}$. (c){\it The single degenerate (SD) scenario} (e.g.,
\citealt{Whelan1973, Nomoto1982, Han2004}). In this scenario a
white dwarf (WD) accretes mass from a non-degenerate stellar
companion and explodes when its mass reaches the Chandrasekhar
mass limit. (d){\it The double-detonation mechanism} (e.g.,
\citealt{Woosley1994, Livne1995}). Here a sub-Chandrasekhar mass
WD accumulates a layer of helium-rich material coming from a helium donor
on its surface. The helium layer is compressed as more material is
accreted and detonates, leading to a second detonation near the
center of the CO WD (see, for instance, \citealt{Shenetal2013} and
references therein, for a recent paper). (e) \emph{The WD-WD
collision scenario} (e.g., \citealt{Thompson2011, KatzDong2012,
Kushniretal2013, Aznar2014}). In this scenario either a tertiary star brings two WDs
to collide, or the dynamical interaction occurs in a dense stellar system, where such interactions are likely. In some cases, the collision results in an immediate explosion. Despite
some attractive features of this scenario, it can account for at
most few per cent of all SNe Ia \citep{Hamersetal2013,
Prodanetal2013, Sokeretal2014}.
Finally, it should be mentioned that very recently it has been
suggested that pycnonuclear reactions could be able to drive
powerful detonations in single CO white dwarfs
\citep{Chiosietal2014}. This scenario -- the so-called {\it single
WD scenario} -- has, however, two important shortcomings. The
first one is that the typical H mass fraction found in detailed
evolutionary calculations of CO WD progenitors is much smaller
than that needed to ignite the core of the WD. The second drawback
of this recently suggested scenario is that most SN Ia come from
WDs with masses near the Chandrasekhar limit (e.g.,
\citealt{Seitenzahletal2013, Scalzoetal2014}), while the mass at
which ignition may possibly occur in the single WD scenario is
$\sim 1.2 M_\odot$. Hence, this scenario would also only account
for a small percentage of all SN Ia.
As mentioned earlier, there is some overlap between these
scenarios. For example, in the violent merger model
\citep{Loren2009, Pakmor2012} it is possible that during the first
stages of the merger of the two CO WDs the small helium buffer
($\simeq 10^{-2}\, M_{\sun}$) of the original CO WDs is ignited.
In this case both the DD scenario and the double detonation
mechanism operate simultaneously. Also, the double detonation
mechanism might operate in the CD scenario.
In this paper we study the response of a donor star that is a He
WD to an exploding CO WD with mass below the Chandrasekhar limit,
$M_{\rm WD} \simeq 1.0-1.1 M_\odot$. These parameters fit the
double detonation scenario where a very low mass helium shell
triggers the SN Ia explosion of a CO WD \citep{Bildstenetal2007,
ShenBildsten2009, ShenBildsten2014}. We will answer five
questions. (1) Does the shock wave induced by the ejecta ignite
helium in the WD companion by adiabatic compression or by shock
heating? (2) Is carbon in the ejecta ignited as it is shocked in
the outer layers of the He WD? (3) Can mixing of helium from the
donor and carbon from the ejecta lead to vigorous nuclear burning?
(4) How much helium is entrained by the ejecta? (5) What is the
morphology of the SNR long time after the explosion as the SN
ejecta sweep some ambient medium gas? To do so we will adopt two
masses for the He WD companion. First we study analytically and
then numerically the impact of the SN Ia ejecta of a WD of mass
$0.43 M_\odot$ residing at $\sim 0.02-0.03 R_\odot$ from the
exploding CO WD. This setting is based on the numerical
simulations of \cite{Guillochonetal2010}, \cite{Raskinetal2012},
and \cite{Pakmoretal2013}, for similar (but not identical)
progenitors that might lead to SN Ia. In a second step, and
following \cite{Bildstenetal2007} and \cite{ShenBildsten2009} we
also consider a He WD of $0.2M_\odot$ at an orbital separation of
$0.08 R_\odot$.
There are a number of simulations
studying similar processes to those studied by us, but in the SD
scenario. \cite{Mariettaetal2000} conducted 2D simulations to
study the impact of a SN Ia on a hydrogen-rich non-degenerate
companion. They found that several tenths of a solar mass of
hydrogen are striped from the companion into a cone with a solid
angle of $65-115 ^\circ$ behind the companion, depending on the
type of companion. \cite{Kasen2010} was interested in the effect of the
companion on the light curve shortly, up to several days, after
the explosion. \cite{Pakmoretal2008} found the striped
hydrogen mass to be much lower, and compatible with limits from
observations. \cite{Panetal2010} took the companion in their 2D
simulations to be a non-degenerate helium star.
\cite{Panetal2012a} extended the study to 3D simulations and to
hydrogen-rich companions. \cite{Panetal2012b} were interested in
the evolution of a main sequence companion after the passage of
the SN shock. We reproduce the dense conical surface found to be
formed behind the companion by \cite{Panetal2010} and
\cite{Panetal2012a}, but we continue to follow the interaction of
this cone with the ISM. We note that none of the papers listed
above continued their simulations to the stage of interaction with
the ISM, as we do in the present study. Nor did they include
nuclear reactions in the companion as a result of the shock. Here
we study the interaction of a type Ia supernova with a He~WD to
examine He ignition and the SNR morphology.
Our paper is organized as follows. In section \ref{sec:ejecta} we
discuss and quantify the properties of the material ejected from
the disrupted CO WD, while in section \ref{sec:ignition} we assess
analytically the possibility of an explosive ignition. In section
\ref{sec:numeric} we conduct 2D axisymmetrical numerical
simulations of the interaction of the ejecta with the He WD, and
we examine nuclear reactions and helium entrainment. Finally, in section
\ref{sec:conclusions} we summarize our results and their
implications to the double detonation scenario.
\section{EJECTA PROPERTIES}
\label{sec:ejecta}
To facilitate an analytical estimate we assume that the SN Ia
ejecta is already in homologous expansion, and we take the profile
of \cite{Dwarkadas1998}
\begin{equation}
\rho_{\rm{SN}} = A \,\exp (-v/v_{\rm{ejecta}})\, t^{-3},
\label{eq:rhosn}
\end{equation}
where $v_{\rm{ejecta}}$ is a constant which depends on the mass and kinetic energy of the ejecta,
\begin{equation}
v_{\rm{ejecta}} = 2.9 \times 10^8 E_{51}^{1/2}
\left(\frac{M_{\rm{SN}}}{1 M_\odot}\right)^{-1/2} ~\rm{cm} ~\rm{s}^{-1},
\label{eq:vejacta}
\end{equation}
$E_{51}$ is the explosion energy in units of $10^{51} ~\rm{erg}$, and
$A$ is a parameter given by
\begin{equation}
A = 3.3 \times 10^6 \left(\frac{M_{\rm{SN}}}{1
M_\odot}\right)^{5/2} E_{51}^{-3/2} ~\rm{g} ~\rm{s}^{3} ~\rm{cm}^{-3} .
\label{eq:aconst}
\end{equation}
The maximum velocity of the SN Ia ejecta is $v_{\rm SNm} \simeq
20,000 ~\rm{km} ~\rm{s}^{-1}$. We compared this analytical
profile, with $M_{\rm{SN}}=1 M_\odot$ and $E_{51}=1$, with models 7D
and 9C of \cite{WoosleyKasen2011}, who calculated the explosion of
WD models. We found our profile to be practically identical to their
model 7D for the outer $0.2 M_\odot$ of the ejecta, and somewhat
slower than model 9C in that mass range. For inner mass coordinates
the analytical fit is slower than both models 7D and 9C of
\cite{WoosleyKasen2011}. As the outer layers determine whether the
companion will be ignited, using models 7D or 9C of
\cite{WoosleyKasen2011} would result in an easier ignition of the
companion. For that reason, and to keep the profile simple and
flexible to changes, we use the analytical profile as given above both in the
analytical and the numerical calculations.
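As a consistency check on the normalization, integrating the profile
of equations (\ref{eq:rhosn})--(\ref{eq:aconst}) over the ejecta
should recover $M_{\rm SN}$ and the explosion energy. A minimal
numerical sketch (in Python; the constants are the fiducial values
quoted above, and the sharp cutoff at $v_{\rm SNm}$ is the assumption
stated in the text):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Fiducial constants (cgs) from the profile above, M_SN = 1 Msun, E_51 = 1
A     = 3.3e6     # g s^3 cm^-3
v_ej  = 2.9e8     # cm s^-1
v_max = 2.0e9     # cm s^-1, the maximum ejecta velocity v_SNm
Msun  = 1.989e33  # g

def rho_sn(v, t):
    # Homologous ejecta density, with r = v t
    return A * np.exp(-v / v_ej) * t**-3

# dM = rho 4 pi r^2 dr = rho 4 pi (v t)^2 t dv; the time t cancels out
t = 1.0
mass, _ = quad(lambda v: rho_sn(v, t) * 4*np.pi*(v*t)**2 * t, 0.0, v_max)
ekin, _ = quad(lambda v: 0.5*v**2 * rho_sn(v, t) * 4*np.pi*(v*t)**2 * t,
               0.0, v_max)
print(mass / Msun)   # ~0.98, close to 1 Msun
print(ekin / 1e51)   # ~0.8; the cutoff removes more energy than mass
\end{verbatim}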
For the analytical estimates of section \ref{sec:ignition} we now
derive the maximum ram pressure of the ejecta on the
He~WD. A cold He WD of mass $0.43\, M_{\sun}$ has a radius of
$\sim 0.015R_\odot$. As it overflows its Roche lobe with a CO~WD
companion of $1 M_\odot$ in stable mass transfer, the orbital
separation is $\sim 3.3$ times this radius, namely, $a \simeq
0.05 R_\odot$. However, detailed numerical calculations show that
for a powerful ignition to occur the mass transfer must be unstable
\citep{Guillochonetal2010}, and the surface of the He~WD that
fills the Roche lobe can be as close as $\sim 0.02 R_\odot$ to the
exploding CO~WD \citep{Raskinetal2012, Pakmoretal2013}.
The ram pressure of the ejecta at a distance $r_{\rm e}$ from the
explosion at time $t$ after explosion is given by
\begin{equation}
P_{\rm ram}=\rho(r_{\rm e})\, v^2 =
A \exp\!\left(-\frac{r_{\rm e}}{v_{\rm ejecta}\, t}\right) t^{-5}\, r_{\rm e}^{2},
\label{eq:pram1}
\end{equation}
where $v=r_{\rm e}/t$. The maximum ram pressure is achieved at
time $t_{\rm max}=r_{\rm e}/(5 v_{\rm{ejecta}}) \simeq 1\,
(r_{\rm e}/0.02\, R_{\sun}) ~\rm{s}$, and its value is
\begin{equation}
P_{\rm ram}^{\rm max}=5.2 \times 10^{22}
E_{51} \left(\frac{r_{\rm e}}{0.02\, R_{\sun}} \right)^{-3} ~\rm{erg} ~\rm{cm}^{-3}.
\label{eq:pramm}
\end{equation}
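For clarity, the time of maximum ram pressure follows from
differentiating equation (\ref{eq:pram1}) at fixed $r_{\rm e}$,
\[
\frac{\partial \ln P_{\rm ram}}{\partial \ln t}
= \frac{r_{\rm e}}{v_{\rm ejecta}\, t} - 5 = 0
\quad \Longrightarrow \quad
t_{\rm max}=\frac{r_{\rm e}}{5\, v_{\rm ejecta}} ,
\]
and substituting $t_{\rm max}$ back into equation (\ref{eq:pram1})
gives $P_{\rm ram}^{\rm max}= 5^5 e^{-5}\, A\, v_{\rm ejecta}^{5}\,
r_{\rm e}^{-3}$, which evaluates to equation (\ref{eq:pramm}) for the
fiducial parameters.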
At $t=2t_{\rm max}$ and $t=3 t_{\rm max}$ the pressure drops to a
value of $0.38 P_{\rm ram}^{\rm max}$ and $0.12 P_{\rm ram}^{\rm
max}$, respectively. The first material hits the WD at time $\sim
0.02\, R_{\sun} / (20,000 ~\rm{km} ~\rm{s}^{-1}) \simeq 0.7 ~\rm{s}
\simeq 0.7 t_{\rm max}$, with a ram pressure of $0.7 P_{\rm ram}^{\rm max}$.
Overall, the phase in which the pressure is larger than $\sim 0.3
P_{\rm ram}^{\rm max}$ lasts for about two seconds at $\sim 0.02
R_\odot$ from the explosion. The density of the ejecta at maximum
ram pressure is
\begin{equation}
\rho(t_{\rm max}) = 2.5 \times 10^4 \left(\frac{M_{\rm{SN}}}{1
M_\odot}\right) \left(\frac{r_{\rm e}}{0.02\, R_{\sun}}
\right)^{-3} ~\rm{g} ~\rm{cm}^{-3} .
\label{eq:rhoej}
\end{equation}
\section{CONDITIONS FOR NUCLEAR IGNITION}
\label{sec:ignition}
Fig.~\ref{f:profilePrho} shows two of the physical quantities of a
$0.43\, M_{\sun}$ He WD which are relevant for our study, namely the
pressure and density as a function of the mass coordinate
$-\log(1-M_r/M_{\rm WD})$. This specific model corresponds to a WD
with a central temperature $T\simeq 10^7$~K, which results in a
surface luminosity $\log(L/L_{\sun})\sim -2.85$ (an otherwise
typical luminosity of field white dwarfs) and an effective
temperature $\log T_{\rm eff}\simeq 3.93$. The model corresponds to
a sequence which was evolved performing full evolutionary
calculations that consider the main energy sources and the processes
of chemical abundance changes during white dwarf evolution
\citep{Althaus2009}. There are three possible ways in which the He~WD or the CO ejecta
might be ignited:
\begin{figure}[t]
\resizebox{\hsize}{!}
{\includegraphics[width=\columnwidth]{fig01.eps}}
\caption{Pressure and density profiles of a $0.43\, M_{\sun}$ He
WD, as a function of the mass coordinate $-\log(1-M_r/M_{\rm WD})$.
This coordinate allows one to better resolve the very outer layers
of the star, where the effects of the shock are presumably more
important. The central temperature of the WD is $10^7 ~\rm{K}$.}
\label{f:profilePrho}
\end{figure}
{\it (1) Shock ignition of helium.} It turns out that, for the
model WD used here, He is shocked and ignited in a region where
both thermal and radiation pressures play a role. In this region $\rho
\sim 10^5 ~\rm{g} ~\rm{cm}^{-3}$ and $T \simeq 1.2 \times 10^9 ~\rm{K}$. A good
estimate of the temperature in the shocked region of the He~WD can be
obtained by equating the radiation pressure to the ram pressure
given in equation (\ref{eq:pramm}):
\begin{equation}
T_{\rm He} \simeq 1.2 \times 10^9
\left(\frac{r_{\rm e}}{0.04 R_{\sun}} \right)^{-3/4}
~\rm{K}.
\label{eq:TempHeWD}
\end{equation}
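Explicitly, with $a_{\rm rad}$ the radiation constant,
\[
T_{\rm He} = \left(\frac{3 P_{\rm ram}^{\rm max}}{a_{\rm rad}}\right)^{1/4}
\propto r_{\rm e}^{-3/4} ,
\]
where the scaling follows from $P_{\rm ram}^{\rm max} \propto
r_{\rm e}^{-3}$; for $r_{\rm e}=0.04 R_{\sun}$, equation
(\ref{eq:pramm}) gives $P_{\rm ram}^{\rm max} \simeq 6.5 \times
10^{21} ~\rm{erg} ~\rm{cm}^{-3}$ and hence $T_{\rm He} \simeq 1.3
\times 10^9 ~\rm{K}$.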
The burning time-scale of pure helium at these conditions is $\sim 10 ~\rm{s}$, just a little longer
than the time-scale of the dynamical flow, defined as the relevant
length scale divided by the ejecta speed, $\sim 0.04 R_\odot /10,000 ~\rm{km} ~\rm{s}^{-1} \sim
3 ~\rm{s}$. For these parameters, ignition conditions are reached for $r_{\rm e} \la 0.04 R_\odot$.
This is compatible with the numerical
results to be described in section \ref{subsec:MassiveWD}, where
the exact radius is found.
{\it (2) Carbon burning in the shocked ejecta.} The second
possibility we explore is the ignition of carbon-rich material of
the ejecta as it is shocked upon hitting the He WD. The post-shock
pressure of the ejecta is dominated by radiation pressure. The
temperature at maximum ram pressure is given by equation
(\ref{eq:TempHeWD}). For a distance to the explosion $r_e = 0.02
R_\odot$ we find the temperature to be $T_{\rm CO} \simeq 2 \times
10^9 ~\rm{K}$. For this temperature we expect that carbon will be
burned. Nevertheless, we need to compare the burning time with the
dynamical timescale of the flow, $\tau_{\rm flow} \sim 1 ~\rm{s}$. For
the scaling and parameters used in Sect.~\ref{sec:ejecta} the
ejecta density at the time of maximum ram pressure and at a
distance of $0.02\, R_{\sun}$ from the center of explosion is $3.5
\times 10^4 ~\rm{g} ~\rm{cm}^{-3}$. If the carbon mass is half of the mass
of the ejecta and the compression factor is $\sim 4$, then the
post-shock density in the carbon-rich region is $\rho_{\rm C}
\simeq 7 \times 10^4 ~\rm{g} ~\rm{cm}^{-3} \sim 10^5 ~\rm{g} ~\rm{cm}^{-3}$. As in
this scenario the companion star is much closer to the center of
the explosion than the corresponding one of the single-degenerate
scenario, the density of the shocked ejecta will be much higher,
and the burning timescale much shorter. We find the carbon burning
timescale for this density and a typical temperature of $\sim
2 \times 10^9 ~\rm{K}$ to be about one second. These temperatures and
densities are achieved near the stagnation point in a small region
of size $\sim 0.1 r_e$ -- see below. The outflow time from this region is
$\sim 0.002 R_\odot/1.5 \times 10^4 ~\rm{km} ~\rm{s}^{-1} = 0.1 ~\rm{s}$. Thus,
the outflow time is shorter than the burning time scale. In the
numerical results to be described next we obtain no significant
carbon burning, showing that the outflow time of carbon from the
shocked region is indeed very short. This is unlike the case in
which helium belonging to the He~WD is shocked inside the
He~WD and cannot flow outward.
{\it (3) Igniting helium by mixing with the ejecta.} Even if neither
carbon nor helium is ignited by the shock, mixing of the ejecta at
$T \sim 10^9 ~\rm{K}$ with helium might, in principle, power a
thermonuclear runaway. In our numerical simulations the mixing is
not sufficiently deep to cause ignition by this process (see section
\ref{sec:numeric}).
For the case of a low-mass He~WD we repeated all these calculations
and found that none of the previously described processes drives a
powerful nuclear outburst, and thus the evolution in this case
should mostly consist of a purely hydrodynamical flow. As will be
explained in detail in the next section, full hydrodynamical
numerical simulations confirm this.
\section{NUMERICAL SIMULATIONS}
\label{sec:numeric}
\subsection{Numerical setup}
\label{subsec:setup}
We use version 4.2.2 of the FLASH gas-dynamical numerical code
\citep{Fryxell2000}. The FLASH code has been used before for
similar studies in the SD scenario, in 2D \citep{Kasen2010,
Panetal2010} and in 3D \citep{Panetal2012a, Panetal2012b}.
The widely used \texttt{FLASH} code is a publicly
available code for supersonic flow suitable for astrophysical
applications. The simulations are done using the unsplit PPM
solver of \texttt{FLASH}. We use 2D axisymmetric cylindrical coordinates
with an adaptive mesh refinement (AMR) grid. The origin of the
grid, $(0,0)$, is taken at the center of the explosion. In all the
figures shown below the symmetry axis of the grid is the vertical
axis. The axisymmetric grid forces us to neglect the orbital
relative velocity of the He~WD and the exploding CO~WD. In any
case, the orbital velocity is much smaller than the ejecta
velocity, and will have virtually no effect on our conclusions.
For the equation of state we use the Helmholtz EOS
\citep{TimmesSwesty2000}. This EOS includes contributions from
partially degenerate electrons and positrons, radiation, and
non-degenerate ions. We use the Aprox19 nuclear network of 19
isotopes \citep{Timmes1999} in \texttt{FLASH}. The hydrodynamic
module is coupled to the nuclear network by setting the parameter
\texttt{enucDtFactor=0.1} in \texttt{FLASH} \citep{Hawleyetal2012},
and shock burning is disabled by default. Self-gravity is included
using the new multipole solver in \texttt{FLASH} with order $l=10$.
We ran our collision simulations with two different resolutions as a
test for convergence and found no appreciable difference. In
addition, we ran a low-resolution simulation on a much larger grid
to follow the long-time evolution of the ejecta. For the
high-resolution simulations the minimum cell size was
$\sim12\times12 ~\rm{km}$ with a total of 10 levels of AMR
refinement. For the low-resolution simulations the minimum cell
size was $\sim48\times48 ~\rm{km}$. In addition, in the large-grid
simulation we lowered the resolution from initially
$\sim46\times46 ~\rm{km}$ to $\sim 92\times 92 ~\rm{km}$ after
$t=16 ~\rm{s}$ from the explosion to reduce the computational time.
The initial He~WD mass, radius, and distance from the center of
the explosion in the two simulated cases to be presented below
are $(M_{\rm WD}, R_{\rm WD}, a_0)=(0.2 M_\odot, 0.02 R_\odot, 0.082 R_\odot)$ and
$(M_{\rm WD}, R_{\rm WD}, a_0)=(0.43 M_\odot, 0.015 R_\odot,
0.029-0.043 R_\odot)$
for the low- and high-mass He~WDs, respectively.
The WDs are cold, and the radius of the $0.43 M_\odot$ WD is
somewhat smaller than that of the hotter WD presented in
Fig.~\ref{f:profilePrho}.
These models were built with version 6022 of the Modules for
Experiments in Stellar Astrophysics (MESA; \citealt{Paxton2011}).
Initially, the ejecta in our simulations is expanding homologously
according to equations (\ref{eq:rhosn})-(\ref{eq:aconst}), with
$E_{51}=1$ and $M_{\rm SN} = 1 M_\odot$. The maximum velocity at
the front of the ejecta is set to $20,000 ~\rm{km} ~\rm{s}^{-1}$. Its outer
radius from the center of explosion is set to almost touch the
He~WD. Ideally one should start from a real explosion of the CO WD,
but we limit ourselves in the present study to exploring the basic
processes. We estimate the internal energy as follows. At shock
breakout, about half the energy is thermal and half is kinetic. As
the gas expands, the thermal energy drops as $1/r$. By the time it
hits the He~WD the thermal energy is one third of its initial value;
most of it went into accelerating the gas to almost the terminal
velocity. The kinetic energy is now $5/6$ of the initial energy, and
the thermal energy is $1/6$. Overall, the kinetic energy is 5 times
or more higher than the thermal energy. In the simulations we
therefore set the thermal energy to be $0.2$ of the kinetic energy
at $t=0$ from the start of the simulations. We also simulated cases
where the initial temperature was set to a very low value, and found
no significant differences from the results presented here (see
version V1 of this paper on astro-ph).
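In numbers, denoting the total explosion energy by $E$ and using the
factor $\sim 3$ expansion implied by the one-third value above,
\[
E_{\rm th} \simeq \frac{E}{2}\,\frac{r_0}{r} \simeq \frac{E}{6},
\qquad
E_{\rm kin} \simeq E - E_{\rm th} = \frac{5E}{6},
\qquad
\frac{E_{\rm th}}{E_{\rm kin}} \simeq \frac{1}{5} = 0.2 .
\]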
In the figures described below, time is measured from the moment at which the CO WD explodes.
We ran our simulations with two different chemical compositions. In
a first set of simulations we assumed that the ejecta was entirely
made of nickel, while in a second set we adopted a C/O composition.
We found that our results are not sensitive to the adopted
composition (see version V1 of this paper on astro-ph for figures
with the C/O composition).
Finally, we mention that radiative
cooling and photon diffusion are not important for the problem
simulated here, and hence have not been
included in our calculations.
\subsection{A low-mass helium WD}
\label{subsec:entrain}
In the case in which a low-mass He~WD is considered, nuclear
reactions are not significant and three distinct stages of the
interaction can be differentiated. (i) The early interaction
phase, when temperatures of the shocked gas are at maximum, and
the ejecta flows around the He~WD. (ii) The intermediate phase,
when the shock breaks out from the back of the WD and ejects
helium from it. (iii) The late-time phase, when the expansion is
homologous until the ejecta sweeps up a non-negligible ambient mass
and adopts the shape of an old SNR. We ran the simulations using
both the low- and high-resolution grids to check numerical
convergence. As mentioned earlier, the low-resolution grid was
designed to cover a larger region around the interacting WDs, and
thus was used to follow the evolution of the SNR at late times. In
the overlapping regions, the results of the two simulations with
different resolutions were found to be the same.
{\it The early stage.} In Fig.~\ref{fig:DensP1LowMass} we present
the density and velocity maps at several times from $t=2 ~\rm{s}$ (2
seconds after explosion) to the time instant at which the shock
that runs through the He~WD reaches the backside of the He~WD
($t=16 ~\rm{s}$). The SN ejecta hits the WD and flows around it,
forming a dense surface with a 3D conical shape. Such dense conical
surfaces appear in the 2D simulations of \cite{Panetal2010} and of
\cite{Panetal2012a}, where non-degenerate companion stars were used.
In our 2D grid the dense shell has the shape of two dense
stripes on the meridional plane, one at each side of the symmetry
axis. Note that as mentioned in section \ref{sec:ignition} the
temperatures and densities are too low to drive any significant
nuclear burning.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{fig2.eps}
\caption{Density maps in the meridional plane at 6 times for the
case in which a $0.2 M_\odot$ WD is adopted. The time elapsed
since explosion is indicated in each panel. The simulation starts
$2 ~\rm{s}$ after explosion. The symmetry axis is along the left edge,
and the origin of the grid (outside the plots) is at the center of the exploding
CO~WD. The blue line encloses the volume where the local helium
mass fraction is $Y>0.5$; this represents the He~WD and the
material removed from the He~WD. Prominent features include a
shock running around the WD, and the formation of a dense conical
surface in the expanding ejecta. The shock just reaches the back
edge of the He~WD at $t=16 ~\rm{s}$. Temperatures and densities are too
low to drive any significant nuclear burning. The plots are from
the high-resolution run. The lower resolution simulation results
in a similar structure. Velocity is proportional to the arrow
length, with the inset showing an arrow for $10,000 ~\rm{km} ~\rm{s}^{-1}.$
Note the very fast gas at the outskirts, having velocities larger than the initial speed of $20,000 ~\rm{km} ~\rm{s}^{-1}$. This very low mass gas was accelerated by the initial thermal energy, which was non-negligible. When the ejecta is inserted with low temperatures no such velocities are achieved; the differences from the present run are very small (see version V1 on astro-ph).
}
\label{fig:DensP1LowMass}
\end{center}
\end{figure}
{\it The intermediate stage.} In Fig.~\ref{fig:DensP2LowMass} we
show the flow after the break-out of the shock from the back side
(the downstream side) of the He~WD, and the consequent helium outflow.
Most of the ejected helium falls back to the WD as can be seen in
the last panel. Only $0.003 M_\odot$ of helium escapes and flows
outward near the symmetry axis, an amount too small to be detected
with current observational means. The strong concentration at the axis
is a numerical effect. The volume inside the dense conical shell
is a region of low density ejecta. The dense conical surface
continues to expand and more or less preserves its shape in
homologous expansion. The homologous expansion continues until the
interaction with the ambient gas -- the interstellar medium (ISM)
or a circumstellar matter (CSM) -- starts to shape the outskirts
of the ejecta.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{fig3.eps}
\caption{Same as Fig. \ref{fig:DensP1LowMass} for later times, the
intermediate stage. The shock breaks out from the rear of the WD,
ejecting helium. Only $0.003 M_\odot$ of helium escapes while most
of the helium falls back on the WD as can be seen in the last
panel. The plots are from the low-resolution run. Velocity is
proportional to the arrow length, with the inset showing an arrow
for $10,000 ~\rm{km} ~\rm{s}^{-1}.$}
\label{fig:DensP2LowMass}
\end{center}
\end{figure}
{\it The late stage.} We are interested in the morphology of the
ejecta at hundreds of years after explosion. For numerical
reasons, we let the ejecta interact with an ambient medium close
to the explosion site. As the ejecta expansion is already
homologous with high Mach numbers ($\ga 10$) at the end of the
intermediate stage, the morphology obtained here at the late stage
and on a scale of several solar radii represents quite well the
expected morphology hundreds of years later and with a much larger
size (a few pc). For the scaled numerical study of the
ejecta-ambient gas interaction we set the ambient density to be
$0.01 ~\rm{g} ~\rm{cm}^{-3}$, and follow the expansion until $t=492 ~\rm{s}$,
when the medium mass intercepted by the ejecta is $\sim 1
M_\odot$. The interaction of the dense conical surface with the
ambient gas forms a circle of high pressure, with its center on
the symmetry axis (half of this circle is into, and half out of,
the page). This high pressure circle accelerates gas, both ambient
and ejecta, toward the relatively empty cone (toward the symmetry
axis). This gas and the helium along the symmetry axis, determine
the flow structure within the cone.
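The correspondence between the scaled problem and a real SNR can be
made explicit through the swept mass. For a uniform ambient medium
$M_s=(4\pi/3)\rho_a R^3$, so with $\rho_a=0.01 ~\rm{g} ~\rm{cm}^{-3}$
one solar mass is swept within $R \simeq 3.6\times 10^{11} ~\rm{cm}
\simeq 5 R_\odot$, while for a typical ISM density of $\rho_a \simeq
2\times 10^{-24} ~\rm{g} ~\rm{cm}^{-3}$ the same mass is swept within
$R \simeq 2 ~\rm{pc}$, a radius reached after $\sim 200$ years at
$\sim 10^4 ~\rm{km} ~\rm{s}^{-1}$. This is the sense in which the end
of our simulation mimics an SNR hundreds of years old.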
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{fig4.eps}
\caption{Density maps in the meridional plane at 2 late times for
the case in which a $0.2 M_\odot$ WD is adopted. The
computational grid was folded around the axis to present the
entire meridional plane. A homologous expansion of the ejecta,
with a Mach number $>10$, has developed by the beginning of this
evolutionary phase, with a dense conical surface surrounding a
conical volume almost completely devoid of SN ejecta. The ambient
gas density is fixed by our requirement that at the end of the
simulation the ejecta sweeps a substantial mass (see text). At the
end of our simulations, $t=492 ~\rm{s}$,
the SN ejecta has swept $1 M_\odot$ of ambient gas. As the outflow
of the ejecta is already homologous, the morphology obtained here
mimics that at hundreds of years later. The small features along
the symmetry axis itself, both at the top and bottom of the SN-ISM
interaction, are numerical artifacts.}
\label{fig:DensP3LowMass}
\end{center}
\end{figure}
The morphological changes due to this flow depend on the ISM mass
swept up in front of the dense conical surface, $M_s$. When the
swept ISM mass is lower than the ejecta mass that interacts with it,
$M_s<M_e$, the dissipated energy is approximately $E_d \simeq
0.5 M_s v^2$, where $v$ is the radial speed of the ejecta. If a
fraction $\eta$ of this energy goes into azimuthal (tangential)
motion, then the azimuthal speed $v_\theta$ is given by $0.5 M_e
v^2_\theta \simeq \eta E_d$, from which we find $v_\theta \sim
\eta^{1/2} (M_s/M_e)^{1/2}\, v$. This is a crude expression, which
nonetheless shows that the filling of the empty cone depends mainly
on the total swept ISM mass, and not on the ISM density, which is
higher in our simulation due to numerical limitations.
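Written out, the two energy relations combine to give
\[
\frac{1}{2} M_e v_\theta^2 \simeq \eta E_d \simeq \frac{\eta}{2} M_s v^2
\quad \Longrightarrow \quad
v_\theta \simeq \eta^{1/2} \left(\frac{M_s}{M_e}\right)^{1/2} v .
\]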
To form a synthetic map (in radio, X-ray synchrotron, or thermal
X-ray), we integrate over density squared along the lines of
sight, but considering only shocked, hot gas,
\begin{equation}
I(x,y) \equiv \int [\rho(x,y,z)]^2 dz,
\label{eq:I1}
\end{equation}
where $x,y$ are the coordinates on the plane of the sky and $z$ is
taken along the line of sight. The interaction regions are where
synchrotron emission will be formed. Although here the plots are
given shortly after the explosion, in this paper we mimic the
structure hundreds of years after the explosion, when radioactive
decay is very weak and does not play a role in forming the hot regions.
The obtained `intensity maps' are presented in Fig.
\ref{fig:ImageP3LowMass}. Two inclinations are presented, the
symmetry axis is in the plane of the sky (left), or at $30^\circ$
to the plane of the sky (right). These are presented at two times
when the swept-up ambient masses are $\sim 0.1 M_\odot$ ($t= 202
~\rm{s}$ upper panels), and $\sim 1 M_\odot$ ($t=492 ~\rm{s}$ lower panels).
In Fig. \ref{fig:ImageP3OnlyEjectaLowMass} we present the integral
of the density but only for the ejected mass,
\begin{equation}
N_{\rm eject}(x,y) \equiv \int \rho_{\rm eject}(x,y,z)\, dz .
\label{eq:I2}
\end{equation}
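For concreteness, a minimal sketch of how the two maps can be
evaluated on a Cartesian grid (in Python; the density, temperature,
and ejecta-fraction cubes, the grid spacing, and the temperature
threshold defining `hot' gas are all placeholders, and the
interpolation of the cylindrical AMR data onto such a grid is
omitted):
\begin{verbatim}
import numpy as np

def synthetic_maps(rho, T, X_ej, dz, T_hot=1.0e6):
    # rho, T, X_ej: 3D arrays of density, temperature and ejecta mass
    # fraction on a uniform (x, y, z) grid; dz: cell size along the
    # line of sight; T_hot: assumed threshold for shocked, hot gas.
    hot = T > T_hot
    I = np.sum(np.where(hot, rho, 0.0)**2, axis=2) * dz  # I(x,y)
    N_ej = np.sum(rho * X_ej, axis=2) * dz               # N_eject(x,y)
    return I, N_ej

# Random placeholder data, to show the intended array shapes only:
rho  = np.random.rand(64, 64, 64)
T    = 1.0e7 * np.random.rand(64, 64, 64)
X_ej = np.random.rand(64, 64, 64)
I, N_ej = synthetic_maps(rho, T, X_ej, dz=1.0)
\end{verbatim}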
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig5a.eps}
\includegraphics[width=0.45\textwidth]{fig5b.eps}
\includegraphics[width=0.45\textwidth]{fig5c.eps}
\includegraphics[width=0.45\textwidth]{fig5d.eps}
\caption{Synthetic observed morphology (eq. \ref{eq:I1}) of the
resulting SNR for the case of a low-mass He~WD. We show the
intensity map described in the main text, and only
for the high-temperature gas. The $x$ and $y$ coordinates are on the
plane of the sky, and the $z$ coordinate is taken along the line of
sight. Two inclinations are presented,
the symmetry axis is in the plane of the sky (left), or at
$30^\circ$ to the plane of the sky (right). These are presented at
two times, namely when the swept-up ambient masses are $\sim 0.1 M_\odot$
($t= 202 ~\rm{s}$ upper panels), and $\sim 1 M_\odot$ ($t=492 ~\rm{s}$ lower
panels).
As the outflow of the ejecta is already homologous at the beginning of this phase,
the morphologies obtained here mimic that at hundreds of years
later when the ejecta interacted with $\sim 0.1-1 M_\odot$ of
homogeneous ambient medium (CSM or ISM).
}
\label{fig:ImageP3LowMass}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig6a.eps}
\includegraphics[width=0.5\textwidth]{fig6b.eps}
\caption{The integrated ejected mass (eq. \ref{eq:I2}) for the two
times as in Fig. \ref{fig:ImageP3LowMass}, and for the symmetry
axis at $30^\circ$ to the plane of the sky.
Note the very low fraction of ejecta in the shadow behind the He WD (upper part in the figures) close to the edge of the remnant.
}
\label{fig:ImageP3OnlyEjectaLowMass}
\end{center}
\end{figure}
The prominent features of the SNR, when the symmetry axis is close
to the plane of the sky and before the swept-up ISM mass becomes too
large, are the following.
(a) A `flat front' of the conical region (the upper part in the figure, which
is the initial direction of the He~WD); (b) A region of lower
intensity at that flat front relative to the rest of the SNR
front; (c) A dense conical surface in the interior; (d) The inner
volume of the conical surface is almost completely devoid of
ejecta gas. The first two features fade as more ambient gas (ISM)
is swept. Let us note that the main result here does not depend
much on whether the He~WD is younger and hotter, hence has a
larger radius. It will simply have somewhat larger orbital
separation. But as the double-detonation model requires stable
Roche lobe overflow (RLOF), the solid angle covered by the He~WD
will be about the same, and so is the conical shape formed behind
it {{{{ (see \citealt{Mariettaetal2000}). Here we find the
angular size (from symmetry axis to conical surface) of the cone
to be $\sim 35^\circ$. \cite{Mariettaetal2000}
found in their study of the single-degenerate scenario that the
companion creates a `hole' in the supernova debris with an angular
size of $\sim 30-40^\circ$, depending on the part of the ejecta,
and \cite{Pakmoretal2008} found an angular size of $\sim 23
^\circ$. Most similar to our structure of the cone are the results
of \cite{Panetal2010}, where the angular size of the cone is $\sim
40^\circ$, and of \cite{Panetal2012a} where in many cases the
angular size of the cone is $\sim 40^\circ$ (in some 3D
simulations there is no well defined cone). All these results agree
with each other within the range of different initial parameters.
}}}}
Although some SNRs show dipole deviations from sphericity, we are
not aware of any SNR that shows such a conical imprint on its
morphology. One might think of SN1006, but examining the prominent
features we find that SN1006 cannot be explained by such an
interaction. (a) SNR SN1006 has a flat front. However, there is a
hydrogen-rich optical filament along the flat front. The flat
front seems to have been formed by an asymmetrical external
interaction with an asymmetrical ISM. (b) In SN1006 the X-ray
intensity of the flat front is lower than the front on the
orthogonal directions, but not lower than the other side of the
SNR (e.g., \citealt{Winkleretal2014}). Also, SN1006 does not show
a uniform intensity along the spherical parts not including the
flat front. (c) A dense conical surface in the interior is not
observed in SN1006 (e.g., \citealt{Winkleretal2014}). (d) As can
be seen from figure 9 of \cite{Winkleretal2014}, the volume behind
the flat front is rich in neon and oxygen, and it is not poor in
ejecta. We conclude that the structure of the SNR SN1006, despite
the flat front on one side, is incompatible with the morphology
expected from the double-detonation scenario.
The result of an asymmetrical SNR obtained here applies to all
single-degenerate scenarios as well. The DD scenario also leads to
an asymmetrical explosion if it occurs too shortly after the merger
of the two WDs. Overall, it seems that the symmetrical structures
of most SNRs Ia hint that the WD is all alone when it explodes.
This is compatible with the CD scenario. In cases where
circumstellar gas is present and influences the SNR morphology,
e.g., in forming two opposite `ears' as in the Kepler SN remnant,
the CD scenario seems to do better than other scenarios as well
\citep{TsebrenkoSoker2015b}.
In some SNRs one can identify two opposite `ears' that divert the
SNR from being spherical (see \citealt{TsebrenkoSoker2015b} for a
list of objects). These `ears' might be formed by jets in the
pre-explosion evolution, as expected for some SNRs in the CD
scenario \citep{TsebrenkoSoker2015b}. These SNRs are not perfectly
spherical, but the asymmetry is like a quadrupole, and not like the
dipole expected if a companion influences the shaping of the SNR.
A word of caution is in order here. Our conclusions hold as long as
there are no processes that erase the asymmetry caused by the
companion. If the initial asymmetry is large, e.g., as proposed by
\cite{Maedaetal2010}, then the morphology of the SNRs discussed
above implies that there is a process that erases the asymmetry.
For example, radioactive heating of dense regions can cause them to
expand and fill empty regions. However, three points should be made
regarding the homogenizing effect of radioactive heating on the
flow. (1) The changes in velocity and density cause a deviation from
the purely homologous density profile of about 10 per cent
\citep{PintoEastman2000, Woosleyetal2007, Noebaueretal2012}. Such
small variations will not erase the dipole asymmetry. (2) The nickel
is concentrated in the center, while we are interested in the outer
layers that are the first to interact with the ISM. (3) The observed
very low level of continuum polarization in the first few weeks of
SN~2012fr points to a symmetrical explosion that is inconsistent
with the merger-induced explosion scenario \citep{Maundetal2013}.
Namely, it seems that the explosion is not far from spherical from
the beginning.
Overall, despite the caution one must exercise at this stage, the
assumptions and approximations made here lead to a fair
representation of the SNR that results from the double-detonation
scenario with a low-mass He~WD as the donor.
\subsection{A massive helium WD}
\label{subsec:MassiveWD}
In this case we place a $0.43 M_\odot$ He~WD at closer
distances than the $0.2 M_\odot$ one, as described
in section \ref{subsec:setup}. We find that the helium WD is
ignited when the distance of its center from the center of the CO WD is $\la 3.1 \times 10^9 ~\rm{cm} = 0.045 R_\odot$, and that
practically no burning occurs if it is placed at larger distances.
In Figs. \ref{fig:ImageDensity3e9HighMass} to
\ref{fig:ImageNicke3e9HighMass} we present the evolution of the
density, temperature, and nickel mass fraction of the ignited He WD
at six different times, as indicated.
The initial distance of the center of the He WD from the center of
explosion is $0.043 R_\odot$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{fig7.eps}
\caption{Density maps in the meridional plane at six times for a
He~WD of $0.43 M_\odot$ at an initial distance of its center from
the center of explosion of $0.043 R_\odot$. Note that at $t=2 ~\rm{s}$
helium is ignited and an explosion occurs in the He~WD. The
velocities are proportional to the arrow length, with the inset
showing an arrow for $10,000 ~\rm{km} ~\rm{s}^{-1}.$
}
\label{fig:ImageDensity3e9HighMass}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{fig8.eps}
\caption{Same as Fig. \ref{fig:ImageDensity3e9HighMass} but for
temperature.}
\label{fig:ImageTemp3e9HighMass}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{fig9.eps}
\caption{Same as Fig. \ref{fig:ImageDensity3e9HighMass} but for the
nickel mass fraction. Ignition of helium in the He~WD occurs just
before $t=2 ~\rm{s}$.
The deep red indicates the ejecta gas, which we took to be composed entirely of nickel (using a C/O composition for the ejecta does not change the results; see version V1 of the paper on astro-ph). The lighter red indicates the nickel mass fraction that is synthesized in the He WD. White regions are composed of He~WD gas that did not form nickel.
}
\label{fig:ImageNicke3e9HighMass}
\end{center}
\end{figure}
Note that this calculation shares some features with the evolution
in the case in which a low-mass He~WD is considered, but also
presents some noticeable differences. In particular, although the
evolution of the hydrodynamical flow is apparently similar, the key
difference is the much higher temperatures attained during the
interaction between the ejecta and the He~WD. Ignition of helium
occurs just before $t=2 ~\rm{s}$, as can be seen in the lower panels of
Fig. \ref{fig:pressure043}. The ignited helium raises the
temperature and a thermonuclear detonation occurs, in accordance
with the theoretical estimates presented in section
\ref{sec:ignition}. By the last panel the explosion has ended.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{fig10.eps}
\caption{Total pressure (top left), ratio of radiation to total
pressure (top right), temperature (bottom left), and density
(bottom right) at $t=1.8 ~\rm{s}$, just after He ignition. The velocities are proportional to the
arrow length, with the
inset
showing an arrow for $10,000 ~\rm{km} ~\rm{s}^{-1}.$ The blue line shows
where the helium fraction is $Y=0.5$.
The figure is for the case in which a He~WD of mass $0.43 M_\odot$ is adopted.}
\label{fig:pressure043}
\end{center}
\end{figure}
It is interesting to note as well the important role of radiation
pressure in this simulation, as should be expected given the
considerations explained in section~\ref{sec:ignition}. To
corroborate this, in the upper panels of
Fig.~\ref{fig:pressure043} we show the total pressure (left) and the
ratio of radiation to total pressure (right) shortly after helium
ignition, at $t=1.8 ~\rm{s}$. It can be seen that at the ignition
point the radiation pressure dominates, but the thermal pressure is
not negligible. Also, the total pressure in the ignition region is
$\sim 10^{22} ~\rm{erg} ~\rm{cm}^{-3}$, comparable to the estimate given in
equation (\ref{eq:pramm}) if we adopt $r_e=0.04 R_\odot$.
Given that the temperatures attained during
the interaction between the ejecta and the massive He~WD are
rather high, extensive nuclear processing occurs, and a
substantial amount of nickel is synthesized. Nickel first appears
in a region lying between the center of the He~WD and the surface
facing the ejecta. Note that after a few seconds most of the
material of the He~WD has been processed to nickel. This
contradicts observations, as the SNR will be highly asymmetrical,
as in the violent merger simulation presented by
\cite{Pakmor2012}. We find that not all helium is burned and $\sim
0.15 M_\odot$ of helium is ejected from the exploding He~WD.
This also contradicts some observations; e.g., \cite{MazzaliLucy1998} found a limit of $<0.1 M_\odot$ of helium in the SN Ia 1994D. In a recent study, \cite{Lundqvistetal2015} put a much stronger limit of $\la 0.01 M_\odot$ on the ablated mass from a helium-rich companion to SN~2011fe and to SN~2014J.
We conclude that the presence of a relatively close by, $a_0 \la 0.045
R_\odot$, He~WD donor to the exploding CO~WD leads to an explosion
that has characteristics contradicting observations of SNe Ia.
Accordingly, the double-detonation scenario does not seem to apply
to normal SNe Ia.
We have actually simulated here a `triple-detonation scenario'. The
three stages are: a He detonation on the surface of a WD, then a CO
detonation, and finally a He detonation in the He WD companion. The
outcome is a total ejected mass of about the Chandrasekhar mass,
although the two WDs were each much below the Chandrasekhar mass.
The ejected mass and synthesized nickel are larger than those
inferred for SN~2005E \citep{Peretsetal2010}, or `calcium-rich gap'
transients in general (\citealt{Kasliwaletal2012}; for a recent list
of transients see \citealt{Perets2014}). We also expect iron group
elements, which are not generally observed in SN~2005E and the other
`calcium-rich gap' transients \citep{Peretsetal2010,
Kasliwaletal2012}. One of these gap transients has hydrogen
\citep{Kasliwaletal2012}, which is not expected in the
triple-detonation scenario. Such transients are more likely to come
from helium detonation on a WD without ignition of the He~WD
companion \citep{MengHan2014}.
The presence of helium might lead to the classification of the event
as a SN Ib, but with abundant helium-burning products that would
make it a peculiar SN Ib. Such SNe might be related to the peculiar
low-luminosity SNe Ib with relatively strong Ca spectral lines
(e.g., \citealt{Peretsetal2010, Foley2015}). \cite{Foley2015},
following \cite{Peretsetal2010}, suggests that the progenitor system
for these SNe is a double-WD system where at least one WD has a
significant He abundance. We here raise the possibility that some
Ca-rich peculiar SNe Ib come from the triple-detonation scenario.
This speculation deserves a separate study. In any case, we expect
the triple-detonation scenario to be very rare.
\section{SUMMARY AND CONCLUSIONS}
\label{sec:conclusions}
We have studied the impact of the ejecta of an exploding CO WD on
the donor star in the double-detonation scenario for the formation
of Type Ia supernovae (SN Ia). We have done so for two masses of
the secondary He WD, namely $0.2 M_\odot$ and $0.43 M_\odot$,
assuming that the SN Ia ejecta is already in homologous expansion
when it hits the surface of the secondary WD. The first part of
our study was done using analytical estimates, while in the second
part of our work we performed full 2-dimensional hydrodynamical
calculations, employing the \texttt{FLASH} code. Our most relevant results
can be summarized as follows.
For the case in which a massive He~WD ($0.43 M_\odot$) is
considered, our analytical estimates predicted that the material
of the He~WD would undergo a powerful thermonuclear runaway when
the ejected material of the exploding CO~WD interacts with outer
layers of the donor WD (Sect. \ref{sec:ignition}). Our analytical
predictions are confirmed by our detailed hydrodynamical
calculations that also give us the evolution with time of the flow,
where ignition occurs, the amount of nickel formed, and the mass
of helium ejected by the interaction (Figs.
\ref{fig:ImageDensity3e9HighMass} - \ref{fig:pressure043}). In particular,
the mass of ejected helium ($0.15 M_\odot$) would have been easily detected in
observations, implying that this scenario seems to be ruled out for
standard SN Ia.
For the binary system containing a low-mass He~WD ($0.2 M_\odot$)
no significant nuclear processing occurs, and the evolution
consists of an almost purely hydrodynamical flow. The evolution can
be divided into three distinct phases. During the initial phase a
shock runs through the outer layers of the He~WD, and the SN
ejecta flows around the secondary star, forming a region with a
conical shape (Fig.~\ref{fig:DensP1LowMass}). In the intermediate
stage, just after the shock breaks out from the back side of the
He~WD, some material from the He~WD is ejected but most of it
falls back at later times, while a dense conical surface continues
expanding (Fig. \ref{fig:DensP2LowMass}). Finally, during the late
stages of the evolution the SN ejecta interacts with the ambient medium, which we
numerically set to a very high density to mimic interaction with
the ISM hundreds of years later. During this phase the conical
flow previously described forms a ring of high pressure, which
accelerates material towards the low-density conical region (upper right panel of Fig.
\ref{fig:DensP3LowMass}).
The hydrodynamical evolution previously described has
observational consequences. In an attempt to model the morphology
of the resulting SNR we integrated the density squared of the hot
gas for two viewing angles and two times
(Fig.~\ref{fig:ImageP3LowMass}; the integrated ejecta density is
shown in Fig.~\ref{fig:ImageP3OnlyEjectaLowMass}). We found that the
shape of the SNR, which contains a prominent flat region in the
direction of the shadow of the He~WD, is at odds with known SNR
morphologies.
In conclusion, our study supports previous claims that the
double-detonation scenario can at best be responsible for a very
small fraction of all SN Ia. Specifically,
\cite{Piersantietal2013} claimed that the double-detonation
scenario can account for only a small fraction of all SN Ia,
because the parameter space leading to explosion is small.
\cite{Ruiteretal2014}, on the other hand, argued that the
double-detonation model can account for a large fraction of SN Ia.
For that to be the case, most ($>70 \%$) of the donors in the
study of \cite{Ruiteretal2014} are He~WD. Our results show that
He~WD donors lead to explosions that are in contradiction with the
observed morphology of the SNRs of Type Ia SN, and that if the
He~WD is massive ($\sim 0.4 M_\odot$), not all helium is burned, and
the leftover helium would consequently be observed
spectroscopically, again in contradiction with observations.
There is another severe problem with the double detonation
scenario \citep{TsebrenkoSoker2015b}. As \cite{Ruiteretal2014}
showed, most exploding WDs in the double-detonation scenario have
masses $<1.1 M_\odot$. This is in strong contrast with recent
claims that the masses of most SNe Ia peak around $1.4 M_\odot$
\citep{Scalzoetal2014}. \cite{Seitenzahletal2013} also claimed
that at least $50\%$ of all SNe Ia come from near-Chandrasekhar
mass ($M_{\rm Ch}$) WDs.
All in all, we conclude that the double-detonation scenario can
lead to explosions, but their characteristics are not typical of
those of SNe Ia. Thus, SNe Ia must originate from other channels,
most likely the core-degenerate and the double-degenerate scenarios
\citep{TsebrenkoSoker2015b}.
We thank an anonymous referee for many detailed comments that
substantially improved both the presentation of our results and
their scientific content.
This research was supported by the Asher Fund for Space
Research at the Technion, and the E. and J. Bishop Research Fund
at the Technion. This work was also partially supported by MCINN
grant AYA2011--23102, and by the European Union FEDER funds. OP is
supported by the Gutwirth Fellowship.
\section{Introduction}\label{sec:intro}
Early-type galaxies (ETGs) are among the most massive systems in
the Universe. They are on average metal-rich, dust and gas poor,
and formed their stars in early, rapid events (e.g.,
\citealt{Thomas2005,Fontanot+09}). The lack of easily
interpretable dynamical tracers, such as the cold gas in spiral
galaxies, has made the mass mapping of these systems very
difficult in the outer regions where dark matter (DM) is expected
to be dominant, although discrete tracers (planetary nebulae and
globular clusters; e.g., \citealt{Napolitano+09,Romanowsky+09})
are providing a clearer view of the DM halos. The mass content of
these galaxies' central regions has on the other hand been
extensively investigated (e.g., \citealt{Gerhard+01};
\citealt{Cappellari+06}; \citealt[T+09 hereafter]{Tortora2009a}),
with building evidence that the central DM fraction (\mbox{$f_{\rm DM}$}) is an
increasing function of total stellar mass ($M_\star$), providing
the main driver for the tilt of the fundamental plane (e.g.,
T+09).
New insights into the galaxy assembly process are emerging from
joint analyses of the structural and star formation properties of
nearby ETGs (e.g., \citealt{Gargiulo+09,Graves10}; \citealt[NRT10
hereafter]{NRT10}). A key discovery of NRT10 is an
anti-correlation between \mbox{$f_{\rm DM}$}\ and galaxy stellar age, such that
older galaxies (at a fixed $M_\star$) have lower \mbox{$f_{\rm DM}$}. In the
context of $\Lambda$CDM halos, this trend can be partially
explained by an anti-correlation between galaxy sizes and ages,
with the remaining effect apparently driven by variations in star
formation efficiency, stellar initial mass function (IMF), or DM
distribution (e.g., adiabatic contraction, AC hereafter). As
discussed in NRT10, such correlations would have deep implications
for the assembly histories of ETGs, but critically need to be
confirmed by independent analyses.
Gravitational lenses offer a unique tool to map the mass profile
in galaxies over a range of redshifts. The database of lenses is
growing quickly thanks to ongoing surveys (e.g., SLACS:
\citealt[A+09 hereafter]{Auger+09}; COSMOS: \citealt{Faure+08}).
In particular, the SLACS survey has collected $~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} 80$ secure
lenses, which is a sample comparable to the total number of lenses
discovered since the late 70s from other campaigns or by
serendipity (e.g. \citealt{Covone+09}). Here, we will use the
SLACS sample data to extend the analysis of NRT10, using both
lensing and dynamics as independent probes of total mass, and
providing a higher-redshift ($z\sim0.2$) comparison to the
$z\sim0$ galaxies previously studied\footnote{In the paper, we use
a cosmological model with $(\Omega_{m}, \, \Omega_{\Lambda}, \, h)
= (0.3, \, 0.7, \, 0.7)$, where $h = H_{0}/100 \, \textrm{km} \,
\textrm{s}^{-1} \, \textrm{Mpc}^{-1}$ (\citealt{WMAP2}),
corresponding to a universe age today of $t_{\rm univ}=13.5 \, \rm
Gyr$.}.
\section{Data sample and analysis}\label{sec:sample}
\subsection{Galaxy Samples}
The lensing galaxy sample is taken from the SLACS survey
(A+09)\footnote{See also {\tt http://www.slacs.org/}}, which has
been extensively analyzed in other works (e.g.,
\citealt{SLACS1,SLACS5,Gavazzi07,Cardone+09,Treu+09,Grillo+09,CT10}).
The lens galaxy redshift ($z_{l}$) range is $0.05\leq z_l\leq
0.5$, with a median of $z_{l} \sim 0.2$. Our sample is selected:
1) to have a measured Einstein radius \mbox{$R_{\rm E}$}\footnote{The Einstein
radii are derived by fitting a singular isothermal ellipsoid (SIE)
profile and are quoted adopting an intermediate-axis
normalization. Five of the galaxies without a measured \mbox{$R_{\rm E}$}\ have a
nearby companion, while for the other systems the $HST$ data do not
have sufficient sensitivity to adequately constrain the lensing
model.} in Table 3 of A+09, 2) to be classified as elliptical or
S0, 3) to have a measured $V$-band effective radius (measured at
the intermediate axis). Of the 85 lenses from this latest SLACS
release, 66 passed our selection criteria.
As a $z\sim0$ comparison sample, we use the collection of 330 ETGs
over the same mass range analyzed in T+09 and NRT10.
\subsection{Stellar population analysis}
To estimate stellar mass-to-light ratios ($\Upsilon_\star$) and
star formation histories, we analyze spectral energy distributions
(SEDs) based on broad-band Sloan Digital Sky Survey (SDSS)
photometry (namely, $ugriz$). Our general procedure is to adopt a
set of synthetic spectra from the prescription of \citet{BC03}, a
uniform metallicity $Z$, an age $t$ characterizing the time of
star formation onset, and an exponentially declining star
formation rate with timescale $\tau$. For each galaxy, $Z$, $t$,
and $\tau$ are fitted parameters, with the determined
$\Upsilon_{\star}$ based on a \citet{Kroupa01} IMF; uncertainties
on the estimated parameters have been quantified via Monte Carlo
simulations: further details are provided in T+09 and NRT10 along
with explorations of systematic uncertainties and degeneracies.
The only change here is to shift the spectral responses of the
SDSS filters to correspond to the lens redshifts before convolving
with the model SEDs. We have checked that imposing restrictions on
$Z$ or $\tau$, or adopting the stellar populations results from
A+09 or \citet{Grillo+09}, does not qualitatively affect the
results described below.
\subsection{Total mass and dark matter content}
We derive the deprojected total mass from dynamics and lensing
observables, and separate the DM from the stellar components using
the stellar mass estimates discussed in the previous section. We
adopt the stellar effective radius \mbox{$R_{\rm eff}$}\ as the fiducial reference
point for mass comparisons. For the lens galaxies, \mbox{$R_{\rm eff}$}\ is
measured in the $V$-band, corresponding approximately to the
rest-frame $B$-band used for the local galaxies. In both galaxy
samples, the mass constraints are generally based on measurements
at smaller radii ($\sim 0.1 \mbox{$R_{\rm eff}$}$ and $\sim 0.5 \mbox{$R_{\rm eff}$}$,
respectively), and therefore some extrapolation is required. For
the total mass distribution we adopt a singular isothermal sphere
(SIS) with density $\rho(r) = \sigma_{\rm SIS}^{2} / (2 \pi G
r^{2})$ (e.g. \citealt{SLACS3}, \citealt{Gavazzi07},
\citealt{Koopmans+09}), where $\sigma_{\rm SIS}$ is an unknown
normalization to be determined by fitting the observables. For the
stars, we adopt a constant-$\Upsilon_*$ mass profile based on the
\citet{H90} model.
To estimate dynamical masses we have used the SDSS stellar
velocity dispersions $\sigma_{\rm SDSS}$, measured within a
circular aperture of $R_{\rm ap} = 1.5''$. Briefly, we have
adopted the spherical Jeans equation to derive the surface
brightness weighted velocity dispersion $\sigma_{\rm ap,SIS}$
within $R_{\rm ap}$ (see T+09 for further details), to be matched
to $\sigma_{\rm SDSS}$. As discussed in T+09, there is some degree
of systematic uncertainty from assumptions of sphericity and
orbital isotropy, which we can now check in the case of the lenses
by using the independent lensing-based masses (which do have their
own uncertainties from mass-sheet degeneracies).
For the lensing mass estimates, we have used the Einstein radius
\mbox{$R_{\rm E}$}, to derive a model independent measurement of projected mass
($M_{\rm proj}$) within \mbox{$R_{\rm E}$}, since $M_{E}=M_{\rm proj}(\mbox{$R_{\rm E}$}) = \pi
\mbox{$R_{\rm E}$}^{2} \Sigma_{\rm crit}$, where $\Sigma_{\rm crit} = c^{2}
D_{s}/4 \pi G D_{l} D_{ls}$, with $D_{s}$, $D_{l}$ and $D_{ls}$
the observer--source, observer--lens and lens--source angular
diameter distances, respectively. Finally we match the prediction
of the SIS model for the projected mass, $M_{\rm proj,SIS}$, with
$M_{E}$ to have a further constraint on the only free model
parameter for each galaxy, $\sigma_{\rm SIS}$.
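As an illustration of this model-independent step, a minimal
numerical sketch (in Python with \texttt{astropy}; the redshifts and
the Einstein radius below are placeholders rather than actual SLACS
values):
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u
from astropy.constants import c, G

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_l, z_s = 0.2, 0.6                   # placeholder redshifts
theta_E = 1.2 * u.arcsec              # placeholder Einstein radius

D_l  = cosmo.angular_diameter_distance(z_l)
D_s  = cosmo.angular_diameter_distance(z_s)
D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)

R_E = theta_E.to(u.rad).value * D_l   # physical Einstein radius
Sigma_crit = c**2 / (4 * np.pi * G) * D_s / (D_l * D_ls)
M_E = (np.pi * R_E**2 * Sigma_crit).to(u.Msun)
print(M_E)
\end{verbatim}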
The best-fit $\sigma_{\rm SIS}$ can be derived independently from
either technique, yielding a 3D deprojected mass profile which we
extrapolate to $r = \mbox{$R_{\rm eff}$}$ to obtain our reference
mass values. Using this approach, we find that lensing and
dynamics provide consistent results, modulo a $\sim$~10\% higher
mass (with a scatter of $\sim 25\%$) from dynamics, corresponding
to a change of $0.03 \pm 0.10$ in $f_{\rm DM}$. Thus, we adopt a
combination of the constraints for our final masses, by minimizing
with respect to $\sigma_{\rm SIS}$ a combined $\chi^2$ function
including one term for dynamics and one for lensing observables,
given by
\begin{equation}
\chi^{2}= \bigg(\frac{\sigma_{\rm ap}-\sigma_{\rm SDSS}}{\delta_{d}}\bigg)^{2}
+ \bigg(\frac{M_{E}-M_{\rm proj,SIS}}{\delta_{l}}\bigg)^{2},
\end{equation}
where $\delta_{d}$ and $\delta_{l}$ are the uncertainties on
$\sigma_{\rm SDSS}$ and $M_{E}$, respectively \footnote{Note that
$\delta_{d}$ is given in Table 3 of A+09 and ranges from $2\%$ to
$19\%$ (with mean $6\%$), while we have assumed a nominal $5\%$
uncertainty on \mbox{$R_{\rm E}$}\ which corresponds to a relative error of
$10\%$ on $M_{E}$. However, the results are qualitatively
unchanged if we assume $\delta_{d}=\delta_{l}$.}.
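A minimal sketch of this joint fit (in Python; the data values are
placeholders, and the full Jeans-equation prediction for the
aperture dispersion is replaced here by the crude isothermal
approximation $\sigma_{\rm ap} \simeq \sigma_{\rm SIS}$):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

G = 6.674e-8  # cgs

def M_proj_SIS(sigma, R_E):
    # Projected SIS mass within R_E: pi sigma^2 R_E / G
    return np.pi * sigma**2 * R_E / G

def chi2(sigma, sig_obs, dsig, M_E, dM, R_E):
    sigma_ap = sigma  # crude stand-in for the Jeans-equation prediction
    return ((sigma_ap - sig_obs) / dsig)**2 \
         + ((M_E - M_proj_SIS(sigma, R_E)) / dM)**2

# Placeholder data (cgs):
sig_obs, dsig = 250.0e5, 12.0e5   # aperture dispersion and its error
R_E = 4.0 * 3.086e21              # Einstein radius, 4 kpc
M_E = 2.0e11 * 1.989e33           # Einstein mass
dM  = 0.1 * M_E

res = minimize_scalar(chi2, bounds=(100.0e5, 400.0e5), method='bounded',
                      args=(sig_obs, dsig, M_E, dM, R_E))
sigma_sis = res.x
M_3D = 2.0 * sigma_sis**2 * R_E / G  # deprojected SIS mass within r = R_E
\end{verbatim}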
In the following we will focus on the central 3D deprojected \mbox{$f_{\rm DM}$}\
and on the mean DM density within \mbox{$R_{\rm eff}$}, defined as $\langle \rho_{\rm
DM}\rangle= M_{\rm DM}/(4/3 \pi \mbox{$R_{\rm eff}$}^3)$, where $M_{\rm DM}=M_{\rm
tot}-M_\star$ evaluated at \mbox{$R_{\rm eff}$}{} is the DM mass.
\subsection{Cosmological models}
As in T+09 and NRT10, to interpret the observational results, we
construct a series of toy mass models based on $\Lambda$CDM
cosmological simulations. For each bin in $M_\star$, we use the
average $\mbox{$R_{\rm eff}$}$-age relations from the combined lens+local sample,
and parameterize the virial DM mass by a star formation efficiency
$\epsilon_{\rm SF}=M_\star/(\Omega_{\rm bar} M_{\rm tot})$, where
$\Omega_{\rm bar}=0.17$ (\citealt{WMAP2}) is the baryon density
parameter. The halo densities are initially characterized as
\citet{NFW} profiles following an average mass-concentration
relation, adjusted by $(1+z)^{-1}$ for the lens galaxies. A recipe
for AC from baryon settling is then applied \citep{Gnedin+04}. The
toy models for $\mbox{$\epsilon_{\rm SF}$} = 0.03,0.1,0.3$ are shown in both Figs.
\ref{fig: fig1} and \ref{fig: fig3}.
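To make the construction concrete, a minimal sketch of the
uncontracted version of such a toy model (in Python; the $c(M,z)$
relation and the $\Delta=200$ overdensity below are generic
placeholders, and the AC step applied in the text is omitted):
\begin{verbatim}
import numpy as np

rho_crit = 136.0   # Msun kpc^-3, critical density for h = 0.7
Om_bar   = 0.17

def m_nfw(x):
    # Dimensionless NFW enclosed-mass profile
    return np.log(1.0 + x) - x / (1.0 + x)

def f_dm(Mstar, Reff_kpc, eps_SF, z=0.0):
    Mvir = Mstar / (eps_SF * Om_bar)              # definition of eps_SF
    c = 10.0 * (Mvir / 1.0e12)**-0.1 / (1.0 + z)  # placeholder c(M, z)
    rvir = (3.0 * Mvir / (4.0 * np.pi * 200.0 * rho_crit))**(1.0 / 3.0)
    Mdm = Mvir * m_nfw(c * Reff_kpc / rvir) / m_nfw(c)
    return Mdm / (Mdm + Mstar)                    # f_DM within R_eff

print(f_dm(Mstar=1.0e11, Reff_kpc=5.0, eps_SF=0.1))  # ~0.2 without AC
\end{verbatim}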
\section{Results: correlations with size and formation epoch}\label{sec:results}
In order to marginalize any correlations with $M_\star$, we group
the galaxies from both samples into bins of common median mass:
$\log M_*/M_\odot\sim 11.6$, $11.3$ and $10.9$. For the lens
galaxies, the corresponding median redshifts are $z_{\rm
med}=0.28, 0.18, 0.13$. The local galaxies sample extends to even
lower masses ($\log M_*/M_\odot \sim 10.4$) with no lens
counterparts.
Our first result (which we do not show for lack of space) is
that for the lenses, \mbox{$f_{\rm DM}$}\ increases on average with $M_\star$
(see \citealt{Cardone+09} and \citealt{CT10} for details). This
result confirms that as in local galaxies (T+09), \mbox{$f_{\rm DM}$}\ is also a
main driver of the fundamental plane tilt at $z\sim 0.2$.
Next, following NRT10, we focus on correlations of the DM metrics
with galaxy size and age. For the latter we adopt the look-back
time to the formation epoch in order to put all the galaxies with
different observed redshifts on a common reference frame.
\begin{figure}[t]
{\hspace{-0.59cm}}\includegraphics[width=0.58\textwidth,clip]{fig1.eps}
\caption{Dark matter fraction within an effective radius (\mbox{$R_{\rm eff}$}) as
a function of \mbox{$R_{\rm eff}$}. The lens and local galaxies are shown as filled
and open symbols, respectively. For the latter, open symbols with
error bars show the median and $\pm 25\%$ values. Typical
$1\sigma$ uncertainties for individual galaxies are shown to the
left. The differently colored symbols denote different bins of
stellar mass, as labeled in the legend in the top. The second
panel from the top shows the combined bins for the lens galaxies
only and includes toy-model $\Lambda$CDM predictions: solid,
long-dashed, and short-dashed curves show star formation
efficiencies of $\epsilon_{\rm SF}=0.03,0.1,0.3$, respectively.
}\label{fig: fig1}
\end{figure}
Fig. \ref{fig: fig1} demonstrates that there is a strong positive
correlation between \mbox{$f_{\rm DM}$}\ and \mbox{$R_{\rm eff}$}, once the galaxies are divided
into mass bins. This may be understood as a larger \mbox{$R_{\rm eff}$}\ enclosing
a bigger portion of the DM halo; this ``aperture effect'' appears
to be more dominant than the \mbox{$f_{\rm DM}$}\ correlation with $M_\star$. The
local and lens samples appear reasonably similar, although the
lens galaxies in the lowest mass bin are systematically higher,
which is an issue we will discuss below. Both samples are in rough
agreement with our $\Lambda$CDM toy model predictions (top panel).
Fig.~\ref{fig: fig2} shows that $\langle\rho_{\rm DM}\rangle$
strongly anti-correlates with \mbox{$R_{\rm eff}$}. Again considering the aperture
effect and assuming DM halo homogeneity, the implication is that
we are measuring a mean DM density profile with radius, with a
best fitted log slope of $\sim -1.7$. As discussed in NRT10, this
steep slope is indicative of cuspy halos, perhaps as induced by
AC. One could suspect that we are getting out what we are putting
in, since our default galaxy model assumes an isothermal total
density profile (with slope $\sim -2$) in order to extrapolate
measurements to $r = \mbox{$R_{\rm eff}$}$, but we have shown in NRT10 that the use
of an alternative constant-M/L profile yields similar results
(modulo a difference of $0.1-0.2$ in the slope), still fully
consistent with a cuspy contracted halo\footnote{Note that the two
massive galaxies J0157$-$0056 and J0330$-$0020 with $\log \mbox{$R_{\rm eff}$} \sim
0.9$, which depart from the mean trend, are fitted by a possibly
unrealistic supersolar metallicity; setting $Z$ to the solar
value, the galaxies' densities increase by $0.15$ and
$0.25$ dex, respectively.}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth,clip]{fig2.eps}
\caption{Mean DM density within 1 \mbox{$R_{\rm eff}$}\ as a function of \mbox{$R_{\rm eff}$}. See
Fig. \ref{fig: fig1} for the meaning of symbols. The grey region
shows an average of spiral galaxy results from
\citet{McGaugh+07}.}\label{fig: fig2}
\end{figure}
In Fig. \ref{fig: fig2}, we also see that the ETGs have DM
densities substantially larger than those of local spiral
galaxies, which have been suggested to follow a unified halo trend
with dwarf spheroidals \citep{Donato+09,Walker+10b}. This
dichotomy is qualitatively consistent with other findings
(\citealt{Gerhard+01,Thomas+09}; NRT10; \citealt{CT10}), and may
imply different formation mechanisms. The difference might simply
be ascribed to ETGs forming from denser late-type galaxies at
earlier epochs, which would yield the corollary
prediction that ETGs with younger stellar ages have less dense
halos because their late-type progenitors were less dense.
However, as we will see below, the opposite trend appears to be
observed.
Finally we consider the \mbox{$f_{\rm DM}$}-age dependencies in Fig.~\ref{fig:
fig3}, again using separate $M_\star$ bins. The lens and local
galaxies match up remarkably well in general, showing a clear
trend for lower \mbox{$f_{\rm DM}$}\ at older ages. The low-mass lens galaxies
are predominantly young, which is probably a selection effect on
apparent magnitude. The \mbox{$f_{\rm DM}$}-age anti-correlation then causes the
overall \mbox{$f_{\rm DM}$}\ for this mass bin to be high. In summary, at a fixed
galaxy age and mass, the higher-$z$ sample does not show any
significant difference with the local galaxies. Since we have not
applied any evolutionary corrections, the implication is that in
the last $\sim 2.5$ Gyr (on average) the galaxy populations have
experienced no measurable evolution except stellar
aging\footnote{Here the stellar mass bins can be biased at
different redshifts due to the stellar population evolution and
introduce a spurious tilt in the $\mbox{$f_{\rm DM}$}$-age relation. We have
checked from the stellar models that the change in stellar mass
due to stellar evolution is only $\sim -0.01$ in
$\log \mbox{$M_{\star}$}$ (due to mass loss), which makes our stellar bins at
different $z$ fairly homogeneous.}.
We also include the $\Lambda$CDM toy models in Fig. \ref{fig:
fig3}, and see that some \mbox{$f_{\rm DM}$}-age anti-correlation is expected,
which can be traced to the anti-correlation between \mbox{$R_{\rm eff}$}\ and age.
However, there are indications in every mass bin that the observed
\mbox{$f_{\rm DM}$}-age slope is steeper than in the models. Some systematic link
between age and $\epsilon_{\rm SF}$ is possible, but the models
shown in Fig.~\ref{fig: fig3} suggest that this would not be a
strong enough effect, as changes in virial mass do not propagate
strongly to changes in central DM content (and in fact earlier
collapsing halos should have {\it denser} centers, which goes in
the wrong way to explain the observations). The alternatives as
discussed in NRT10 are that AC is more effective in younger
galaxies, or that older galaxies have ``lighter'' IMFs (e.g.,
Kroupa versus \citealt{Salpeter55} for the younger galaxies).
\begin{figure}[t]
{\hspace{-0.95cm}}\includegraphics[width=0.6\textwidth,clip]{fig3.eps}
\caption{DM fraction within 1 \mbox{$R_{\rm eff}$}\ as a function of galaxy stellar
``age''. See Figs. \ref{fig: fig1} and \ref{fig: fig2} for the
meaning of symbols.}\label{fig: fig3}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
We have analyzed the central DM content of a sample of
intermediate-$z$ lenses from the latest release of the SLACS
survey (A+09). Following the phenomenological framework introduced
in T+09 and NRT10 we have discussed scaling relations between DM
fraction, galaxy size, and formation epoch. Gravitational lensing
and dynamical analyses are used to constrain the total mass
profile, while synthetic spectral populations are used to infer
the stellar mass and other stellar properties such as galaxy age.
ETGs at $z \sim 0.2$ are found to be similar to local ones; future
work will include extending the baseline to higher redshifts. The
somewhat surprising findings of NRT10 are now confirmed with an
independent and arguably more robust data set. The DM fraction
within \mbox{$R_{\rm eff}$}\ is found to strongly correlate with \mbox{$R_{\rm eff}$}, because
larger length-scales probe a more DM dominated region. On these
scales, the DM mean density decreases with \mbox{$R_{\rm eff}$}\ as
$\langle\rho_{\rm DM}\rangle \, \propto \mbox{$R_{\rm eff}$}^{-1.7}$, which argues
for cuspy DM halos for ETGs out to $z\sim0.5$ (as was discussed in
NRT10 for local galaxies). At a fixed stellar mass and
length-scale, we have found that the DM halos of ETGs are denser
than those of local spiral galaxies, providing a critical test for
the merging formation scenario (see also \citealt{CT10}).
Finally, we have confirmed our earlier finding that central DM
content anti-correlates with stellar age. The strength of this
correlation appears to exceed what is expected from size-age
effects. A fundamental connection between galactic structure and
star formation history is implied, which we propose is a
consequence of variations with formation epoch of either DM halo
contraction or stellar IMF.
In future work, we plan to investigate the impact on these results
of more complex total and DM galaxy profiles (e.g.
\citealt{Cardone05}, \citealt{Tortora2007}) along the lines of
recent work in \cite{Cardone+09} and \cite{CT10}. New high-quality
data are also expected with the advent of future surveys both in
the local Universe and at larger redshifts. Such surveys will
include larger samples of gravitational lenses along with more
detailed spectroscopic information, and could be used to verify
and extend the results presented here, providing a clearer picture
of the physical processes of ETG assembly.
\acknowledgments
We thank the anonymous referee for the suggestions which helped to
improve the paper. CT was supported by the Swiss National Science
Foundation. AJR was supported by National Science Foundation
Grants AST-0808099 and AST-0909237.
\vspace{0.5cm}
{\it Note added in proof.} During the final phase of this
manuscript, there appeared closely related work from \citet[A+10a
and A+10b respectively, hereafter]{Auger+10a, Auger+10b}, using a
combination of strong lensing, and stellar dynamics and
populations, to analyze virtually the same lens-galaxy sample. A
key point of agreement is that A+10a found that the strongest
non-trivial correlate with central $f_{\rm DM}$ is \mbox{$R_{\rm eff}$}, which we
think demonstrates that size rather than mass variation is the
main driver for the fundamental plane tilt.
A+10b also constructed $\Lambda$CDM models, adding constraints
from weak lensing to break the degeneracies between \mbox{$\epsilon_{\rm SF}$}, IMF, and
halo model (AC or no-AC). Given the same IMF and halo assumptions,
they found \mbox{$\epsilon_{\rm SF}$}\ values that are on average higher than both ours
(e.g. $\mbox{$\epsilon_{\rm SF}$} \sim 0.3-0.4$ versus $\sim 0.1$ for Salpeter+no-AC)
and the typical values found in other studies. Although these
differences may seem large, they involve relatively small changes
in the central DM properties which could be driven by differences
in the modeling methods. We considered three-dimensional DM masses
within \mbox{$R_{\rm eff}$}, while A+10b analyzed projected masses within $\mbox{$R_{\rm eff}$}/2$
in combination with large-radius constraints from weak lensing, so
a direct comparison is not straightforward. We hope to track down
the reasons for these discrepant conclusions in future work.
A+10b found that a Salpeter+no-AC model is preferred over a
Chabrier IMF (with or without AC) using direct model fits. Without
weak lensing constraints, our modeling permits either
Salpeter+no-AC or Kroupa+AC solutions (see NRT10 for further
details), while our Figure 2 (or Figure 9 in NRT10) does provide
indirect evidence for very cuspy halos with AC (and a Kroupa IMF).
They also found that the IMF may become heavier with galaxy mass
($\eta > 1$ in their notation). We find that stellar age is the
more important parameter in this context, but if an IMF-mass
relation is demanded, then $\eta < 1$. This difference appears to
be caused at least partially by different stellar population
models: although we found that varying these models did not alter
the basic age trends, the mass trends are weaker and more
sensitive to the models.
We consider the initial value problem for a coupled Dirac--Klein-Gordon system in two spatial dimensions. The DKG system of interest describes a massless spinor $\psi = \psi(t,x): \RR^{1+2} \to \CC^2$ and a scalar field $v = v(t,x):\RR^{1+2} \to \RR$ of mass $m \geq 0$, whose dynamics are governed by
\bel{eq:D-KG}
\aligned
i \gamma^\mu \del_\mu \psi
&= v \psi,
\\
-\Box v + m^2 v
&= \psi^* \gamma^0 \psi,
\endaligned
\ee
with prescribed initial data at $t=t_0=2$
\bel{eq:ID}
\big(\psi, v, \del_t v \big)(t_0) = (\psi_0, v_0, v_1 ).
\ee
The Dirac matrices $\{\gamma^0, \gamma^1, \gamma^2\}$ are a representation of the Clifford algebra, and are defined by the identities
\be
\aligned
\{ \gamma^\mu, \gamma^\nu \}
&:= \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu
= -2\eta^{\mu \nu} I_{2},
\\
(\gamma^\mu)^*
&= -\eta^\mu{}_\nu \gamma^\nu.
\endaligned
\ee
Here, $I_2$ is the $2\times 2$ identity matrix, and $A^* = (\bar{A})^T$ is the Hermitian conjugate of the matrix $A$.
To be more concrete, the $\gamma$ matrices can be represented by
$$
\gamma^0
=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix},
\quad
\gamma^1
=
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix},
\quad
\gamma^2
=
\begin{pmatrix}
0 & -i \\
-i & 0
\end{pmatrix},
$$
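One can check directly that this representation satisfies the required identities; for instance,
$$
(\gamma^1)^2
=
\begin{pmatrix}
-1 & 0 \\
0 & -1
\end{pmatrix}
= -I_2 = -\eta^{11} I_2,
\qquad
\gamma^0 \gamma^1 + \gamma^1 \gamma^0
=
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}
+
\begin{pmatrix}
0 & -1 \\
-1 & 0
\end{pmatrix}
= 0,
$$
in agreement with $\{\gamma^1, \gamma^1\} = -2 \eta^{11} I_2$ and $\{\gamma^0, \gamma^1\} = 0$, and similarly $(\gamma^1)^* = -\gamma^1 = -\eta^1{}_\nu \gamma^\nu$.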
However, we do not rely on any explicit representation of the $\gamma$ matrices in what follows. We also define $\eta := - \textrm{d} t^2 + (\textrm{d} x^1)^2 + (\textrm{d} x^2)^2$ and use $\Box := \eta^{\alpha\beta}\del_\alpha \del_\beta = -\del_t \del_t + \del_{x^1}\del_{x^1} + \del_{x^2}\del_{x^2}$ to denote the Minkowski wave operator. Without loss of generality we hereon set $m=1$.
\subsection*{Background on the DKG system}
The system \eqref{eq:D-KG}, allowing also for a non-zero Dirac mass $m_\psi\geq0$, arises in particle physics as a model for Yukawa interactions between a scalar field and a Dirac spinor. It appears in the theory of pions and in the Higgs mechanism \cite{physics}. We note that the nonlinearity $\psi^*\gamma^0\psi$ is often written as $\bar{\psi}\psi$, where $\bar{\psi}:= \psi^*\gamma^0$ is the Dirac adjoint, and thus transforms as a scalar under Lorentz transformations.
The Cauchy problem for the DKG system has been actively studied in various spacetime dimensions and for different cases of the Klein-Gordon and Dirac masses (i.e. $m \geq 0$ and $m_\psi \geq 0$). In $1+3$ dimensions there are small-data high-regularity results showing global existence with asymptotic decay rates \cite{Bachelot, Katayama12a, Tsutsumi}. There are also numerous results concerning low-regularity data leading to global existence and scattering, see for example \cite{Bejenaru-Herr} and references within, as well as large-data results, see for example \cite{Candy-Herr2, DFS-07, Dias-Fig91} and references cited within.
Moving to $1+2$ dimensions, the DKG system naturally presents further difficulties due to the weaker dispersive properties of the linear wave and Klein-Gordon equations. Nevertheless there are local \cite{Bournaveas-2D, DFS-07-2D} and global \cite{GH10} existence results for low-regularity, and possibly large, data.
\subsection{Main result}
We now state our main theorem and then discuss novel ideas in the proof.
\begin{theorem}\label{thm:main}
Consider the DKG initial value problem \eqref{eq:D-KG}-\eqref{eq:ID}, and let $N \geq 4$ be an integer. There exists an $\eps_0 > 0$ such that for all $\eps \in (0, \eps_0)$ and all compactly supported initial data satisfying the smallness condition
\bel{thm:data-assumpt}
\|\psi_0\|_{H^{N}} + \| v_0 \|_{H^{N+1}} + \| v_1 \|_{H^{N}} \leq \eps,
\ee
the Cauchy problem \eqref{eq:D-KG}-\eqref{eq:ID} admits a global solution $(\psi, v)$. Furthermore there exists a constant $C > 0$ such that the solution satisfies the following sharp pointwise decay estimates
\bel{eq:sharp-decay}
|\psi | \leq C \eps t^{-1/2} \big(1+|t-|x||\big)^{-1/2},
\qquad
|v| \leq C \eps t^{-1}.
\ee
\end{theorem}
\subsection*{Difficulties and challenges}
We first remind the reader of the important identity
\bel{eq:dirac-to-wave}
\Box \psi = \big(i \gamma^\mu \del_\mu \big) \big( i \gamma^\nu \del_\nu \psi \big).
\ee
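This identity follows directly from the anticommutation relations: since partial derivatives commute, we may symmetrize in $\mu, \nu$ to obtain
$$
\big(i \gamma^\mu \del_\mu \big) \big( i \gamma^\nu \del_\nu \psi \big)
= -\gamma^\mu \gamma^\nu \del_\mu \del_\nu \psi
= -{1\over 2} \{\gamma^\mu, \gamma^\nu\} \del_\mu \del_\nu \psi
= \eta^{\mu\nu} \del_\mu \del_\nu \psi
= \Box \psi.
$$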
Thus we can think of \eqref{eq:D-KG} as also encoding a coupled wave-like and Klein-Gordon system.
Proving global existence and asymptotic decay results for coupled nonlinear wave and Klein-Gordon equations, such as in Theorem \ref{thm:main}, is typically a challenging question in two spatial dimensions. This is because linear wave $w$ and linear Klein-Gordon $u$ equations have very slow pointwise decay rates in $\RR^{1+2}$, namely
\bel{intro-linear-decay}
|w| \lesssim \big(1+t + |x|\big)^{-1/2} \big(1 + |t-|x||\big)^{-1/2},
\qquad
|u| \lesssim \big(1+t+|x|\big)^{-1}.
\ee
The identity \eqref{eq:dirac-to-wave} also indicates that a linear massless Dirac field should obey the same slow pointwise decay rates as $|w|$ above.
As a consequence of \eqref{intro-linear-decay}, when using Klainerman's vector field method \cite{Klainerman86} on quadratic nonlinearities, the best we can expect in the energy estimates is an integrand of order $t^{-1}$, whose time integral diverges. This leads to problems when closing the bootstrap argument and can possibly indicate finite-time blow-up (see for example \cite{G81}).
Another obstacle when studying Klein-Gordon equations, in the framework of the vector field method, is that the scaling vector field $L_0 = t\del_t + x^1 \del_{x^1} + x^2 \del_{x^2}$ does not commute with the Klein-Gordon operator $-\Box + 1$. The scaling vector field can be avoided by using a spacetime foliation of surfaces $\Hcal_s$ of constant hyperboloidal time $s = \sqrt{t ^2 - |x|^2}$, see \cite{Hormander, Klainerman85,PLF-YM-book}. However, this prevents one from using the classical Klainerman-Sobolev inequality to gain additional $(t-|x|)$-decay. Furthermore the inability to control the $L_0$ vector field on a Klein-Gordon field leads to issues, in $\RR^{1+2}$, when controlling the symmetric quadratic null form between two Klein-Gordon components (see the discussion in \cite[\textsection 3.3]{DW-2020}).
Returning now to the DKG problem \eqref{eq:D-KG}, we use the identity \eqref{eq:dirac-to-wave} to derive the following
\bel{eq:W-KG}
\aligned
-\Box \psi &= -i \del_\nu \big( v \gamma^\nu \psi \big),
\\
-\Box v + v &=\psi^* \gamma^0 \psi.
\endaligned
\ee
If we ignore the structure here, we roughly speaking have obtained a wave--Klein-Gordon system of the form
\bel{eq:simplified-wKG}
\aligned
-\Box w &= \del (u w) = w \del u+ u \del w,
\\
-\Box u + u &= w^2.
\endaligned
\ee
The global existence of general small-data solutions to \eqref{eq:simplified-wKG} is unknown in $\RR^{1+2}$. Furthermore, if we assume that $w$ and $u$ obey the linear estimates \eqref{intro-linear-decay}, then the best we can expect from the nonlinearities (in the flat $t=\text{cst.}$ slices) is
$$
\|\del (u w) \|_{L^2(\RR^2)} \lesssim t^{-1}, \qquad \| w^2 \|_{L^2(\RR^2)} \lesssim t^{-1/2}.
$$
Returning to the original PDE \eqref{eq:D-KG}, the best we can expect appears to be
$$
\|v \psi\|_{L^2(\RR^2)} \lesssim t^{-1}, \qquad \|\psi^*\gamma^0\psi \|_{L^2(\RR^2)} \lesssim t^{-1/2}.
$$
Thus one quantity is at the borderline of integrability and the other is strictly below the borderline of integrability. In previous work of the authors \cite{DW-2020}, such a situation was termed `below-critical', and indicates that if the classical vector field method is to be successful, then new ideas are required to close both the lower-order and higher-order bootstraps.
\subsection*{Key ingredients and novel ideas in the proof of Theorem \ref{thm:main}}
To conquer the aforementioned difficulties in studying the DKG equations \eqref{eq:D-KG}, we need several ingredients and novel observations.
As explained before, if we ignore the structure of the DKG nonlinearities and study the more general wave-Klein-Gordon system \eqref{eq:simplified-wKG}, then we immediately face issues coming from the below-critical nonlinearities which could even lead to finite-time blow-up.
Given this, we instead directly study the system \eqref{eq:D-KG} and uncover hidden structure within the nonlinearities.
The first ingredient is an energy functional, defined on hyperboloids, for solutions to the Dirac equation. This was first derived by the authors and LeFloch in \cite{DLW}. Using this Dirac-energy functional, we find that the best we can hope for is
$$
\aligned
\| (s/t) \psi \|_{L^2_f(\Hcal_s)} &\lesssim 1,
\qquad
&|\psi| &\lesssim t^{-1/2} (t-|x|)^{-1/2} \lesssim s^{-1},
\\
\| v \|_{L^2_f(\Hcal_s)} &\lesssim 1,
\qquad
&|v| &\lesssim t^{-1}.
\endaligned
$$
Here $\Hcal_s$ are constant $s$-surfaces defined in Section \ref{sec:pre-hyp} and $L^2_f(\Hcal_s)$ is defined in \eqref{eq:L-1-f}.
Rough calculations then lead us to the estimates
$$
\aligned
\|v \psi\|_{L^2_f(\Hcal_s)}
&\lesssim \| (s/t) \psi \|_{L^2_f(\Hcal_s)} \| (t/s) v\|_{L^\infty(\Hcal_s)}
\lesssim s^{-1},
\\
\| \psi^* \gamma^0 \psi \|_{L^2_f(\Hcal_s)}
&\lesssim \|(s/t)\psi\|_{L^2_f(\Hcal_s)} \|(t/s)\psi\|_{L^\infty(\Hcal_s)}
\lesssim 1.
\endaligned
$$
We see that one term is at, and the other below, the borderline of integrability. We remark that this is similar to the Model I problem studied in \cite{DW-2020}.
Our next key idea is to notice that a field can be thought of as `Klein-Gordon type' if its $L^2_f(\Hcal_s)$-norm is well-controlled by the natural energy functionals. We know that examples of `Klein-Gordon type' fields include $v, (s/t) \del_\alpha v$ and we discover the further examples
$$
\frac{s}{t} \psi,
\quad
\psi - \frac{x^a}{t} \gamma^0 \gamma^a \psi.
$$
We then show that the Dirac-Dirac interaction appearing in \eqref{eq:D-KG} can be decomposed into terms of Klein-Gordon type factors as
$$
\aligned
\psi^* \gamma^0 \psi
&\sim \Big( \psi - \frac{x^a}{t} \gamma^0 \gamma^a \psi\Big) ^* \gamma^0 \Big( \psi - \frac{x^a}{t} \gamma^0 \gamma^a \psi\Big)
+ \Big(\frac{s}{t}\Big)^2 \psi^* \gamma^0 \psi
\\
& \qquad + \Big(\psi + \frac{x^a}{t} \gamma^0 \gamma^a \psi\Big)^* \gamma^0 \Big(\psi - \frac{x^a}{t} \gamma^0 \gamma^a \psi\Big).
\endaligned
$$
Furthermore, this behaviour is preserved under commutation (Lemma \ref{lem:hidden-KG}) and enjoys a useful identity involving the Lorentz boosts (Lemma \ref{lem:L-Dirac}). Interestingly, we find that several other Dirac-Dirac interactions, such as $\psi^*\psi$, do not possess the same useful decomposition (Remark \ref{rem:hidden-KG}).
The next ingredient comes from using nonlinear transformations to remove slowly-decaying nonlinearities (see Lemma \ref{lem:transf-v} and Lemma \ref{lemma:EoM-tilde-psi}). This comes at the expense of introducing cubic nonlinearities and quadratic null forms and we are able to close the bootstrap at lower-orders, provided we can control these null forms.
The final ingredient, needed to control the null forms introduced in the previous paragraph, is to obtain additional $(t-|x|)$-decay for the Dirac spinor.
In the case of pure wave equations it is well-known that one can obtain extra $(t-r)$-decay with the aid of the full range of vector fields $\{\del_\alpha, \Omega_{ab}, L_a, L_0 \}$ (defined in Section \ref{sec:pre-hyp}). For instance, for sufficiently regular functions $\phi$ we have the estimate \cite{Sogge}
\bel{eq:extra-001}
\big| \del \del \phi \big|
\lesssim \big( 1+ |t-r| \big)^{-1} \big( \big|L_0 \del \phi \big| + \sum_a \big| L_a \del \phi \big| \big).
\ee
If we cannot control certain vector fields acting on our solution, then it is usually more difficult to obtain extra $(t-r)$-decay as in \eqref{eq:extra-001}. We recall two examples of similar situations: 1) obtaining extra $(t-r)$-decay in the case of nonlinear elastic waves by Sideris \cite{Sideris} where Lorentz boosts $L_a = t\del_a + x_a \del_t$ are unavailable; 2) obtaining extra $(t-r)$-decay in the case of coupled wave--Klein-Gordon equations by Ma \cite{Ma2017a} where the scaling vector field $L_0 $ is absent. In the DKG model \eqref{eq:D-KG} we also cannot use $L_0$. Our idea is to rewrite the Dirac operator in a frame adapted to the hyperboloidal foliation. This yields
$$
\big|\del_t \psi \big|
\lesssim
{1\over t-r}\sum_a |L_a \psi | + {t\over t-r}|i\gamma^\mu \del_\mu \psi |.
$$
Our argument produces extra $(t-r)$-decay for $\del \psi$ (Lemma \ref{lem:extra-t-r} and Proposition \ref{prop:psi-t}) and is inspired by the work \cite{Ma2017a}.
Finally, we remark that we expect the ideas in the proof of Theorem \ref{thm:main} to have other applications. For instance, they can be used to show uniform energy bounds for the solution to the three-dimensional Dirac-Klein-Gordon equations studied by Bachelot in \cite{Bachelot} as well as the $U(1)$-Higgs model studied in \cite{DLW}.
\subsection*{Wave--Klein-Gordon Literature}
To conclude the introduction, we remind the reader of numerous recent works concerning global existence and decay for coupled wave--Klein-Gordon equations in $1+3$ dimensions. These include wave--Klein-Gordon equations derived from mathematical physics, such as the Dirac-Klein-Gordon model, the Dirac-Proca and $U(1)$-electroweak model \cite{DLW, Katayama12a, Tsutsumi, TsutsumiHiggs}, the Einstein-Klein-Gordon equations \cite{Ionescu-P-EKG, PLF-YM-book, PLF-YM-arXiv1, Wang}, the Klein-Gordon-Zakharov equations \cite{{DW-JDE}, OTT, Tsutaya}, the Maxwell-Klein-Gordon equations \cite{Klainerman-QW-SY} and certain geometric problems derived from wave maps \cite{Abbrescia-Chen}.
Very recently, there has been much research concerning global existence and decay for wave--Klein-Gordon equations in $1+2$ dimensions. This was initiated by Ma for quasilinear wave-Klein-Gordon systems \cite{Ma2017a, Ma2017b} and has been succeeded by Ma \cite{Ma2018, Ma2019, Ma2020} and the present authors \cite{DW-2020}. There has also been work by Stingo \cite{Stingo} and the first author \cite{Dong2005} which does not require compactly supported data. Other work has also looked at the Klein-Gordon-Zakharov model in $1+2$ dimensions \cite{Dong2006, DM20, Ma2020} and the wave map model derived in \cite{Abbrescia-Chen} has been studied in the critical case of $1+2$ dimensions in the recent works \cite{DW-2021,DM20}.
\subsection*{Outline}
We organise the rest of the paper as follows. In Section \ref{sec:pre} we introduce some essential notation and the preliminaries of the hyperboloidal foliation method. In Section \ref{sec:hidden} we present the essential hidden structure within the nonlinearities. Finally, Theorem \ref{thm:main} is proved in Section \ref{sec:proof} by using a classical bootstrap argument.
\section{Preliminaries}\label{sec:pre}
\subsection{Basic notations}\label{sec:pre-hyp}
Our problem is in $\RR^{1+2}$. We denote a spacetime point in $\RR^{1+2}$ by $(t, x) = (x^0, x)$, and its spatial radius by $r:= \sqrt{(x^1)^2 + (x^2)^2}$. Following Klainerman's vector field method \cite{Klainerman86}, we introduce the following vector fields
\begin{align*}
\del_\alpha &:=\del_{x^\alpha} &\text{translations},
\\
L_a &:= t\del_a + x_a \del_t &\text{Lorentz boosts},
\\
\Omega_{ab} &:= x_a \del_b - x_b \del_a &\text{rotations},
\\
L_0 &:= t\del_t + x^a \del_a &\text{scaling}.
\end{align*}
We also use the modified Lorentz boosts, first introduced by Bachelot \cite{Bachelot},
$$
\widehat{L}_a := L_a - {1\over 2} \gamma^0 \gamma^a.
$$
These are chosen to be compatible with the Dirac operator, in the sense that they enjoy the following commutative property
$$
[\widehat{L}_a , i\gamma^\mu \del_\mu] = 0,
$$
where we have used the standard notation for commutators $[A, B] := AB - BA$.
We restrict our study to functions supported within the spacetime region $\Kcal := \{ (t, x) : t \geq 2, \, t \geq |x| + 1 \}$, which we foliate using hyperboloids.
A hyperboloid $\Hcal_s$ with hyperbolic time $s \geq s_0 =2$ is defined by $\Hcal_s := \{ (t, x) : t^2 = |x|^2 + s^2 \}$. We find that any point $(t, x) \in \Kcal \cap \Hcal_s$ with $s\geq 2$ obeys the following relations
$$
|x| \leq t,
\qquad
s\leq t \leq s^2.
$$
Without loss of generality we take $s_0 = 2$, and we use $\Kcal_{[s_0, s_1]} := \bigcup_{s_0 \leq s \leq s_1} \Hcal_s \bigcap \Kcal$ to denote the spacetime region between two hyperboloids $\Hcal_{s_0}, \Hcal_{s_1}$.
We follow LeFloch and Ma \cite{PLF-YM-book} and introduce the semi-hyperboloidal frame
$$
\underdel_0 := \del_t,
\qquad
\underdel_a := {L_a \over t} = {x_a \over t} \del_t + \del_a.
$$
The semi-hyperboloidal frame is adapted to the hyperboloidal foliation since the vectors $\underdel_a$ generate the tangent space to the hyperboloids.
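Indeed, a one-line computation confirms the tangency:
$$
\underdel_a \big( t^2 - |x|^2 \big)
= {x_a \over t} \del_t \big( t^2 - |x|^2 \big) + \del_a \big( t^2 - |x|^2 \big)
= {x_a \over t} \cdot 2t - 2 x_a
= 0,
$$
so each $\underdel_a$ annihilates $s^2 = t^2 - |x|^2$ and is therefore tangent to the level sets $\Hcal_s$.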
The usual partial derivatives, i.e. those in a Cartesian frame, can be expressed in terms of the semi-hyperboloidal frame as
$$
\del_t = \underdel_0,
\qquad
\del_a = -{x_a \over t} \del_t + \underdel_a.
$$
We use $C$ to denote a universal constant, and $A\lesssim B$ to indicate the existence of a constant $C>0$ such that $A\leq BC$. For the ordered set $\{ \Gamma_i\}_{i=1}^7:=\{ \del_{0},\del_1, \del_2, L_1,L_2, \hL_1, \hL_2 \}$, and for any multi-index $I=(\alpha_1, \ldots, \alpha_m)$ of length $|I|:= m$ we denote by $\del^I$ the $m$-th order partial derivative $\del^I:=\Gamma_{\alpha_1}\ldots\Gamma_{\alpha_m}$ (where $1\leq\alpha_i\leq3$). Similar definitions hold for $L^I$, where $4\leq \alpha_i \leq 5$, and for $\hL^I$, where $6 \leq \alpha_i \leq 7$. Spacetime indices are represented by Greek letters while spatial indices are denoted by Roman letters. We adopt Einstein summation convention unless otherwise specified.
\subsection{Energy estimates for wave and Klein-Gordon fields on hyperboloids}
Given a function $\phi = \phi(t, x)$ defined on a hyperboloid $\Hcal_s$, we first define its $\| \cdot \|_{ L^1_f(\Hcal_s)}$ norm as
\bel{eq:L-1-f}
\| \phi \|_{ L^1_f(\Hcal_s)}
= \int_{\Hcal_s} |\phi(t, x)| \, dx
:= \int_{\RR^2} |\phi(\sqrt{s^2 + |x|^2}, x)| \, dx.
\ee
With this, the norm $\| \cdot \|_{ L^p_f(\Hcal_s)}$ for $1\leq p < +\infty$ can be defined analogously. The subscript $f$ indicates that the volume form above is the one induced by the standard flat metric on $\RR^2$.
Following \cite{PLF-YM-book}, we define the $L^2$-based energy of a function $\phi = \phi(t, x)$, scalar-valued or vector-valued, on a hyperboloid $\Hcal_s$
$$
\aligned
\Ecal_m (s, \phi)
&:=
\int_{\Hcal_s} \Big( \sum_\alpha |\del_\alpha \phi|^2 + {x^a \over t} \big( \del_t \phi^* \del_a \phi + \del_a \phi^* \del_t \phi \big) + m^2 |\phi|^2 \Big) \, dx
\\
&=
\int_{\Hcal_s} \Big(|(s/t) \del_t \phi|^2 + \sum_a |\underdel_a \phi|^2 + m^2 |\phi|^2 \Big) \, dx
\\
&=
\int_{\Hcal_s} \Big( \sum_a |(s/t) \del_a \phi|^2 + t^{-2}|\Omega_{12} \phi|^2 + t^{-2} |L_0 \phi|^2 + m^2 |\phi|^2 \Big) \, dx.
\endaligned
$$
Note in the above $m \geq 0$ is a constant.
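The equivalence of the first two expressions can be checked by expanding the squares; as a sketch,
$$
\sum_a |\underdel_a \phi|^2
= {r^2 \over t^2} |\del_t \phi|^2 + \sum_a |\del_a \phi|^2 + {x^a \over t} \big( \del_t \phi^* \del_a \phi + \del_a \phi^* \del_t \phi \big),
$$
so that adding $|(s/t) \del_t \phi|^2 = (1 - r^2/t^2) |\del_t \phi|^2$ recovers the integrand of the first expression.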
From the last two equivalent expressions of the energy functional $\Ecal_m$, we easily obtain
$$
\sum_\alpha \|(s/t) \del_\alpha \phi\|_{L^2_f(\Hcal_s)}
+ \sum_a \| \underdel_a \phi\|_{L^2_f(\Hcal_s)}
\leq C \Ecal_m (s, \phi)^{1/2}.
$$
We also adopt the abbreviation $\Ecal (s, \phi) = \Ecal_0 (s, \phi)$.
We have the following classical energy estimates for wave--Klein-Gordon equations.
\begin{proposition}\label{prop:energy-ineq-KG}
Let $\phi$ be a sufficiently regular function with support in the region $\Kcal_{[s_0, s_1]}$. Then for all $s \in [s_0, s_1]$ we have
$$
\Ecal_m (s, \phi)^{1/2}
\leq \Ecal_m (s_0, \phi)^{1/2} + \int_{s_0}^s \| -\Box \phi + m^2 \phi \|_{L^2_f(\Hcal_\tau)} \, \textrm{d} \tau.
$$
\end{proposition}
\subsection{Energy estimates for Dirac fields on hyperboloids}
Let $\Psi(t,x): \RR^{1+2} \to \CC^2$ be a complex-valued function defined on a hyperboloid $\Hcal_s$. We introduce the energy functionals
\bel{eq:D-fctnal-1}
\aligned
\Ecal^+ (s, \Psi)
&:=\int_{\Hcal_s} \Big( \Psi - {x^a \over t} \gamma^0 \gamma^a \Psi \Big)^* \Big( \Psi - {x^a \over t} \gamma^0 \gamma^a \Psi \Big) \, \textrm{d} x,
\\
\Ecal^D (s, \Psi)
&:=
\int_{\Hcal_s} \Big( \Psi^* \Psi - {x^a \over t} \Psi^* \gamma^0 \gamma^a \Psi \Big) \, \textrm{d} x.
\endaligned
\ee
These were first introduced in \cite{DLW}, and the following useful identity was also stated
\bel{eq:D-fctnal-2}
\Ecal^D (s, \Psi)
=
{1\over 2} \int_{\Hcal_s} {s^2 \over t^2} \Psi^* \Psi \, \textrm{d} x + {1\over 2} \Ecal^+ (s, \Psi).
\ee
From this identity we obtain both the non-negativity of the functional $\Ecal^D (s, \Psi)$ and also the inequalities
$$
\Big\|\frac{s}{t} \Psi\Big\|_{L^2_f(\Hcal_s)}
+ \Big\|\Big (I_2 - \frac{x^a}{t} \gamma^0 \gamma^a\Big)\Psi \Big\|_{L^2_f(\Hcal_s)}
\leq C \Ecal^D (s, \Psi)^{1/2}.
$$
We have the following energy estimates (see \cite[Prop. 2.3]{DLW}).
\begin{proposition}\label{prop:energy-ineq-Dirac}
Let $\Psi(t,x): \RR^{1+2} \to \CC^2$ be a sufficiently regular function with support in the region $\Kcal_{[s_0, s_1]}$. Then for all $s \in [s_0, s_1]$ we have
$$
\Ecal^D (s, \Psi)^{1/2}
\leq \Ecal^D (s_0, \Psi)^{1/2} + \int_{s_0}^s \| i \gamma^\mu \del_\mu \Psi \|_{L^2_f(\Hcal_\tau)} \, \textrm{d} \tau.
$$
\end{proposition}
\subsection{Estimates for null forms and commutators}
We next state a key estimate for null forms in terms of the hyperboloidal coordinates. The proof is standard and can be found in \cite[\textsection 4]{PLF-YM-book}.
\begin{lemma}\label{lem:null}
Let $\phi, \varphi$ be sufficiently regular functions with support in $\Kcal$ and define $Q_0(\phi, \varphi) := \eta^{\alpha\beta}\del_\alpha \phi \del_\beta \varphi$. Then
$$
\big|Q_0(\phi, \varphi) \big|
\lesssim
\Big( \frac{s}{t} \Big)^2 \big| \del_t \phi \cdot \del_t \varphi \big|
+
\sum_a \big( |\underline{\del}_a \phi \cdot \del_t \varphi| + |\del_t \phi \cdot \underline{\del}_a \varphi| \big)
+
\sum_{a, b} \big| \underline{\del}_a \phi \cdot \underline{\del}_b \varphi \big|.
$$
\end{lemma}
We also record the following Leibniz-type property of the $Q_0$ null form:
$$\aligned
L_a Q_0( f, g) &= Q_0(L_a f, g) + Q_0( f, L_a g), \\
\del_\alpha Q_0( f, g) &= Q_0(\del_\alpha f, g) + Q_0( f, \del_\alpha g) .
\endaligned
$$
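The first identity reflects the invariance of the Minkowski metric under the Lorentz boosts, and can be checked directly: using $[L_a, \del_t] = -\del_a$ and $[L_a, \del_b] = -\delta_{ab} \del_t$, the commutator contributions cancel in pairs,
$$
\eta^{\alpha\beta} \big( [L_a, \del_\alpha] f \, \del_\beta g + \del_\alpha f \, [L_a, \del_\beta] g \big)
= \big( \del_a f \, \del_t g - \del_t f \, \del_a g \big) + \big( \del_t f \, \del_a g - \del_a f \, \del_t g \big)
= 0.
$$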
Besides the well-known commutation relations
$$
[\del_\alpha, -\Box + m^2 ]
=
[L_a, -\Box + m^2 ]
=
0,
\qquad
[i\gamma^\alpha \del_\alpha, \widehat{L}_a] = 0,
$$
valid for $m\geq 0$, we also need the following lemma to control some other commutators. A proof can be found in \cite[\textsection3]{PLF-YM-book} and \cite{PLF-YM-cmp}.
\begin{lemma} \label{lem:commu}
Let $\Phi$ and $\phi$ be sufficiently regular $\CC^2$-valued and $\RR$-valued functions, respectively, supported in the region $\mathcal{K}$. Then, for any multi-indices $I,J$, there exist generic constants $C=C(|I|, |J|)>0$ such that
$$
\aligned
&\big| [\del_\alpha, L_a] \Phi \big| + \big| [\del_\alpha, \widehat{L}_a] \Phi \big|
\leq C |\del \Phi|,
\\
&\big| [L_a, L_b] \Phi \big| + \big| [\widehat{L}_a, \widehat{L}_b] \Phi \big|
\leq C \sum_c |L_c \Phi|,
\\
& \big| [\del^I L^J, \del_\alpha] \phi \big|
\leq
C \sum_{|J'|<|J|} \sum_\beta \big|\del_\beta \del^I L^{J'} \phi \big|,
\\
& \big| [\del^I L^J, \underdel_a] \phi \big|
\leq
C \Big( \sum_{| I' |<| I |, | J' |< | J |} \sum_b \big|\underdel_b \del^{I'} L^{J'} \phi \big| + t^{-1} \sum_{| I' |\leq | I |, |J'|\leq |J|} \big| \del^{I'} L^{J'} \phi \big| \Big).
\endaligned
$$
Furthermore there exists a constant $C>0$ such that
$$
\aligned
\big| \del_\alpha (s/t) \big|
&\leq C s^{-1},
\\
\big| L_a (s/t) \big| + \big| L_a L_b (s/t) \big|
&\leq C (s/t).
\endaligned
$$
Recall here that Greek indices $\alpha, \beta \in \{0,1,2\}$ and Roman indices $a,b \in \{1,2\}$.
\end{lemma}
\subsection{Weighted Sobolev inequalities on hyperboloids}
To conclude our preliminary section, we need certain weighted Sobolev inequalities to obtain pointwise decay estimates for the Dirac field and the Klein-Gordon field.
\begin{proposition}\label{prop-standard-Sobolev}
Let $\phi = \phi(t, x)$ be a sufficiently smooth function supported in the region $\Kcal$. Then for all $s \geq 2$ we have
\bel{eq:Sobolev}
\sup_{\Hcal_s} \big| t \, \phi(t, x) \big|
\leq C \sum_{|J| \leq 2} \big\| L^J \phi \big\|_{L^2_f(\Hcal_s)}.
\ee
Furthermore, we also have
\bel{eq:Sobolev2}
\sup_{\Hcal_s} \big| s \, \phi(t, x) \big|
\leq C \sum_{|J| \leq 2} \big\| (s/t) L^J \phi \big\|_{L^2_f(\Hcal_s)}.
\ee
\end{proposition}
We recall that such Sobolev inequalities involving hyperboloids were first introduced by Klainerman \cite{Klainerman85}, and then later appeared in work of H\"ormander \cite{Hormander}. In the above Proposition we have used the version given by LeFloch and Ma in \cite{PLF-YM-book} where only the Lorentz boosts are required.
The estimate \eqref{eq:Sobolev2} follows by combining \eqref{eq:Sobolev} with the commutator estimates of Lemma \ref{lem:commu} and is more convenient to use for wave components.
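More precisely, applying \eqref{eq:Sobolev} to the function $(s/t)\phi$ gives
$$
\sup_{\Hcal_s} \big| s \, \phi(t, x) \big|
= \sup_{\Hcal_s} \big| t \, (s/t) \phi(t, x) \big|
\leq C \sum_{|J| \leq 2} \big\| L^J \big( (s/t) \phi \big) \big\|_{L^2_f(\Hcal_s)}
\leq C \sum_{|J| \leq 2} \big\| (s/t) L^J \phi \big\|_{L^2_f(\Hcal_s)},
$$
where the last step uses $|L_a (s/t)| + |L_a L_b (s/t)| \lesssim s/t$ from Lemma \ref{lem:commu}.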
We also have the following modified Sobolev inequalities for spinors which make use of the modified Lorentz boosts $\hL_a$. The proof follows from the fact that the difference between $L_a$ and $\widehat{L}_a$ is a constant matrix.
\begin{corollary}\label{corol-Dirac-Sobolev}
Let $\Psi = \Psi(t, x)$ be a sufficiently smooth $\CC^2$-valued function supported in the region $\Kcal$. Then for all $s \geq 2$ we have
\bel{eq:Sobolev3}
\sup_{\Hcal_s} \big| t \, \Psi(t, x) \big|
\leq C \sum_{|J| \leq 2} \big\| \widehat{L}^J \Psi \big\|_{L^2_f(\Hcal_s)},
\ee
as well as
\bel{eq:Sobolev4}
\sup_{\Hcal_s} \big| s \, \Psi(t, x) \big|
\leq C \sum_{|J| \leq 2} \big\| (s/t) \widehat{L}^J \Psi \big\|_{L^2_f(\Hcal_s)}.
\ee
\end{corollary}
\section{Hidden structure within the Dirac--Klein-Gordon equations}\label{sec:hidden}
\subsection{Hidden null structures}\label{subsec:hidden-null}
In the present section we discuss three types of hidden structure which are present in the Dirac--Klein-Gordon equations. Identifying this structure plays an important role in our bootstrap argument.
\paragraph{Type 1:}
In this case we are concerned with a Klein-Gordon equation of the type
$$
(-\Box +1)v = w^2 + F_v,
$$
where $w$ satisfies an unspecified semilinear wave equation.
If we set $\widetilde{v}
=
v - w^2,
$
then we have
$$
(-\Box +1)\widetilde{v}
=
F_v - 2 w(-\Box w )+ 2 Q_0(w, w).
$$
Note that the nonlinear transformation allows us to remove the wave-wave interaction $w^2$ at the expense of introducing cubic and null terms. This strategy of treating wave-wave interactions in Klein-Gordon equations was first introduced by Tsutsumi \cite{Tsutsumi} to study the Dirac-Proca equations in $\RR^{1+3}$.
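For completeness, the computation behind this transformation is just the Leibniz rule for the wave operator:
$$
\Box (w^2) = 2 w \, \Box w + 2 \eta^{\alpha\beta} \del_\alpha w \, \del_\beta w
= -2 w (-\Box w) + 2 Q_0(w, w),
$$
so that $(-\Box + 1) \widetilde{v} = \big( w^2 + F_v \big) - w^2 + \Box (w^2)$ yields the stated equation.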
\paragraph{Type 2:}
Next we consider a wave equation with the form
$$
\aligned
-\Box w = w v + F_w,
\endaligned
$$
where $v$ satisfies an unspecified semilinear Klein-Gordon equation.
If we set $
\widetilde{w} = w + w v,
$
then we have
$$
\aligned
- \Box \widetilde{w}
=
F_w + (-\Box w) v + w (-\Box v+v)- 2Q_0(w, v) .
\endaligned
$$
The nonlinear transformation allows us to cancel the wave-Klein-Gordon interaction $w v$ at the expense of introducing null and cubic terms. Such a strategy appears in \cite{DW-JDE} and was inspired by the transformation of Tsutsumi mentioned in Type 1.
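The computation here is analogous; as a sketch,
$$
-\Box \widetilde{w} = -\Box w - \Box (w v)
= (w v + F_w) - w \, \Box v - v \, \Box w - 2 Q_0(w, v),
$$
and regrouping $w v - w \, \Box v = w(-\Box v + v)$ and $-v \, \Box w = v(-\Box w)$ gives the displayed identity.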
\paragraph{Type 3:}
In this final case, we consider a Dirac equation with the structure
$$
i\gamma^\mu \del_\mu \psi = v \psi,
$$
where $v$ satisfies an unspecified semilinear Klein-Gordon equation. If we set $\widetilde{\psi} = \psi - i \gamma^\nu \del_\nu (v \psi)$ then we find
$$
i\gamma^\mu \del_\mu \widetilde{\psi}
=
i\gamma^\mu \del_\mu \big( \psi - i \gamma^\nu \del_\nu (v \psi) \big)
=
i\gamma^\mu \del_\mu \psi
- \big(i\gamma^\mu \del_\mu \big) \big( i \gamma^\nu \del_\nu (v \psi) \big).
$$
By recalling the relation $\Box = \big(i\gamma^\mu \del_\mu \big) \big( i \gamma^\nu \del_\nu \big)$ we have
$$
i\gamma^\mu \del_\mu \widetilde{\psi}
=
i\gamma^\mu \del_\mu \psi - \Box (v \psi)
=
v \psi + \psi(-\Box v) + v (-\Box \psi) - 2 \eta^{\alpha\beta}\del_\alpha v \del_\beta \psi.
$$
Thus we arrive at
$$
i\gamma^\mu \del_\mu \widetilde{\psi}
=
\psi(-\Box v+v) + v ( -\Box \psi ) - 2Q_0(v, \psi).
$$
The nonlinear transformation has allowed us to cancel the Dirac-Klein-Gordon interaction $v \psi$ at the expense of introducing null and cubic terms. Such a transformation has, to the authors' knowledge, not been used before and is clearly inspired by the two prior transformations.
\subsection{Hidden Klein-Gordon structure in the Lorentz scalar $\bar{\psi} \psi$}
We now consider the Dirac-Dirac interaction term
$$
\bar{\psi} \psi = \psi^* \gamma^0 \psi,
$$
and show that it can be decomposed into terms with Klein-Gordon type factors. Roughly speaking, we say a field $\phi$ is of `Klein-Gordon type' if its norm $ \| \phi \|_{L^2_f(\Hcal_s)}$
can be well controlled. Taking into account our model problem \eqref{eq:D-KG}, examples of `Klein-Gordon type' fields include
$$
v,
\quad
\frac{s}{t}\del_\alpha v,
\quad
\frac{s}{t} \psi,
\quad
\psi - \frac{x^a}{t} \gamma^0 \gamma^a \psi.
$$
\begin{definition}
Let $\Psi$ be a $\CC^2$-valued function. We define
$$
(\Psi)_- := \Psi - {x_a \over t} \gamma^0 \gamma^a \Psi,
\qquad
(\Psi)_+ := \Psi + {x_a \over t} \gamma^0 \gamma^a \Psi \,.
$$
\end{definition}
If no confusion arises, we use the abbreviation $\Psi_- = (\Psi)_-$.
\begin{lemma}\label{lem:hidden-KG}
Let $\Psi, \Phi$ be two $\CC^2$-valued functions, then we have
$$
\Psi^* \gamma^0 \Phi
=
{1\over 4} \left( (\Psi_-)^* \gamma^0 \Phi_- + (\Psi_-)^* \gamma^0 \Phi_+
+ (\Psi_+)^* \gamma^0 \Phi_- + \Big(\frac{s}{t}\Big)^2 \Psi^* \gamma^0 \Phi \right).
$$
\end{lemma}
\begin{proof}
First we note
$$
2 \Psi = \Psi_- + \Psi_+,
\qquad
2 \Phi = \Phi_- + \Phi_+,
$$
and thus we have
$$
\aligned
4 \Psi^* \gamma^0 \Phi
&=
\big( (\Psi_-)^* + (\Psi_+)^* \big) \gamma^0 \big( \Phi_- + \Phi_+ \big)
\\
&=
(\Psi_-)^* \gamma^0 \Phi_-
+
(\Psi_-)^* \gamma^0 \Phi_+
+
(\Psi_+)^* \gamma^0 \Phi_-
+
(\Psi_+)^* \gamma^0 \Phi_+ \,.
\endaligned
$$
Next we expand the last term, noting $(\gamma^0\gamma^a)^* = \gamma^0\gamma^a$,
$$
\aligned
(\Psi_+)^* \gamma^0 \Phi_+
&=
\big( \Psi^* + {x_a \over t} \Psi^* \gamma^0 \gamma^a \big) \gamma^0 \big( \Phi + {x_b \over t} \gamma^0 \gamma^b \Phi \big)
\\
&=
\Psi^* \gamma^0 \Phi
+
{x_a \over t}\Psi^* \gamma^0 \gamma^0 \gamma^a \Phi
+
{x_a \over t} \Psi^* \gamma^0 \gamma^a \gamma^0 \Phi
+
{x_a \over t} {x_b \over t} \Psi^* \gamma^0 \gamma^a \gamma^0 \gamma^0 \gamma^b \Phi \,.
\endaligned
$$
Simple calculations give us
$$
\aligned
{x_a \over t} \Psi^* \gamma^0 \gamma^0 \gamma^a \Phi
+
{x_a \over t} \Psi^* \gamma^0 \gamma^a \gamma^0 \Phi
=
0,
\endaligned
$$
and
\bel{eq:1001}
\aligned
{x_a \over t} {x_b \over t} \Psi^* \gamma^0 \gamma^a \gamma^0 \gamma^0 \gamma^b \Phi
=
{x_a x_b \over t^2} \Psi^* \gamma^0 \gamma^a \gamma^b \Phi
=
- {r^2 \over t^2} \Psi^* \gamma^0 \Phi \,.
\endaligned
\ee
Thus we are led to
\bel{eq:900}
\Psi_+^* \gamma^0 \Phi_+
=
\Psi^* \gamma^0 \Phi
-
{r^2 \over t^2} \Psi^* \gamma^0 \Phi
=
{s^2 \over t^2} \Psi^* \gamma^0 \Phi.
\ee
Gathering together the above results finishes the proof.
\end{proof}
\begin{remark}\label{rem:hidden-KG}
It is interesting to consider what other Dirac-Dirac interactions terms possess a useful hidden decomposition as in Lemma \ref{lem:hidden-KG}.
We recall that $\psi^*\gamma^0\psi = \bar{\psi}\psi$ transforms as a Lorentz scalar and $\bar{\psi}\gamma^\mu\psi$ transforms as a Lorentz vector. This suggests $\bar{\psi}\gamma^\mu\psi$ is the next most obvious nonlinearity to consider. However, a calculation shows that $\bar{\psi}\gamma^\mu\psi$ does not possess any useful structure. For example, replicating the argument for $\bar{\psi}\gamma^0\psi$ in the proof of Lemma \ref{lem:hidden-KG}, we find \eqref{eq:1001} instead appears with a positive sign $+(r/t)^2 \Psi^* \gamma^0 \Phi$. This means that we cannot obtain a factor of $(s/t)^2$ as in \eqref{eq:900}.
\end{remark}
Since the Dirac-Dirac interaction term $\psi^*\gamma^0\psi$ appears as a source term in the Klein-Gordon equation, we will need to apply \emph{un}modified Lorentz boosts $L$ to this term. The following lemma surprisingly shows that when distributing these Lorentz boosts across the interaction term, they in fact turn into the modified boosts $\hL$.
\begin{lemma}\label{lem:L-Dirac}
For an arbitrary multi-index $I$ there exists a generic constant $C=C(|I|)>0$ such that
$$
|L^I(\psi^* \gamma^0\psi) |
\leq C
\sum_{|J|+|K|\leq |I|} |(\hL^{J} \psi)^* \gamma^0 \hL^{K}\psi|.
$$
\end{lemma}
\begin{proof}
Let $\Psi, \Phi$ be two $\CC^2$-valued functions.
Since ${}^\ast$ denotes the conjugate transpose, and $(\gamma^0 \gamma^a)^* = (\gamma^0\gamma^a)$, we have the identity
$$
L_a (\Psi^*) = (\hL_a \Psi)^* + \frac12 \Psi^* (\gamma^0\gamma^a)^*
= (\hL_a \Psi)^* - \frac12 \Psi^* \gamma^a\gamma^0,
$$
and thus
\begin{align*}
L_a(\Psi^* \gamma^0 \Phi)
&= L_a(\Psi^*) \gamma^0 \Phi
+ \Psi^* \gamma^0 L_a(\Phi)
\\&=
(\hL_a \Psi)^* \gamma^0 \Phi - \frac12 \Psi^* \gamma^a \gamma^0 \gamma^0 \Phi
+ \Psi^* \gamma^0 \hL_a(\Phi)
+ \frac12 \Psi^* \gamma^0 \gamma^0 \gamma^a \Phi
\\&=
(\hL_a \Psi)^* \gamma^0 \Phi
+ \Psi^* \gamma^0 \hL_a(\Phi),
\end{align*}
where the two middle terms cancel since $\gamma^0 \gamma^0 = I_2$.
Hence
$$
L_a(\bar{\psi}\psi) = L_a(\psi^*\gamma^0\psi)
=
(\hL_a \psi)^*\gamma^0\psi + \psi^*\gamma^0(\hL_a\psi).
$$
Similarly,
$$
L_b L_a(\bar{\psi}\psi)
=
(\hL_b\hL_a\psi)^* \gamma^0 \psi + \psi^* \gamma^0 \hL_b \hL_a \psi
+ (\hL_a\psi)^* \gamma^0 (\hL_b \psi)
+ (\hL_b\psi)^* \gamma^0 (\hL_a \psi) .
$$
Carrying on gives the general pattern.
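Concretely, an induction on the number of boosts yields the exact Leibniz-type identity
$$
L_{a_m} \cdots L_{a_1} \big( \psi^* \gamma^0 \psi \big)
=
\sum \big( \hL^{J} \psi \big)^* \gamma^0 \big( \hL^{K} \psi \big),
$$
where the sum runs over all ways of distributing the $m$ boosts $\hL_{a_1}, \ldots, \hL_{a_m}$ between the two factors (preserving their relative order), so that $|J| + |K| = m$; taking absolute values then gives the stated bound.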
\end{proof}
\section{Proof of Theorem \ref{thm:main}}\label{sec:proof}
\subsection{Bootstrap assumptions and preliminary estimates}
Fix $N\in\mathbb{N}$ a large integer ($N \geq 4$ will work for our argument below). As shown by the local well-posedness theory in \cite[\textsection11]{PLF-YM-book}, initial data posed on the hypersurface $\{t_0=2\}$ and localised in the unit ball $\{x\in\RR^2:|x|\leq 1\}$ can be developed as a solution of \eqref{eq:D-KG} up to the initial hyperboloid $\{ s=s_0\}$ with the smallness \eqref{thm:data-assumpt} preserved. Thus there exists $C_0>0$ such that the following bounds hold for all $|I|+|J|\leq N$:
\bea\label{eq:m1BApre}
\Ecal_1(s_0, \del^I L^J v)^{1/2} + \Ecal^D (s_0, \del^I \hL^J \psi)^{1/2} \leq C_0 \eps.
\eea
Next we assume that the following bounds hold for $s \in [s_0, s_1)$:
\bel{eq:BA-Dirac}
\aligned
\Ecal^D (s, \del^I \widehat{L}^J \psi)^{1/2}
&\leq C_1 \eps,
\quad
&|I| + |J| &\leq N-1,
\\
\Ecal^D (s, \del^I \widehat{L}^J \psi)^{1/2}
&\leq C_1 \eps s^\delta,
\quad
&|I| + |J| &\leq N,
\\
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\leq C_1 \eps,
\quad
&|I| + |J| &\leq N-1,
\\
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\leq C_1 \eps s^\delta,
\quad
&|I| + |J| &\leq N.
\endaligned
\ee
In the above, the constant $C_1 \gg 1$ is to be determined, $\eps \ll 1$ measures the size of the initial data, and we require $C_1 \eps \ll 1$ and $0<\delta \leq \tfrac{1}{10}$. The hyperbolic time $s_1$ is defined as
$$
s_1 := \sup \{ s: s>s_0,\, \eqref{eq:BA-Dirac}\,\, \text{holds} \}.
$$
With the bounds in \eqref{eq:BA-Dirac}, we obtain the following preliminary $L^2$ and $L^\infty$ estimates.
\begin{proposition}\label{prop:L2}
Let the estimates in \eqref{eq:BA-Dirac} hold, then for $s \in [s_0, s_1)$ we have
$$
\aligned
\big\| (s/t) \del^I \widehat{L}^J \psi \big\|_{L^2_f(\Hcal_s)}
+
\big\| (s/t) \del^I L^J \psi \big\|_{L^2_f(\Hcal_s)}
+
\big\| (\del^I \widehat{L}^J \psi)_- \big\|_{L^2_f(\Hcal_s)}
&\lesssim
C_1 \eps,
&|I| + |J| &\leq N-1,
\\
\big\| (s/t) \del^I \widehat{L}^J \psi \big\|_{L^2_f(\Hcal_s)}
+
\big\| (s/t) \del^I L^J \psi \big\|_{L^2_f(\Hcal_s)}
+
\big\| (\del^I \widehat{L}^J \psi)_- \big\|_{L^2_f(\Hcal_s)}
&\lesssim
C_1 \eps s^\delta,
&|I| + |J| &\leq N,
\\
\big\| (s/t) \del \del^I L^J v \big\|_{L^2_f(\Hcal_s)}
+
\big\| (s/t) \del^I L^J \del v \big\|_{L^2_f(\Hcal_s)}
+
\big\| \del^I L^J v \big\|_{L^2_f(\Hcal_s)}
&\lesssim
C_1 \eps,
&|I| + |J| &\leq N-1,
\\
\big\| (s/t) \del \del^I L^J v \big\|_{L^2_f(\Hcal_s)}
+
\big\| (s/t) \del^I L^J \del v \big\|_{L^2_f(\Hcal_s)}
+
\big\| \del^I L^J v \big\|_{L^2_f(\Hcal_s)}
&\lesssim
C_1 \eps s^\delta,
&|I| + |J| &\leq N.
\endaligned
$$
\end{proposition}
\begin{proof}
The $\psi$ estimates follow from the definition of the energy functional $\Ecal^D(s, \psi)$, the decomposition \eqref{eq:D-fctnal-2} and the fact that the difference between $L_a$ and $\widehat{L}_a$ is a constant matrix. The estimates for the Klein-Gordon field follow from the definition of the energy functional $\Ecal_1(s, v)$ and the commutator estimates in Lemma \ref{lem:commu}.
\end{proof}
Next we derive the following pointwise estimates.
\begin{proposition}\label{prop:Linfty}
Let the estimates in \eqref{eq:BA-Dirac} hold, then for $s \in [s_0, s_1)$ we have
$$
\aligned
\big| \del^I \widehat{L}^J \psi \big|
+
\big| \del^I L^J \psi \big|
+
(t/s)\big| (\del^I \widehat{L}^J \psi)_- \big|
&\lesssim
C_1 \eps s^{-1},
\qquad
&|I| + |J| \leq N-3,
\\
\big| \del^I \widehat{L}^J \psi \big|
+
\big| \del^I L^J \psi \big|
+
(t/s) \big| (\del^I \widehat{L}^J \psi)_- \big|
&\lesssim
C_1 \eps s^{-1+\delta},
\qquad
&|I| + |J| \leq N-2,
\\
\big| \del \del^I L^J v \big|
+
\big| \del^I L^J \del v \big|
+
(t/s)\big| \del^I L^J v \big|
&\lesssim
C_1 \eps s^{-1},
\qquad
&|I| + |J| \leq N-3,
\\
\big| \del \del^I L^J v \big|
+
\big| \del^I L^J \del v \big|
+
(t/s) \big| \del^I L^J v \big|
&\lesssim
C_1 \eps s^{-1+\delta},
\qquad
&|I| + |J| \leq N-2.
\endaligned
$$
\end{proposition}
\begin{proof}
To show the estimates for the Klein-Gordon components $v$ and $\del v$ we combine the estimates from Proposition \ref{prop:L2} with the Sobolev estimates from Proposition \ref{prop-standard-Sobolev}.
To prove the estimates for $\del^I \hL^J \psi$, and thus $\del^I L^J \psi$, we combine Proposition \ref{prop:L2} with the Dirac-type Sobolev estimates from Corollary \ref{corol-Dirac-Sobolev}. Finally to prove the estimates for $(\psi)_-$ and derivatives thereof, we note $\gamma^0\gamma^0=I_2$ in order to show the commutator identity
$$
[\hatL_b,\gamma^0-(x^a/t)\gamma^a]\psi
= -(x^b/t)(\gamma^0-(x^a/t)\gamma^a)\psi
= -(x^b/t) \gamma^0 (\psi)_-\,.
$$
This implies
$$
\aligned
{[}\hatL_b,I_2-(x^a/t)\gamma^0\gamma^a ] \psi
&=[\hatL_b,\gamma^0(\gamma^0-(x^a/t)\gamma^a)]\psi
\\
&= [\hatL_b,\gamma^0]\gamma^0\psi_- + \gamma^0 [\hatL_b,\gamma^0-(x^a/t)\gamma^a]\psi
\\
&= -\big( \gamma^0\gamma^b+(x^b/t)\big)(\psi)_- \,.
\endaligned
$$
We can control this error term since $|x^b/t| \leq 1$ in the cone.
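For the reader's convenience, here is a sketch of the computation behind the first commutator identity: using $L_b(x^a/t) = \delta^a_b - x^a x_b / t^2$, together with $[\gamma^0 \gamma^b, \gamma^0] = -2\gamma^b$ and $[\gamma^0 \gamma^b, \gamma^a] = -2 \delta^{ab} \gamma^0$, one finds
$$
[\hatL_b, \gamma^0 - (x^a/t) \gamma^a]
= -\Big( \delta^a_b - {x^a x_b \over t^2} \Big) \gamma^a + \gamma^b - {x^b \over t} \gamma^0
= -{x^b \over t} \Big( \gamma^0 - {x^a \over t} \gamma^a \Big).
$$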
Using these calculations, one can compute
$$
\aligned
{[}\hatL_c\hatL_b,I_2-(x^a/t)\gamma^0\gamma^a ]\psi
&= -(\gamma^0\gamma^c+(x^c/t))(\hatL_b\psi)_- -(\gamma^0\gamma^b+(x^b/t))(\hatL_c\psi)_-
\\
&\quad +\big[ (x^b/t) \gamma^0 \gamma^c + (x^c/t)\gamma^0 \gamma^b + 2 (x^cx^b)/t^2\big] \psi_-.
\endaligned
$$
Thus, using the first Sobolev estimate in Corollary \ref{corol-Dirac-Sobolev},
$$
\sup_{\Hcal_s} |t \psi_-| \lesssim \sum_{|J|\leq 2} \| \hatL^J \psi_-\|_{L^2_f(\Hcal_s)}
= \sum_{|J|\leq 2} \| \hatL^J (I_2-(x^a/t)\gamma^0\gamma^a)\psi\|_{L^2_f(\Hcal_s)}
\lesssim \sum_{|J|\leq 2} \| (\hatL^J \psi)_-\|_{L^2_f(\Hcal_s)}.
$$
The estimates for $(\del^I \hatL^J \psi)_-$ follow in the same way and the proof is complete.
\end{proof}
A key feature of massless fields is their additional $(t-r)$-decay compared to massive fields. As discussed in the introduction, the Dirac equation in \eqref{eq:D-KG} implies the wave equation appearing in \eqref{eq:W-KG}. Thus we expect the Dirac field $\psi$ to enjoy some improved $(t-r)$-decay. However, we cannot find a way to obtain the improved $(t-r)$-decay from the wave equation appearing in \eqref{eq:W-KG}, since the slowly-decaying nonlinearities prevent an application of the usual tools (such as conformal energy estimates or the pointwise estimates developed in \cite{Ma2017a}).
The following important lemma crucially allows us to obtain extra $(t-r)$-decay for the $\del_t \psi$ component, which we use later when estimating null forms involving $\psi$.
\begin{lemma}\label{lem:extra-t-r}
Let $\Psi$ be a $\CC^2$-valued function solving
$$
i\gamma^\mu \del_\mu \Psi = F_\Psi,
$$
and supported in $\Kcal$. Then we have the following estimate
\bel{eq:del-psi}
\big|\del_t \Psi \big|
\lesssim
{t\over t-r} \Big(\sum_a |\underline{\del}_a \Psi | + |F_\Psi| \Big).
\ee
\end{lemma}
\begin{proof}
We express the Dirac operator $i\gamma^\mu \del_\mu$ in the semi-hyperboloidal frame to get
$$
i \big( \gamma^0 - (x^a/t) \gamma^a \big) \del_t \Psi + i \gamma^a \underline{\del}_a \Psi = F_\Psi.
$$
Multiplying both sides by $\big( \gamma^0 - (x^b/t) \gamma^b \big)$ yields
$$
i \big( \gamma^0 - (x^b/t) \gamma^b \big) \big( \gamma^0 - (x^a/t) \gamma^a \big) \del_t \Psi + i \big( \gamma^0 - (x^b/t) \gamma^b \big) \gamma^a \underline{\del}_a \Psi = \big( \gamma^0 - (x^b/t) \gamma^b \big) F_\Psi.
$$
Simple calculations involving properties of the Dirac matrices imply
$$
\big( \gamma^0 - (x^b/t) \gamma^b \big) \big( \gamma^0 - (x^a/t) \gamma^a \big)
= (s^2/t^2) \, I_2.
$$
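Indeed, expanding and using the anticommutation relations, one checks
$$
\big( \gamma^0 - (x^b/t) \gamma^b \big) \big( \gamma^0 - (x^a/t) \gamma^a \big)
= I_2 - {x^a \over t} \big( \gamma^0 \gamma^a + \gamma^a \gamma^0 \big) + {x^a x^b \over t^2} \gamma^a \gamma^b
= \Big( 1 - {r^2 \over t^2} \Big) I_2,
$$
since $\gamma^0 \gamma^a + \gamma^a \gamma^0 = -2\eta^{0a} I_2 = 0$ and the symmetrized product gives ${x^a x^b \over t^2} \gamma^a \gamma^b = -{x^a x^b \over t^2} \eta^{ab} I_2 = -{r^2 \over t^2} I_2$.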
This leads us to
$$
i (s^2 /t^2) \del_t \Psi + i \big( \gamma^0 - (x^b/t) \gamma^b \big) \gamma^a \underline{\del}_a \Psi = \big( \gamma^0 - (x^b/t) \gamma^b \big) F_\Psi,
$$
which further implies
$$
\big| (s^2 /t^2) \del_t \Psi \big|
\leq
\big| \big( \gamma^0 - (x^b/t) \gamma^b \big) \gamma^a \underline{\del}_a \Psi \big|
+
\big| \big( \gamma^0 - (x^b/t) \gamma^b \big) F_\Psi \big|
\lesssim
\sum_a \big| \underline{\del}_a \Psi \big|
+
\big| F_\Psi \big|.
$$
Finally we arrive at \eqref{eq:del-psi} by recalling the following relations, which hold within the cone $\Kcal$,
$$
s^2 = t^2- r^2 = (t-r) (t+r),
\qquad
t\leq t+r \leq 2t.
$$
\end{proof}
We note that Lemma \ref{lem:extra-t-r} is inspired by a similar result in the context of wave equations obtained in \cite{Ma2017a}.
With the aid of this Lemma, we can now prove better estimates for the $\del \psi$ component.
\begin{proposition}\label{prop:psi-t}
Under the bootstrap assumptions in \eqref{eq:BA-Dirac}, the following weighted $L^2$-estimates are valid for $s \in [s_0, s_1)$
$$
\aligned
\big\| (t-r) (s/t) \del \del^I L^J \psi \big\|_{L^2_f(\Hcal_s)}
+
\big\| (t-r) (s/t) \del \del^I \widehat{L}^J \psi \big\|_{L^2_f(\Hcal_s)}
&\lesssim
C_1 \eps s^{\delta},
\quad |I| + |J| \leq N-1,
\endaligned
$$
and the following pointwise estimates also hold for $s \in [s_0, s_1)$
$$
\aligned
\big| \del \del^I L^J \psi \big|
+
\big| \del \del^I \widehat{L}^J \psi \big|
&\lesssim
C_1 \eps (t-r)^{-1} s^{-1+\delta},
\quad |I| + |J| \leq N-3.
\endaligned
$$
\end{proposition}
\begin{proof}
We first apply $\del^I \widehat{L}^J$, with $|I|+|J| \leq N-3$, to the $\psi$ equation in \eqref{eq:D-KG} to find
$$
i \gamma^\mu \del_\mu \del^I \widehat{L}^J \psi
= \del^I \widehat{L}^J \big(v \psi\big).
$$
Then by Lemma \ref{lem:extra-t-r} we obtain
$$
\aligned
\big| \del_t \del^I \widehat{L}^J \psi \big|
&\lesssim
{t\over t-r} \Big(\sum_a \big|\underline{\del}_a \del^I \widehat{L}^J \psi \big| + \big|\del^I \widehat{L}^J (v \psi)\big| \Big)
\\
&\lesssim
{t\over t-r} \Big(t^{-1} \sum_a \big|L_a \del^I \widehat{L}^J \psi \big| + \big|\del^I \widehat{L}^J (v \psi)\big| \Big)
\\
&\lesssim
C_1 \eps (t-r)^{-1} s^{-1+\delta},
\endaligned
$$
in which we used the pointwise decay results of Proposition \ref{prop:Linfty}. The estimates $\big| \del_t \del^I L^J \psi \big|$ are a simple consequence of the above, while the case $\big| \del_a \del^I \widehat{L}^J \psi \big|$ (with $a=1, 2$) can be seen from the relation
$$
\del_a \del^I \widehat{L}^J \psi
= -{x_a \over t} \del_t \del^I \widehat{L}^J \psi + \underline{\del}_a \del^I \widehat{L}^J \psi.
$$
Finally the $L^2$--type estimates follow in a similar way, by combining Lemma \ref{lem:extra-t-r} with Propositions \ref{prop:L2} and \ref{prop:Linfty}.
\end{proof}
\subsection{Improved estimates for the Klein-Gordon field}
In order to improve the energy bounds for the Klein-Gordon field, we apply two different arguments for the lower-order energy case and for the top-order energy case. For the lower-order case, we rely on a nonlinear transformation (of Type 1 in Section \ref{subsec:hidden-null}) to remove the slowly-decaying term $\psi^* \gamma^0 \psi$. This is at the expense of introducing null and cubic terms yet nevertheless allows us to obtain uniform energy bounds.
On the other hand, when deriving the refined bound for the top-order Klein-Gordon energy the nonlinear transformation is invalid due to issues with regularity. Thus in this case we need to utilise the hidden Klein-Gordon structure of the nonlinearities as shown in Lemma \ref{lem:hidden-KG} and Lemma \ref{lem:L-Dirac}. Using this we can improve the energy bounds with the aid of the linear behavior of $\psi$ in the lower-order case.
\begin{lemma}\label{lem:transf-v}
Let $\widetilde{v} := v - \psi^* \gamma^0 \psi$. Then $\widetilde{v}$ solves the following Klein-Gordon equation
\bel{eq:KG-new}
-\Box \widetilde{v} + \widetilde{v}
= i \del_\nu(v\psi^*) \gamma^\nu \gamma^0 \psi + i \psi^* \gamma^0 \gamma^\nu \del_\nu (v\psi) + 2 \eta^{\alpha\beta}\del_\alpha \psi^* \gamma^0 \del_\beta \psi.
\ee
\end{lemma}
\begin{proof}
We apply the Klein-Gordon operator to $\widetilde{v}$ to obtain
$$
\aligned
-\Box \widetilde{v} + \widetilde{v}
=
-\Box \big(v - \psi^* \gamma^0 \psi\big) + \big(v - \psi^* \gamma^0 \psi\big)
=
-\Box v + v - \psi^* \gamma^0 \psi + \Box \big( \psi^* \gamma^0 \psi \big).
\endaligned
$$
We also have
$$
\aligned
-\Box \psi
=
-i\gamma^\nu \del_\nu \big( i\gamma^\mu \del_\mu \psi \big)
=
-i\gamma^\nu \del_\nu (v \psi).
\endaligned
$$
Finally recalling the original equations in \eqref{eq:D-KG} leads us to \eqref{eq:KG-new}.
\end{proof}
\begin{lemma}\label{lem:unif-est-tildev}
Under the bootstrap assumptions in \eqref{eq:BA-Dirac}, the following estimates are valid for $s \in [s_0, s_1)$
$$
\aligned
\Ecal_1 (s, \del^I L^J \widetilde{v})^{1/2}
&\lesssim \eps + (C_1 \eps)^2,
\qquad
&|I| + |J| \leq N-1.
\endaligned
$$
\end{lemma}
\begin{proof}
Acting $\del^I L^J$ with $|I| + |J| \leq N-1$ on equation \eqref{eq:KG-new} produces
$$
-\Box \del^I L^J \widetilde{v} + \del^I L^J \widetilde{v}
=\del^I L^J \big( i \del_\nu(v\psi^*) \gamma^\nu \gamma^0 \psi + i \psi^* \gamma^0 \gamma^\nu \del_\nu (v\psi) + 2 \del_\alpha \psi^* \gamma^0 \del^\alpha \psi\big).
$$
The energy estimates of Proposition \ref{prop:energy-ineq-KG} then imply
$$
\aligned
\Ecal_1 (s, \del^I L^J \widetilde{v})^{1/2}
&\lesssim
\Ecal_1 (s_0, \del^I L^J \widetilde{v})^{1/2}
\\
&+
\int_{s_0}^s \Big\| \del^I L^J \big( i \del_\nu(v\psi^*) \gamma^\nu \gamma^0 \psi + i \psi^* \gamma^0 \gamma^\nu \del_\nu (v\psi) + 2 \del_\alpha \psi^* \gamma^0 \del^\alpha \psi\big) \Big\|_{L^2_f(\Hcal_\tau)} \, \textrm{d}\tau.
\endaligned
$$
We estimate each of the terms. We start with the cubic terms and we find
$$
\aligned
\Big\| \del^I &L^J \big( i \del_\nu(v\psi^*) \gamma^\nu \gamma^0 \psi + i \psi^* \gamma^0 \gamma^\nu \del_\nu (v\psi) \big) \Big\|_{L^2_f(\Hcal_\tau)}
\\
\lesssim
&\sum_{\substack{|I_1|+|I_2|+|I_3|\\+|J_1|+|J_2|+|J_3|\\\leq N-1}} \Big\| \big| \del^{I_1} L^{J_1} \psi \big| \big| \del^{I_2} L^{J_2} \psi\big| \big| \del^{I_3} L^{J_3} \del v \big| +
\big| \del^{I_1} L^{J_1} \psi \big| \big| \del^{I_2} L^{J_2} \del \psi\big| \big| \del^{I_3} L^{J_3} v \big| \Big\|_{L^2_f(\Hcal_\tau)}.
\endaligned
$$
The commutator estimates in Lemma \ref{lem:commu} lead us to
$$
\aligned
\Big\| \del^I &L^J \big( i \del_\nu(v\psi^*) \gamma^\nu \gamma^0 \psi + i \psi^* \gamma^0 \gamma^\nu \del_\nu (v\psi) \big) \Big\|_{L^2_f(\Hcal_\tau)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N\\ |I_2|+|I_3|+|J_2|+|J_3|\leq N-2}} \Big(\big\| (\tau/t) \del^{I_1} L^{J_1} \psi \big\|_{L^2_f(\Hcal_\tau)} + \big\| \del^{I_1} L^{J_1} v \big\|_{L^2_f(\Hcal_\tau)} \Big) \times
\\
&\hskip3cm \Big( \big\| (t/\tau)\big|\del^{I_2} L^{J_2} \psi\big| \big| \del^{I_3} L^{J_3} v \big| + \big| \del^{I_2} L^{J_2} \psi\big| \big| \del^{I_3} L^{J_3} \psi \big| \big\|_{L^\infty(\Hcal_\tau)} \Big)
\\
&\lesssim (C_1 \eps)^3 \tau^{-2+2\delta},
\endaligned
$$
in which the assumption $N\geq 4$ ensures the first inequality, and we used the $L^2$-type as well as $L^\infty$ estimates listed in Propositions \ref{prop:L2}--\ref{prop:Linfty} in the last step.
Next, we want to bound the null form. By the commutator estimates in Lemma \ref{lem:commu} and the null form estimates in Lemma \ref{lem:null}, we have
$$
\aligned
\big\| \del^I &L^J \big(\del_\alpha \psi^* \gamma^0 \del^\alpha \psi\big) \big\|_{L^2_f(\Hcal_\tau)}
\\
&\lesssim
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}} \big\| \del_\alpha \del^{I_1} L^{J_1} \psi^* \gamma^0 \del^\alpha \del^{I_2} L^{J_2} \psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&\lesssim
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}} \Big\| (\tau/t)^2 |\del_t \del^{I_1} L^{J_1}\psi| |\del_t \del^{I_2} L^{J_2}\psi| + \sum_a |\underline{\del}_a \del^{I_1} L^{J_1} \psi| |\del_t \del^{I_2} L^{J_2}\psi|
\\
&\hskip3cm + \sum_{a, b} |\underline{\del}_a \del^{I_1} L^{J_1} \psi| |\underline{\del}_b \del^{I_2} L^{J_2} \psi| \Big\|_{L^2_f(\Hcal_\tau)}.
\endaligned
$$
We now estimate each of the terms. By the results in Propositions \ref{prop:L2}, \ref{prop:psi-t} we find
\begin{align*}
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
&\big\| (\tau/t)^2 |\del_t \del^{I_1} L^{J_1}\psi| |\del_t \del^{I_2} L^{J_2}\psi| \big\|_{L^2_f(\Hcal_\tau)}
\\
\lesssim
&\sum_{\substack{|I_3|+|J_3|\leq N-3 \\ |I_4|+|J_4|\leq N-1}} \big\| (\tau/t) \del_t \del^{I_3} L^{J_3}\psi \big\|_{L^\infty(\Hcal_\tau)} \big\| (\tau/t) \del_t \del^{I_4} L^{J_4}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
\lesssim
& (C_1 \eps)^2\big\| (\tau/t) (t-r)^{-1} \tau^{-1+\delta} \big\|_{L^\infty(\Hcal_\tau \bigcap \Kcal)}
\\
\lesssim
&(C_1 \eps)^2\tau^{-2+\delta},
\end{align*}
in which again the assumption $N\geq 4$ guarantees the first inequality, and in the last step we used the observation (recall $\Kcal = \{ (t, x) : |x| \leq t-1 \}$)
$$
(\tau/t) (t-r)^{-1} \lesssim \tau^{-1}.
$$
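Indeed, this is a one-line computation: on $\Hcal_\tau$ we have $\tau^2=t^2-r^2$, and hence within $\Kcal$
$$
\frac{\tau}{t}\,\frac1{t-r}
=\frac{\tau\,(t+r)}{t\,(t^2-r^2)}
=\frac{t+r}{t}\,\frac1\tau
\leq 2\tau^{-1}.
$$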
We then proceed by estimating
\begin{align*}
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
&\big\| |\underline{\del}_a \del^{I_1} L^{J_1} \psi| |\del_t \del^{I_2} L^{J_2}\psi| \big\|_{L^2_f(\Hcal_\tau)}
\\
=
&\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}\big\| t^{-1} |L_a \del^{I_1} L^{J_1} \psi| |\del_t \del^{I_2} L^{J_2}\psi| \big\|_{L^2_f(\Hcal_\tau)}
\\
\lesssim
&\sum_{\substack{|I_1|+|J_1|\leq N-3\\ |I_2|+|J_2|\leq N-1}} \big\| \tau^{-1} L_a \del^{I_1} L^{J_1} \psi \big\|_{L^\infty(\Hcal_\tau)} \big\| (\tau/t) \del_t \del^{I_2} L^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&\qquad+
\sum_{\substack{|I_1|+|J_1|\leq N-1\\ |I_2|+|J_2|\leq N-3}} \big\| (\tau/t) L_a \del^{I_1} L^{J_1} \psi \big\|_{L^2_f(\Hcal_\tau)} \big\| \tau^{-1} \del_t \del^{I_2} L^{J_2}\psi \big\|_{L^\infty(\Hcal_\tau)}
\\
\lesssim
&\sum_{\substack{|I_1|+|J_1|\leq N-2\\ |I_2|+|J_2|\leq N-1}} \big\| \tau^{-1} \del^{I_1} L^{J_1} \psi \big\|_{L^\infty(\Hcal_\tau)} \big\| (\tau/t) \del_t \del^{I_2} L^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&\qquad+
\sum_{\substack{|I_1|+|J_1|\leq N\\ |I_2|+|J_2|\leq N-3}} \big\| (\tau/t) \del^{I_1} L^{J_1} \psi \big\|_{L^2_f(\Hcal_\tau)} \big\| \tau^{-1} \del_t \del^{I_2} L^{J_2}\psi \big\|_{L^\infty(\Hcal_\tau)},
\end{align*}
and we used the estimates for commutators in the last step.
Thus we obtain
$$
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}} \big\| |\underline{\del}_a \del^{I_1} L^{J_1} \psi| |\del_t \del^{I_2} L^{J_2}\psi| \big\|_{L^2_f(\Hcal_\tau)}
\lesssim
(C_1 \eps)^2 \tau^{-2+2\delta}.
$$
In the same way, we can also get
$$
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}} \big\| |\underline{\del}_a \del^{I_1} L^{J_1} \psi| |\underline{\del}_b \del^{I_2} L^{J_2}\psi| \big\|_{L^2_f(\Hcal_\tau)}
\lesssim
(C_1 \eps)^2 \tau^{-2+2\delta}.
$$
In conclusion we find, for $|I|+|J|\leq N-1$,
$$
\Ecal_1 (s, \del^I L^J \widetilde{v})^{1/2}
\lesssim
\eps
+
(C_1 \eps)^2 \int_{s_0}^s \tau^{-2+2\delta} \, \textrm{d}\tau
\lesssim
\eps + (C_1 \eps)^2\,.
$$
\end{proof}
The following lemma is the key to closing the top-order energy bootstraps for the Klein-Gordon field.
\begin{lemma}\label{lem:est-Fv}
Let the estimates in \eqref{eq:BA-Dirac} hold; then for $s \in [s_0, s_1)$ we have
$$
\aligned
\| \del^IL^J (\psi^* \gamma^0 \psi) \|_{L^2_f(\Hcal_s)}
&\lesssim
(C_1 \eps)^2 s^{-1+\delta},\qquad |I|+|J|\leq N.
\endaligned
$$
\end{lemma}
\begin{proof}
By Lemma \ref{lem:L-Dirac} we find
$$
\del^I L^J \big(\psi^* \gamma^0 \psi \big)
=
\sum_{\substack{|I_1|+|J_1|+|I_2|\\+|J_2|=N}}\big(\del^{I_1} \widehat{L}^{J_1} \psi \big)^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big).
$$
Next we apply Lemma \ref{lem:hidden-KG} to reveal the hidden Klein-Gordon structure of the nonlinearity:
$$
\aligned
\del^I L^J \big(\psi^* \gamma^0 \psi \big)
&=
{1\over 4}\sum_{\substack{|I_1|+|J_1|+|I_2|\\+|J_2|=N}} \Big( \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_-
+
\big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+
\\
&\hskip2cm+
\big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_+{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_-
+
\Big( \frac{\tau}{t}\Big)^2\big(\del^{I_1} \widehat{L}^{J_1} \psi \big)^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)
\Big).
\endaligned
$$
We recall that $\big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-$ can be regarded as a Klein-Gordon component in the sense that it enjoys the same $L^2$-type and $L^\infty$ estimates as Klein-Gordon components, while $\big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_+$ enjoys the same good bounds as $\del^{I_1} \widehat{L}^{J_1} \psi$.
We proceed to bound
$$
\aligned
\big\| \del^I &L^J \big(\psi^* \gamma^0 \psi \big) \big\|_{L^2_f(\Hcal_s)}
\\
\lesssim
&\sum_{\substack{|I_1|+|J_1|+|I_2|\\+|J_2|=N}}\Big( \big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_- \big\|_{L^2_f(\Hcal_s)}
+
\big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+ \big\|_{L^2_f(\Hcal_s)}
\\
&\hskip2.5cm+
\big\| (s/t)^2 \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big) \big\|_{L^2_f(\Hcal_s)}
\Big).
\endaligned
$$
We first show
$$
\aligned
\sum_{\substack{|I_1|+|J_1|+|I_2|\\+|J_2|=N}}
&\big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_- \big\|_{L^2_f(\Hcal_s)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-3\\ |I_2|+|J_2|\leq N}} \big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_- \big\|_{L^\infty(\Hcal_s)} \big\| \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_- \big\|_{L^2_f(\Hcal_s)}
\\
& \quad +
\sum_{\substack{|I_1|+|J_1|\leq N-2\\ |I_2|+|J_2|\leq N-1}} \big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_- \big\|_{L^\infty(\Hcal_s)} \big\| \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_- \big\|_{L^2_f(\Hcal_s)}
\\&
\lesssim (C_1 \eps)^2 s^{-1+\delta},
\endaligned
$$
in which the assumption $N\geq 4$ was used in the first inequality.
Similarly, we also have
\begin{align*}
\sum_{\substack{|I_1|+|J_1|+|I_2|\\+|J_2|=N}}
&\big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-{}^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+ \big\|_{L^2_f(\Hcal_s)}
\\
\lesssim
&\sum_{\substack{|I_1|+|J_1|\leq N-3\\ |I_2|+|J_2|\leq N}} \big\| (t/s) \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-\big\|_{L^\infty(\Hcal_s)} \big\| (s/t) \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+ \big\|_{L^2_f(\Hcal_s)}
\\
&\quad +\sum_{\substack{|I_1|+|J_1|\leq N-2\\ |I_2|+|J_2|\leq N-1}} \big\| (t/s) \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-\big\|_{L^\infty(\Hcal_s)} \big\| (s/t) \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+ \big\|_{L^2_f(\Hcal_s)}
\\
&\quad +
\sum_{\substack{|I_1|+|J_1|\leq N\\ |I_2|+|J_2|\leq N-3}} \big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-\big\|_{L^2_f(\Hcal_s)} \big\| (s/t) \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+ \big\|_{L^\infty(\Hcal_s)}
\\
&\quad +
\sum_{\substack{|I_1|+|J_1|\leq N-1\\ |I_2|+|J_2|\leq N-2}} \big\| \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)_-\big\|_{L^2_f(\Hcal_s)} \big\| (s/t) \big( \del^{I_2} \widehat{L}^{J_2} \psi \big)_+ \big\|_{L^\infty(\Hcal_s)}
\\
\lesssim
& (C_1 \eps)^2 s^{-1+\delta}.
\end{align*}
We then estimate
$$
\aligned
\sum_{\substack{|I_1|+|J_1|+|I_2|\\+|J_2|=N}}
&\big\| (s/t)^2 \big(\del^{I_1} \widehat{L}^{J_1} \psi \big)^* \gamma^0 \big( \del^{I_2} \widehat{L}^{J_2} \psi \big) \big\|_{L^2_f(\Hcal_s)}
\\
\lesssim
&\sum_{\substack{|I_1|+|J_1|\leq N-3\\ |I_2|+|J_2|\leq N}}
\big\| (s/t) \del^{I_1} \widehat{L}^{J_1} \psi \big\|_{L^\infty(\Hcal_s)} \big\| (s/t) \big( \del^{I_2} \widehat{L}^{J_2} \psi \big) \big\|_{L^2_f(\Hcal_s)}
\\
&+\sum_{\substack{|I_1|+|J_1|\leq N-2\\ |I_2|+|J_2|\leq N-1}}
\big\| (s/t) \del^{I_1} \widehat{L}^{J_1} \psi \big\|_{L^\infty(\Hcal_s)} \big\| (s/t) \big( \del^{I_2} \widehat{L}^{J_2} \psi \big) \big\|_{L^2_f(\Hcal_s)}
\\
\lesssim
& (C_1 \eps)^2 s^{-1+\delta}.
\endaligned
$$
Gathering the above estimates, we obtain
$$
\big\| \del^I L^J \big(\psi^* \gamma^0 \psi \big) \big\|_{L^2_f(\Hcal_s)}
\lesssim (C_1 \eps)^2 s^{-1+\delta},
\qquad
|I|+|J| \leq N.
$$
\end{proof}
\begin{proposition}\label{prop:KG-improved}
Assuming the estimates in \eqref{eq:BA-Dirac} hold, for $s \in [s_0, s_1)$ we have
$$
\aligned
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\lesssim \eps + (C_1 \eps)^2,
\qquad
&|I| + |J| \leq N-1,
\\
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\lesssim \eps + (C_1 \eps)^2 s^\delta,
\qquad
&|I| + |J| \leq N.
\endaligned
$$
\end{proposition}
\begin{proof}
We first show the improved energy estimates in the case of $|I| + |J| \leq N$.
We act with $\del^I L^J$ on the Klein-Gordon equation in \eqref{eq:D-KG} to get
$$
-\Box \del^I L^J v + \del^I L^J v
= \del^I L^J \big(\psi^* \gamma^0 \psi \big).
$$
The energy estimates of Proposition \ref{prop:energy-ineq-KG} and the key result of Lemma \ref{lem:est-Fv} imply
$$
\aligned
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\lesssim
\Ecal_1 (s_0, \del^I L^J v)^{1/2}
+ \int_{s_0}^s \big\| \del^I L^J \big(\psi^* \gamma^0 \psi \big) \big\|_{L^2_f(\Hcal_\tau)} \, \textrm{d} \tau
\\
&\lesssim
\eps + (C_1 \eps)^2 \int_{s_0}^s \tau^{-1+\delta} \, \textrm{d} \tau
\\
&\lesssim
\eps + (C_1 \eps)^2 s^\delta.
\endaligned
$$
We next turn to the uniform energy bounds for $|I|+|J| \leq N-1$. Due to the uniform estimates of Lemma \ref{lem:unif-est-tildev}, we just need to study the difference between $v$ and $\widetilde{v}$. This is a quadratic term $\psi^* \gamma^0 \psi$ which, for $|I|+|J| \leq N-1$, is controlled using Lemma \ref{lem:est-Fv} as
$$
\aligned
\Ecal_1 \big(s, &\, \del^I L^J \big(\psi^* \gamma^0 \psi \big)\big)^{1/2}
\\
\lesssim
&\big\| (s/t) \del_t \del^I L^J \big(\psi^* \gamma^0 \psi \big) \big\|_{L^2_f(\Hcal_s)}
+
\sum_a \big\| \underline{\del}_a \del^I L^J \big(\psi^* \gamma^0 \psi \big) \big\|_{L^2_f(\Hcal_s)}
+
\big\| \del^I L^J \big(\psi^* \gamma^0 \psi \big) \big\|_{L^2_f(\Hcal_s)}
\\
\lesssim & (C_1 \eps)^2 s^{-1+\delta}.
\endaligned
$$
In conclusion we find, for $|I|+|J|\leq N-1$,
$$
\Ecal_1(s, \del^I L^J v)^{1/2}
\lesssim
\Ecal_1(s, \del^I L^J \widetilde{v})^{1/2}
+
\Ecal_1\big(s, \del^I L^J \big(\psi^* \gamma^0 \psi \big)\big)^{1/2}
\lesssim
\eps+ (C_1 \eps)^2.
$$
\end{proof}
\subsection{Improved estimates for the Dirac field}
In order to improve the energy bounds for the Dirac field, we also use two different arguments for the lower-order energy case and for the top-order energy case. For the lower-order case, our strategy is to introduce the new variable
$$
\widetilde{\psi} = \psi - i\gamma^\nu \del_\nu (v \psi),
$$
and derive a uniform energy bound for its lower-order energy. This is a nonlinear transformation of Type 3 in Section \ref{subsec:hidden-null} and it allows us to remove the slowly-decaying nonlinearity $v \psi$ at the expense of introducing null and cubic terms. After obtaining lower-order uniform energy bounds for $\widetilde{\psi}$ we can then easily get improved bounds for the lower-order energy of $\psi$ since the difference between $\psi, \widetilde{\psi}$ is a quadratic term.
Similar to the strategy for the Klein-Gordon field, this transformation to $\widetilde{\psi}$ is not valid at top-order. Nevertheless with the linear behavior of the fields $\psi, v$ in the bootstrap assumptions \eqref{eq:BA-Dirac}, we can also close the bootstrap for the top-order energy estimates.
\begin{lemma}\label{lemma:EoM-tilde-psi}
Let $\widetilde{\psi} := \psi - i\gamma^\nu \del_\nu (v \psi)$. Then $\widetilde{\psi}$ solves the following Dirac equation
\bel{eq:Dirac-new}
i\gamma^\mu \del_\mu \widetilde{\psi}
=
\big(\psi^* \gamma^0 \psi\big) \psi - i \gamma^\nu v \del_\nu (v \psi) - 2Q_0(v,\psi).
\ee
\end{lemma}
\begin{proof}
We act the Dirac operator on $\widetilde{\psi}$ to get
$$
\aligned
i\gamma^\mu \del_\mu \widetilde{\psi}
&=
i\gamma^\mu \del_\mu \big( \psi - i\gamma^\nu \del_\nu (v \psi) \big)
=
i\gamma^\mu \del_\mu \psi - \Box (v \psi)
\\
&=
i\gamma^\mu \del_\mu \psi + (-\Box v) \psi + v (-\Box \psi) - 2 \eta^{\alpha\beta}\del_\alpha v \del_\beta \psi.
\endaligned
$$
We recall
$$
\aligned
-\Box \psi
=
-i\gamma^\nu \del_\nu \big( i\gamma^\mu \del_\mu \psi \big)
=
-i\gamma^\nu \del_\nu (v \psi).
\endaligned
$$
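Inserting this together with $i\gamma^\mu \del_\mu \psi = v\psi$ and $-\Box v = \psi^* \gamma^0 \psi - v$, both taken from \eqref{eq:D-KG}, into the identity above yields
$$
i\gamma^\mu \del_\mu \widetilde{\psi}
=
v\psi + \big(\psi^* \gamma^0 \psi - v\big)\psi - i \gamma^\nu v \del_\nu (v \psi) - 2 Q_0(v,\psi),
$$
in which the slowly-decaying quadratic terms $\pm v\psi$ cancel.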
Thus we arrive at the desired result \eqref{eq:Dirac-new}.
\end{proof}
\begin{lemma}\label{lem:unif-est-tildepsi}
Let the estimates in \eqref{eq:BA-Dirac} hold; then for $s \in [s_0, s_1)$ we have
$$
\Ecal^D (s, \del^I \widehat{L}^J \widetilde{\psi})^{1/2}
\lesssim \eps + (C_1 \eps)^2,
\qquad
|I| + |J| \leq N-1.
$$
\end{lemma}
\begin{proof}
From Proposition \ref{prop:energy-ineq-Dirac} we see that we need to control
\bel{eq:344-goal}
\sum_{|I|+|J|\leq N-1} \int_{s_0}^s \| \del^I\hL^J (i\gamma^\mu \del_\mu \widetilde{\psi}) \|_{L^2_f(\Hcal_\tau)} \textrm{d} \tau.
\ee
From Lemma \ref{lemma:EoM-tilde-psi}, in particular \eqref{eq:Dirac-new}, we see that there are three terms in this integrand. The first term, provided $N \geq 4$, is
\bel{eq:788}
\| \del^I\hL^J (\psi^*\gamma^0\psi \cdot \psi) \|_{L^2_f(\Hcal_\tau)}
\lesssim
\Big[ \sum_{\substack{|I_1|+|J_1|\leq N-1\\|I_2| +|J_2|\leq N-2}} + \sum_{\substack{|I_1|+|J_1|\leq N-2\\|I_2| +|J_2|\leq N-1}} \Big]
\|\del^{I_1} L^{J_1}(\psi^*\gamma^0\psi) \del^{I_2}\hL^{J_2} \psi\|_{L^2_f(\Hcal_\tau)}.
\ee
The first expression in \eqref{eq:788} can be treated using Lemma \ref{lem:est-Fv} together with Propositions \ref{prop:L2} and \ref{prop:Linfty}. We obtain
$$
\aligned
\sum_{\substack{|I_1|+|J_1|\leq N-1\\|I_2| +|J_2|\leq N-2}}
&\|\del^{I_1} L^{J_1}(\psi^*\gamma^0\psi) \del^{I_2}\hL^{J_2} \psi\|_{L^2_f(\Hcal_\tau)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-1\\|I_2| +|J_2|\leq N-2}}
\|\del^{I_1} L^{J_1}(\psi^*\gamma^0\psi)\|_{L^2_f(\Hcal_\tau)} \| \del^{I_2}\hL^{J_2} \psi\|_{L^\infty(\Hcal_\tau)}
\\
&\lesssim (C_1\eps)^3 \tau^{-2+2\delta}.
\endaligned
$$
In order to study the second term in \eqref{eq:788}, we again use Lemma \ref{lem:hidden-KG} and Lemma \ref{lem:L-Dirac} to find
$$
\aligned
\sum_{|I|+|J|\leq N-2}\| &(t/\tau)\del^I L^J (\psi^*\gamma^0\psi) \|_{L^\infty(\Hcal_\tau)}
\\&\lesssim
\sum_{\substack{|I_1|+|J_1|+|I_2|\\ +|J_2|\leq N-2}} \Big[
\|(t/\tau) (\del^{I_1} \hL^{J_1}\psi)_-\cdot (\del^{I_2} \hL^{J_2} \psi)_-\|_{L^\infty(\Hcal_\tau)}
\\&\qquad
+ \|(t/\tau) (\del^{I_1} \hL^{J_1}\psi)_-\cdot (\del^{I_2} \hL^{J_2} \psi)_+\|_{L^\infty(\Hcal_\tau)}
+ \| (\tau/t)\del^{I_1} \hL^{J_1}\psi \cdot \del^{I_2} \hL^{J_2} \psi\|_{L^\infty(\Hcal_\tau)} \Big]
\\&
\lesssim
(C_1 \eps)^2 \tau^{-2+\delta}.
\endaligned
$$
The final line above follows from Proposition \ref{prop:Linfty}.
Thus the second term in \eqref{eq:788} can be estimated by
$$
\sum_{\substack{|I_1|+|J_1|\leq N-2\\|I_2| +|J_2|\leq N-1}}
\|(\tau/t) \del^{I_2}\hL^{J_2} \psi\|_{L^2_f(\Hcal_\tau)} \|(t/\tau)\del^{I_1} L^{J_1}(\psi^*\gamma^0\psi)\|_{L^\infty(\Hcal_\tau)}
\lesssim (C_1\eps)^3 \tau^{-2+2\delta}.
$$
The second term in the integrand of \eqref{eq:344-goal} is cubic of the rough form $v \psi \del v + v^2 \psi$. This is very straightforward to estimate (compared to the cubic terms appearing in Lemma \ref{lem:unif-est-tildev}) since both terms involve two factors of the faster decaying components $v$ and/or $\del v$. Thus we omit the details and merely state the conclusion:
$$
\sum_{|I|+|J|\leq N-1} \| \del^I\hL^J (v\cdot \del (v \psi)) \|_{L^2_f(\Hcal_\tau)}
\lesssim (C_1\eps)^3 \tau^{-2+2\delta}.
$$
The final term in the integrand of \eqref{eq:344-goal} is a null form. By the null form estimate from Lemma \ref{lem:null} and the commutator estimate from Lemma \ref{lem:commu} we have, for $|I|+|J|\leq N-1$,
\begin{align}
\| &\del^I\hL^J Q_0(v,\psi) \|_{L^2_f(\Hcal_\tau)} \notag
\\
&\lesssim
\sum_{\substack{|I_1|+ |J_1| + |I_2| \\ +|J_2|\leq N-1} }
\Big[
\big\|(\tau/t)^2\del_t \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
+ \sum_a \big\|\underdel_a \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)} \notag
\\&\quad
+ \sum_a \big\|\del_t\del^{I_1}L^{J_1}v \cdot \underdel_a \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
+ \sum_{a,b} \big\|\underdel_a \del^{I_1}L^{J_1}v \cdot \underdel_b \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\Big]. \label{eq:544-null}
\end{align}
We study each term in \eqref{eq:544-null} separately, using the condition $N \geq 4$ to distribute derivatives. We use Propositions \ref{prop:L2}, \ref{prop:Linfty} and \ref{prop:psi-t}, together with the commutator estimates, to obtain
$$
\aligned
&\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
\big\|(\tau/t)^2\del_t \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&\qquad\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-3 \\ |I_2|+|J_2|\leq N-1}}
\big\|(\tau/t)^2\del_t \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\&\qquad\quad
+ \sum_{\substack{|I_1|+|J_1|\leq N-1 \\ |I_2|+|J_2|\leq N-3}}
\big\|(\tau/t)^2\del_t \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&\qquad\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-2 \\ |I_2|+|J_2|\leq N-1}} \big\| (\tau/t)^2 (t-r)^{-1}|(t/\tau) \del^{I_1} L^{J_1}v|\big\|_{L^\infty(\Hcal_\tau)} \big\| (\tau/t) (t-r)\del_t \del^{I_2} \hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\&\qquad\quad
+\sum_{\substack{|I_1|+|J_1|\leq N-1 \\ |I_2|+|J_2|\leq N-3}} \big\| (\tau/t) \del_t \del^{I_1} L^{J_1}v\big\|_{L^2_f(\Hcal_\tau)}
\big\| (\tau/t) \del_t \del^{I_2} \hL^{J_2}\psi \big\|_{L^\infty(\Hcal_\tau)}
\\
&\qquad\lesssim
(C_1 \eps)^2 \tau^\delta \big\| (\tau/t)^2 (t-r)^{-1} \tau^{-1+\delta} \big\|_{L^\infty(\Hcal_\tau \cap \Kcal)}
+(C_1 \eps)^2 \big\| (\tau/t) (t-r)^{-1} \tau^{-1+\delta} \big\|_{L^\infty(\Hcal_\tau \cap \Kcal)}
\\
&\qquad\lesssim
(C_1 \eps)^2 \tau^{-2+2\delta}.
\endaligned
$$
Note in the penultimate step we used again the observation that in $\Kcal = \{ (t, x) : |x| \leq t-1 \}$
$$
(\tau/t) (t-r)^{-1} \lesssim \tau^{-1}.
$$
We then turn to the next expression in \eqref{eq:544-null}, using Propositions \ref{prop:L2} and \ref{prop:Linfty}, together with the commutator estimates of Lemma \ref{lem:commu}, to find
\begin{align*}
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
&\big\|\underdel_a \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&= \sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
\big\|t^{-1} L_a \del^{I_1}L^{J_1}v \cdot \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-2 \\ |I_2|+|J_2|\leq N-1}} \big\|\tau^{-1} \del^{I_1}L^{J_1}v \big\|_{L^\infty(\Hcal_\tau)}
\big\|(\tau/t) \del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\\&\quad
+\sum_{\substack{|I_1|+|J_1|\leq N \\ |I_2|+|J_2|\leq N-3}} \big\| \del^{I_1}L^{J_1}v \big\|_{L^2_f(\Hcal_\tau)} \big\|t^{-1}\del_t \del^{I_2}\hL^{J_2}\psi \big\|_{L^\infty(\Hcal_\tau)}
\\
&\lesssim
(C_1 \eps)^2 \tau^{-2+2\delta}.
\end{align*}
In a similar way we have
$$
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
\big\|\del_t\del^{I_1}L^{J_1}v \cdot \underdel_a \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\lesssim
(C_1 \eps)^2 \tau^{-2+2\delta}.
$$
The final term in \eqref{eq:544-null} is the easiest, with
$$
\sum_{\substack{|I_1|+|I_2|+|J_1|\\+|J_2|\leq N-1}}
\big\|\underdel_a \del^{I_1}L^{J_1}v \cdot \underdel_b \del^{I_2}\hL^{J_2}\psi \big\|_{L^2_f(\Hcal_\tau)}
\lesssim
(C_1 \eps)^2 \tau^{-3+2\delta}.
$$
Putting all this together we find
$$
\Ecal^D (s, \del^I \widehat{L}^J \widetilde{\psi})^{1/2}
\lesssim
\Ecal^D (s_0, \del^I \widehat{L}^J \widetilde{\psi})^{1/2}
+
(C_1\eps)^2 \int_{s_0}^s \tau ^{-2+2\delta}\textrm{d} \tau
\lesssim
\eps + (C_1\eps)^2 .
$$
\end{proof}
\begin{proposition}\label{prop:Dirac-improved}
Let the estimates in \eqref{eq:BA-Dirac} hold; then for $s \in [s_0, s_1)$ we have
$$
\aligned
\Ecal^D (s, \del^I \widehat{L}^J \psi)^{1/2}
&\lesssim \eps + (C_1 \eps)^2,
\qquad
&|I| + |J| \leq N-1,
\\
\Ecal^D (s, \del^I \widehat{L}^J \psi)^{1/2}
&\lesssim \eps + (C_1 \eps)^2 s^\delta,
\qquad
&|I| + |J| \leq N.
\endaligned
$$
\end{proposition}
\begin{proof}
We begin with the estimate at top-order. For $|I| + |J| \leq N$, and given $N \geq 4$, we have
$$
\aligned
\big\| \del^I \hL^J \big(v \psi\big) \big\|_{L^2_f(\Hcal_s)}
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N \\ |I_2|+|J_2|\leq N-3}} \| \del^{I_1} L^{J_1} v \|_{L^2_f(\Hcal_s)} \|\del^{I_2} \hL^{J_2} \psi\|_{L^\infty(\Hcal_s)}
\\&\quad
+ \sum_{\substack{|I_1|+|J_1|\leq N-3 \\ |I_2|+|J_2|\leq N}} \| (t/s)\del^{I_1} L^{J_1} v \|_{L^\infty(\Hcal_s)} \|(s/t)\del^{I_2} \hL^{J_2} \psi\|_{L^2_f(\Hcal_s)}
\\&\quad
+ \sum_{\substack{|I_1|+|J_1|\leq N-1 \\ |I_2|+|J_2|\leq N-2}} \| \del^{I_1} L^{J_1} v \|_{L^2_f(\Hcal_s)} \|\del^{I_2} \hL^{J_2} \psi\|_{L^\infty(\Hcal_s)}
\\&\quad
+ \sum_{\substack{|I_1|+|J_1|\leq N-2 \\ |I_2|+|J_2|\leq N-1}} \| (t/s)\del^{I_1} L^{J_1} v \|_{L^\infty(\Hcal_s)} \|(s/t)\del^{I_2} \hL^{J_2} \psi\|_{L^2_f(\Hcal_s)}
\\&\lesssim
(C_1\eps)^2 s^{-1+\delta}.
\endaligned
$$
Note in the final step we carefully used the uniform energy bounds from Proposition \ref{prop:L2} and the sharp pointwise estimates from Proposition \ref{prop:Linfty} so as not to pick up $s^{2\delta}$ growth.
Thus the energy inequality of Proposition \ref{prop:energy-ineq-Dirac} implies
$$
\aligned
\Ecal^D (s, \del^I \hL^J \psi)^{1/2}
&\lesssim
\Ecal^D (s_0, \del^I \hL^J \psi)^{1/2}
+ \int_{s_0}^s \big\| \del^I \hL^J \big(v \psi\big) \big\|_{L^2_f(\Hcal_\tau)} \, \textrm{d} \tau
\\
&\lesssim
\eps + (C_1 \eps)^2\int_{s_0}^s \tau^{-1+\delta} \, \textrm{d} \tau
\\
&\lesssim
\eps + (C_1 \eps)^2 s^\delta.
\endaligned
$$
We next turn to the uniform energy bounds when $|I|+|J| \leq N-1$. Due to the uniform estimates of Lemma \ref{lem:unif-est-tildepsi}, we just need to study the difference between $\psi$ and $\widetilde{\psi}$, which is the quadratic term $i\gamma^\nu\del_\nu(v \psi)$. So, for $|I|+|J| \leq N-1$, we find
$$
\aligned
\Ecal^D\big(s, &\, \del^I \hL^J \del(v \psi)\big)^{1/2}
\\
&\lesssim
\big\| (s/t)\del^I \hL^J \del(v \psi) \big\|_{L^2_f(\Hcal_s)}
+
\big\|\big(I_2-(x^a/t)\gamma^0\gamma^a\big)\del^I \hL^J \del(v \psi) \big\|_{L^2_f(\Hcal_s)}
\\
&\lesssim
\big\| \del^I \hL^J \big( v \del \psi + \psi \del v\big) \big\|_{L^2_f(\Hcal_s)},
\endaligned
$$
where in the final line we used the fact that the solution is supported in $\Kcal$ so that $|x^a/t|\leq 1$. We find, for $|I|+|J| \leq N-1$,
\begin{align*}
\big\|& \del^I \hL^J \big( v \del \psi\big)\big\|_{L^2_f(\Hcal_s)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-1\\ |I_2|+|J_2|\leq N-3}}
\big\| \del^{I_1} L^{J_1} v \del^{I_2} \hL^{J_2}\del \psi \big\|_{L^2_f(\Hcal_s)}
+ \sum_{\substack{|I_1|+|J_1|\leq N-3\\ |I_2|+|J_2|\leq N-1}}
\| \del^{I_1} L^{J_1} v \del^{I_2} \hL^{J_2}\del \psi \|_{L^2_f(\Hcal_s)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-1\\ |I_2|+|J_2|\leq N-2}}
\big\| \del^{I_1} L^{J_1} v \|_{L^2_f(\Hcal_s)} \| \del^{I_2} \hL^{J_2}\psi \|_{L^\infty(\Hcal_s)}
\\&\qquad
+ \sum_{\substack{|I_1|+|J_1|\leq N-3 \\ |I_2|+|J_2|\leq N}}
\|(s/t) \del^{I_2} \hL^{J_2} \psi \|_{L^2_f(\Hcal_s)} \|(t/s) \del^{I_1} L^{J_1} v \|_{L^\infty(\Hcal_s)}
\\&\lesssim
(C_1\eps)^2 s^{-1+\delta}.
\end{align*}
Note that in the above we used the condition $N \geq 4$, the commutator estimates and Propositions \ref{prop:L2} and \ref{prop:Linfty}.
Similarly
$$
\aligned
\big\|& \del^I \hL^J \big( \psi \del v\big)\big\|_{L^2_f(\Hcal_s)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N-1\\ |I_2|+|J_2|\leq N-3}}
\big\| \del^{I_1} L^{J_1} \del v \del^{I_2} \hL^{J_2}\psi \big\|_{L^2_f(\Hcal_s)}
+ \sum_{\substack{|I_1|+|J_1|\leq N-3\\ |I_2|+|J_2|\leq N-1}}
\| \del^{I_1} L^{J_1}\del v \del^{I_2} \hL^{J_2} \psi \|_{L^2_f(\Hcal_s)}
\\
&\lesssim
\sum_{\substack{|I_1|+|J_1|\leq N\\ |I_2|+|J_2|\leq N-3}}
\big\| \del^{I_1} L^{J_1} v\|_{L^2_f(\Hcal_s)} \| \del^{I_2} \hL^{J_2}\psi \|_{L^\infty(\Hcal_s)}
\\&\qquad
+ \sum_{\substack{|I_1|+|J_1|\leq N-2\\|I_2|+|J_2|\leq N-1}}
\|(s/t) \del^{I_2} \hL^{J_2} \psi \|_{L^2_f(\Hcal_s)} \|(t/s) \del^{I_1} L^{J_1} v \|_{L^\infty(\Hcal_s)}
\\&\lesssim
(C_1\eps)^2 s^{-1+\delta}.
\endaligned
$$
In conclusion we find, for $|I|+|J|\leq N-1$,
$$
\Ecal^D(s, \del^I \hL^J \psi)^{1/2}
\lesssim
\Ecal^D(s, \del^I \hL^J \widetilde{\psi})^{1/2}
+
\Ecal^D\big(s, \del^I \hL^J \big(i\gamma^\nu\del_\nu(v \psi)\big)\big)^{1/2}
\lesssim
\eps+ (C_1 \eps)^2.
$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}.]
The results of Propositions \ref{prop:KG-improved} and \ref{prop:Dirac-improved} imply that for a fixed $0<\delta\ll1$ and $\mathbb{N}\ni N \geq 4$ there exists an $\eps_0>0$ sufficiently small such that for all $0<\eps\leq \eps_0$ we have
\bel{eq:BA-improved}
\aligned
\Ecal^D (s, \del^I \widehat{L}^J \psi)^{1/2}
&\leq \tfrac12 C_1 \eps,
\quad
&|I| + |J| &\leq N-1,
\\
\Ecal^D (s, \del^I \widehat{L}^J \psi)^{1/2}
&\leq \tfrac12 C_1 \eps s^\delta,
\quad
&|I| + |J| &\leq N,
\\
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\leq \tfrac12 C_1 \eps,
\quad
&|I| + |J| &\leq N-1,
\\
\Ecal_1 (s, \del^I L^J v)^{1/2}
&\leq \tfrac12 C_1 \eps s^\delta,
\quad
&|I| + |J| &\leq N.
\endaligned
\ee
We can now conclude the bootstrap argument. By classical local existence results for nonlinear hyperbolic PDEs, the bounds \eqref{eq:BA-Dirac} hold on some nonempty interval of hyperbolic time, so $s_1>s_0$; moreover, if $s_1<+\infty$, then one of the inequalities in \eqref{eq:BA-Dirac} must be an equality at $s=s_1$. However, we see from \eqref{eq:BA-improved} that, by choosing $C_1$ sufficiently large and $\eps_0$ sufficiently small, the bounds \eqref{eq:BA-Dirac} are in fact strictly improved. This implies that $s_1=+\infty$, and thus the local solution is in fact a global one.
Finally the decay estimates \eqref{eq:sharp-decay} follow from \eqref{eq:BA-improved} combined with the Sobolev estimates \eqref{eq:Sobolev} and \eqref{eq:Sobolev4}.
\end{proof}
\begin{remark}
We conclude with a remark concerning a more general DKG system allowing for the full range of masses:
\bel{eq:D-KG2}
\aligned
i \gamma^\mu \del_\mu \psi + m_\psi \psi
&= v \psi,
\\
-\Box v + m_v^2 v
&= \psi^*\gamma^0 \psi.
\endaligned
\ee
Our original model \eqref{eq:D-KG} corresponds to the case $m_\psi = 0, m_v \neq 0$. If instead we consider the case of $m_\psi \neq 0, m_v = 0$, then by squaring the Dirac equation in \eqref{eq:D-KG2} we are led to the following system
\bel{eq:D-KG3}
\aligned
-\Box \psi +m_\psi^2\psi
&= 2m_\psi v \psi - v^2 \psi -i(\del_\nu v) \gamma^\nu\psi,
\\
-\Box v
&= \psi^*\gamma^0 \psi.
\endaligned
\ee
Surprisingly, \eqref{eq:D-KG3} is a very difficult system to treat due to the slowly-decaying quadratic term $2m_\psi v \psi$. In $\RR^{1+3}$ one would require ideas from \cite{DW-JDE}, while the analysis remains open in $\RR^{1+2}$. The case $m_\psi = m_v = 0$ is similarly very difficult to treat in $\RR^{1+2}$ and $\RR^{1+3}$.
At this point it is useful to recall the work of Bachelot \cite{Bachelot}. Bachelot proved small-data global existence for systems like \eqref{eq:D-KG2} in $\RR^{1+3}$ for the cases $m_\psi = 0, m_v \neq 0$ and $m_\psi \neq 0, m_v = 0$. However, \emph{crucially}, his Dirac nonlinearity was of the form $v\gamma^5\psi$, instead of $v I_2 \psi$ as we considered in \eqref{eq:D-KG2}. There is a special property of the $\gamma^5$ matrix\footnote{We recall the $\gamma^5$ matrix is defined by $\gamma^5 := i\gamma^0\gamma^1\gamma^2\gamma^3$ in dimension $1+3$.}, namely $\gamma^\mu \gamma^5+\gamma^5 \gamma^\mu =0$ for $\mu = 0,1,2,3$, which implies the following simplification in Bachelot's Dirac equation
\bel{eq:D-KG4}
\aligned
i \gamma^\mu \del_\mu \psi + m_\psi \psi
&= v \gamma^5 \psi \quad \Rightarrow -\Box \psi + m_\psi^2 \psi = - v^2 \psi -i(\del_\nu v) \gamma^\nu\gamma^5\psi.
\endaligned
\ee
We emphasise the absence of the term $v\psi$ in \eqref{eq:D-KG4}. However, in $1+2$ dimensions $\gamma^5=i\gamma^0\gamma^1\gamma^2$ does not satisfy the same special anticommutation property: in a standard representation (e.g.\ $\gamma^0=\sigma^3$, $\gamma^1=i\sigma^1$, $\gamma^2=i\sigma^2$, with $\sigma^i$ the Pauli matrices) one finds $i\gamma^0\gamma^1\gamma^2=I_2$, which commutes with all $\gamma^\mu$, and so the structure appearing in \eqref{eq:D-KG4} is unavailable to us. Finally we note that the case $m_\psi \neq 0, m_v \neq 0$ has been treated in $\RR^{1+2}$, subject to a mass resonance restriction, in \cite{TsutsumiHiggs}.
\end{remark}
\subsubsection*{Acknowledgements}
SD and ZW are grateful to Philippe LeFloch (Sorbonne) for introducing them to the hyperboloidal foliation method. The authors also thank Pieter Blue (Edinburgh), and SD thanks additionally Zhen Lei (Fudan), for their constant encouragement.
The nonabelian nature of Quantum Chromodynamics (QCD)---the theory of the strong interactions---makes it possible to form bound states of gauge bosons, the so-called glueballs \cite{Fritzsch:1972jv,Fritzsch:1975tx,Jaffe:1975fd}.
In pure Yang-Mills theory, these are in fact the only possible particle states
and their spectrum has been studied
in detail in lattice gauge theory \cite{Morningstar:1999rf,Chen:2005mg,Loan:2005ff}.
Glueballs are obtained for a range of quantum numbers $J^{PC}$,
where $J$ denotes total spin, $P$ parity, and $C$ charge conjugation; the lowest
glueball has the quantum numbers of the vacuum, $J^{PC}=0^{++}$.
In the presence of quarks, the situation becomes complicated because
glueballs can mix with $q\bar q$ states of the same quantum numbers.
Lattice simulations of QCD including quarks are more difficult, but
recent unquenched calculations
continue to indicate the existence of glueballs \cite{Gregory:2012hu}
with the lightest glueball around 1600--1800 MeV.
The identification of glueballs in experimental data, however, remains elusive
\cite{Bugg:2004xu,Klempt:2007cp,Crede:2008vw,Ochs:2013gi}
and will be one of the objectives of the PANDA experiment at FAIR \cite{Lutz:2009ff,Wiedner:2011mf}.
Experimentally, the $0^{++}$ meson sector turns out to be
particularly challenging.
Listings of the Particle Data Group (PDG) \cite{Agashe:2014kda}\ contain five isospin-zero scalar states in
the energy region below 2 GeV: $f_0(500)$ or $\sigma$, $f_0(980)$, $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$,
with the last two rather narrow states being frequently discussed as
potential candidates for states with dominant glueball content
\cite{Amsler:1995td,Lee:1999kv,Close:2001ga,Amsler:2004ps,Close:2005vf,Giacosa:2005zt,Albaladejo:2008qa,Mathieu:2008me,Janowski:2011gt,Janowski:2014ppa}.
Alternative scenarios with broad glueball resonances around 1 GeV and
mixing with the $\sigma$ meson are also discussed
in the literature \cite{Narison:1996fm,Minkowski:1998mf,Minkowski:2002nf,Ochs:2006rb}.
A similarly unclear situation is found
in the case of the lightest tensor glueball: lattice simulations obtain a
mass between 2.3 GeV and 2.6 GeV \cite{Morningstar:1999rf,Chen:2005mg,Loan:2005ff,Gregory:2012hu}
while the PDG lists $f_2(1950)$, $f_2(2010)$, $f_2(2300)$ and $f_2(2340)$ as established states
around and above 2 GeV, with several needing confirmation
[e.g., the narrow $f_J(2220)$ state that may have spin two or four, or may not exist at all].
Various approaches to low-energy QCD have also been applied to this region \cite{Burakovsky:1999ug,Cotanch:2005ja,Anisovich:2004vj,Anisovich:2005iv,
Giacosa:2005bw}, but a clear identification of a tensor glueball in the
meson spectrum is missing.
A central difficulty is the paucity of theoretical predictions of glueball couplings and
decay rates from first principles. Lattice gauge theory provides information on Euclidean
correlators and the extraction of real-time quantities is involved and fraught with uncertainties.
Glueballs are particularly difficult to study when dynamical quarks are included.
A completely different approach to strongly coupled gauge theories has been developed over the last
one and a half decades in the form of anti-de Sitter/conformal field theory (AdS/CFT) correspondence,
or, more generally, gauge-string duality \cite{Maldacena:1997re,Aharony:1999ti}.
The AdS/CFT correspondence posits a map of correlation functions of
gauge-invariant composite operators at a large number of colors $N_c$ and large 't Hooft coupling
to perturbations of certain backgrounds in classical (super-) gravity.
Already in 1998, Witten \cite{Witten:1998zw} proposed a top-down construction of such a duality based
on type-IIA supergravity, where both supersymmetry and conformal invariance are broken
such
that at low energies, below a Kaluza-Klein mass scale $M_{\rm KK}$,
the dual gauge theory is four-dimensional large-$N_c$ Yang-Mills theory.
The calculation of glueball spectra from type-IIA supergravity was in fact one of the first
applications of ``holographic QCD'' \cite{hep-th/9805129,Csaki:1998qr,
Hashimoto:1998if,Csaki:1999vb,Constable:1999gb,Brower:2000rp}. (Glueballs have
subsequently been studied further in more phenomenological, bottom-up holographic models
in, e.g., Ref.~\cite{BoschiFilho:2002ta,Colangelo:2007pt,Forkel:2007ru}.)
Quarks in the fundamental representation can be added to the AdS/CFT correspondence
in the form of probe flavor D-branes \cite{Karch:2002sh}. In type-IIA superstring theory there
are D-branes of even spatial dimensionality, and the first attempt to include quarks
in Witten's model of nonsupersymmetric Yang-Mills theory was based on D6 branes
\cite{Kruczenski:2003uq}. This made it possible to study
chiral symmetry breaking in the case
of one flavor, which however did not permit a correct generalization to
flavor number $N_f>1$, an issue that was solved in 2004 by Sakai and Sugimoto \cite{Sakai:2004cn,Sakai:2005yt} by adding pairs of D8 and anti-D8 branes
intersecting the color D4 branes of the Witten model. This model
has been remarkably successful in reproducing various features of
low-energy QCD while being firmly rooted in string theory with a minimal
set of parameters---for given $N_c$ and $N_f$, the only dimensionless parameter is
the 't Hooft coupling $\lambda$ at the Kaluza-Klein scale $M_{\rm KK}$.
In this paper we shall use the Witten-Sakai-Sugimoto model to study
glueball-meson interactions and to calculate glueball decay rates from
the resulting effective interaction Lagrangians. This was first
carried out by Hashimoto, Tan, and Terashima in Ref.~\cite{Hashimoto:2007ze},
whose calculations we repeat (with important corrections) and extend.
In addition to the lowest glueball mode in the Witten model, which happens
to be rather different from the dilaton mode that plays this role in
simpler bottom-up models of holographic QCD, we consider the (predominantly
but not purely) dilatonic mode of the Witten model, as well as the tensor glueball
and their excitations. We calculate decay rates into two and four pions, and
we confirm, by evaluating the rate quantitatively, the prediction of
Ref.~\cite{Hashimoto:2007ze} that scalar glueball decay into four $\pi^0$
mesons is suppressed. This rate receives contributions from multi-glueball interactions
as well as from higher-order terms in the DBI action of the D8 branes,
with the DBI terms yielding the dominant piece.
One of the main conclusions of our work is that the lowest gravitational mode
in the Witten-Sakai-Sugimoto model appears to be ill suited to model
the lowest glueball of QCD as found in
lattice simulations, while the dilatonic mode has reasonable properties
regarding its mass and decay rates.
The lowest mode either has to be discarded on grounds of
its exotic polarization along the compactified dimension of the type-IIA background
or perhaps could find a physical role as a pure-glue component of the $\sigma$-meson
\cite{Narison:1996fm} (which itself
is absent in the Sakai-Sugimoto model) or the ``red dragon'' of Ref.~\cite{Minkowski:1998mf}.
We also make quantitative comparisons
with experimental data on glueball candidates among scalar mesons at or
above 1.5 GeV by extrapolating the mass of the holographic glueball
and assuming weak mixing with $q\bar q$ states as the latter
is parametrically suppressed at large $N_c$ \cite{Lucini:2012gg}
and thus also in the Witten-Sakai-Sugimoto model \cite{Hashimoto:2007ze}.
Moreover, the decay pattern of the tensor glueball is worked out
in detail, where also extrapolations to decays into massive pseudo-Goldstone bosons
appear possible.
In view of Refs.~\cite{Ellis:1984jv,Janowski:2014ppa},
a particularly interesting feature of the holographic approach is that it admits
narrow glueball states in the mass range predicted by
lattice simulations, while the prediction of the gluon condensate
is small, close to its standard SVZ value \cite{Shifman:1978bx}.
\section{The Witten model of nonsupersymmetric Yang-Mills theory}
\begin{table}
\begin{tabular}{c|ccc}
\toprule
Here & \cite{Hashimoto:2007ze} & \cite{Brower:2000rp} & \cite{Sakai:2004cn} \\
\colrule
$x^{11}$ & $x^4$ & $x^{11}$ & -- \\
$R_{11}$ & $R_{11}$ & $R_1$ & -- \\
$x^4$ & $\tau$ & $\tau$ & $\tau$ \\
$R_4\equiv M_{\rm KK}^{-1}$ & $M_{\rm KK}^{-1}$ & $R_2$ & $M_{\rm KK}^{-1}$\\
$r_{\rm KK}$ & $R$ & $R$ & -- \\
$R_{\rm D4}\equiv L/2$ & $R_{\rm SS}$ & $R_{\rm AdS}/2$ & $R$ \\
\botrule
\end{tabular}
\caption{
Notations used here versus notation in Hashimoto et al.\ \cite{Hashimoto:2007ze},
Brower et al.\ \cite{Brower:2000rp},
and Sakai \& Sugimoto \cite{Sakai:2004cn,Sakai:2005yt}
}
\label{tabnotation}
\end{table}
The Witten model of nonsupersymmetric (and nonconformal) Yang-Mills theory in
3+1 dimensions is based on
the AdS/CFT correspondence for a six-dimensional (0,2) superconformal field theory
that is obtained from a large number $N_c$
of coincident M5 branes in 11-dimensional M-theory.
Their near-horizon 11-d supergravity geometry is the product space AdS$_7\times S^4$
with a curvature radius $L$ of the AdS$_7$ space that is twice the radius of the $S^4$.
With M5 branes extended along the directions 0,1,2,3,4, and 11,
the line element of this space reads \cite{Becker:2007zj}
\begin{equation}
ds^2=\frac{r^2}{L^2}\left[ \eta_{\mu\nu}dx^\mu dx^\nu+(dx^4)^2+(dx^{11})^2
\right]+\frac{L^2}{r^2}{dr^2}+\frac{L^2}{4}d\Omega_4^2,
\end{equation}
where $\mu,\nu=0,\ldots,3$ are (3+1)-dimensional indices (following \cite{Becker:2007zj} we are skipping
the index value 10).
The six-dimensional gauge theory living on the boundary of AdS$_7$ is
a rather elusive maximally supersymmetric conformal field theory without a
Lagrangian formulation.
Dimensional reduction on a supersymmetry preserving circle with
\begin{equation}
x^{11}\simeq x^{11}+2\pi R_{11},\quad R_{11}=g_{s} l_{s},\quad l_{s}^2=\alpha'
\end{equation}
leads to the near-horizon geometry of (nonconformal) D4 branes of type-IIA supergravity,
whose dual theory is a five-dimensional super-Yang-Mills theory.
Already in 1998, Witten proposed to use this correspondence as a basis for a
holographic model of the low-energy regime of pure-glue Yang-Mills theory
by a further circle compactification which breaks supersymmetry in the same way
as supersymmetry is broken in the imaginary-time formulation of thermal field theory.
The fermionic gluinos are subject to antiperiodic boundary conditions and thus
become massive at tree level, whereas adjoint scalars acquire masses through
loop corrections since they are not protected by gauge symmetry. In the limit
of large Kaluza-Klein mass scale, the only remaining degrees of freedom are the gauge bosons.
The dual geometry is given by a doubly Wick-rotated black hole
in AdS$_7\times S^4$,
\begin{eqnarray}
ds^2&=&\frac{r^2}{L^2}\left( f(r)dx_4^2+\eta_{\mu\nu}dx^\mu dx^\nu
+dx_{11}^2 \right)\nonumber\\
&&+\frac{L^2}{r^2}\frac{dr^2}{f(r)}+\frac{L^2}{4}d\Omega_4^2,
\end{eqnarray}
with $f(r)=1-r_{\rm KK}^6/r^6$ and a would-be thermal circle
\begin{equation}
x^4\simeq x^4+2\pi R_4,\quad R_4\equiv\frac{1}{M_{\rm KK}}=\frac{L^2}{3r_{\rm KK}},
\end{equation}
where the relation between $r_{\rm KK}$ and $M_{\rm KK}$ is determined by the absence
of a conical singularity at $r=r_{\rm KK}$.
The background also has a nonvanishing Ramond-Ramond (R-R)
antisymmetric tensor gauge field with $N_c$
units of flux through the $S^4$.
The relation to the type IIA string-frame metric is
\begin{equation}\label{ds210from11}
ds^2=G_{\hat M \hat N}dx^{\hat M} dx^{\hat N}=e^{-2\Phi/3}g_{MN}dx^M dx^N+e^{4\Phi/3}(dx^{11}+A_M dx^M)^2,
\end{equation}
with $M,N=0,\ldots 9$ and $\hat M, \hat N$ additionally including the index 11.
This leads to a nonconstant dilaton
$e^\Phi=(r/L)^{3/2}$
and $A_M=0$ for the above background geometry.
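Indeed, comparing (\ref{ds210from11}) with the 11-dimensional line element above, the absence of cross terms $dx^{11}\,dx^M$ gives $A_M=0$, while $e^{4\Phi/3}=G_{11,11}=r^2/L^2$ immediately yields $e^\Phi=(r/L)^{3/2}$.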
For later use we introduce the alternative radial coordinates
$U\in(U_{\rm KK},\infty)$ and $Z\in(0,\infty)$, used also in Refs.~\cite{Sakai:2004cn,Sakai:2005yt},
through
\begin{equation}
U=\frac{r^2}{2L},\quad
K(Z)\equiv 1+Z^2=\frac{r^6}{r_{\rm KK}^6}=\frac{U^3}{U_{\rm KK}^3}.
\end{equation}
Note that the holographic boundary is at infinite values of $r$, $U$, and $Z$.
In terms of the radial coordinate $U$ the 10-dimensional metric reads
\begin{equation}\label{ds210}
ds^2=\left(\frac{U}{R_{\rm D4}}\right)^{3/2} \left[\eta_{\mu\nu}dx^\mu dx^\nu
+f(U)(dx^4)^2\right]+\left(\frac{R_{\rm D4}}{U}\right)^{3/2}\left[\frac{dU^2}{f(U)}+U^2 d\Omega_4^2 \right]
\end{equation}
with $f(U)=1-(U_{\rm KK}/U)^3$; the nonconstant dilaton is given by
\begin{equation}\label{Phibackground}
e^\Phi=(U/R_{\rm D4})^{3/4}.
\end{equation}
The parameters of the dual field theory
are given by \cite{Kruczenski:2003uq,Sakai:2004cn,Sakai:2005yt,Kanitscheider:2008kd}
\footnote{This is based on a normalization of the Yang-Mills action as
$-\frac{1}{4 g_{\rm YM}^2}{\rm Tr}F_{\mu\nu}F^{\mu\nu}$,
which differs, however, from the convention used in particle physics,
where the coupling constant of SU($N_c$) gauge theories
is invariably defined as
$\mathcal L=-\frac{1}{2 g^2}{\rm Tr}F_{\mu\nu}F^{\mu\nu}$
so that
$g^2=2 g_{\rm YM}^2$. This means
that the QCD coupling is given by $\alpha_s\equiv g^2/(4\pi)=
g_{\rm YM}^2/(2\pi)=\lambda/(2\pi N_c)$ in terms of
the 't Hooft coupling $\lambda\equiv N_c g_{\rm YM}^2$ as used here.
Since we do not attempt to match with perturbative QCD here, this is of no
concern for the calculations performed below
(it is, however, important to take into account
when comparing quantitatively with weak-coupling results, see also
footnote 1 in Ref.~\cite{Blaizot:2006tk}).}
\begin{eqnarray}\label{gYMNc}
g_{\rm YM}^2=\frac{g_5^2}{2\pi R_4}=2\pi g_{s} l_{s} M_{\rm KK},\quad
(L/2)^3\equiv R_{\rm D4}^3=\pi g_{s} N_{c} l_{s}^3.\qquad
\end{eqnarray}
At scales much larger than $M_{\rm KK}$, the dual theory turns into
5-dimensional super-Yang-Mills theory. However, it is not possible
to make $M_{\rm KK}$ arbitrarily large without leaving the supergravity approximation.
The dual gauge theory exhibits confinement. Wilson loops connecting heavy quarks
at the boundary with large spatial separation along $x$ are represented by fundamental strings that
minimize their energy by having most of their length at minimal radial coordinate. The effective
string tension therefore tends to the value
\begin{equation}\label{sigmastring}
\sigma=\frac1{2\pi l_s^2}\sqrt{-g_{tt}g_{xx}}\Big|_{U=U_{\rm KK}}=
\frac1{2\pi l_s^2}\left(\frac{U_{\rm KK}}{R_{\rm D4}}\right)^{3/2}=
\frac{2 g_{\rm YM}^2 N_c}{27\pi}M_{\rm KK}^2.
\end{equation}
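For the reader's convenience: using $U_{\rm KK}=\frac49 M_{\rm KK}^2 R_{\rm D4}^3$, which follows from the compactification data above, together with (\ref{gYMNc}), the last equality is obtained as
$$
\frac1{2\pi l_s^2}\left(\frac{U_{\rm KK}}{R_{\rm D4}}\right)^{3/2}
=\frac{(2/3)^3}{2\pi l_s^2}\,M_{\rm KK}^3 R_{\rm D4}^3
=\frac{4}{27}\,g_s l_s N_c M_{\rm KK}^3
=\frac{2 g_{\rm YM}^2 N_c}{27\pi}\,M_{\rm KK}^2.
$$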
In accordance with confinement, the dual theory has a mass gap for fluctuations of the background
geometry with scale set by $M_{\rm KK}$.
\subsection{Holographic glueball spectrum}\label{secGB}
Ignoring all Kaluza-Klein modes on the compactification circles and all nontrivial harmonics on the $S^4$
with nonzero $R$ charge, the bosonic normal modes of the supergravity multiplet can be interpreted
as glueballs in the dual 3+1-dimensional Yang-Mills theory \cite{hep-th/9805129,Csaki:1998qr,
Hashimoto:1998if,Csaki:1999vb,Constable:1999gb,Brower:2000rp}.\footnote{In Ref.~\cite{Elander:2013jqa} this analysis was recently extended to modes obtained
by breaking the symmetry of the $S^4$.}
There are in total six independent wave equations for various scalar, vector, and tensor modes,
which were denoted as S$_4$, T$_4$, V$_4$, N$_4$, M$_4$, and L$_4$ in \cite{Brower:2000rp},
see Table \ref{tab1}.
These give three distinct possibilities to obtain modes with $J^{PC}=0^{++}$ quantum numbers,
corresponding to the 3+1-dimensional scalars $G_{11,11}$, $G_{4,4}$, and the $S^4$ volume fluctuation
$G^\alpha{}_\alpha$, where
the index $\alpha$ refers to the $S^4$. The latter, termed L$_4$ in Table \ref{tab1}, has
a lowest mass eigenvalue $\approx 3.57M_{\rm KK}$ which is larger than those of all the other
wave equations and will be ignored in what follows.
The remaining two towers of scalar modes are described by the wave equations
denoted S$_4$ and T$_4$. The lowest mass eigenvalue is found in S$_4$, which corresponds asymptotically
to 11-dimensionally traceless metric fluctuations in
$G_{ii}$, $G_{11,11}$, and $G_{44}$.
The other scalar mode does not involve $G_{44}$ and can be attributed to the dilaton derived from $G_{11,11}$.
It is degenerate with the $2^{++}$ tensor mode (wave equation T$_4$) that is provided by transverse-traceless
fluctuations in $G_{ij}$, $i,j=1,2,3$. (It is also degenerate with
the vector mode $1^{++}$ derived from $G_{11,i}$, but this mode is discarded as spurious from
the point of view of the 3+1-dimensional Yang-Mills theory because of negative ``$\tau$-parity'' \cite{Brower:2000rp},
implying that its dual operator is odd under a reflection $x^4\to-x^4$.)
Pseudoscalar ($0^{-+}$) modes are obtained from the 1-form field component $C_4$
descending from $G_{11,4}$ (wave equation V$_4$), whereas the 3-form field of 11-dimensional supergravity
is responsible for vector modes:
a vector $1^{+-}$ from the antisymmetric tensor field $B_{ij}$ (wave equation N$_4$), and a vector $1^{--}$ from the
3-form field components $C_{ij4}$ (wave equation M$_4$). All other modes can be
discarded due to negative $\tau$-parity.
The glueball mass spectrum resulting from the numerical results listed in Table \ref{tab1} is displayed
in Fig.~\ref{figGB}, where it is compared with recent lattice results at large $N_c$ from Ref.~\cite{Lucini:2010nv},
which is in fact rather similar to that obtained for $N_c=3$ \cite{Morningstar:1999rf,Chen:2005mg}.
When juxtaposed such that the lowest tensor mode is matched, the holographic spectrum roughly reproduces the pattern
obtained in lattice gauge theory. Missing states of spin 2 with $PC\not=++$ and
higher spin states might be due to closed string modes.
On the other hand, there is a certain proliferation of $0^{++}$ states due to the existence of
modes involving $G_{44}$, which have been termed ``exotic'' in Ref.~\cite{Constable:1999gb}, where
they were first considered. In fact, Ref.~\cite{Constable:1999gb} suspected that only one of the towers of scalar
states may survive in the limit $M_{\rm KK}\to\infty$, where the Witten model would turn into an exact
string-gauge dual of large-$N_c$ Yang-Mills theory.
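For orientation: since the eigenvalues in Table \ref{tab1} are given in units of $M_{\rm KK}^2/9$, the corresponding masses are $m_n=\sqrt{m_n^2/9}\,M_{\rm KK}$; the lowest S$_4$ and T$_4$ entries, $7.30835$ and $22.0966$, thus translate into $m\approx 0.901\,M_{\rm KK}$ and $m\approx 1.567\,M_{\rm KK}$, respectively.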
\begin{figure}[t]
\centerline{\includegraphics[width=0.7\textwidth]{gbspctrm}}
\centerline{\small \hfil (a) \hfil\hfil (b) \hfil}
\medskip
\caption{
The glueball spectrum of the Witten model (a) in units of $M_{\rm KK}$ (``exotic'' scalar
modes in green), compared to the spectrum obtained in the recent large-$N_c$ lattice
calculations of Ref.~\cite{Lucini:2010nv} (b) in
units of the square root of the string tension
$\sqrt{\sigma}$, juxtaposed such that the lowest tensor mode
is matched. The dotted lines in panel (b) give the glueball spectrum
of the Witten model expressed in terms of its string tension,
for the standard set of parameters (\ref{kappaSS})
of the Sakai-Sugimoto model.}
\label{figGB}
\end{figure}
\begin{table}
\begin{tabular}{l|cccccc}
\toprule
Mode &S$_4$&T$_4$&V$_4$&N$_4$&M$_4$&L$_4$\\
$J^{PC}$&$0^{++}$&$0^{++}/2^{++}$&$0^{-+}$&$1^{+-}$&$1^{--}$&$0^{++}$\\
\colrule
n=0&7.30835&22.0966&31.9853&53.3758&83.0449&115.002\\
n=1&46.9855&55.5833&72.4793&109.446&143.581&189.632\\
n=2&94.4816&102.452&126.144&177.231&217.397&277.283\\
n=3&154.963&162.699&193.133&257.959&304.531&378.099\\
n=4&228.709&236.328&273.482&351.895&405.011&492.171\\
\botrule
\end{tabular}
\caption{Our results for the mass spectrum $m_n^2$ of AdS$_7$
black hole metric fluctuations
in the notation of \cite{Brower:2000rp}
(i.e.\ in units of $r_{\rm KK}^2/L^4=M_{\rm KK}^2/9$) obtained by
spectral methods cross-checked with a shooting method. The results for
the lowest modes agree completely with Ref.~\cite{Brower:2000rp}, while
for certain higher modes there are deviations in the last few digits.
$J^{PC}$ assignments are given only for the modes with even ``$\tau$-parity'' that
are expected to have a counterpart in QCD.}
\label{tab1}
\end{table}
\subsection{Normalization of glueball modes}
In order to be able to derive effective actions of the glueball modes and their interactions,
we need to calculate the normalization factors required for a canonical kinetic term.
For this purpose it is convenient to use the 11-dimensional
notation, where the fluctuations take their simplest form.
\subsubsection{Lowest (exotic) scalar glueball}
The lowest scalar glueball $0^{++}$ is associated with
fluctuations involving asymptotically (for $r\to\infty$)
$\delta G_{44}=-4 \delta G_{11}=-4 \delta G_{22}=-4 \delta G_{33}=-4 \delta G_{11,11}$.
In the bulk, other metric components are also involved, leading to the following
``exotic polarization'' \cite{Constable:1999gb}
\begin{eqnarray}\label{deltaGG}
&&\delta G_{44} = -\frac{r^2}{L^2}fH_{E}(r)G_{E}(x) \nonumber\\
&&\delta G_{\mu\nu} = \frac{r^2}{L^2}
H_{E}(r)\left[
\frac14 \eta_{\mu\nu}
- \left(\frac14 + \frac{3r_{\rm KK}^6}{5r^6-2r_{\rm KK}^6}\right)
\frac{\partial_\mu \partial_\nu}{M_{E}^2}
\right] G_{E}(x),\nonumber\\
&&\delta G_{11,11} = \frac{r^2}{L^2}\frac14
H_{E}(r)G_{E}(x), \nonumber \\
&&\delta G_{rr} = -\frac{L^2}{r^2}f^{-1} \frac{3r_{\rm KK}^6}{5r^6-2r_{\rm KK}^6}
H_{E}(r) G_{E}(x) ,\nonumber\\
&&\delta G_{r\mu} = \frac{90\, r^7 r_{\rm KK}^6}{M_E^2 L^2 (5r^6-2r_{\rm KK}^6)^2}
H_{E}(r)\partial_\mu G_{E}(x),
\end{eqnarray}
where the eigenvalue equation is given by
\begin{equation}
\frac1{r^3}\frac{d}{dr}r(r^6-r_{\rm KK}^6)\frac{d}{dr} H_{E}(r)
+\left(\frac{432\,r^2\,r_{\rm KK}^{12}}{(5r^6-2r_{\rm KK}^6)^2}+L^4 M_{E}^2 \right) H_{E}(r)=0.
\end{equation}
Integration over the $S^4$ reduces the 11-dimensional supergravity action to
\begin{equation}
S=\frac1{2\kappa_{11}^2}(L/2)^4\Omega_4 \int d^7x \sqrt{-\det G}\left(R(G)+\frac{30}{L^2}\right)
\end{equation}
with $2\kappa_{11}^2=(2\pi)^8 l_s^9 g_s^3$ and $\Omega_4=8\pi^2/3$.
Inserting the metric fluctuations (\ref{deltaGG}) into the 7-dimensional action gives
\begin{eqnarray}\label{deltaRG}
&&\int d^7x \sqrt{-\det G}\left.\left(R(G)+\frac{30}{L^2}\right)\right|_{H_{E}^2}
\nonumber\\
&=&
-\mathcal C_{E}
\int dx^{11}\,d^4x\,dx^4\, \frac12\left[(\partial _\mu G_{E})^2+M_{E}^2 G_{E}^2\right]
\end{eqnarray}
with
\begin{equation}
\mathcal C_{E}=\int_{r_{\rm KK}}^\infty \frac{dr\,r^3}{L^3}\frac{5}{8}H_{E}(r)^2.
\end{equation}
For the lowest eigenmode $H_{E}$ we obtain numerically
\begin{equation}
\mathcal C_{E}=0.057395\, [H_{E}(r_{\rm KK})]^2\frac{r_{\rm KK}^4}{L^3}.
\end{equation}
[This deviates from the result given in Ref.~\cite{Hashimoto:2007ze} by
a factor $\frac12$ that seems to be missing in their Eq.~(2.19).]
Requiring that upon integration over $x^4$ and $x^{11}$ the scalar field $G_{E}(x)$
is canonically normalized leads to
\begin{eqnarray}\label{Hnorm}
[H_{E}(r_{\rm KK})]^{-1}=[H_{E}(Z\!=\!0)]^{-1}&=&\frac1{\sqrt2}
0.0097839\,\lambda^{1/2}\,N_c\,M_{\rm KK} \nonumber\\
&=&0.0069183\,\lambda^{1/2}\,N_c\,M_{\rm KK}.
\end{eqnarray}
(This differs from \cite{Hashimoto:2007ze} only by the explicitly
written factor $1/{\sqrt2}$.)
\subsubsection{Scalar and tensor modes from the tensor multiplet}
A scalar mode $0^{++}$ that does not involve metric components with index 4 is
obtained from\footnote{As discussed recently in Ref.~\cite{Elander:2013jqa},
more possibilities for scalar (and other) glueball modes are obtained if
Ramond-Ramond field fluctuations which partially break the SO(5)
symmetry are included.}
\begin{eqnarray}\label{deltaGD}
\delta G_{11,11}&=&-3\frac{r^2}{L^2}H_D(r)G_{D}(x)\nonumber\\
\delta G_{\mu\nu}&=&\frac{r^2}{L^2}H_D(r)\left[\eta_{\mu\nu}-\frac{\partial _\mu\partial _\nu}{\Box}\right]G_{D}(x).
\end{eqnarray}
Since upon reduction to 10 dimensions $\delta G_{11,11}$ is essentially the dilaton,
we shall refer to this mode as predominantly dilatonic. [Note that
also the ``exotic'' mode (\ref{deltaGG}) involves a dilaton component, but that
there the dominant component is $\delta G_{44}$. It should also be kept in mind
that the attribute ``exotic'' only refers to the holographic origin of this
mode, and not to any exotic $J^{PC}$ quantum numbers in the dual field theory.]
The tensor glueball $2^{++}$ is dual to metric fluctuations that have
neither $\delta G_{44}$ nor $\delta G_{11,11}$, but
contain a transverse traceless polarization tensor in $\delta G_{\mu\nu}$. For example,
one can choose as only nonvanishing components
\begin{equation}\label{deltaGT}
\delta G_{11}=-\delta G_{22}=-\frac{r^2}{L^2}H_T(r)G_T(x).
\end{equation}
The radial functions $H_{D,T}$ are determined by the equation
\begin{equation}
\frac1{r^3}\frac{d}{dr}r(r^6-r_{\rm KK}^6)\frac{d}{dr} H_{D,T}(r)
+L^4 M^2 H_{D,T}(r)=0,
\end{equation}
with $M^2=M_D^2=M_T^2$.
Calculating the normalization of these glueball modes in analogy to (\ref{deltaRG}) leads to
\begin{equation}
\mathcal C_{D,T}=\int_{r_{\rm KK}}^\infty \frac{dr\,r^3}{L^3}\left\{ 6 H_D(r)^2 \atop H_T(r)^2 \right.
\end{equation}
For the lowest eigenmode $H_T$ we obtain numerically
\begin{equation}
\mathcal C_T=0.2254
\, [H_T(r_{\rm KK})]^2\frac{r_{\rm KK}^4}{L^3}
\end{equation}
and an analogous result for $\mathcal C_D$ with a coefficient 6 times as large.
This leads to
\begin{eqnarray}\label{HDTnorm}
[H_{D,T}(r_{\rm KK})]^{-1}=[H_{D,T}(Z\!=\!0)]^{-1}=\lambda^{1/2}\,N_c\,M_{\rm KK}
\left\{ 0.033588
\atop 0.013712
\right.
\end{eqnarray}
\subsection{Glueball field/operator correspondence}
The above metric perturbations are sourced by operators in the dual field theory, which
is five-dimensional super-Yang-Mills theory compactified on the circle along $x^4$.
The operator dual to the tensor perturbations is simply the five-dimensional
energy-momentum tensor with three-dimensional indices. Omitting the adjoint scalars of the five-dimensional theory,
we have
\begin{equation}
T_{mn}^{(5)}=T_{mn}^{\rm YM}+F_{4m}F_{4n}-\frac12 \delta_{mn} F_{4\mu}F_4{}^\mu+\ldots,
\end{equation}
where $A_4$ is a further scalar that, like the adjoint scalars of the five-dimensional
theory, becomes massive through loop corrections.
The operators dual to the exotic and the predominantly dilatonic scalar modes can
be inferred from their couplings to the fields in the DBI action of D4 branes
in the limit of $r\to\infty$ \cite{Hashimoto:1998if}. The exotic scalar mode $\delta G^E_{MN}$
and the dilatonic one, $\frac14\delta G^D_{MN}$, turn out to source, respectively,\footnote{We
disagree here with Ref.~\cite{Brower:2000rp} which attributed $F_{\mu\nu}^2$ to $\delta G^D$
and $T_{00}$ to $\delta G^E$.}
\begin{eqnarray}
\mathcal O^E&=&-\frac58 F_{\mu\nu}F^{\mu\nu}-\frac12 T_{00}^{\rm YM}+F_{4\mu}F_4{}^\mu-\frac12 F_{40}^2+\ldots,\\
\mathcal O^D&=&+\frac38 F_{\mu\nu}F^{\mu\nu}-\frac12 T_{00}^{\rm YM}+F_{4\mu}F_4{}^\mu-\frac12 F_{40}^2+\ldots.
\end{eqnarray}
The difference $\mathcal O^D-\mathcal O^E=F_{\mu\nu}F^{\mu\nu}$ is the purely four-dimensional glueball operator,
which is dual to $\frac14\delta G^D_{MN}-\delta G^E_{MN}$. However this linear combination is not a normal
mode in the gravitational background. We therefore need to keep the exotic and the predominantly dilatonic mode,
of which both, or perhaps only one of them, might correspond to the glueballs of the four-dimensional
Yang-Mills theory. To really end up with the latter, one would however need to
take the limit of large Kaluza-Klein mass $M_{\rm KK}$, which is necessarily leaving the supergravity approximation.
In this limit, both modes will presumably receive important corrections. If one of the modes drops out of the spectrum,
one might suspect that it will more likely be $\delta G^E_{MN}$ as it includes a then spurious
polarization component $\delta G_{44}$.
In the following we shall consider both modes, as well as the tensor mode, when calculating
glueball-meson interactions within the Witten-Sakai-Sugimoto model,
extending the analysis of Ref.~\cite{Hashimoto:2007ze}, which only
studied the lowest (exotic) $0^{++}$ mode.
\section{The Witten-Sakai-Sugimoto model}
Sakai and Sugimoto introduced chiral quarks in Witten's model
of pure-glue Yang-Mills theory by means of $N_f$
probe D8 and anti-D8 branes that fill all spatial directions except the
Kaluza-Klein circle \cite{Sakai:2004cn,Sakai:2005yt}. Quarks and antiquarks
are thus localized on separate points $x^4$ of the 4+1-dimensional boundary
theory. The global flavor symmetry $\mathrm U(N_f)_L\times \mathrm U(N_f)_R$
is however broken spontaneously,
because the subspace $x^4$-$U$ has the topology of a cigar,
forcing the D8 and anti-D8 branes to join in the bulk.
The action of the joined D8 branes which describes the dynamics
of $q\bar q$ mesons through flavor gauge fields on the branes reads
\begin{eqnarray}\label{SD8full}
S_{\rm D8}&=&-T_{\rm D8}{\rm Tr}\,\int d^9x e^{-\Phi}
\sqrt{-\det\left(\tilde g_{MN}+2\pi\alpha' F_{MN}\right)}+S_{\rm CS}\nonumber\\
&&=-T_{\rm D8}{\rm Tr}\,\int d^9x e^{-\Phi}\sqrt{-\tilde g}
\left(\mathbf 1+\frac{(2\pi\alpha')^2}{4} \tilde g^{PR}\tilde g^{QS}F_{PQ}F_{RS}+O(F^4)\right)+S_{\rm CS}
\end{eqnarray}
with $T_{\rm D8}=(2\pi)^{-8}l_s^{-9}$, $\tilde g_{MN}$ the metric
on the 8+1-dimensional world volume induced by (\ref{ds210}), and $\Phi$ shifted such that
$e^\Phi=g_s (U/R_{\rm D4})^{3/4}$.
Because no backreaction of the D8 branes on the 10-dimensional background of the Witten model
is taken into account, this corresponds to the quenched approximation of QCD, as indeed
appropriate for the large-$N_c$ limit at fixed $N_f$. (For attempts to go beyond the
quenched approximation see Refs.~\cite{Burrington:2007qd,Bigazzi:2014qsa}.)
In the original version
of the Sakai-Sugimoto model that we shall use here, the D8 and anti-D8 branes are put
at antipodal points so that they join at the minimal value $U=U_{\rm KK}$. In this case it is most
convenient to use the dimensionless coordinate $Z=\sqrt{(U/U_{\rm KK})^3-1}$ introduced already above,
but extended to the range $-\infty\ldots+\infty$ so that the radial integrations of the D8 and the anti-D8 branes are combined. The part of the DBI action quadratic in
the flavor field strength then reads
\begin{equation}\label{SD8F2}
S_{\rm D8}^{(F^2)}=-\kappa\,{\rm Tr}\,\int d^4x \int_{-\infty}^\infty dZ\left[
\frac12 K^{-1/3}\eta^{\mu\rho}\eta^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma}
+KM_{\rm KK}^2\eta^{\mu\nu}F_{\mu Z}F_{\nu Z}\right]
\end{equation}
with $K\equiv 1+Z^2$ and
\begin{equation}
\kappa=(2\pi\alpha')^2 T_{\rm D8} g_s^{-1}\Omega_4 \frac13 R_{\rm D4}^{9/2}U_{\rm KK}^{1/2}
=\frac{\lambda N_c}{216\pi^3},
\end{equation}
where (\ref{gYMNc}) as well as $\Omega_4=8\pi^2/3$ and $M_{\rm KK}^2=(3/2)^2U_{\rm KK}/R_{\rm D4}^3$ have been used.
The Goldstone bosons of chiral symmetry breaking appear as
\begin{equation}\label{SD80}
S_{\rm D8}=\frac{f_\pi^2}{4}\int d^4x\, {\rm Tr}\,\left( U^{-1}\partial _\mu U \right)^2+\ldots,
\quad U=\mathrm P \exp\left\{ i\int_{-\infty}^\infty dZ A_Z\right\},
\end{equation}
which determines the so-called pion decay constant in terms of $\lambda$ and $M_{\rm KK}$ as
\begin{equation}\label{fpi2}
f_\pi^2=\frac1{54\pi^4}\lambda N_cM_{\rm KK}^2.
\end{equation}
Massive vector and axial vector mesons arise as even and odd eigenmodes
of $A_\mu^{(n)}=\psi_n(Z) v^{(n)}_\mu(x)$ with eigenvalue equation
\begin{equation}\label{psin}
-(1+Z^2)^{1/3}\partial _Z\left( (1+Z^2)\partial _Z \psi_n \right)=\lambda_n \psi_n,\quad \psi_n(\pm\infty)=0.
\end{equation}
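(A one-line check of the eigenvalue interpretation: integrating by parts and using (\ref{psin}),
\begin{equation}
\int_{-\infty}^\infty dZ\,K\,(\partial_Z\psi_n)^2=\lambda_n\int_{-\infty}^\infty dZ\,K^{-1/3}\psi_n^2,
\end{equation}
so that the second term of (\ref{SD8F2}) provides canonically normalized vector modes with mass squared $m_n^2=\lambda_n M_{\rm KK}^2$.)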
The lowest mode $v_\mu^{(1)}$ is
interpreted as the isotriplet $\rho$ meson (or the $\omega$ meson
for the U(1) generator) with mass squared $m_\rho^2=\lambda_1 M_{\rm KK}^2$
with the numerical result $\lambda_1=0.669314\ldots$.
The next-highest mode $v_\mu^{(2)}$ with eigenvalue $\lambda_2\approx 1.569$
is an axial vector that can be identified \cite{Sakai:2004cn} with the
meson $a_1(1260)$. The experimental value for the ratio $m_{a_1}/m_\rho\approx 1.59$
is remarkably close to $\sqrt{\lambda_2/\lambda_1}\approx 1.53$.
Also the experimental value for the mass of the excited $\rho(1450)$,
with $m_{\rho^*}/m_\rho\approx 1.89$,
is close to $\sqrt{\lambda_3/\lambda_1}\approx 2.07$. This
nice agreement may however be a bit fortuitous, since recent lattice simulations \cite{Bali:2013kia}
at large $N_c$, extrapolated to zero quark mass, give the higher values
$m_{a_1}/m_\rho\approx 1.86$ and $m_{\rho^*}/m_\rho\approx 2.40$.
This would correspond to errors 21\% and 16\%, respectively, which
may still be considered a success given that already the mass of $v_\mu^{(2)}$ is
above $M_{\rm KK}$. (For more checks of the quantitative predictions of
the Witten-Sakai-Sugimoto model see Ref.~\cite{Rebhan:2014rxa}.)
Optimistically, one can therefore hope that the Witten-Sakai-Sugimoto
model is a useful approximation to QCD up to masses of two or three times $M_{\rm KK}$.
\subsection{Choice of parameters}
Matching the result for the $\rho$ meson mass with its experimental value, $m_\rho=\sqrt{\lambda_1}M_{\rm KK}\approx 776$~MeV,\footnote{The mass of the
$\omega$ meson, which is degenerate with the $\rho$ meson in the Sakai-Sugimoto
model, is only slightly higher in real QCD.}
fixes the Kaluza-Klein mass to \cite{Sakai:2004cn,Sakai:2005yt} $M_{\rm KK}=949$~MeV.
This determines the masses of the other vector and axial vector mesons, which come out
in rough agreement with experiment.
The masses of the lowest (exotic) and the predominantly dilatonic
scalar glueball, the tensor glueball (degenerate
with the dilatonic scalar),
and the lowest pseudoscalar glueball are fixed to,
respectively,
\begin{eqnarray}\label{MGs}
&&M_{E}=\sqrt{7.30834/9}\,M_{\rm KK}\approx 855\,{\rm MeV},\nonumber\\
&&M_D=M_T=\sqrt{22.0966/9}\,M_{\rm KK}\approx 1487\,{\rm MeV},\nonumber\\
&&M_P=\sqrt{31.9853/9}\,M_{\rm KK}\approx 1789\,{\rm MeV},\nonumber\\
&&M_{E^*}=\sqrt{46.9855/9}\,M_{\rm KK}\approx 2168\,{\rm MeV},\nonumber\\
&&M_{D^*}=M_{T^*}=\sqrt{55.5833/9}\,M_{\rm KK}\approx 2358\,{\rm MeV},
\end{eqnarray}
where we have also given the masses of some of the corresponding excited states (marked by a star).
The lowest scalar glueball involving the exotic polarization (\ref{deltaGG}) with a dominant $\delta G_{44}$
component is found to be only 10\% heavier than the $\rho$ meson. This is in stark contrast to lattice results both
for quenched $N_c=3$ and $N_c=\infty$ QCD \cite{Lucini:2010nv}, where the lightest glueball is about twice as heavy.
A possible modification of the Sakai-Sugimoto model consists of
choosing a nonmaximal separation of the D8-$\overline{\rm D8}$ branes \cite{Antonyan:2006vw,Aharony:2006da}.
The latter then join at a value $U=U_0>U_{\rm KK}$ and the mass of a string stretched
between $U_{\rm KK}$ and $U_0$ has been interpreted as a ``constituent'' quark mass.
Unfortunately, this only makes the problem worse: Nonmaximal separation
increases the eigenvalue $\lambda_1$ \cite{Peeters:2007ab} while the glueball spectrum
is unaffected. With a constituent quark mass of 310~MeV and keeping
the mass of the $\rho$ meson fixed as done in Ref.~\cite{Callebaut:2011ab}, $M_{\rm KK}$ is reduced to
720~MeV, which reduces all values in (\ref{MGs}) by 25\%.
With maximal separation and the standard choice $M_{\rm KK}=949$~MeV, the mass of the dilatonic glueball is
not far from the numerical result obtained in lattice gauge theory for the lightest scalar glueball state,
while a degeneracy with the tensor glueball is not observed there---the latter is instead significantly heavier.
This degeneracy might perhaps be lifted by higher-derivative corrections when going beyond the leading
supergravity approximation. Similarly, it is conceivable that only the dilatonic glueball survives
in the (unfortunately inaccessible) limit to a complete holographic QCD and that therefore the lowest
scalar mode is to be discarded. We shall come back to this question when calculating the decay width
of the various glueball states.
In order to calculate glueball-meson interactions, we shall need to extrapolate
to finite coupling and finite $N_c=3$.
The original \cite{Sakai:2004cn,Sakai:2005yt} and most widely used choice is
obtained from matching $f_\pi\approx 92.4\,{\rm MeV}$ in (\ref{fpi2}) which gives
\begin{equation}\label{kappaSS}
\kappa\equiv
\lambda N_c/(216\pi^3)=7.45\cdot10^{-3}
\;\Rightarrow\; \lambda\approx 16.63 \quad (N_c=3).
\end{equation}
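As a simple arithmetic cross-check of this matching, Eq.~(\ref{fpi2}) with
$f_\pi\approx 92.4$~MeV and $M_{\rm KK}=949$~MeV gives
\begin{equation}
\lambda N_c=54\pi^4\,\frac{f_\pi^2}{M_{\rm KK}^2}\approx 54\pi^4\left(\frac{92.4}{949}\right)^2\approx 49.9,
\end{equation}
i.e., $\lambda\approx 16.6$ for $N_c=3$ and $\kappa=49.9/(216\pi^3)\approx 7.45\cdot10^{-3}$, reproducing (\ref{kappaSS}).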
[The original and published version of Ref.~\cite{Sakai:2004cn,Sakai:2005yt}
contained an error in the prefactor of the D8 brane action for $N_f>1$
involving a different definition of $\kappa$, which led
to a 't Hooft coupling of about 8.3 and effectively
a correspondingly reduced pion decay constant.
This error,
which was later corrected in the e-print versions of Ref.~\cite{Sakai:2004cn,Sakai:2005yt}, did not
affect the mass spectra of mesons
obtained in Ref.~\cite{Sakai:2004cn,Sakai:2005yt}, but
it does affect all interactions. Unfortunately,
Ref.~\cite{Hashimoto:2007ze} still employed the incorrectly
matched 't Hooft coupling, affecting
all meson and glueball decay rates calculated therein.]
In what follows, we shall take (\ref{kappaSS}) as the standard choice, but also
consider as an alternative a value of the 't Hooft coupling obtained by
matching $m_\rho/\sqrt{\sigma}$, where $\sigma$ is the string tension (\ref{sigmastring}), to
the large-$N_c$ lattice result of Ref.~\cite{Bali:2013kia}.
Ref.~\cite{Bali:2013kia} obtained $m_\rho/\sqrt{\sigma}=1.504(50)$, whose central value corresponds to
$\lambda=12.55$. With the ``standard'' value $\lambda\approx 16.63$ the Sakai-Sugimoto model
predicts $m_\rho/\sqrt{\sigma}\approx 1.306$, which agrees within 15\% but points to a smaller
't Hooft coupling and thus a smaller string tension. A smaller 't Hooft coupling has also been
argued for in Ref.~\cite{Imoto:2010ef}, where the spectrum of higher-spin mesons obtained
from massive open string modes has been considered. We shall therefore consider a downward
variation of $\lambda\approx 16.63\ldots 12.55$ to get an idea of the variability of the predictions
of the Witten-Sakai-Sugimoto model.
Before turning to decay rates, we consider two other predictions of the Witten-Sakai-Sugimoto model
at finite $N_c$ where the concrete value of $\lambda$ matters.
At infinite $N_c$, the Goldstone bosons include also a massless $\eta'$ pseudoscalar meson
from the spontaneous breaking of the $\mathrm U_A(1)$ symmetry, whose anomaly is suppressed
at $N_c\to\infty$. However, at finite $N_c$, the Sakai-Sugimoto model predicts
a finite mass for the $\eta'$ meson through a Witten-Veneziano formula evaluated
already in \cite{Sakai:2004cn} with the result
\begin{equation}\label{metaprime}
m_{\eta'}=\frac1{3\sqrt3 \pi}\sqrt{\frac{N_f}{N_c}}\,\lambda M_{\rm KK}.
\end{equation}
With $M_{\rm KK}=949$~MeV and $\lambda\approx 16.63$ (or 12.55) the numerical value
for $N_c=N_f=3$ turns out to be 967~MeV (730~MeV). The higher value is surprisingly
close to the experimental value 958~MeV, although a somewhat smaller value might
actually be expected given the absence of a strange quark mass. At any rate,
the right ballpark seems to be reached with the parameters considered here.
Another quantity of interest, in particular in connection with glueball physics, is
the gluon condensate which was calculated in Ref.~\cite{Kanitscheider:2008kd} as
\begin{equation}\label{gluoncondensate}
C^4\equiv\left<\frac{\alpha_s}{\pi}G_{\mu\nu}^a
G^{a\mu\nu}\right>=\frac{4N_c}{3^7\pi^4}\lambda^2M_{\rm KK}^4.
\end{equation}
For $\lambda\approx 16.63$ this yields $C^4=0.0126\,{\rm GeV}^4$, almost
identical to the standard SVZ sum rule value \cite{Shifman:1978bx}, while
for $\lambda=12.55$ a significantly smaller value of 0.0072 GeV$^4$ is obtained.
Using sum rules, both smaller \cite{Ioffe:2005ym} and larger \cite{Narison:2011xe} values than the standard SVZ one
have been discussed in the literature,
while lattice simulations typically give significantly larger values,
which are however of the same size as ambiguities from the subtraction procedure \cite{Bali:2014sja}.
While a quantitative comparison thus does not seem to be in order, we note that
the gluon condensate is predicted to be small.
\subsection{Normalization of $q\bar q$ modes}
For the calculation of decay rates we will initially consider $N_f=2$, dropping
the strange quark whose nonnegligible mass cannot be easily accommodated
within the Sakai-Sugimoto model (see however Ref.~\cite{0708.2839,Aharony:2008an,Hashimoto:2008sr,McNees:2008km});
the possible effects of the finite quark masses will be discussed in Section \ref{sec:extrapol}.
In the chiral Sakai-Sugimoto model, the Goldstone bosons are the massless pions
contained in
\begin{equation}
A_Z=U_{\rm KK}\phi_0(Z)\pi(x^\mu),
\end{equation}
where $U_{\rm KK}$ has been included to
render the mode function $\phi_0(Z)$ dimensionless.\footnote{For our purposes it is most convenient
to keep $A_Z$ nonzero. The frequently adopted gauge choice $A_Z=0$ leads to a different
but physically equivalent
field parametrization of the Goldstone bosons.}
The U(1) part of $A_Z$ corresponds to the $\eta'$ meson, which is a Goldstone boson only at infinite $N_c$;
for finite $N_c$ it receives a mass through the Witten-Veneziano mechanism \cite{Sakai:2004cn} (see Eq.~(\ref{metaprime}) above).
The only vector mesons that we shall consider will be
the isotriplet $\rho$ meson described by the traceless part of
\begin{equation}
A_\mu=\psi_1(Z)\rho_\mu(x^\nu),
\end{equation}
and the isosinglet $\omega$ meson given by the corresponding expression proportional to the unit matrix.
Following Ref.~\cite{Hashimoto:2007ze} (which here differs from \cite{Sakai:2004cn,Sakai:2005yt})
we choose the generators of the SU(2) flavor group such that ${\rm Tr}\, T^a T^b=\delta^{ab}$. Canonical
normalization of the fields $\pi^a$ and $\rho^a_\mu$ in (\ref{SD8F2})
such that upon integration over $Z$ one has
\begin{equation}
S=-{\rm Tr}\,\int d^4x\left[\frac12(\partial _\mu\pi)^2+\frac14 F_{\mu\nu}^2+
\frac12 \lambda_1 M_{\rm KK}^2\rho_\mu^2+\ldots\right]
\end{equation}
leads to
\begin{eqnarray}
&&2\kappa\int_{-\infty}^\infty dZ\,K^{-1/3}(\psi_1)^2=1, \\
&&2\kappa(U_{\rm KK}M_{\rm KK})^2 \int_{-\infty}^\infty dZ\,K(\phi_0)^2=1.
\end{eqnarray}
The first relation determines the value of $\psi_1$ at $Z=0$
with the help of the numerical result
\begin{equation}
\int_{-\infty}^\infty dZ\,K^{-1/3}(\psi_1)^2=2.80302\ldots \, \psi_1^2(0),
\end{equation}
while the second fixes the normalization of $\phi_0\propto 1/K\equiv 1/(1+Z^2)$ as
\begin{equation}
U_{\rm KK}M_{\rm KK}\,\phi_0=\frac{1}{\sqrt{2\pi\kappa}} \frac1K.
\end{equation}
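Indeed, with this normalization the second condition is satisfied identically,
\begin{equation}
2\kappa(U_{\rm KK}M_{\rm KK})^2\int_{-\infty}^\infty dZ\,K\,\phi_0^2
=\frac{1}{\pi}\int_{-\infty}^\infty\frac{dZ}{1+Z^2}=1.
\end{equation}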
\subsection{$\rho$ and $\omega$ meson decay}
The $\rho$-$\pi$ interactions
are determined by the second term of (\ref{SD8F2}), using\footnote{Here we
follow the conventions of Ref.~\cite{Hashimoto:2007ze}. Note that in Ref.~\cite{Sakai:2004cn,Sakai:2005yt} the matrix-valued flavor gauge fields are
antihermitean.}
$F_{\mu Z}=\partial _\mu A_Z - \partial _Z A_\mu-i[A_\mu,A_Z]$. The effective vertices between the four-dimensional fields $\rho$ and $\pi$ are
obtained upon integration of the resulting products of the mode functions $\psi_1(Z)$ and $\phi_0\propto 1/K$.
For the process $\rho\to\pi\pi$ we need specifically
\begin{equation}
\mathcal L_{\rho\pi\pi}=-g_{\rho\pi\pi}
\epsilon_{abc}(\partial_\mu \pi^a)
\rho^{b\mu}\pi^c,\quad
g_{\rho\pi\pi}=\sqrt2
\int dZ \frac1{\pi K}\psi_1=\sqrt{2}\times 24.03
\,\lambda^{-\frac12} N_c^{-\frac12}.
\end{equation}
This agrees with the numerical value given in table 3$\cdot$34 of
Ref.~\cite{Sakai:2005yt}
for $g_{v^1\pi\pi}\equiv g_{\rho\pi\pi}$.
($g_{\rho\pi\pi}/\sqrt2$
was denoted as $c_6$ in \cite{Hashimoto:2007ze}; we will
reserve $c_i$, $i=1,2,3\ldots$, for the coefficients in the interactions of
the glueball field with mesons, for which we will follow the
conventions chosen in \cite{Hashimoto:2007ze}.)
The amplitude for the decay of a $\rho$ meson at rest
with polarization $\epsilon^\mu=(0,\mathbf e)$
into two pions with momenta
$p^\mu=(|\mathbf p|,\mathbf p)$ and
$q^\mu=(|\mathbf p|,-\mathbf p)$
reads
\begin{equation}
\mathcal M=ig_{\rho\pi\pi}\, \epsilon^\mu(p_\mu-q_\mu)
=2ig_{\rho\pi\pi}\, \mathbf e\cdot \mathbf p.
\end{equation}
The expression for the decay rate involves a directional average, leading to
\begin{equation}\label{Gammarho}
\Gamma_\rho/m_\rho
=\frac1{4\pi}\int d\Omega \frac{|\mathcal M|^2}{16\pi m_\rho^2}=
\frac{g_{\rho\pi\pi}^2}{48\pi}\approx \frac{7.659}{\lambda N_c}
\approx \left\{ 0.1535 \; (\lambda=16.63) \atop 0.2034 \; (\lambda=12.55) \right.
\end{equation}
which compares remarkably well with the current experimental
value $\Gamma_\rho/m_\rho= 0.191(1)$ from Ref.~\cite{Agashe:2014kda}\ (although it should
be noted that in this process the finite pion mass implies a reduction
by about 20\% compared to a decay into massless particles so that the coupling
$g_{\rho\pi\pi}$ appears somewhat underestimated with our
range of parameters for the Sakai-Sugimoto model).
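Spelling out the directional average in (\ref{Gammarho}): $\frac1{4\pi}\int d\Omega\,|\mathcal M|^2
=4g_{\rho\pi\pi}^2|\mathbf p|^2\langle\cos^2\theta\rangle=\frac43 g_{\rho\pi\pi}^2|\mathbf p|^2$,
which with $|\mathbf p|=m_\rho/2$ reproduces the quoted $\Gamma_\rho/m_\rho=g_{\rho\pi\pi}^2/(48\pi)$.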
The decay of the $\omega$ meson into $\pi^0\gamma$ and $\pi^0\pi^+\pi^-$, which
is due to the Chern-Simons part of the D8 brane action, has been calculated
in \cite{Sakai:2005yt}, with the result $2.58$~MeV for the dominant 3-pion decay,
which is significantly below the experimental value $\approx 7.6$~MeV.
However, the result of \cite{Sakai:2005yt} is proportional to $\lambda^{-4}$.
Varying again $\lambda$ from 16.63 to 12.55 gives the range 2.58\ldots7.96~MeV, which
happens to include the experimental value.
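(Indeed, $(16.63/12.55)^4\approx 3.08$, and $2.58\,{\rm MeV}\times 3.08\approx 7.96$~MeV.)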
So the model appears to make reasonable semi-quantitative estimates for meson interactions,
which is quite remarkable given that after fixing the mass scale and setting $N_c=3$,
there is only one free parameter, namely $\lambda$. This certainly
makes it interesting to consider the predictions of this model for glueball decay rates in detail.
\section{Glueball-meson interactions}
The glueball modes, which have been obtained in Sect.~\ref{secGB} in terms of
11-dimensional metric perturbations $\delta G_{\hat M \hat N}$, translate to perturbations of the type-IIA
string metric $g_{MN}$ and the dilaton $\Phi$
according to (\ref{ds210from11}). Explicitly, this gives
\begin{eqnarray}\label{deltag10}
g_{\mu\nu}&=& \frac{r^3}{L^3}\left[ \left(1+\frac{L^2}{2r^2} \delta G_{11,11}\right)\eta_{\mu\nu} +\frac{L^2}{r^2} \delta G_{\mu\nu} \right],\nonumber\\
g_{44}&=& \frac{r^3f}{L^3}\left(1+\frac{L^2}{2r^2} \delta G_{11,11}
+\frac{L^2}{r^2 f}\delta G_{44}\right),\nonumber\\
g_{rr}&=& \frac{L}{rf}\left(1+\frac{L^2}{2r^2} \delta G_{11,11} + \frac{r^2 f}{L^2} \delta G_{rr} \right),\nonumber\\
g_{r\mu}&=& \frac{r}{L} \delta G_{r\mu} ,\nonumber\\
g_{\Omega\Omega}&=&\frac{r}{L} \left(\frac{L}{2}\right)^2
\left(1+\frac{L^2}{2r^2} \delta G_{11,11}\right) ,\nonumber\\
e^{4\Phi/3}&=&\frac{r^2}{L^2}\left(1+\frac{L^2}{r^2}\delta G_{11,11}\right).
\end{eqnarray}
Here we differ from Ref.~\cite{Hashimoto:2007ze} where the metric
fluctuations $g_{\Omega\Omega}$ on the $S^4$ have been omitted.
As one can check (Appendix \ref{sec:tendimfieldeq}), the 10-dimensional
equations for the glueball modes
are satisfied only when the fluctuation in $g_{\Omega\Omega}$ is kept.\footnote{%
In 10 dimensions, the induced fluctuations in $g_{\Omega\Omega}$ are
in fact necessary to decouple the mode L$_4$, which
in 11 dimensions corresponds
to pure $S^4$ volume fluctuations, as can be seen from
the explicit 10-dimensional calculations in Ref.~\cite{Hashimoto:1998if}.}
We shall consider in turn the lowest glueball dual to the metric fluctuations (\ref{deltaGG}), referred to as
``exotic'' because it involves $\delta G_{44}$ besides dilaton fluctuations in $\delta G_{11,11}$,
the predominantly dilatonic
glueball associated to (\ref{deltaGD}), and the tensor glueball with metric fluctuations (\ref{deltaGT}).
Inserting the respective metric fluctuations in the D8 brane action and integrating over the bulk coordinates
yields effective interaction Lagrangians which are given in full detail in Appendix \ref{sec:Lglueint}.
\subsection{Glueball decay to two pions}\label{sec:G2pi}
\begin{figure}[t]
\includegraphics[width=0.3\textwidth]{g2pi}
\caption{Leading-order glueball decay into two pions.}
\label{fig:g2pi}
\end{figure}
The effective, 3+1-dimensional interaction Lagrangian for the lowest (exotic) $0^{++}$ glueball
$G_{E}$ reads (omitting terms that vanish when $G_{E}$ is on-shell)
\begin{equation}
\mathcal L^{G_{E}\to\pi\pi}=-{\rm Tr}\,\left[\frac12 c_1 \partial _\mu\pi \partial _\nu\pi
\frac{\partial ^\mu\partial ^\nu}{M_{E}^2}G_{E}
+\frac12 \breve c_1 \partial _\mu\pi \partial ^\mu\pi\, G_{E}\right]
\end{equation}
with coupling constants $c_1$ and $\breve c_1$ defined in
(\ref{cbrevec}) and numerically given in Table \ref{tabcG}.
\begin{table}
\begin{tabular}{l|c|r}
\toprule
&vertex&value\\
\colrule
$c_1/\sqrt{2}$ & $G_{E}\partial\pi\partial\pi$ & $44.304
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$c_2/\sqrt{2}$ & $G_{E}\rho\rho$ & $5.0318\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$c_3/\sqrt{2}$ & $G_{E}\partial\rho\partial\rho$ & $49.334\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$c_4/\sqrt{2}$ & $G_{E}\rho\partial\rho$ & $-7.4810\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{+1}$ \\
$c_5/\sqrt{2}$ & $G_{E}\rho\pi\partial\pi$ & $1428.1 \; \lambda^{-1}\, N_c^{-\frac32} M_{\rm KK}^{-1}$ \\
$\breve c_1/\sqrt{2}$ & $G_{E}\partial\pi\partial\pi$ & $11.590
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$\breve c_2/\sqrt{2}$ & $G_{E}\rho\rho$ & $2.0970\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$\breve c_3/\sqrt{2}$ & $G_{E}\partial\rho\partial\rho$ & $12.814\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$\breve c_5/\sqrt{2}$ & $G_{E}\rho\pi\partial\pi$ & $359.33 \; \lambda^{-1}\, N_c^{-\frac32} M_{\rm KK}^{-1}$ \\
\colrule
$c_1^*/\sqrt{2}$ & $G_{E}^*\partial\pi\partial\pi$ & $24.64
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$c^*_2/\sqrt{2}$ & $G_{E}^*\rho\rho$ & $-0.822
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$c^*_3/\sqrt{2}$ & $G_{E}^*\partial\rho\partial\rho$ & $27.90
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$c^*_4/\sqrt{2}$ & $G_{E}^*\rho\partial\rho$ & $-1.746
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{+1}$ \\
$c^*_5/\sqrt{2}$ & $G_{E}^*\rho\pi\partial\pi$ & $858.6
\; \lambda^{-1}\, N_c^{-\frac32} M_{\rm KK}^{-1}$ \\
$\breve c^*_1/\sqrt{2}$ & $G_{E}^*\partial\pi\partial\pi$ & $4.584
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$\breve c^*_2/\sqrt{2}$ & $G_{E}^*\rho\rho$ & $-1.239
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$\breve c^*_3/\sqrt{2}$ & $G_{E}^*\partial\rho\partial\rho$ & $5.382
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$\breve c^*_5/\sqrt{2}$ & $G_{E}^*\rho\pi\partial\pi$ & $176.9
\; \lambda^{-1}\, N_c^{-\frac32} M_{\rm KK}^{-1}$ \\
\botrule
\end{tabular}
\caption{Coupling coefficients in interaction Lagrangian of lowest glueball.
(Here we give numerical results for $c_i/\sqrt{2}$
to permit a comparison with the results listed in \cite{Hashimoto:2007ze},
with which we disagree by a factor of $\sqrt{2}$ in (\ref{Hnorm}).
Taking this into account we agree with all
numerical values, with the exception of
$c_4$, which in \cite{Hashimoto:2007ze} seems to be missing
the numerical factor contained in the normalization of $H_{E}(Z)$.)
The coefficients $\breve c_{1,2,3,5}$ are coupling constants due to
$S^4$ volume fluctuations induced by the lowest glueball that were
apparently dropped in \protect\cite{Hashimoto:2007ze}.
Coefficients with a star
indicate the corresponding constants for the first excited exotic mode.}
\label{tabcG}
\end{table}
\begin{table}
\begin{tabular}{l|c|r}
\toprule
&vertex&value\\
\colrule
$d_1$ & $\tilde G\partial\pi\partial\pi$ & $17.22
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$d_2$ & $\tilde G\rho\rho$ & $4.3714 \; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$d_3$ & $\tilde G\partial\rho\partial\rho$ & $18.873\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$d_5$ & $\tilde G\rho\pi\partial\pi$ & $512.20 \; \lambda^{-1}\, N_c^{-\frac32} M_{\rm KK}^{-1}$ \\
\colrule
$d^*_1$ & $\tilde G^*\partial\pi\partial\pi$ & $11.906
\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$d^*_2$ & $\tilde G^*\rho\rho$ & $-0.9415\; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$d^*_3$ & $\tilde G^*\partial\rho\partial\rho$ & $13.680 \; \lambda^{-\frac12}\, N_c^{-1} M_{\rm KK}^{-1}$ \\
$d^*_5$ & $\tilde G^*\rho\pi\partial\pi$ & $419.46 \; \lambda^{-1}\, N_c^{-\frac32} M_{\rm KK}^{-1}$ \\
\botrule
\end{tabular}
\caption{Coupling coefficients $d_i$ ($t_i\equiv\sqrt6\,d_i$)
in the interaction Lagrangian of the lowest glueballs in the tensor multiplet (dilaton and tensor),
collectively denoted as $\tilde G$, with a star indicating the first excited mode.
(Note that there is no term
analogous to the one involving $c_4$ for the lowest (exotic) glueball.)
}
\label{tabct}
\end{table}
The corresponding result for the dilatonic scalar $0^{++}$ and the $2^{++}$ mode,
denoted $G_{D}$ and $T^{\mu\nu}$, respectively, is
\begin{eqnarray}
\mathcal L^{G_{D}\to\pi\pi}&=& \frac{1}{2}d_1 {\rm Tr}\, \partial_\mu\pi\partial_\nu\pi
\left(\eta^{\mu\nu}-\frac{\partial^\mu \partial^\nu}{M_D^2}\right)G_{D},\\
\mathcal L^{G_T\to\pi\pi}&=& \frac12 t_1
{\rm Tr}\,\partial _\mu\pi\partial _\nu\pi \, T^{\mu\nu},
\quad t_1\equiv \sqrt6\,d_1.
\end{eqnarray}
$G_{D}$ is a canonically normalized real scalar,
and $T^{\mu\nu}$ a massive tensor field with transverse traceless
polarizations, normalized such that
\begin{equation}
\mathcal L^T=\frac14 T_{\mu\nu}(\Box-M_T^2)
T^{\mu\nu}+B_\mu \partial _\nu T^{\mu\nu}+B \eta_{\mu\nu} T^{\mu\nu} +\ldots,
\end{equation}
where $B_\mu$ and $B$ are Lagrange multiplier fields.
The coefficient $d_1$ is given
in Table \ref{tabct}.
For the two scalar glueballs described by $G_{E}$ and $G_{D}$, the
decay width into two pions is given by the simple expression
\begin{equation}
\Gamma_{G_{E,D}\to\pi\pi}=\frac{|\mathbf p|}{8\pi M^2_{E,D}}|\mathcal M_{E,D}|^2\times3\times\frac12,
\end{equation}
where $\mathbf p$ is the momentum of one of the pions
in the rest frame of the glueball with $|\mathbf p|=M_{E,D}/2$, the factor of 3 comes from the sum
over the isospin quantum number,
and the factor of $\frac12$ is included because the two pions
are identical. The amplitude for the decay of $G_{E}$ and $G_{D}$
is, respectively,
\begin{eqnarray}
|\mathcal M_E|&=&|(c_1+\breve c_1) p_0 q_0-\breve c_1 \mathbf p\cdot\mathbf q|=|c_1+2\breve c_1|\frac{M^2_E}{4},\\
|\mathcal M_D|&=&|d_1 \mathbf p\cdot\mathbf q|=
|d_1|\frac{M^2_D}{4}.
\end{eqnarray}
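Here we have used the massless two-body kinematics in the glueball rest frame,
$p_0=q_0=M_{E,D}/2$ and $\mathbf p\cdot\mathbf q=-|\mathbf p|^2=-M_{E,D}^2/4$, so that, e.g.,
$(c_1+\breve c_1)p_0q_0-\breve c_1\,\mathbf p\cdot\mathbf q=(c_1+2\breve c_1)M_E^2/4$.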
For the tensor glueball an average over the polarizations
of the tensor is needed. Alternatively, we can choose a
fixed polarization $\epsilon^{11}=-\epsilon^{22}=1$ and
integrate over the orientation of our Cartesian coordinates.
This leads to the scattering amplitude (in the rest frame
of the tensor glueball)
\begin{equation}\label{MTpipi}
|\mathcal M_T|=|t_1(p_x^2-p_y^2)|,\quad |\mathbf p|=M_T/2,
\end{equation}
and the decay width
\begin{equation}\label{GTpipi}
\Gamma_{T\to\pi\pi}=\frac{|\mathbf p|}{8\pi M_T^2}\int \frac{d\Omega}{4\pi}|\mathcal M_T|^2
\times\frac32=\frac1{640\pi}
|t_1|^2 M_T^3.
\end{equation}
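The angular integral in (\ref{GTpipi}) is elementary: with the unit-vector averages
$\langle n_x^4\rangle=1/5$ and $\langle n_x^2n_y^2\rangle=1/15$ one finds
$\int\frac{d\Omega}{4\pi}(p_x^2-p_y^2)^2=\frac4{15}|\mathbf p|^4$, which together with
$|\mathbf p|=M_T/2$ yields the prefactor $1/(640\pi)$.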
Numerically we obtain\footnote{Ignoring the contribution
involving $\breve c_1$, the result for
the relative width of the scalar glueball $G$ would read $0.040$ in agreement
with the result of \cite{Hashimoto:2007ze}, because
the fact that the coefficient in $|c_1|^2$ is twice that of
\cite{Hashimoto:2007ze} is exactly compensated
by $\lambda^{-1}$ in $|c_1|^2$ being half that in \cite{Hashimoto:2007ze}.}
with $\lambda\approx 16.63$
for the scalar glueballs $G_{E}$, $G_{D}$, and $G_{D}^*$
\begin{eqnarray}
\label{GEtopipi}
\Gamma_{G_{E}\to\pi\pi}/M_{E}&=&\frac{3|c_1+2\breve c_1|^2 M_{E}^2}{512\pi}\approx
\frac{13.79}{\lambda N_c^2}\approx 0.092
\quad(M_{E}\approx 855\,{\rm MeV}),\\
\Gamma_{G_{D}\to\pi\pi}/M_D&=&\frac{3|d_1|^2 M_D^2}{512\pi}\approx
\frac{1.359}{\lambda N_c^2}\approx 0.009
\quad(M_D\approx 1487\,{\rm MeV}),\\
\label{Ds2piwidth}
\Gamma_{G_{D}^*\to\pi\pi}/M_{D^*}&=&\frac{3|d^*_1|^2 M_{D^*}^2}{512\pi}\approx
\frac{1.633}{\lambda N_c^2}\approx 0.011
\quad(M_{D^*}\approx 2358\,{\rm MeV})
\end{eqnarray}
and for the tensor
\begin{eqnarray}\label{Ttopipi}
\Gamma_{T\to\pi\pi}/M_T&=&\frac{|t_1|^2 M_T^2}{640\pi}\approx
\frac{2.174}{\lambda N_c^2}\approx0.0145
\quad(M_T\approx 1487\,{\rm MeV}),\\
\label{Tstopipi}
\Gamma_{T^*\to\pi\pi}/M_{T^*}&=&\frac{|t_1^*|^2 M_{T^*}^2}{640\pi}\approx
\frac{2.613}{\lambda N_c^2}\approx0.0175
\quad(M_{T^*}\approx 2358\,{\rm MeV}).
\end{eqnarray}
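As a numerical cross-check of (\ref{GEtopipi}), Table~\ref{tabcG} gives
$c_1+2\breve c_1=\sqrt2\,(44.304+2\times11.590)\,\lambda^{-\frac12}N_c^{-1}M_{\rm KK}^{-1}$, so that
\begin{equation}
\frac{3|c_1+2\breve c_1|^2M_{E}^2}{512\pi}
=\frac{3\times2\times(67.484)^2\times(7.30834/9)}{512\pi}\,\frac{1}{\lambda N_c^2}
\approx\frac{13.79}{\lambda N_c^2}\,.
\end{equation}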
If we replace the standard choice $\lambda\approx 16.63$ by the smaller
value 12.55 as discussed above, all these decay rates which are proportional to $\lambda^{-1}$
increase by 33\% (see Table \ref{tab2pions} for a summary).
\begin{table}
\begin{tabular}{l|r|c}
\toprule
& $M$ & $\Gamma/M$ \\
\colrule
$G_{E}\to2\pi$ & 855 & 0.092 \ldots 0.122 \\
$G_{E}^*\to2\pi$ & 2168 & 0.149 \ldots 0.197 \\
\colrule
$G_{D}\to2\pi$ & 1487 & 0.009 \ldots 0.012 \\
$G_{D}^*\to2\pi$ & 2358 & 0.011 \ldots 0.014 \\
\colrule
$T\to2\pi$ & 1487 & 0.0145 \ldots 0.0193 \\
$T^*\to2\pi$ & 2358 & 0.0175 \ldots 0.0233 \\
\botrule
\end{tabular}
\caption{Decay width of scalar and tensor glueballs into 2 (massless) pions divided by glueball mass
for $\lambda=16.63\ldots12.55$}
\label{tab2pions}
\end{table}
A somewhat anomalous feature of the lowest (exotic) scalar glueball is that its
width is much larger than that of the next-to-lowest (dilatonic) scalar glueball,
despite its rather low mass.
This appears rather unnatural if the dilatonic scalar glueball is interpreted as an
excited scalar glueball and
may be another indication that the exotic mode should be discarded altogether.
Interestingly enough, a scenario with a broad glueball around 1 GeV in combination
with a narrow glueball in the range predicted by quenched (as well as unquenched \cite{Gregory:2012hu})
lattice gauge theory has been
proposed in Ref.~\cite{Narison:1996fm,Narison:2005wc,Mennessier:2008kk,Kaminski:2009qg} on the basis of
QCD spectral sum rules. There the lighter glueball, called $\sigma_{\rm B}$, plays the role of an important
bare glueball component of the $\sigma$-meson $f_0(500)$, while a higher narrow glueball
around 1.5-1.6 GeV is required by the consistency of subtracted and unsubtracted sum rules.
The glueball state $\sigma_{\rm B}$ of Ref.~\cite{Narison:1996fm,Narison:2005wc,Kaminski:2009qg}
has a broad decay width into two pions, in fact even much broader than (\ref{GEtopipi}), which
makes us speculate that the exotic scalar glueball of the Witten-Sakai-Sugimoto model
could find a role as the holographic dual of a pure-glue component of the $\sigma$-meson, perhaps while having
to be discarded from the spectrum of the pure-glue Witten model.\footnote{This dichotomy might be
due to the fact that the flavor D8 branes of the Sakai-Sugimoto model are localized in the $x^4$
direction along which the graviton mode associated with $G_{E}$ is polarized,
whereas this extra spatial direction should play no active role in the Witten model---while
the requirement of even $x^4$-parity does not rule out the exotic mode $G_{E}$, some further projection
may be appropriate for the pure-glue case.}
This would also be in line with the fact that the gluon condensate of the Witten model, Eq.~(\ref{gluoncondensate}),
is small, close to its standard SVZ value \cite{Shifman:1978bx},
while models with only one scalar glueball field
\cite{Ellis:1984jv,Janowski:2014ppa} cannot reconcile a small gluon
condensate with narrow glueball states.
In the range 1.5-1.8 GeV, where lattice gauge theory locates the lowest scalar glueball,
there are, experimentally, two isoscalar mesons $f_0(1500)$ and $f_0(1710)$ which are
frequently and alternatingly considered as predominantly glue.
The experimental results for the decay width into two pions are
\begin{eqnarray}
&&\Gamma^{\rm (ex)}(f_0(1500)\to\pi\pi)/(1505{\rm MeV})=0.025(3),\\
&&\Gamma^{\rm (ex)}(f_0(1710)\to\pi\pi)/(1722{\rm MeV})= \left\{0.017(4) \atop 0.009(2) \right.
\end{eqnarray}
where the first result is taken from Ref.~\cite{Agashe:2014kda},
the second from
Ref.~\cite{1208.0204} using data from the BES collaboration
\cite{hep-ex/0603048} (upper entry) and the WA102 collaboration \cite{hep-ex/9907055} (lower entry), respectively.
The lowest (exotic) scalar glueball mode $G_{E}$ appears to have a much too large decay width to
be consistent with a dominantly glueball interpretation of either $f_0(1500)$ or $f_0(1710)$.
On the other hand, the dilatonic mode has a decay width
below but comparable to the data for the two glueball candidates; in the case of the WA102 data
for the $f_0(1710)$ there happens to be even complete agreement.
In order to get a more complete picture, we shall now consider also the other
couplings between glueballs and mesons as determined by the Witten-Sakai-Sugimoto model.
\subsection{Glueball decay to four and more pions}
\begin{figure}[t]
\includegraphics[width=0.7\textwidth]{g4pi}
\caption{Leading-order glueball decay into four pions, isospin indices $a\not=b$.}
\label{fig:g4pi}
\end{figure}
To leading order in $1/\alpha'$ or equivalently inverse 't Hooft coupling,
the D8 brane action (\ref{SD8F2}) does not give direct couplings of
glueballs to more than two pions. These appear only through
higher DBI corrections with terms quartic in the field strength $F_{\mu Z}$
as will be discussed further below.
Decays into more than four pions can however proceed through vertices involving
vector mesons.
The vertices coupling a single glueball to $\pi$ and/or $\rho$ mesons that are obtained
from the Yang-Mills part of the D8 brane action (\ref{SD8F2}) arise from terms of the form
(dropping derivatives and Lorentz indices)
\begin{equation}\label{GverticesLO}
G{\rm Tr}\,(\pi\pi), \quad G{\rm Tr}\,(\rho\rho), \quad G{\rm Tr}\,(\rho[\pi,\pi]), \quad G{\rm Tr}\,([\pi,\rho]^2),
\quad G{\rm Tr}\,(\rho[\rho,\rho]),\quad G{\rm Tr}\,([\rho,\rho]^2).
\end{equation}
Only the first three couplings are relevant for the decay of a glueball to $\le4$ pions.
The corresponding interaction Lagrangians for the exotic and the dilatonic scalar glueball
are given explicitly in Appendix \ref{sec:Lglueint} with the coupling constants for the lowest glueball states
listed in Table \ref{tabcG} and \ref{tabct}.
The relative width of the decay of a glueball to two pions was found above to be
$\Gamma_{G\to\pi\pi}/M \propto \lambda^{-1}N_c^{-2}$, parametrically suppressed
by a factor $1/N_c$ compared to the decay of the $\rho$ meson.
For glueballs with mass larger than $2m_\rho$, the decay into two $\rho$ mesons
is of the same parametric order. However, both the lowest exotic glueball and the lowest dilatonic
glueball have mass below the $2\rho$ threshold. In this case at least one $\rho$ meson
has to be off-shell, which leads to an additional suppression by a factor $\Gamma_\rho/m_\rho\propto \lambda^{-1}N_c^{-1}$.
Because the vertex coupling a single $\rho$ meson to two pions involves ${\rm Tr}\,(\rho[\pi,\pi])$,
the leading-order decay into four pions produces pairs of pions with different isospin index.
(The parametrically suppressed
decays $G\to2G\to4\pi^0$ and $G\to G+2\pi^0\to4\pi^0$, which only need the leading Yang-Mills part
of the DBI action, will be discussed together with the direct
decay $G\to4\pi^0$ from higher-order DBI corrections further below.)
\subsubsection{Leading-order decay rate of scalar glueballs to four pions involving $\pi^\pm$}
The Feynman diagrams for the amplitude of the decay $G_{E,D}\to 2\pi^a+2\pi^b$ with $a\not=b$
are shown in Fig.~\ref{fig:g4pi}.
Some details of the rather lengthy calculation of the decay rate are given in Appendix \ref{sec:phint}.
Because one internal $\rho$ meson can reach its mass shell,
where it has a nonnegligible width, we include (following Ref.~\cite{Hashimoto:2007ze})
$\Gamma_\rho$ in the $\rho$ meson propagator according to
$\Delta_\rho(r)=1/(r_0^2-\mathbf r^2-m_\rho^2+im_\rho \Gamma_\rho)$ with $\Gamma_\rho$ given by
(\ref{Gammarho}). This corresponds to a partial summation of higher-order terms in
inverse powers of $\lambda N_c$. As a crosscheck of our calculations,
we have verified that in the limit $\lambda N_c\to\infty$
the resulting decay rate agrees with the rate for $G_{E,D}\to \rho\pi\pi$,
and in the case of glueballs above the $2\rho$ threshold, with $G\to2\rho$ (Appendix \ref{sec:Grhopipi}).
Because $m_\rho<M_{E,D}<2 m_\rho$, the leading parametric order of the decay width of $G_{E}$ and $G_{D}$ into four pions
is given by the process $G\to\rho\pi\pi$ and reads $\lambda^{-2}N_c^{-3}$. Decays through off-shell
$\rho$ mesons contribute terms of order $\lambda^{-3}N_c^{-4}$.
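This power counting can be read off directly from the couplings obtained above:
the $G\rho\pi\pi$ vertices scale as $c_5,d_5\propto\lambda^{-1}N_c^{-\frac32}$
(Tables \ref{tabcG} and \ref{tabct}), giving $\Gamma_{G\to\rho\pi\pi}\propto\lambda^{-2}N_c^{-3}$,
while each off-shell $\rho$ converting to two pions contributes a further factor
$g_{\rho\pi\pi}^2\propto\lambda^{-1}N_c^{-1}$.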
For $G_{E}$, which is only 10\% heavier
than a $\rho$ meson, the contribution from one on-shell $\rho$ meson is strongly suppressed by phase space, but the finite
width of the $\rho$ meson helps to increase the rate. For $\lambda\approx 16.63$ we
find\footnote{%
Omitting the contributions from the interaction
terms involving the $\breve c$ coefficients as in \cite{Hashimoto:2007ze}
would give the even lower value $5.1\times 10^{-5}$.
In contrast to the decay into 2 pions, in the 4-pion decay rates the factors $\sqrt2$
in the coupling constant $g_{\rm YM}$ and in the normalization of the lowest scalar glueball
by which we differ from Ref.\ \cite{Hashimoto:2007ze}
no longer cancel. However, even when using exactly the couplings of Ref.~\cite{Hashimoto:2007ze}
we have not been able to reproduce the numerical
result $2.2\times 10^{-5}$
given in Eq.~(3.26) of \cite{Hashimoto:2007ze}.
}
\begin{equation}
\Gamma_{G_{E}\to4\pi}/M_E \approx 1.33\times 10^{-4} \quad (\lambda\approx 16.63).
\end{equation}
For the heavier dilatonic glueball $G_{D}$, the process $G\to\rho\pi\pi$ is more dominant, leading to a significantly
larger relative width
\begin{equation}
\Gamma_{G_{D}\to4\pi}/M_D \approx 2.44\times 10^{-3} \quad (\lambda\approx 16.63).
\end{equation}
Evidently, the $4\pi$ decay of the lowest holographic glueball state, be it $G_{E}$ or $G_{D}$, is
strongly suppressed. Table \ref{tab4pi} summarizes these results and also shows
them for smaller $\lambda=12.55$.
(In Section \ref{sec:extrapol} we shall consider the extrapolation of
these lowest states to the higher masses of experimental glueball candidates in the range
predicted by lattice gauge theory.)
\subsubsection{Decay of excited scalar glueballs to two vector mesons}
For the excited dilatonic glueball with mass $M_{D^*}\approx 2358.4$~MeV, which
is above the $2\rho$ threshold,
a similar calculation, but with coefficients $d^*_i$ in place of $d_i$ (see Table \ref{tabct}),
gives
\begin{equation}\label{Ds4pi}
\Gamma_{G_{D}^*\to4\pi}/M_{D^*} \approx 0.104 \quad (\lambda\approx 16.63).
\end{equation}
This result, which involves resummed $\rho$ propagators, is in fact well approximated by
the decay rate to two on-shell $\rho$ mesons:
\begin{equation}\label{Ds2rho}
\Gamma_{G_{D}^*\to2\rho}/M_{D^*} \approx \frac{14.330}{\lambda N_c^2} \approx 0.096 \quad (\lambda\approx 16.63),
\end{equation}
which corresponds to the strictly leading-order part of (\ref{Ds4pi}) as explained in Appendix \ref{sec:Grhopipi}.
The result (\ref{Ds2rho}), divided by its isospin factor of 3, also gives the decay into two isosinglet
vector mesons $\omega$, whose mass is only 1\% higher than that of the $\rho$ meson.
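(Numerically, $0.096/3\approx 0.032$, the value listed for $G_{D}^*\to2\omega$ in Table \ref{tabexcsc}.)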
Since the decay width into two pions given in (\ref{Ds2piwidth}) is much smaller than the width into two vector mesons,
the excited dilatonic glueball turns out to decay predominantly into four pions and six pions.
The excited exotic scalar (if we do not discard this mode altogether) is instead dominated by the decay into two pions, which makes this state extremely broad. Calculating also the decay into two vector mesons, we find that the decay into two $\rho$ mesons
accounts for only about a third of the total decay into four pions,
\begin{equation}\label{Es2rho}
\Gamma_{G_{E}^*\to2\rho}/M_{E^*} \approx \frac{2.078}{\lambda N_c^2} \approx 0.014 \quad (\lambda\approx 16.63).
\end{equation}
This means that the decay into four pions comes largely from the
$G_{E}^*\rho\pi\pi$ vertex.
In Table \ref{tabexcsc} the results for the decay widths of the excited exotic and dilatonic scalar glueballs
in the Witten-Sakai-Sugimoto model are summarized.
While the excited dilatonic scalar glueball has a
more moderate decay width compared to the very broad excited exotic scalar, it turns out to be
still quite large, around 500 MeV.
\begin{table}
\begin{tabular}{l|r|c}
\toprule
& $M$ & $\Gamma/M$ \\
\colrule
$G_{E}\to4\pi$ & 855 & $1.3\times10^{-4}$ \ldots $3.0\times10^{-4}$ \\
\colrule
$G_{D}\to4\pi$ & 1487 & $2.4\times10^{-3}$ \ldots $3.9\times10^{-3}$ \\
$G_{D}\to4\pi^0$ (NLO-DBI) & 1487 & $4.0\times10^{-6}$ \ldots $2.9\times10^{-5}$\\
$G_{D}\to G_{E}+2\pi^0\to4\pi^0$ & 1487 & $2.6\times 10^{-6}$ \ldots $4.5\times 10^{-6}$\\
$G_{D}\to G_{D}+2\pi^0\to4\pi^0$ & 1487 & $1.9\times 10^{-9}$ \ldots $4.5\times 10^{-9}$\\
\botrule
\end{tabular}
\caption{Decay widths of lowest exotic and lowest dilatonic scalar glueballs into four (massless) pions divided by glueball mass for $\lambda=16.63\ldots12.55$.}
\label{tab4pi}
\end{table}
\begin{table}
\begin{tabular}{l|r|c}
\toprule
& $M$ & $\Gamma/M$ \\
\colrule
$G_{E}^*\to\{2\pi,2K,2\eta\}$ & 2168 & 0.397\ldots0.526\\
$G_{E}^*\to4\pi$ & 2168 & 0.037\ldots0.061 \\
$G_{E}^*\to2\omega\to6\pi$ & 2168 & 0.005\ldots0.006 \\
$G_{E}^*\to2\phi$ & 2168 & 0.005\ldots0.006 \\
$G_{E}^*$ (total) & 2168 & 0.443\ldots0.599\\
\colrule
$G_{D}^*\to4\pi$ & 2358 & 0.104\ldots0.142 \\
$G_{D}^*\to2\omega\to6\pi$ & 2358 & 0.032\ldots0.043 \\
$G_{D}^*\to2\phi$ & 2358 & 0.032\ldots0.043 \\
$G_{D}^*\to\{2\pi,2K,2\eta\}$ & 2358 & 0.029\ldots0.039\\
$G_{D}^*$ (total) & 2358 & 0.197\ldots0.267\\
\botrule
\end{tabular}
\caption{Decay widths of excited scalar glueballs divided by glueball mass for $\lambda=16.63\ldots12.55$ (chiral limit, with a ratio 3:4:1 for the combined decay into $2\pi,2K,2\eta$).}
\label{tabexcsc}
\end{table}
\subsubsection{Scalar glueball decay to four $\pi^0$}
The glueball decays into four pions
that we have considered above involve pairs of pions with different isospin index.
A decay to four $\pi^0$ is suppressed by powers of inverse 't Hooft coupling, because it
either has to come from higher-order contributions in the DBI action of the D8 branes (Fig.~\ref{fig:g4pi0})
or has to involve glueball self-interactions and virtual glueballs (Fig.~\ref{fig:gg4pi0}).
As shown in Appendix \ref{sec:Lglueint}, the parametric order of the vertex
formed by a single glueball and four $\pi^0$ turns out to be $\lambda^{-7/2}N_c^{-2}$,
whereas the amplitude for $G\to2G\to4\pi^0$ and $G\to G+2\pi^0\to4\pi^0$
is proportional to $\lambda^{-3/2}N_c^{-3}$.
The former thus has stronger suppression in inverse powers of $\lambda$, while
the latter is more strongly suppressed with respect to inverse powers of $N_c$.
For simplicity, we only consider the
dilatonic glueball, since the exotic glueball has a much more complicated interaction
Lagrangian. In Appendix \ref{sec:LD4pi0} the interaction Lagrangian for a dilatonic glueball
with four $\pi^0$ resulting from the next-to-leading terms of the DBI action
has been obtained, and in Appendix \ref{sec:LDD2pi0} the vertex for $G_{D}\to G_{D,E}+2\pi^0$.
Numerically evaluating the respective decay rates of the dilatonic glueball
shows that at finite 't Hooft coupling and $N_c=3$ the dominant decay process
comes from the direct coupling of $G_{D}$ to four $\pi^0$. For $\lambda\approx 16.63$
we find (see Appendix \ref{sec:GDto4pi0} for details)
\begin{equation}\label{GammaDto4pi0}
\Gamma^{\rm (NLO-DBI)}_{G_D\to4\pi^0}/M_D\approx 4.02\times 10^{-6} \quad (\lambda\approx 16.63).
\end{equation}
The decay through virtual glueballs, while not as strongly suppressed by inverse powers
of $\lambda$, is subleading at large $N_c$ and is disfavored by phase space.
To check whether it might nevertheless be important at $N_c=3$ and our range
of 't Hooft coupling, we have evaluated the first diagram in Fig.~\ref{fig:gg4pi0}
involving one virtual glueball and found that its contribution is smaller than (\ref{GammaDto4pi0})
by
several orders of magnitude (see Table \ref{tab4pi}),
\begin{equation}
\Gamma^{\rm (LO-DBI)}_{G_{D}\to G_{D}+2\pi^0\to4\pi^0}/M_D\approx 1.94\times 10^{-9} \quad (\lambda\approx 16.63).
\end{equation}
If we do not discard the exotic glueball as a physical state
(for instance if we were to interpret the latter as holographic dual of
a glueball component of the $\sigma$-meson, as speculated at the end of Section~\ref{sec:G2pi}),
we should also
consider the process $G_{D}\to G_{E}+2\pi^0$ which is less suppressed kinematically
(but still by $N_c^{-1}$). This would be of similar magnitude as
the result (\ref{GammaDto4pi0}):\footnote{By contrast, in the scenario of Ref.~\cite{Narison:1996fm}, where the $\sigma$-meson has
a large glue contribution,
the heavier glueball is claimed to have important $4\pi^0$ decays.}
\begin{equation}\label{GDGE00}
\Gamma^{\rm (LO-DBI)}_{G_{D}\to G_{E}+2\pi^0\to4\pi^0}/M_D\approx 2.56\times 10^{-6}
\quad (\lambda\approx 16.63).
\end{equation}
(As shown in Table \ref{tab4pi}, at smaller $\lambda$
this contribution is less important compared to
the next-to-leading DBI contribution (\ref{GammaDto4pi0}).)
\begin{figure}[t]
\centerline{\includegraphics[width=0.3\textwidth]{g4pi0}}
\caption{Glueball decay into four $\pi^0$ through a vertex from the next-to-leading
order terms of the DBI action.}
\label{fig:g4pi0}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[height=0.23\textwidth]{gg4pi0}\includegraphics[height=0.23\textwidth]{ggg4pi0}}
\centerline{\hfil (a) \hfil\hfil (b) \hfil}
\caption{Glueball decay into four $\pi^0$
(a) through terms in the Yang-Mills part of the DBI action
that are quadratic in the glueball mode;
(b) through a pair of virtual glueballs.}
\label{fig:gg4pi0}
\end{figure}
Decays into four $\pi^0$ have been seen for the glueball candidate $f_0(1500)$
at a level of about an order of magnitude below the general $4\pi$ decay \cite{Agashe:2014kda},
whereas no such data seem to be available for $f_0(1710)$.
The smallness of the holographic result (\ref{GammaDto4pi0}) however would correspond to a much stronger
suppression than the one observed experimentally for
$f_0(1500)$.
\subsubsection{Tensor glueball decay to two vector mesons}\label{sec:exctensordecay}
Unless the mass of the lowest tensor glueball is manually adjusted (as we shall
consider doing in Section \ref{sec:extrapol}), only the excited tensor glueball
of the Witten model with mass $M_{T^*}=M_{D^*}\approx 2358.4$~MeV can decay into two $\rho$ or two $\omega$ mesons.
The decay rate involves two sums over the polarizations of the two vector mesons.
The average over the polarization of the tensor can again be performed by choosing
the particular polarization $\epsilon^{11}=-\epsilon^{22}=1$ and averaging over spatial directions. The rate for two vector mesons with fixed isospin quantum number reads
\begin{equation}\label{Tstorhorho}
\Gamma=\frac{1}{16\pi M_{T^*}} \sqrt{(M_{T^*}/2)^2-m_\rho^2}
\int\frac{d\Omega}{4\pi}\sum_{\lambda_1,\lambda_2=1}^3 |\mathcal M_{\epsilon_T}(\lambda_1,\lambda_2)|^2,
\end{equation}
where $\lambda_{1,2}$ are labels for the polarizations of the two vector mesons and
$\epsilon_T$ refers to the specific tensor polarization.
The amplitude $\mathcal M_{\epsilon_T}(\lambda_1,\lambda_2)$ and
the final result of the summations and the integration are given in Appendix \ref{sec:MT2rho}.
With coupling constants $t_i^*\equiv\sqrt6 d_i^*$ and $d_i^*$ from Table
\ref{tabct}, the result for the decay into two $\rho$ mesons is
\begin{equation}\label{Tsrhorho}
\frac{\Gamma_{T^*\to2\rho\to4\pi}}{M_{T^*}}\approx\frac{21.236}{\lambda N_c^2}\approx 0.142\quad (\lambda\approx 16.63),
\end{equation}
and $1/3$ of that result for the decay $T^*\to2\omega\to6\pi$.
This should be compared to the decay rate into two pions, Eq.~(\ref{Tstopipi}), which
is less than $1/8$ of
(\ref{Tsrhorho}).
As we shall discuss below, a similar pattern arises when the lowest tensor
glueball is extrapolated in mass such that it is above the $2\rho$ threshold.
\section{Extrapolations and comparison with experimental data}\label{sec:extrapol}
When comparing our results for decay rates with experiment, it seems reasonable to
do so with the dimensionless ratio $\Gamma/M$ when extrapolating the mass $M$
of the holographic glueball to the mass of the experimental glueball candidates $f_0(1500)$
or $f_0(1710)$. In the case of decay into two massless pions, Eqs.~(\ref{GEtopipi})--(\ref{Tstopipi}),
this ratio involves two explicit powers of the glueball mass $M$ that cancel the inverse mass scale squared
coming from the normalization of the glueball field, Eq.~(\ref{Hnorm}) or (\ref{HDTnorm}).
When extrapolating to higher glueball masses, we thus assume that the normalization of the glueball field
scales according to the glueball mass. While this keeps $\Gamma/M$ for two-pion decays unchanged,
the decay rates into two vector mesons or four pions are modified and
depend in fact strongly on whether the glueball mass is above or below the $2\rho$ threshold.
\subsection{Extrapolations for the scalar glueball candidates $f_0(1500)$
and $f_0(1710)$}
The results of such an extrapolation to the experimental masses of the isoscalar
mesons $f_0(1500)$
or $f_0(1710)$ are given
in Table \ref{tab4piexp}, where the holographic results of the
(chiral) Witten-Sakai-Sugimoto model for the lowest (``exotic'') and the dilatonic $0^{++}$
glueball are
compared to the experimental results for the total and the partial decay widths.
Here we have generalized our results to $N_f=3$ and assumed that pions, kaons and $\eta$ mesons
appear in ratios $3:4:1$, respecting SU(3) flavor symmetry.
Explicit masses for quarks would require a modification of the Sakai-Sugimoto model, for example
along the lines of Ref.~\cite{Aharony:2008an,Hashimoto:2008sr,McNees:2008km}, which we intend to study in future work.
This will necessarily modify the coupling of scalar glueballs through contributions that depend on
the mass of the pseudo-Goldstone bosons, and this may either increase or decrease
the decay amplitudes into the heavier pseudo-Goldstone bosons.
A significant enhancement would be in line with the so-called chiral suppression of scalar glueball decays
that is suggested by the lattice results of
Ref.~\cite{Sexton:1995kd} and the analysis of Ref.~\cite{Chanowitz:2005du}.
(In the dilaton
effective theory of Ref.~\cite{Ellis:1984jv} also an increase of
the amplitude for the decay into a pair of heavier pseudo-Goldstone bosons was found, however
such that it is approximately canceled
by the kinematical suppression from the phase space integral.)
When comparing the extrapolated decay rates of the holographic glueballs with those
of the isoscalar mesons $f_0(1500)$ or $f_0(1710)$ we find that the lowest (exotic) glueball
is much too broad to be identified as their dominant glueball component.
The dilatonic glueball, however, is sufficiently
narrow for this purpose. It leads to a total decay width that is quite close to the experimental
width of $f_0(1710)$, while falling somewhat further below that of $f_0(1500)$.
With mass equal to that of $f_0(1500)$, the dilatonic glueball has significantly smaller width in
$2\pi$ decays, and still smaller for decays into four pions, which is the dominant decay mode
of the $f_0(1500)$.
Regarding the $f_0(1710)$, the decay into $2\pi$ is found to
be nicely comparable to the experimental value,
while the stronger rate into pairs of heavier pseudo-Goldstone bosons
remains unaccounted for with our assumption of SU(3) invariance.
A significant enhancement of decays into kaons and $\eta$ mesons may however be
brought about by mass terms for the latter which inevitably will give additional contributions
to the coupling with scalar glueballs.
If our extrapolation of the decay width into $4\pi$ can be trusted, the latter now appears
uncomfortably large, considering that the decay of $f_0(1710)$ into $4\pi$ has not been observed. It should be
noted, however, that the experimental data for the branching ratios of the $f_0(1710)$ still have
large uncertainties and are not covered by the Particle Data Group \cite{Agashe:2014kda}. The quoted results are
from Refs.~\cite{1208.0204,Janowski:2014ppa}, which assume that decays into $\pi\pi$, $\eta\eta$, and $K \bar K$
add up to the total width with negligible contribution from $4\pi$ decays.
Our extrapolations also predict decays into two $\omega$ mesons at a nonnegligible level.
According to \cite{Agashe:2014kda}, decays of $f_0(1710)$ to two $\omega$ mesons have at least been seen.
The Witten-Sakai-Sugimoto model for a (pure) glueball candidate suggests that the rate into four pions
should be about twice as large.
\begin{table}
\begin{tabular}{l|r|r||r|r}
\toprule
decay & $M^{\rm exp}$ & $\Gamma/M$ (exp.) & $\Gamma/M[G_{E}({M^{\rm exp}})]$ & $\Gamma/M[G_{D}({M^{\rm exp}})]$ \\
\colrule
$f_0(1500)$ (total) & 1505 & 0.072(5) & 0.249\ldots0.332 & 0.027\ldots0.037 \\
$f_0(1500)\to4\pi$ & 1505 & 0.036(3) & \it 0.003\ldots0.006 & 0.003\ldots 0.005 \\
$f_0(1500)\to2\pi$ & 1505 & 0.025(2) & 0.092\ldots0.122 & 0.009\ldots0.012\\
$f_0(1500)\to 2K$ & 1505& 0.006(1) & 0.123\ldots0.163 & 0.012\ldots0.016 \\
$f_0(1500)\to 2\eta$ & 1505& 0.004(1) & 0.031\ldots0.041 & 0.003\ldots0.004 \\
\colrule
$f_0(1710)$ (total) & 1722 & 0.078(4) & 0.252\ldots0.336 & \it 0.059\ldots0.076 \\[4pt]
$f_0(1710)\to 2K$ & 1722& * $\left\{ 0.041(20) \atop 0.047(17) \right.$& 0.123\ldots0.163 & 0.012\ldots0.016 \\[12pt]
$f_0(1710)\to 2\eta$ & 1722& * $\left\{0.020(10) \atop 0.022(11) \right.$ & 0.031\ldots0.041 & 0.003\ldots0.004 \\[12pt]
$f_0(1710)\to2\pi$ & 1722 & * $\left\{0.017(4) \atop 0.009(2) \right.$ & 0.092\ldots0.122 & 0.009\ldots0.012 \\[8pt]
$f_0(1710)\to4\pi$ & 1722 & ? & \it 0.006\ldots 0.010 & \it 0.024\ldots 0.030 \\
$f_0(1710)\to2\omega\to6\pi$ & 1722 & seen & \it 0.00016\ldots0.00021 & \it 0.011\ldots 0.014 \\
\botrule
\end{tabular}
\caption{Experimental data for the decay rates of the isoscalar mesons $f_0(1500)$ and $f_0(1710)$
juxtaposed to
the holographic results for the various decay channels
of the lowest (exotic) glueball ($G_E$) and predominantly dilatonic glueball ($G_D$)
with mass $M_{E,D}$ artificially raised to the respective experimental values
(still in the chiral limit, i.e.\ with massless pions, kaons, and $\eta$) and 't~Hooft coupling
varied from 16.63 to 12.55. Experimental data are from Ref.~\cite{Agashe:2014kda}\ except for those marked by a star, which
are from Ref.~\cite{1208.0204} where the total width of $f_0(1710)$ was split under the assumption of
a negligible branching ratio to four pions,
using data from BES \cite{hep-ex/0603048} (upper entry) and WA102 \cite{hep-ex/9907055} (lower entry),
respectively.
(Holographic predictions that are substantially increased due to the manually adjusted
glueball mass are rendered in italics.)
}
\label{tab4piexp}
\end{table}
In this context it is worth mentioning that there are still many open questions
surrounding the nature of $f_0(1710)$ \cite{Klempt:2007cp}. For example, some authors have argued
that the nearby resonance \cite{Ablikim:2004wn} $f_0(1790)$, which is not yet covered by the Particle
Data Group, should be combined with $f_0(1710)$ into one object $f_0(1760)$, for which
Ref.~\cite{Anisovich:2002ij} was able to fit disparate decay patterns, with and without
significant decay into four pions.
\subsection{Extrapolations for the tensor glueball}
In the Witten model, with $M_{\rm KK}=949$~MeV, the mass of the tensor glueball equals the mass of
the dilatonic scalar glueball, and the tensor glueball has roughly similar decay rates into two and four pions.
The rate into two pions practically exhausts the decays into pions, and
has been calculated above in Eq.~(\ref{Ttopipi}). The lowest tensor glueball
thus turns out to be a rather narrow state; this, however, is due to the fact that
it stays below the $2\rho$ threshold.
Indeed, the situation is markedly different for the excited tensor glueball $T^*$.
Its mass equals that of the excited dilatonic glueball, and
because this is above the threshold for two $\rho$ mesons, there is a significant contribution to
four-pion decays, and also from other vector meson decays, as we have seen in Section \ref{sec:exctensordecay}.
Extrapolating the couplings of the lowest tensor glueball to a similarly high mass,
2 or 2.4~GeV (where the latter is roughly the prediction of lattice gauge theory for the lowest tensor mode), likewise
gives large contributions from decays into two vector mesons, as listed
in Table \ref{tabtensorextrapolmpi}. Reassuringly, these results
are quite close to the unmodified results for $T^*$, cf.~Eq.~(\ref{Tsrhorho}), so that we consider them
as plausible extrapolations to the likely situation of a tensor glueball with mass above 2 GeV.
\begin{table}
\begin{tabular}{l|r|r}
\toprule
decay & M & $\Gamma/M[T(M)]$ \\
\colrule
$T\to 2\pi$ & 1487 & 0.013\ldots0.018 \\
$T\to 2K$ & 1487 & 0.004\ldots0.006 \\
$T\to 2\eta$ & 1487 & 0.0005\ldots0.0007 \\
$T$ (total) & 1487 & $\approx 0.02\ldots0.03$\\
\colrule
$T\to 2\rho\to 4\pi$ & 2000 & 0.135\ldots 0.178 \\ %
$T\to 2\omega\to 6\pi$ & 2000 & 0.045\ldots 0.059 \\
$T\to 2\pi$ & 2000 & 0.014\ldots0.018 \\
$T\to 2K$ & 2000 & 0.010\ldots0.013 \\
$T\to 2\eta$ & 2000 & 0.0018\ldots0.0024 \\
$T$ (total) & 2000 & $\approx 0.16\ldots0.21$\\
\colrule
$T\to 2\rho\to 4\pi$ & 2400 & 0.159\ldots 0.211 \\ %
$T\to 2\omega\to 6\pi$ & 2400 & 0.053\ldots 0.070 \\
$T\to 2\phi$ & 2400 & 0.053\ldots 0.070 \\
$T\to 2\pi$ & 2400 & 0.014\ldots0.019 \\
$T\to 2K$ & 2400 & 0.012\ldots0.016 \\
$T\to 2\eta$ & 2400 & 0.0025\ldots0.0034 \\
$T$ (total) & 2400 & $\approx 0.29\ldots0.39$\\
\botrule
\end{tabular}
\caption{Extrapolation of tensor glueball decay for the case of massive pseudo-Goldstone bosons with glueball mass
$M=M_T=M_D$ and when the latter is raised to 2 GeV or
the lattice prediction $\sim 2.4 \,{\rm GeV}$.
The 't~Hooft coupling is again
varied from 16.63 to 12.55.
}
\label{tabtensorextrapolmpi}
\end{table}
In Table \ref{tabtensorextrapolmpi} we have also extrapolated to decays into kaons and $\eta$ mesons.
In the holographic setup,
a tensor glueball presumably does not couple to an explicit mass term of the pseudoscalar mesons,
so the effect of the latter should be purely kinematic. The results (\ref{MTpipi}) and
(\ref{GTpipi}) imply a pseudoscalar mass dependence of the form $(1-4m_\pi^2/M_T^2)^{5/2}$.
This suppression is such that it overcompensates the ratio 4/3 that favors kaons over pions.
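For orientation, a direct numerical check (using $m_\pi\approx 140$~MeV and $m_K\approx 496$~MeV as inputs): at $M_T=1487$~MeV the kinematic factor $(1-4m^2/M_T^2)^{5/2}$ evaluates to about $0.91$ for pions but only about $0.23$ for kaons, so that $\Gamma(2K)/\Gamma(2\pi)\approx \tfrac{4}{3}\times 0.23/0.91\approx 0.33$, consistent with the corresponding entries of Table \ref{tabtensorextrapolmpi}.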
Adding up the individual contributions, we find a rather broad width for a 2.4 GeV tensor glueball, 700 to 900 MeV,
which is significantly broader than any of the $f_2$ mesons listed in \cite{Agashe:2014kda}. With a mass of 2 GeV, the relative width
turns out to be comparable with that of the tensor meson $f_2(1950)$, which has $\Gamma/M= 0.24(1)$.
The latter is indeed occasionally discussed as a candidate for a tensor glueball as it
appears to have largely flavor-blind decay modes.
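(To spell out the arithmetic behind this comparison: Table \ref{tabtensorextrapolmpi} gives $\Gamma/M\approx 0.16\ldots0.21$ at $M=2$~GeV, i.e.\ $\Gamma\approx 320\ldots420$~MeV, indeed close to the relative width $0.24(1)$ of $f_2(1950)$, whereas at $M=2.4$~GeV the total $\Gamma/M\approx 0.29\ldots0.39$ corresponds to the roughly 700 to 900~MeV quoted above.)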
\section{Conclusion}
Using the Witten-Sakai-Sugimoto model for holographic QCD, which
only has one free dimensionless parameter,
we have repeated and extended
the calculation of glueball decay rates of Ref.~\cite{Hashimoto:2007ze},
where only the lowest scalar mode was studied.
This lowest mode is associated with an exotic polarization of
the gravitational field, involving components in the direction of
compactification from a 5-dimensional super-Yang-Mills theory down to
nonsupersymmetric Yang-Mills theory.
The mass of this lowest (``exotic'') $0^{++}$ mode turns out to be only
slightly above the mass of the $\rho$ meson and is therefore
much smaller than the mass scale of glueballs found in lattice gauge theory.
The background of the Witten model also contains another
tower of scalar glueball modes which are predominantly dilatonic
and whose lowest mass is about 1.5 GeV, not far from the predictions
of lattice simulations.
Besides its very low mass,
the lowest (exotic) scalar glueball turns out to have a decay rate that is significantly higher
than that of the heavier dilatonic mode, which seems counterintuitive
if the latter were to represent an excitation of the former.
We are therefore led to the conjecture that the exotic scalar mode should be
discarded so that the glueball spectrum begins with the
(predominantly) dilatonic mode as lowest glueball.
Another, more speculative possibility that we have mentioned in
Section \ref{sec:G2pi} is that the exotic scalar mode
represents a broad glueball component of the $\sigma$-meson in line
with the scenario of Ref.~\cite{Narison:1996fm,Narison:2005wc,Kaminski:2009qg},
which features a broad glueball around 1 GeV and a narrower one around 1.5 GeV.
The decay widths of glueballs obtained in the Witten-Sakai-Sugimoto model
are parametrically suppressed by a factor of $\lambda^{-1}N_c^{-2}$, but
the numerical results vary substantially for the different modes and decay channels,
and thus do not give a picture of ``universal narrowness'' despite
the large-$N_c$ nature of the Witten-Sakai-Sugimoto model.
A very strong parametric suppression is obtained for the decay into $4\pi^0$,
as already pointed out in Ref.~\cite{Hashimoto:2007ze}. We have confirmed
that also the final numerical value turns out to be very small.
A noteworthy feature of the Witten-Sakai-Sugimoto model is that
the value of the gluon condensate is small, close to its standard SVZ value \cite{Shifman:1978bx},
whereas phenomenological models which incorporate a scalar glueball through
a QCD dilaton field \cite{Ellis:1984jv,Janowski:2014ppa} would require
very large gluon condensates to admit only narrow glueball states.
We have also extrapolated our results so that they can be
compared with experimental data for the scalar glueball candidates
$f_0(1500)$ or $f_0(1710)$. In the case of $f_0(1500)$,
our results for the decay widths of the dilatonic glueball are significantly
below the observed rates for decay into two pions and even more so for the
experimentally dominant decay into four pions.
In the case of the $f_0(1710)$ meson, the decay rate into two pions
comes out in nice agreement with available experimental data. The much stronger rate into kaons
is not accounted for, but this may be due to the fact that the Witten-Sakai-Sugimoto model
is strictly chiral and therefore cannot capture the mechanism of chiral suppression \cite{Sexton:1995kd,Chanowitz:2005du}.
However our (crude) extrapolation to the mass of $f_0(1710)$
predicts also a significant branching ratio into four pions that
has not been seen experimentally. [Although in this context it should be noted that
the identification of
$f_0(1710)$ and its separation from the nearby $f_0(1790)$ \cite{Ablikim:2004wn} has been a matter of debate \cite{Anisovich:2002ij,Klempt:2007cp}].
Furthermore, we have studied the decay of tensor glueballs, which
in the Witten-Sakai-Sugimoto model have a narrow width into two pions
and (when the mass is above the $2\rho$ threshold) a larger width into
four pions, such that only the isoscalar tensor meson $f_2(1950)$
appears to be compatible with our holographic result, while heavier
tensor glueball modes would have to be broader than the tensor mesons
so far discussed in the literature.
In the case of the tensor glueball we can already plausibly anticipate
the effects of nonzero pseudo-Goldstone masses. In the case of scalar glueballs
the situation is less clear and we intend to study this issue in extensions
of the Witten-Sakai-Sugimoto model in
a future work. This would be particularly interesting in view of the glueball
candidate $f_0(1710)$ which according to Ref.~\cite{Janowski:2014ppa}
could be a nearly unmixed glueball and which has a ratio \cite{Agashe:2014kda}\
$\Gamma(\pi\pi)/\Gamma(KK)$
that is significantly below the flavor-symmetric value $3/4$.
Since the holographic results pertain only to pure glueballs,
it would clearly be most interesting to study mixing of glueballs with
$q\bar q$ states as this can strongly obscure signatures of glueball
content.
In the holographic setup, mixing is suppressed by $1/N_c$ \cite{Hashimoto:2007ze}
and would presumably require more difficult stringy corrections
that are not captured by the effective Lagrangian following from
the Witten-Sakai-Sugimoto model.
Absent those, it might be interesting to consider a more phenomenological approach such as extended
linear sigma models \cite{Janowski:2014ppa}, where holographic results
for the glueball-meson interactions could be used as input instead of fitting to experimental data.
\begin{acknowledgments}
We would like to thank
Koji Hashimoto, Chung-I Tan, and Seiji Terashima for correspondence and
David Bugg, Francesco Giacosa, Stanislaus Janowski, and Dirk Rischke for useful discussions.
This work was supported by the Austrian Science
Fund FWF, project no. P26366, and the FWF doctoral program
Particles \& Interactions, project no. W1252.
\end{acknowledgments}
\subsection{Acknowledgements}
I am very thankful to Ivan Fesenko and Matthew Morrow for many valuable
discussions, especially on an ad\`{e}le interpretation. I thank the Research
Group of Prof. Marc Levine for the stimulating scientific environment. I
heartily thank the anonymous referee for greatly improving the presentation,
especially in \S \ref{section_CubeComplex}, and observing the fact $H^{2}=0$
in eq. \ref{lBTA_35}, which clarifies a crucial cancellation in the proof of
Prop. \ref{BT_KeyLemmaFormula}.
\subsection{What is not here}
In the present text I only discuss the `linear algebra setting' of Tate's
central extension (\cite[\S 1]{BFM_Conformal} for the case $n=1$). There is
also a `differential operator setting' (\cite[\S 2]{BFM_Conformal}), which I
will treat in a future text. Roughly speaking, $\mathfrak{G}$ will be replaced
by much smaller algebras of differential operators on a vector bundle.
Moreover, I do not treat the true multiloop analogue of an affine Kac-Moody
algebra in the present text. Already for $n=1$ I only consider the `plain'
affine Lie algebras without extending by a derivation. From the perspective of
a triangular decomposition, this is a rather horrible omission: the root
spaces are infinite-dimensional! However, as the reader can probably imagine
from the computations in \S \ref{BT_sect_applications_residue} and
\S \ref{BT_sect_applications_multiloop}, the calculation gets a lot more
complicated in the presence of derivations. Thus, this aspect will also be
deferred to a future text. The same applies to the analogue of the plain
Virasoro algebra. There should also be a nonlinear analogue, distinguished
cohomology classes for multiloop groups. The cases $n=1,2$ (along with a
higher representation theory in categories) are treated in detail by Frenkel
and Zhu in \cite{Frenkel08092011}.
One should also mention that there are completely orthogonal generalizations
of Kac-Moody/Virasoro cocycles to multiloop Lie algebras, see for example
\cite[\S 9]{MR934284}, \cite{MR2743761}.
\section{\label{section_Frameworks}Basic framework}
For an associative algebra $A$ we shall write $A_{Lie}$ to denote the
associated Lie algebra.
\begin{definition}
[\cite{MR565095}]\label{BT_DefCubicallyDecompAlgebra}An \emph{(
$n$\emph{-fold)} \emph{cubically decomposed algebra} (over a field $k$) is the
datum $(A,(I_{i}^{\pm}),\tau)$:
\begin{itemize}
\item an associative unital (not necessarily commutative) $k$-algebra $A$;
\item two-sided ideals $I_{i}^{+},I_{i}^{-}$ such that $I_{i}^{+}+I_{i}^{-}=A
$ for $i=1,\ldots,n$;
\item writing $I_{i}^{0}:=I_{i}^{+}\cap I_{i}^{-}$ and $I_{\operatorname*{tr}}:=I_{1}^{0}\cap\cdots\cap I_{n}^{0}$, a $k$-linear map
\[
\tau:I_{\operatorname*{tr},Lie}/[I_{\operatorname*{tr},Lie},A_{Lie}]\rightarrow k\text{.}
\]
\end{itemize}
\end{definition}
For any finite-dimensional $k$-vector space $V$ certain infinite matrix
algebras act naturally on the $k$-vector space of multiple Laurent polynomials
$V[t_{1}^{\pm1},\ldots,t_{n}^{\pm1}]$. This yields an example of this
structure, see \S \ref{TATE_section_InfiniteMatrixAlgebras} below. There is
also an analogue for $V((t_{1}))\cdots((t_{n}))$, which we leave to the reader
to formulate (this links to higher local fields, see \cite{MR1804915}). Local
components of Parshin-Beilinson ad\`{e}les of schemes yield another example,
see \cite[\S 1]{MR565095}. In \textit{loc. cit.} the ideals $I_{i}^{+}
,I_{i}^{-}$ are called $X^{i},Y^{i}$. The latter gives the multidimensional
generalization of the ad\`{e}le formulation of Tate \cite{MR0227171}. See
\cite{MR2658047}, \cite{0763.14006}, \cite{MR1374916},
\cite{MorrowResiduePaper} for more background on higher-dimensional ad\`{e}les
and their uses.
\subsection{\label{TATE_section_InfiniteMatrixAlgebras}Infinite matrix
algebras}
Fix a field $k$. Let $R$ be an associative $k$-algebra, not necessarily unital
or commutative. Define an algebra of infinite matrices
\begin{equation}
E(R):=\{\phi=(\phi_{ij})_{i,j\in\mathbf{Z}},\phi_{ij}\in R\mid\exists K_{\phi
}:\left\vert i-j\right\vert >K_{\phi}\Rightarrow\phi_{ij}=0\}\text{.}
\label{TATEMATRIX_l1}
\end{equation}
Define a product by $(\phi\cdot\phi^{\prime})_{ik}:=\sum_{j\in\mathbf{Z}}
\phi_{ij}\phi_{jk}^{\prime}$, the usual matrix multiplication formula; this
sum only has finitely many non-zero terms and one can choose $K_{\phi
\phi^{\prime}}:=K_{\phi}+K_{\phi^{\prime}}$. Then $E(R)$ becomes an
associative $k$-algebra. If $R$ is unital, $E(R)$ is also unital. $E$ is a
functor from associative algebras to associative algebras; for a morphism
$\varphi:R\rightarrow S$ there is an induced morphism $E(\varphi
):E(R)\rightarrow E(S)$ by using $\varphi$ entry-by-entry, i.e. $(E(\varphi
)\phi)_{ij}:=\varphi(\phi_{ij})$. If $I\subseteq R$ is an ideal (which is in
particular a non-unital associative ring), $E(I)\subseteq E(R)$ is an ideal.
Moreover, for ideals $I_{1},I_{2}$ one has $E(I_{1}\cap I_{2})=E(I_{1})\cap
E(I_{2})$ and $E(I_{1}+I_{2})=E(I_{1})+E(I_{2})$, as a sum of ideals. Next,
define
\begin{align*}
I^{+}(R):= & \{\phi\in E(R)\mid\exists B_{\phi}:i<B_{\phi}\Rightarrow
\phi_{ij}=0\}\\
I^{-}(R):= & \{\phi\in E(R)\mid\exists B_{\phi}:j>B_{\phi}\Rightarrow
\phi_{ij}=0\}
\end{align*}
and one checks easily that $I^{+}(R),I^{-}(R)$ are two-sided ideals in $E(R)$.
The following figure attempts to visualize the shape of the matrices in
$E(R),I^{+}(R)$ and $I^{-}(R)$ respectively:
\begin{center}
\includegraphics[height=0.889in,width=2.7994in]{Infmatrix.eps}
\end{center}
Define $I^{0}(R):=I^{+}(R)\cap I^{-}(R)$ and one checks that
\[
I^{0}(R)=\{\phi\in E(R)\mid\phi_{ij}=0\text{ for all but finitely many
}\left( i,j\right) \}\text{.}
\]
There is a trace morphism
\begin{equation}
\operatorname*{tr}:I^{0}(R)\rightarrow R\text{;}\qquad\operatorname*{tr}\phi:=
{\textstyle\sum\nolimits_{i\in\mathbf{Z}}}
\phi_{ii}\text{,} \label{TATEMATRIX_l3}
\end{equation}
the sum is obviously finite. One easily verifies that $\operatorname*{tr}
[\phi,\phi^{\prime}]=
{\textstyle\sum\nolimits_{i,j\in\mathbf{Z}}}
[\phi_{ij},\phi_{ji}^{\prime}]$ and thus $\operatorname*{tr}[I^{0}
(R),E(R)]\subseteq\lbrack R,R]$. More generally, if $R^{\prime}\subseteq R$
is a subalgebra,
\[
\operatorname*{tr}[I^{0}(R^{\prime}),E(R)]\subseteq\lbrack R^{\prime
},R]\text{.}
\]
We note that this trace does not necessarily vanish on commutators. Moreover,
every $\phi\in E(R)$ can be written as $\phi=\phi^{+}+\phi^{-}$ with
$\phi_{ij}^{+}:=\delta_{i\geq0}\phi_{ij}$ (for this $R$ need not be unital,
use $\phi_{ij}$ for $i\geq0$ and $0$ otherwise) and $\phi^{-}=\phi-\phi^{+}$.
One checks that $\phi^{\pm}\in I^{\pm}(R)$. It follows that $I^{+}
(R)+I^{-}(R)=E(R)$.
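The failure of the trace to vanish on commutators is already visible over $R=k$ (the following standard instance is included for illustration): take $S,T\in E(k)$ with $S_{ij}:=\delta_{i+1,j}$, and $T_{ij}:=\delta_{i,j+1}$ for $j\geq0$, $T_{ij}:=0$ otherwise. Then $(ST)_{ik}=\delta_{ik}$ for $k\geq0$ and $(TS)_{ik}=\delta_{ik}$ for $i\geq1$, so that $[S,T]$ is the elementary matrix with a single entry $1$ in position $(0,0)$; in particular $[S,T]\in I^{0}(k)$ and $\operatorname*{tr}[S,T]=1\neq0$. (Neither $S$ nor $T$ lies in $I^{0}(k)$, so this does not contradict $\operatorname*{tr}[I^{0}(R),E(R)]\subseteq\lbrack R,R]$.)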
Finally, let $M$ be an $R$-bimodule (over $k$, i.e. a left-$(R\otimes
_{k}R^{op})$-module; $R$-bimodules form an abelian category). Analogously to
$E(R)$, define
\begin{equation}
E(M):=\{\phi=(\phi_{ij})_{i,j\in\mathbf{Z}},\phi_{ij}\in M\mid\exists K_{\phi
}:\left\vert i-j\right\vert >K_{\phi}\Rightarrow\phi_{ij}=0\}\text{.}
\label{TATEMATRIX_l2}
\end{equation}
Again using the matrix multiplication formula, $E(M)$ is an $E(R)$-bimodule.
If $0\rightarrow M^{\prime}\rightarrow M\rightarrow M^{\prime\prime
}\rightarrow0$ is an exact sequence of $R$-bimodules, $0\rightarrow
E(M^{\prime})\rightarrow E(M)\rightarrow E(M^{\prime\prime})\rightarrow0$ is
an exact sequence of $E(R)$-bimodules. Note that for an ideal $I\subseteq R$
the object $E(I)$ is well-defined, regardless of whether we regard $I$ as an
associative ring as in eq. \ref{TATEMATRIX_l1} or as an $R$-bimodule as in eq.
\ref{TATEMATRIX_l2}.
Now let $V$ be a finite-dimensional $k$-vector space and $R_{0}$ an arbitrary
unital subalgebra of $\operatorname*{End}_{k}(V)$. Define $R_{i}:=E(R_{i-1})$
for $i=1,\ldots,n$. Note that via $k\rightarrow R_{0}$, $\alpha\mapsto
\alpha\cdot\mathbf{1}_{\operatorname*{End}_{k}(V)}$, $k$ is embedded into the
center of $R_{i}$. Then $R_{n}=(E\circ\cdots\circ E)(R_{0})$ is a unital
associative $k$-algebra. Its elements may be indexed as $\phi=(\phi_{(i_{n}
,j_{n}),\ldots,(i_{1},j_{1})})$ with $((i_{n},j_{n}),\ldots,(i_{1},j_{1}))\in
\mathbf{Z}^{2n}$ and $\phi_{(i_{n},j_{n}),\ldots,(i_{1},j_{1})}\in R_{0}$.
By the properties discussed above,
\[
I_{i}^{\pm}:=(\underset{n}{E}\cdots\underset{i+1}{E}\circ\underset{i}{I^{\pm
}}\circ\underset{i-1}{E}\cdots\underset{1}{E})(R_{0})\qquad\text{(}I^{\pm
}\text{ in the }i\text{-th place),}
\]
is an ideal in $R_{n}$ (we use centered subscripts only to emphasize the
numbering). Moreover,
\begin{align*}
I_{i}^{+}+I_{i}^{-} & =(E\cdots E\circ I^{+}\circ E\cdots E)(R_{0})+(E\cdots
E\circ I^{-}\circ E\cdots E)(R_{0})\\
& =(E\cdots E\circ E\circ E\cdots E)(R_{0})=R_{n}\text{.}
\end{align*}
By composing the traces of eq. \ref{TATEMATRIX_l3} we arrive at a $k$-linear
map $\tau$:
\begin{align*}
\tau & :I_{\operatorname*{tr}}=I_{1}^{0}\cap\cdots\cap I_{n}^{0}=(I^{0}
\circ\cdots\circ I^{0})(R_{0})\\
& \qquad\overset{\operatorname*{tr}}{\longrightarrow}\cdots\overset
{\operatorname*{tr}}{\longrightarrow}I^{0}(I^{0}(R_{0}))\overset
{\operatorname*{tr}}{\longrightarrow}I^{0}(R_{0})\overset{\operatorname*{tr}
}{\longrightarrow}R_{0}\overset{\operatorname*{Tr}}{\longrightarrow}k\text{,}
\end{align*}
where \textquotedblleft$\operatorname*{Tr}$\textquotedblright\ (as opposed to
\textquotedblleft$\operatorname*{tr}$\textquotedblright) denotes the ordinary
matrix trace of $\operatorname*{End}_{k}(V)$ ($\supseteq R_{0}$). Here we have
used that $V$ is finite-dimensional over $k$. Using $\operatorname*{tr}
[I^{0}(R^{\prime}),E(R)]\subseteq\lbrack R^{\prime},R]$ (for subalgebras
$R^{\prime}\subseteq R$) inductively, one sees that
\begin{align*}
\tau\lbrack I_{\operatorname*{tr}},R_{n}] & =\operatorname*{Tr}
(\operatorname*{tr}\circ\cdots\circ\operatorname*{tr}\circ\operatorname*{tr}
)[I^{0}(I^{0}(\cdots)),E(E(\cdots))]\\
& \subseteq\operatorname*{Tr}(\operatorname*{tr}\circ\cdots\circ
\operatorname*{tr})[I^{0}(\cdots),E(\cdots)]\subseteq\operatorname*{Tr}
[R_{0},R_{0}]=0
\end{align*}
since the ordinary trace $\operatorname*{Tr}$ vanishes on commutators. Hence,
$\tau$ factors to a morphism $\tau:I_{\operatorname*{tr},Lie}
/[I_{\operatorname*{tr},Lie},R_{n,Lie}]\rightarrow k$. Summarizing, for every
$n\geq1$, every finite-dimensional $k$-vector space $V$ and every unital
subalgebra $R_{0}\subseteq\operatorname*{End}_{k}(V)$, $(R_{n},(I_{i}^{\pm
}),\tau)$ is a cubically decomposed algebra.\newline Finally, note that for
any associative algebra $R$, $E(R)$ is a right-$R$-submodule of \textit{right
-$R$-module endomorphisms $\operatorname*{End}_{R}(R[t,t^{-1}])$ of
$R[t,t^{-1}]$. Write elements as $a=\sum_{i\in\mathbf{Z}}a_{i}t^{i}$, also
denoted $a=(a_{i})_{i}$ with $a_{i}\in R$, and let $\phi=(\phi_{ij})$ act by
$\left( \phi\cdot a\right) _{i}:=\sum_{k}\phi_{ik}a_{k}$. Moreover, each
$a\in R[t,t^{-1}]$ determines a right-$R$-module endomorphism via the
multiplication operator $x\mapsto a\cdot x$. We fin
\[
R[t,t^{-1}]\hookrightarrow E(R)\hookrightarrow\operatorname*{End
\nolimits_{R}(R[t,t^{-1}])\text{.
\]
Multiplication with $t^{i}$ is represented by a matrix with a diagonal
$\ldots,1,1,1,\ldots$, shifted by $i$ off the principal diagonal. Inductively,
\begin{equation}
R_{0}[t_{1}^{\pm1},\ldots,t_{n}^{\pm1}]\hookrightarrow R_{n}\hookrightarrow
\operatorname*{End}\nolimits_{R_{0}}(R_{0}[t_{1}^{\pm1},\ldots,t_{n}^{\pm
1}])\text{.} \label{lBTA_30}
\end{equation}
See for example \cite[\S 1]{MR723457}, \cite[Lec. 4]{MR1021978} for more
information regarding the case $n=1$ and \cite[\S 3]{Frenkel08092011} for a
similar procedure when $n=2$.
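To make the first inclusion in eq. \ref{lBTA_30} concrete in the case $n=1$ (a direct unwinding of the definitions): the multiplication operator attached to $b=\sum_{j}b_{j}t^{j}\in R[t,t^{-1}]$ has the Toeplitz-type matrix $\phi_{ik}=b_{i-k}$, since $(b\cdot a)_{i}=\sum_{k}b_{i-k}a_{k}$; it lies in $E(R)$ with $K_{\phi}=\max\{\left\vert j\right\vert \mid b_{j}\neq0\}$, and for $b=t^{i}$ one recovers the shifted diagonal $\phi_{jk}=\delta_{j,k+i}$ described above.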
\section{\label{section_CEComplexes}Modified Chevalley-Eilenberg complexes}
Suppose $k$ is a field and $\mathfrak{g}$ a Lie algebra over $k$. We recall
that for any $\mathfrak{g}$-module $M$ the conventional Chevalley-Eilenberg
complex is given by $C(M)_{r}:=M\otimes
{\textstyle\bigwedge\nolimits^{r}}
\mathfrak{g}$ along with the differential
\begin{align}
\delta & :=\delta^{\lbrack1]}+\delta^{\lbrack2]}:C(M)_{r}\rightarrow
C(M)_{r-1}\label{lBT_CEComplexDifferential}\\
\delta^{\lbrack1]}(f_{0}\otimes f_{1}\wedge\ldots\wedge f_{r}) & :=
{\textstyle\sum\nolimits_{i=1}^{r}}
(-1)^{i}[f_{0},f_{i}]\otimes f_{1}\wedge\ldots\wedge\widehat{f_{i}}
\wedge\ldots\wedge f_{r}\nonumber\\
\delta^{\lbrack2]}(f_{0}\otimes f_{1}\wedge\ldots\wedge f_{r}) & :=
{\textstyle\sum\nolimits_{1\leq i<j\leq r}}
(-1)^{i+j+1}f_{0}\otimes\lbrack f_{i},f_{j}]\wedge f_{1}\ldots\widehat{f_{i}}
\ldots\widehat{f_{j}}\ldots\wedge f_{r}\nonumber
\end{align}
for $f_{0}\in M$ and $f_{1},\ldots,f_{r}\in\mathfrak{g}$. Its homology is (if
one wants, by definition) Lie homology with coefficients in $M$. There is also
a cohomological analogue; we refer the reader to the literature for details,
e.g. \cite[Ch. 10]{MR1217970}. We may view $k$ itself as a $\mathfrak{g}
$-module with the trivial structure. There is an obvious morphism
\begin{equation}
I:C(\mathfrak{g})_{r}\rightarrow C(k)_{r+1}\qquad f_{0}\otimes f_{1}
\wedge\ldots\wedge f_{r}\mapsto\left( -1\right) ^{r}\mathbf{1}_{k}\otimes
f_{0}\wedge f_{1}\wedge\ldots\wedge f_{r} \label{lBT_revLieImap}
\end{equation}
and one checks easily that this commutes with the respective differentials and
thus induces morphisms $H_{r}\left( \mathfrak{g},\mathfrak{g}\right)
\rightarrow H_{r+1}\left( \mathfrak{g},k\right) $. The linear dual
$\mathfrak{g}^{\ast}:=\operatorname*{Hom}\nolimits_{k}(\mathfrak{g},k)$ is
canonically a $\mathfrak{g}$-module via $\left( f\cdot\varphi\right)
(g):=\varphi([g,f])$ for $\varphi\in\mathfrak{g}^{\ast}$ and $f,g\in
\mathfrak{g}$. The cohomological analogue of eq. \ref{lBT_revLieImap} is the
morphism $I:H^{r+1}\left( \mathfrak{g},k\right) \rightarrow H^{r}\left(
\mathfrak{g},\mathfrak{g}^{\ast}\right) $ given by
\[
(I\phi)(f_{1}\wedge\ldots\wedge f_{r})(f_{0}):=\left( -1\right) ^{r}
\phi(f_{0}\wedge f_{1}\wedge\ldots\wedge f_{r})\text{.}
\]
\begin{remark}
\label{lBT_revCyclicAnalogue}These maps could be viewed as a Lie-theoretic
analogue of the map $I$ in Connes' periodicity sequence, see \cite[\S 2.2]
{MR1217970}. We may view $H_{\ast-1}(\mathfrak{g},\mathfrak{g})$ as a partial
\textquotedblleft uncyclic\textquotedblright\ counterpart of Lie homology. The
true Hochschild analogue would be Leibniz homology, cf. \cite[\S 10.6]
{MR1217970}. For the present purposes we have however no use for this analogue.
\end{remark}
Let $\mathfrak{j}\subseteq\mathfrak{g}$ be a Lie ideal. As such, it is a
$\mathfrak{g}$-module and we may consider $C(\mathfrak{j})_{\bullet}$.
Following \cite{MR565095} we may work with a `cyclically symmetrized'
counterpart: We write $\mathfrak{j}\wedge
{\textstyle\bigwedge\nolimits^{r-1}}
\mathfrak{g}$ to denote the $\mathfrak{g}$-submodule of $\mathfrak{g}\wedge
{\textstyle\bigwedge\nolimits^{r-1}}
\mathfrak{g}=
{\textstyle\bigwedge\nolimits^{r}}
\mathfrak{g}$ generated by elements $j\wedge f_{1}\wedge\ldots\wedge f_{r-1}$
such that $j\in\mathfrak{j}$ and $f_{1},\ldots,f_{r-1}\in\mathfrak{g}$. If
$\mathfrak{j}_{i}$, $i=1,2,\ldots$, are Lie ideals, we denote by
$(\bigoplus_{i}\mathfrak{j}_{i})\wedge
{\textstyle\bigwedge\nolimits^{r-1}}
\mathfrak{g}$ the module $\bigoplus_{i}(\mathfrak{j}_{i}\wedge
{\textstyle\bigwedge\nolimits^{r-1}}
\mathfrak{g})$.
\begin{example}
If $k\left\langle s,t,u\right\rangle $ and $k\left\langle s\right\rangle $
denote a $3$-dimensional abelian Lie algebra along with a $1$-dimensional Lie
ideal, then $
{\textstyle\bigwedge\nolimits^{2}}
k\left\langle s,t,u\right\rangle $ is $3$-dimensional with basis $s\wedge t$,
$s\wedge u$ and $t\wedge u$. Then $k\left\langle s\right\rangle \wedge
k\left\langle s,t,u\right\rangle $ is $2$-dimensional with basis $s\wedge t$
and $s\wedge u$.
\end{example}
The $k$-vector spaces $CE(\mathfrak{j})_{r}:=\mathfrak{j}\wedge
{\textstyle\bigwedge\nolimits^{r-1}}
\mathfrak{g}$ (for $r\geq1$) and $CE(\mathfrak{j})_{0}:=k$ define a subcomplex
of $C(k)_{\bullet}$. In particular, the differential is given by
\begin{equation}
\delta(f_{0}\wedge f_{1}\wedge\ldots\wedge f_{r}):=
{\textstyle\sum\nolimits_{0\leq i<j\leq r}}
(-1)^{i+j}[f_{i},f_{j}]\wedge f_{0}\wedge\ldots\widehat{f_{i}}\ldots
\widehat{f_{j}}\ldots\wedge f_{r}\text{.} \label{lBT_effectiveCEdifferential}
\end{equation}
It is well-defined since $\mathfrak{j}$ is a Lie ideal. We get morphisms
generalizing $I$, notably $H_{r}(\mathfrak{g},\mathfrak{j})\rightarrow
H_{r+1}(CE(\mathfrak{j}))$ via $\mathfrak{j}\otimes
{\textstyle\bigwedge\nolimits^{r}}
\mathfrak{g}\rightarrow\mathfrak{j}\wedge
{\textstyle\bigwedge\nolimits^{r}}
\mathfrak{g}$ and analogously $H^{r+1}(CE(\mathfrak{j}))\rightarrow
H^{r}(\mathfrak{g},\mathfrak{j}^{\ast})$. We have resisted the temptation to
re-index $CE(-)_{\bullet}$ despite the unpleasant $\left( +1\right) $-shift
in eq. \ref{lBT_revLieImap} in order to remain compatible with standard usage
in the following sense:
\begin{lemma}
[{\cite[Lemma 1(a)]{MR565095}}]\label{BT_LemmaOnComputingLieHomology}
$CE(\mathfrak{g})_{\bullet}$ is a complex of $k$-vector spaces and is
quasi-isomorphic to $k\otimes_{U\mathfrak{g}}^{\mathbf{L}}k$. In particular
\[
H_{i}(\mathfrak{g},k)=H_{i}(CE(\mathfrak{g})_{\bullet})\text{ and }
H^{i}(\mathfrak{g},k)=H^{i}(\operatorname*{Hom}\nolimits_{k}(CE(\mathfrak{g}
)_{\bullet},k))\text{.}
\]
\end{lemma}
\begin{proof}
As we have explained above, $CE(\mathfrak{g})_{\bullet}$ agrees with the
standard Chevalley-Eilenberg complex and the latter is well-known to represent
$k\otimes_{U\mathfrak{g}}^{\mathbf{L}}k$.
\end{proof}
We easily compute
\begin{align}
& H_{0}(\mathfrak{g},\mathfrak{j})\overset{\cong}{\underset{I}
{\longrightarrow}}H_{1}(CE(\mathfrak{j}))\cong\mathfrak{j}/[\mathfrak{g}
,\mathfrak{j}]\label{lBTrev_curious_identities}\\
& H^{1}(CE(\mathfrak{j}))\overset{\cong}{\underset{I}{\longrightarrow}}
H^{0}(\mathfrak{g},\mathfrak{j}^{\ast})\cong\left( \mathfrak{j}
/[\mathfrak{g},\mathfrak{j}]\right) ^{\ast}\text{.}\nonumber
\end{align}
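For instance, the first isomorphism can be checked directly from the definitions: by eq. \ref{lBT_CEComplexDifferential} the differential $C(\mathfrak{j})_{1}=\mathfrak{j}\otimes\mathfrak{g}\rightarrow C(\mathfrak{j})_{0}=\mathfrak{j}$ sends $f_{0}\otimes f_{1}\mapsto-[f_{0},f_{1}]$, so that $H_{0}(\mathfrak{g},\mathfrak{j})=\mathfrak{j}/[\mathfrak{g},\mathfrak{j}]$; on the other side, eq. \ref{lBT_effectiveCEdifferential} gives $\delta(f_{0}\wedge f_{1})=-[f_{0},f_{1}]$ on $CE(\mathfrak{j})_{2}$ while $\delta=0:CE(\mathfrak{j})_{1}\rightarrow CE(\mathfrak{j})_{0}=k$, so that $H_{1}(CE(\mathfrak{j}))=\mathfrak{j}/[\mathfrak{g},\mathfrak{j}]$ as well, and $I$, being $f_{0}\mapsto f_{0}$ in this degree, induces the identity.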
In higher degrees the map $I$ ceases to be an isomorphism.
Nonetheless, this computation hints at the principle of computation which we
shall use below. Beilinson uses $CE(-)_{\bullet}$ in his paper \cite{MR565095},
whereas we will only be able to do manageable computations with
$C(-)_{\bullet}$. The map $I$ will serve to deduce facts about $CE(-)_{\bullet
}$ while working with $C(-)_{\bullet}$.
\section{Cubically decomposed algebras}
Let $(A,(I_{i}^{\pm}),\tau)$ be an $n$-fold cubically decomposed algebra over
a field $k$, see Def. \ref{BT_DefCubicallyDecompAlgebra}, i.e. we are given
the following datum:
\begin{itemize}
\item an associative unital (not necessarily commutative) $k$-algebra $A$;
\item two-sided ideals $I_{i}^{+},I_{i}^{-}$ such that $I_{i}^{+}+I_{i}^{-}=A
$ for $i=1,\ldots,n$;
\item writing $I_{i}^{0}:=I_{i}^{+}\cap I_{i}^{-}$ and $I_{\operatorname*{tr}}:=I_{1}^{0}\cap\cdots\cap I_{n}^{0}$, a $k$-linear map
\[
\tau:I_{\operatorname*{tr},Lie}/[I_{\operatorname*{tr},Lie},A_{Lie}]\rightarrow k\text{.}
\]
\end{itemize}
See \S \ref{section_Frameworks} to see how this type of structure arises. As a
shorthand, define $\mathfrak{g}:=A_{Lie}$. For any elements $s_{1}
,\ldots,s_{n}\in\{+,-,0\}$ we define the \emph{degree }$\deg(s_{1}
,\ldots,s_{n}):=1+\#\{i\mid s_{i}=0\}$. Next, following \cite{MR565095} we
shall construct complexes of $\mathfrak{g}$-modules:
\begin{definition}
[\cite{MR565095}]For every $1\leq p\leq n+1$ define
\begin{equation}
\left. ^{\wedge}T_{\bullet}^{p}\right. :=\coprod_{\substack{s_{1}
,\ldots,s_{n}\in\{\pm,0\}\\\deg(s_{1}\ldots s_{n})=p}}\bigcap_{i=1}
^{n}\left\{
\begin{array}
[c]{ll}
CE(I_{i}^{+})_{\bullet} & \text{for }s_{i}=+\\
CE(I_{i}^{-})_{\bullet} & \text{for }s_{i}=-\\
CE(I_{i}^{+})_{\bullet}\cap CE(I_{i}^{-})_{\bullet} & \text{for }s_{i}=0
\end{array}
\right. \label{lBT_DefComplexTWedge}
\end{equation}
and $\left. ^{\wedge}T_{\bullet}^{0}\right. :=CE(\mathfrak{g})_{\bullet}$.
\end{definition}
Each $CE(I_{i}^{\pm})_{\bullet}$ is a complex, and all their differentials are
defined by the same formula, eq. \ref{lBT_effectiveCEdifferential}; as such,
the intersection of these complexes has a well-defined differential and is a
complex itself. The same applies to the coproduct. The complex $\left. ^{\wedge
}T_{\bullet}^{\bullet}\right. $ is inspired by a cubical object used by
Beilinson \cite{MR565095}.
\begin{example}
For $n=2$ we get complexes
\begin{align*}
\left. ^{\wedge}T_{\bullet}^{1}\right. & =
{\textstyle\coprod\nolimits_{s_{1},s_{2}\in\{\pm\}}}
CE(I_{1}^{s_{1}})_{\bullet}\cap CE(I_{2}^{s_{2}})_{\bullet}\\
\left. ^{\wedge}T_{\bullet}^{2}\right. & =
{\textstyle\coprod\nolimits_{s_{1}\in\{\pm\}}}
CE(I_{1}^{s_{1}})_{\bullet}\cap CE(I_{2}^{+})_{\bullet}\cap CE(I_{2}
^{-})_{\bullet}\\
& \qquad\oplus
{\textstyle\coprod\nolimits_{s_{2}\in\{\pm\}}}
CE(I_{1}^{+})_{\bullet}\cap CE(I_{1}^{-})_{\bullet}\cap CE(I_{2}^{s_{2}
})_{\bullet}\\
\left. ^{\wedge}T_{\bullet}^{3}\right. & =CE(I_{1}^{+})_{\bullet}\cap
CE(I_{1}^{-})_{\bullet}\cap CE(I_{2}^{+})_{\bullet}\cap CE(I_{2}^{-}
)_{\bullet}\text{.}
\end{align*}
Note that $CE(I_{1}^{+})_{\bullet}\cap CE(I_{1}^{-})_{\bullet}\neq
CE(I_{1}^{+}\cap I_{1}^{-})_{\bullet}$, e.g. $I_{1}^{+}\wedge I_{1}^{-}$ is a
subspace in degree two of the left-hand side, but not of the right-hand side.
\end{example}
Diverging from \cite{MR565095} we shall primarily use the following slightly
different auxiliary construction (which we will later relate to the above one):
\begin{definition}
For $1\leq p\leq n+1$ let
\begin{equation}
\left. ^{\otimes}T_{\bullet}^{p}\right. :=\coprod_{\substack{s_{1}
,\ldots,s_{n}\in\{\pm,0\}\\\deg(s_{1}\ldots s_{n})=p}}C(I_{1}^{s_{1}}\cap
I_{2}^{s_{2}}\cap\cdots\cap I_{n}^{s_{n}})_{\bullet}
\label{lBT_DefComplexTTensor}
\end{equation}
and $\left. ^{\otimes}T_{\bullet}^{0}\right. :=C(\mathfrak{g})_{\bullet}$.
\end{definition}
So, instead of the modified Chevalley-Eilenberg complex of
\S \ref{section_CEComplexes} we just use the standard complexes for Lie
homology with suitable coefficients. Clearly the morphism $I:C(\mathfrak{g}
)_{r}\rightarrow C(k)_{r+1}$ descends to morphisms
\begin{align*}
C(\mathfrak{g})_{r}\supseteq\qquad C(I_{i}^{s_{i}})_{r} & \rightarrow
CE(I_{i}^{s_{i}})_{r+1}\qquad\subseteq C(k)_{r+1}\\
\underset{\in I_{i}^{s_{i}}}{f_{0}}\otimes f_{1}\wedge\ldots\wedge f_{r} &
\mapsto\left( -1\right) ^{r}\underset{\in I_{i}^{s_{i}}}{f_{0}}\wedge
f_{1}\wedge\ldots\wedge f_{r}\text{.}
\end{align*}
As we take intersections of Lie ideals on the left $C(I_{1}^{s_{1}}\cap
\ldots)_{\bullet}$, as in eq. \ref{lBT_DefComplexTTensor}, the image lies in
the intersection of the individual images, i.e. $CE(I_{1}^{s_{1}})_{\bullet
}\cap\ldots$, as in eq. \ref{lBT_DefComplexTWedge}. As a result, we obtain
morphisms
\[
\left. ^{\otimes}T_{\bullet}^{p}\right. \overset{I}{\longrightarrow}\left.
^{\wedge}T_{\bullet+1}^{p}\right. \qquad\text{(for all }p\text{)}
\]
and since they are a restriction of the map $I$ to subcomplexes, this is a
morphism of complexes, and thus induces maps on homology.
\section{\label{section_CubeComplex}The cube complex}
Next, we shall define maps $\cdots\rightarrow\left. ^{\otimes}T_{\bullet}
^{2}\right. \rightarrow\left. ^{\otimes}T_{\bullet}^{1}\right.
\rightarrow\left. ^{\otimes}T_{\bullet}^{0}\right. \rightarrow0$, so that
$(\left. ^{\otimes}T_{\bullet}\right. )^{\bullet}$ becomes an exact
superscript-indexed complex of (subscript-indexed complexes); and the same for
$\left. ^{\wedge}T_{\bullet}^{\bullet}\right. $. We begin by discussing
$\left. ^{\otimes}T_{\bullet}^{\bullet}\right. $.\newline We define a
$\mathfrak{g}$-module $N^{0}:=\mathfrak{g}$ and for $p\geq1$
\begin{equation}
N^{p}:=\coprod\nolimits_{s_{1},\ldots,s_{n}\in\{+,-,0\}}I_{1}^{s_{1}}\cap
I_{2}^{s_{2}}\cap\cdots\cap I_{n}^{s_{n}}\text{\qquad(with }\deg(s_{1}
,\ldots,s_{n})=p\text{).} \label{lBTA_12}
\end{equation}
We shall denote the components $f=(f_{s_{1}\ldots s_{n}})$ of elements in
$N^{p}$ with indices in terms of $s_{1},\ldots,s_{n}\in\{+,-,0\}$. Clearly
$N^{p}=0$ for $p>n+1$. We shall treat all $N^{p}$ as $\mathfrak{g}$-modules
and observe that
\[
\left. ^{\otimes}T_{\bullet}^{p}\right. =C(N^{p})_{\bullet}
\]
(by definition!), so by the functoriality and flatness\footnote{We just tensor
$N^{p}$ with the vector spaces $
{\textstyle\bigwedge\nolimits^{i}}
\mathfrak{g}$. Being over a field, this preserves exact sequences.} of
$C_{\bullet}$ it suffices to construct an exact complex $N^{\bullet}$ out of
the $N^{p}$ and then $\left. ^{\otimes}T_{\bullet}^{p}\right. $ will be an
exact complex in $p$.
\begin{example}
For $n=1$ we have
\[
N^{2}=I_{1}^{0}\text{,\qquad}N^{1}=I_{1}^{+}\oplus I_{1}^{-}
\]
and elements would be denoted $f=(f_{0})\in N^{2}$ and $g=(g_{+},g_{-})\in
N^{1}$. For $n=2$ we have
\begin{align*}
N^{3} & =I_{1}^{0}\cap I_{2}^{0}\text{,\qquad}N^{2}=\left(
{\textstyle\coprod\nolimits_{s_{1}\in\{+,-\}}}
I_{1}^{s_{1}}\cap I_{2}^{0}\right) \oplus\left(
{\textstyle\coprod\nolimits_{s_{2}\in\{+,-\}}}
I_{1}^{0}\cap I_{2}^{s_{2}}\right) \\
N^{1} & =
{\textstyle\coprod\nolimits_{s_{1},s_{2}\in\{+,-\}}}
I_{1}^{s_{1}}\cap I_{2}^{s_{2}}\text{.}
\end{align*}
\end{example}
We shall use the shorthand $s_{1}\ldots\pm\ldots s_{n}$ (resp. $0$ instead of
$\pm$) to indicate that $s_{i}\in\{+,-\}$ (resp. $s_{i}=0$) sits in the $i$-th
place. Define $\mathfrak{g}$-module homomorphisms
\begin{align}
\left( \partial_{i}f\right) _{s_{1}\ldots\pm\ldots s_{n}}:= & \left(
-1\right) ^{\#\left\{ j\mid j>i\text{ and }s_{j}=0\right\} }f_{s_{1}
\ldots0\ldots s_{n}}\nonumber\\
(\partial_{i}f)_{s_{1}\ldots0\ldots s_{n}}:= &
0\label{lBT_revDifferentialEasyDef}\\
\partial:= &
{\textstyle\sum\nolimits_{i=1}^{n}}
\partial_{i}\nonumber
\end{align}
One checks easily that $\partial_{i}^{2}=0$ and $\partial_{i}\partial
_{j}+\partial_{j}\partial_{i}=0$ for all $i,j=1,\ldots,n$. As a consequence,
$\partial^{2}=0$. The components are given explicitly by
\begin{align}
\left( \partial f\right) _{s_{1}\ldots s_{n}} & =
{\textstyle\sum\nolimits_{i=1}^{n}}
(\partial_{i}f)_{s_{1}\ldots s_{n}}\nonumber\\
& =
{\textstyle\sum\limits_{\{i\mid s_{i}=+,-\}}}
\left( -1\right) ^{\#\left\{ j\mid j>i\text{ and }s_{j}=0\right\}
}f_{s_{1}\ldots0\ldots s_{n}}\text{.} \label{lBT_revDiffCubeComplex1}
\end{align}
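\begin{example}
As an illustration of the sign rule (a direct check from the definitions): for $n=2$ and $f\in N^{3}$, which has the single component $f_{00}$, eq. \ref{lBT_revDifferentialEasyDef} gives $(\partial f)_{\pm0}=(-1)^{1}f_{00}=-f_{00}$ (the slot $j=2$ carries a zero) and $(\partial f)_{0\pm}=f_{00}$. Consequently, for $s_{1},s_{2}\in\{\pm\}$,
\[
(\partial\partial f)_{s_{1}s_{2}}=(\partial f)_{0s_{2}}+(\partial f)_{s_{1}0}=f_{00}-f_{00}=0\text{,}
\]
confirming $\partial^{2}=0$ in this case.
\end{example}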
\begin{definition}
\label{BT_Def_IdempotentsForA}Let $(A,(I_{i}^{\pm}),\tau)$ be an $n$-fold
cubically decomposed algebra over a field $k$. A \emph{system of good
idempotents} is a collection of pairwise commuting elements $P_{i}^{+}\in A$ for
$i=1,\ldots,n$ such that for all $i$:
\begin{enumerate}
\item $P_{i}^{+2}=P_{i}^{+}$.
\item $P_{i}^{+}A\subseteq I_{i}^{+}$.
\item $P_{i}^{-}A\subseteq I_{i}^{-}\qquad$(where we define $P_{i}
^{-}:=\mathbf{1}_{A}-P_{i}^{+}$).
\end{enumerate}
\end{definition}
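\begin{example}
In the setting of \S \ref{TATE_section_InfiniteMatrixAlgebras} a natural choice is the following (stated here for illustration): for $n=1$ let $P_{1}^{+}\in E(R_{0})$ be the diagonal matrix with $(P_{1}^{+})_{jk}=\delta_{jk}$ for $j\geq0$ and zero otherwise. Then $P_{1}^{+}\phi$ is supported in the rows $j\geq0$ and hence lies in $I^{+}(R_{0})$, while $P_{1}^{-}\phi$ is supported in the rows $j<0$ and therefore, by the band condition $\left\vert j-k\right\vert \leq K_{\phi}$, vanishes in the columns $k\geq K_{\phi}$, i.e. lies in $I^{-}(R_{0})$. For general $n$ one takes $P_{i}^{+}$ to be the analogous projection acting on the $i$-th matrix index.
\end{example}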
We note that the $P_{i}^{-}$ are also pairwise commuting idempotents and
$P_{i}^{+}+P_{i}^{-}=\mathbf{1}_{A}$. Next, for $s_{i}\in\{+,-\}$ define
$k$-vector space homomorphisms
\begin{align*}
\left( \varepsilon_{i}f\right) _{s_{1}\ldots s_{i}\ldots s_{n}}:= &
\left( -1\right) ^{s_{i}}P_{i}^{s_{i}}
{\textstyle\sum\nolimits_{\gamma_{i}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{i}}f_{s_{1}\ldots\gamma_{i}\ldots s_{n}}\\
\left( \varepsilon_{i}f\right) _{s_{1}\ldots0\ldots s_{n}}:= & 0\text{,}
\end{align*}
where $(-1)^{\pm}=\pm1$. By direct calculation one verifies the identities
$\varepsilon_{i}^{2}=\varepsilon_{i}$ and $\varepsilon_{i}\varepsilon
_{j}=\varepsilon_{j}\varepsilon_{i}$ for all $i,j=1,\ldots,n$. Finally, define
\begin{align*}
\left( H_{i}f\right) _{s_{1}\ldots0\ldots s_{n}} & :=\left( -1\right)
^{\#\left\{ j\mid j>i\text{ and }s_{j}=0\right\} }
{\textstyle\sum\nolimits_{\gamma_{i}\in\{\pm\}}}
P_{i}^{-\gamma_{i}}f_{s_{1}\ldots\gamma_{i}\ldots s_{n}}\\
\left( H_{i}f\right) _{s_{1}\ldots\pm\ldots s_{n}} & :=0\text{.}
\end{align*}
The expression $P_{i}^{-\gamma_{i}}$ means $P_{i}^{-}$ for $\gamma_{i}=+$ and
$P_{i}^{+}$ for $\gamma_{i}=-$. One checks that
\begin{align*}
H_{i}^{2}=0\qquad & \text{and}\qquad H_{i}H_{j}+H_{j}H_{i}=0\\
\partial_{i}\varepsilon_{j}=\varepsilon_{j}\partial_{i}\qquad &
\text{and}\qquad H_{i}\varepsilon_{j}=\varepsilon_{j}H_{i}
\end{align*}
for all $i,j=1,\ldots,n$. Moreover, $\partial_{i}H_{j}+H_{j}\partial_{i}=0$
whenever $i\neq j$. In the special case $i=j$ one finds instead that
\[
\partial_{i}H_{i}+H_{i}\partial_{i}=\mathbf{1}-\varepsilon_{i}\text{.}
\]
Define $H:=H_{1}+\varepsilon_{1}H_{2}+\cdots+\varepsilon_{1}\varepsilon
_{2}\cdots\varepsilon_{n-1}H_{n}$. Using the identities established above, one
finds very easily
\begin{equation}
H^{2}=0\qquad\text{and}\qquad\partial H+H\partial=\mathbf{1}-\varepsilon
_{1}\cdots\varepsilon_{n}\text{.} \label{lBTA_35}
\end{equation}
The fact $H^{2}=0$ was observed by the anonymous referee; it explains a
certain cancellation in the proof of Prop. \ref{BT_KeyLemmaFormula}, which had
been rather mysterious in an earlier version of this text.
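To see the telescope mechanism in the smallest nontrivial case (a direct check for $n=2$): here $H=H_{1}+\varepsilon_{1}H_{2}$, and using $\partial_{i}H_{j}+H_{j}\partial_{i}=0$ for $i\neq j$ together with $\partial\varepsilon_{1}=\varepsilon_{1}\partial$ one finds
\[
\partial H+H\partial=(\partial H_{1}+H_{1}\partial)+\varepsilon_{1}(\partial H_{2}+H_{2}\partial)=(\mathbf{1}-\varepsilon_{1})+\varepsilon_{1}(\mathbf{1}-\varepsilon_{2})=\mathbf{1}-\varepsilon_{1}\varepsilon_{2}\text{,}
\]
which is the pattern that persists for general $n$.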
\begin{lemma}
An explicit formula for $H$ is given by
\begin{align}
(Hf)_{s_{1}\ldots s_{n}} & =\left( -1\right) ^{\deg(s_{1}\ldots s_{n}
)}\left( -1\right) ^{s_{1}+\cdots+s_{b}}P_{1}^{s_{1}}\cdots P_{b}^{s_{b}
}\label{lBTA_17}\\
&
{\textstyle\sum\limits_{\gamma_{1}\ldots\gamma_{b+1}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{b}}P_{b+1}^{-\gamma_{b+1}
}f_{\gamma_{1}\ldots\gamma_{b+1}s_{b+2}\ldots s_{n}}\text{,}\nonumber
\end{align}
where $b$ denotes the largest index such that $s_{1},\ldots,s_{b}\in\{\pm\}$,
or $b=0$ if there is none (so that $s_{b+1}=0$ if $b<n$; $b+1$ is the index of the
\textquotedblleft leftmost zero\textquotedblright).
\end{lemma}
\begin{proof}
One shows that
\begin{align}
\left( \varepsilon_{1}\cdots\varepsilon_{i}f\right) _{s_{1}\ldots s_{n}} &
=\left( -1\right) ^{s_{1}+\cdots+s_{i}}P_{1}^{s_{1}}\cdots P_{i}^{s_{i}
}\nonumber\\
&
{\textstyle\sum\limits_{\gamma_{1}\ldots\gamma_{i}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{i}}f_{\gamma_{1}\ldots
\gamma_{i}s_{i+1}\ldots s_{n}}\label{lBTA_34}\\
& \qquad\qquad\qquad\text{(for }s_{1},\ldots,s_{i}\in\{\pm\}\text{)}
\nonumber\\
\left( \varepsilon_{1}\cdots\varepsilon_{i}f\right) _{s_{1}\ldots s_{n}} &
=0\text{,}\qquad\text{(if }0\in\{s_{1},\ldots,s_{i}\}\text{)}\nonumber
\end{align}
by evaluating $(\varepsilon_{j}\cdots\varepsilon_{i}f)$ inductively along
$j=i,i-1,\ldots,1$. Plug in $H_{i+1}f$ for $f$ to obtain
\begin{align*}
\left( \varepsilon_{1}\cdots\varepsilon_{i}H_{i+1}f\right) _{s_{1}\ldots
s_{n}} & =\left( -1\right) ^{\#\left\{ j\mid j>i+1\text{ and }
s_{j}=0\right\} }\left( -1\right) ^{s_{1}+\cdots+s_{i}}P_{1}^{s_{1}}\cdots
P_{i}^{s_{i}}\\
&
{\textstyle\sum\limits_{\gamma_{1}\ldots\gamma_{i+1}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{i}}P_{i+1}^{-\gamma_{i+1}
}f_{\gamma_{1}\ldots\gamma_{i}\gamma_{i+1}s_{i+2}\ldots s_{n}}
\end{align*}
for $s_{1},\ldots,s_{i}\in\{\pm\}$ and $s_{i+1}=0$. Otherwise, i.e. for
$0\in\{s_{1},\ldots,s_{i}\}$ or $s_{i+1}\in\{\pm\}$, the respective component
is zero. Thus
\[
(Hf)_{s_{1}\ldots s_{n}}=
{\textstyle\sum\nolimits_{i=0}^{n-1}}
\left( \varepsilon_{1}\cdots\varepsilon_{i}H_{i+1}f\right) _{s_{1}\ldots
s_{n}}\text{.}
\]
The summands with $i>b$ vanish since for them $0\in\{s_{1},\ldots,s_{i}\}$.
The summands with $i<b$ vanish since for them $s_{i+1}\in\{\pm\}$. Thus
\[
(Hf)_{s_{1}\ldots s_{n}}=\left( \varepsilon_{1}\cdots\varepsilon_{b}
H_{b+1}f\right) _{s_{1}\ldots s_{n}}
\]
and we use the above explicit formula. Note that $\#\left\{ j\mid j>b+1\text{
and }s_{j}=0\right\} $ is just one below the total number of slots with value
$0$ since $s_{1},\ldots,s_{b}\in\{\pm\}$ and $s_{b+1}=0$. Thus, $\left(
-1\right) ^{\#\left\{ j\mid j>b+1\text{ and }s_{j}=0\right\} }=\left(
-1\right) ^{\deg(s_{1}\ldots s_{n})}$.
\end{proof}
The above maps are defined for $N^{p}$ in degrees $\geq1$. We extend them to
degree zero by
\[
\hat{\partial}:N^{1}\rightarrow N^{0}\qquad\text{and}\qquad\hat{H}
:N^{0}\rightarrow N^{1}
\]
\begin{align}
\hat{\partial}f & :=
{\textstyle\sum\limits_{s_{1}\ldots s_{n}\in\{+,-\}}}
\left( -1\right) ^{s_{1}+\cdots+s_{n}}f_{s_{1}\ldots s_{n}}\nonumber\\
(\hat{H}f)_{s_{1}\ldots s_{n}} & :=(-1)^{s_{1}+\cdots+s_{n}}P_{1}^{s_{1}
}\cdots P_{n}^{s_{n}}f\text{.} \label{lBTA_14}
\end{align}
Along with these, we obtain the following crucial fact:
\begin{lemma}
\label{BT_Prop_EstablishKeyCubeComplex}Equipped with these morphisms,
\begin{equation}
N^{\bullet}=[N^{n+1}\underset{H}{\overset{\partial}{\rightleftarrows}}
N^{n}\underset{H}{\overset{\partial}{\rightleftarrows}}\cdots\underset
{H}{\overset{\partial}{\rightleftarrows}}N^{1}\underset{\hat{H}}{\overset
{\hat{\partial}}{\rightleftarrows}}N^{0}]_{n+1,0} \label{lBTA_7}
\end{equation}
is a complex of $\mathfrak{g}$-modules with differentials $\partial_{\bullet}$
(resp. $\hat{\partial}$) and contracting homotopies $H_{\bullet}$ (resp.
$\hat{H}$) in the category of $k$-vector spaces.
\end{lemma}
\begin{proof}
The identities $\partial^{2}=0$ and $\hat{\partial}\circ\partial
=0:N^{2}\rightarrow N^{0}$ are easy to check. Next, we confirm the contracting
homotopy. We find $\partial H+H\partial=\mathbf{1}-\varepsilon_{1}
\cdots\varepsilon_{n}$ by a telescope cancellation. For $f\in N^{i}$ with
$i\geq2$, each component $f_{s_{1}\ldots s_{n}}$ must have at least one index
$j$ with $s_{j}=0$, and thus $\varepsilon_{1}\cdots\varepsilon_{n}\mid_{N^{i}
}=0$ for $i\geq2$. It remains to treat $i=0,1$. For $i=1$ we compute
\[
(\hat{H}\hat{\partial}f)_{s_{1}\ldots s_{n}}=(-1)^{s_{1}+\cdots+s_{n}}
P_{1}^{s_{1}}\cdots P_{n}^{s_{n}}
{\textstyle\sum\limits_{\gamma_{1}\ldots\gamma_{n}\in\{+,-\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}f_{\gamma_{1}\ldots
\gamma_{n}}=(\varepsilon_{1}\cdots\varepsilon_{n}f)_{s_{1}\ldots s_{n}}
\]
(as in eq. \ref{lBTA_34}). Thus, $\partial H+\hat{H}\hat{\partial}=\mathbf{1}$
on $N^{1}$. Finally, for $i=0$ we compute $\hat{\partial}\hat{H}f=
{\textstyle\sum\nolimits_{s_{1}\ldots s_{n}\in\{+,-\}}}
(-1)^{2(s_{1}+\cdots+s_{n})}P_{1}^{s_{1}}\cdots P_{n}^{s_{n}}f=(P_{1}^{+}
+P_{1}^{-})\cdots(P_{n}^{+}+P_{n}^{-})f=f$.
\end{proof}
\begin{corollary}
$0\rightarrow\left. ^{\otimes}T_{\bullet}^{n+1}\right. \rightarrow\left.
^{\otimes}T_{\bullet}^{n}\right. \rightarrow\cdots\rightarrow\left.
^{\otimes}T_{\bullet}^{0}\right. \rightarrow0$ with differential (and a
contracting homotopy) induced by $\partial\otimes\operatorname*{id}
\nolimits_{\wedge^{\bullet}\mathfrak{g}}$ (and $H\otimes\operatorname*{id}
\nolimits_{\wedge^{\bullet}\mathfrak{g}}$) is an exact complex of (complexes
of $k$-vector spaces).
\end{corollary}
For the corollary just use that tensoring with $
{\textstyle\bigwedge\nolimits^{r}}
\mathfrak{g}$ is exact.
\section{\label{section_CubeComplexII}The cube complex II}
Next, it would be nice to give a discussion of the $\left. ^{\wedge
}T_{\bullet}^{\bullet}\right. $ parallel to the one for $\left. ^{\otimes
}T_{\bullet}^{\bullet}\right. $ in the previous section. We can only do this
to a limited extent, however.
\begin{lemma}
\label{BTrev_LemmaConstructTWedgeComplex}The definition
\begin{equation}
\left( \partial f\right) _{s_{1}\ldots s_{n}}=
{\textstyle\sum\limits_{\{i\mid s_{i}=+,-\}}}
\left( -1\right) ^{\#\left\{ j\mid j>i\text{ and }s_{j}=0\right\}
}f_{s_{1}\ldots0\ldots s_{n}} \label{lBT_revDiffCubeComplex2}
\end{equation}
turns $\left. ^{\wedge}T_{\bullet}^{\bullet}\right. $ into a complex of
(complexes of $k$-vector spaces) with respect to the superscript index. The
morphisms $\left. ^{\otimes}T_{\bullet}^{p}\right. \overset{I}
{\longrightarrow}\left. ^{\wedge}T_{\bullet+1}^{p}\right. $ yield a morphism
of complexes.
\end{lemma}
\begin{proof}
Easy. Just check that the map $\partial$ is well-defined and satisfies
$\partial^{2}=0$; in fact exactly the same computation as in eq.
\ref{lBT_revDifferentialEasyDef} applies. For the second claim, we just need
to show that the map $I$ commutes with the differential of either complex, but
this is clear since the differentials are given by the same formula, compare
eq. \ref{lBT_revDiffCubeComplex1} with eq. \ref{lBT_revDiffCubeComplex2}.
\end{proof}
The complex $\left. ^{\wedge}T_{\bullet}^{\bullet}\right. $ is the central
object in Beilinson's construction \cite{MR565095}. We will use its analogue
$\left. ^{\otimes}T_{\bullet}^{\bullet}\right. $ as an auxiliary
computational device. Firstly, let us explain Beilinson's construction. We
need the following entirely homological tool:
\begin{lemma}
\label{BT_PropExplicitDifferentialInSpecSeq}Suppose we are given an exact
sequence
\[
S^{\bullet}=[S^{n+1}\rightarrow S^{n}\rightarrow\cdots\rightarrow
S^{0}]_{n+1,0}
\]
with entries in $\mathbf{Ch}^{+}\mathcal{M}od_{k}$, i.e. each $S^{i}
=S_{\bullet}^{i}$ is a bounded below complex of $k$-vector spaces\footnote{One
may alternatively view this as a bicomplex supported horizontally in degrees
$[0,n+1]$, bounded from below, and whose rows are exact.}.
\begin{enumerate}
\item There is a second quadrant homological spectral sequence $(E_{p,q}
^{r},d_{r})$ converging to zero such that
\[
E_{p,q}^{1}=H_{q}(S_{\bullet}^{p})\text{.}\qquad\emph{(}d_{r}:E_{p,q}
^{r}\rightarrow E_{p-r,q+r-1}^{r}\emph{)}
\]
\item There is a first quadrant cohomological spectral sequence $(E_{r}
^{p,q},d^{r})$ converging to zero such that
\[
E_{1}^{p,q}=H^{q}(\operatorname*{Hom}\nolimits_{k}(S_{\bullet}^{p}
,k))\text{.}\qquad\emph{(}d^{r}:E_{r}^{p,q}\rightarrow E_{r}^{p+r,q-r+1}
\emph{)}
\]
\item The following differentials are isomorphisms:
\[
d_{n+1}:E_{n+1,1}^{n+1}\rightarrow E_{0,n+1}^{n+1}\qquad\text{and}\qquad
d^{n+1}:E_{n+1}^{0,n+1}\rightarrow E_{n+1}^{n+1,1}\,\text{.}
\]
\item \label{lemma_specseq_homotopypart}Suppose $H_{p}:S^{p}\rightarrow
S^{p+1}$ is a contracting homotopy for $S^{\bullet}$. Then
\[
(d_{n+1})^{-1}=H_{n}\delta_{1}H_{n-1}\cdots\delta_{n-1}H_{1}\delta_{n}
H_{0}=H_{n}
{\textstyle\prod\nolimits_{i=1,\ldots,n}}
(\delta_{i}H_{n-i})
\]
(where the last product depends on the ordering and refers to composition),
and
\[
(d^{n+1})^{-1}=H_{0}^{\ast}\delta_{n}^{\ast}H_{1}^{\ast}\cdots\delta_{1}
^{\ast}H_{n}^{\ast}=H_{0}^{\ast}
{\textstyle\prod\nolimits_{i=n,\ldots,1}}
(\delta_{i}^{\ast}H_{n+1-i}^{\ast})\text{,}
\]
where we write $f^{\ast}=\operatorname*{Hom}\nolimits_{k}(f,k)$ as a shorthand.
\end{enumerate}
The construction is functorial in $S^{\bullet}$, i.e. if $S^{\bullet
}\rightarrow S^{\prime\bullet}$ is a morphism of complexes as in our
assumptions, then there are induced morphisms between their spectral sequences.
\end{lemma}
\begin{proof}
Parts (1)-(3) are \cite[Lemma 1(a)]{MR565095}. More precisely, for
\textbf{(1)} use the bicomplex spectral sequence for
\[
E_{p,q}^{0}=S_{q}^{p}\text{\quad and\quad}E_{0}^{p,q}=\operatorname*{Hom}
\nolimits_{k}(S_{q}^{p},k)\text{.}
\]
If we take the differentials `$\rightarrow$' for forming the $E^{1}$-page, the
$E^{1}$-page vanishes since $S^{\bullet}$ is exact (as a complex of complexes)
and so the individual sequences of $k$-vector spaces $S_{q}^{\bullet}$ for
constant $q$ are exact, so $E^{\infty}=E^{1}=0$. Then use the bicomplex
spectral sequence with differential `$\downarrow$' on the $E^{0}$-page for
our claim. It also converges to zero then; \textbf{(2)} is analogous.
\textbf{(3)} The bicomplex is horizontally supported in $[0,n+1]$.
\textbf{(4)}\ Diagram chase.
\end{proof}
We combine Lemma \ref{BTrev_LemmaConstructTWedgeComplex}\ with Lemma
\ref{BT_PropExplicitDifferentialInSpecSeq}: Apply the latter to $S_{q}
^{p}:=\left. ^{\wedge}T_{q}^{p}\right. $; we denote the resulting spectral
sequence by $\left. ^{\wedge}E_{\bullet,\bullet}^{\bullet}\right. $. The
fact that the (bi)complex of Lemma \ref{BT_PropExplicitDifferentialInSpecSeq}
is supported horizontally in $[n+1,0]$ (homologically, i.e. for $\left.
^{\wedge}E_{\bullet,\bullet}^{\bullet}\right. $) and $[0,n+1]$ respectively
(cohomologically, i.e. for $\left. ^{\wedge}E_{\bullet}^{\bullet,\bullet
}\right. $) implies that we have edge morphisms
\begin{align*}
\rho_{1}:\left. ^{\wedge}E_{n+1,1}^{n+1}\right. \rightarrow\left. ^{\wedge
}E_{n+1,1}^{1}\right. \text{\qquad} & \text{and\qquad}\rho_{2}:\left.
^{\wedge}E_{0,n+1}^{1}\right. \rightarrow\left. ^{\wedge}E_{0,n+1}
^{n+1}\right. \\
\wp_{1}:\left. ^{\wedge}E_{n+1}^{0,n+1}\right. \rightarrow\left. ^{\wedge
}E_{1}^{0,n+1}\right. \text{\qquad} & \text{and\qquad}\wp_{2}:\left.
^{\wedge}E_{1}^{n+1,1}\right. \rightarrow\left. ^{\wedge}E_{n+1}
^{n+1,1}\right. \text{.}
\end{align*}
Next, we identify the involved objects: Using Lemma
\ref{BT_LemmaOnComputingLieHomology} we compute
\begin{align*}
\left. ^{\wedge}E_{0,n+1}^{1}\right. & =H_{n+1}(\left. ^{\wedge
}T_{\bullet}^{0}\right. )=H_{n+1}(CE(\mathfrak{g})_{\bullet})\cong
H_{n+1}(\mathfrak{g},k)\\
\left. ^{\wedge}E_{n+1,1}^{1}\right. & =H_{1}(\left. ^{\wedge}T_{\bullet
}^{n+1}\right. )=H_{1}(
{\textstyle\bigcap\nolimits_{i=1,\ldots,n}}
{\textstyle\bigcap\nolimits_{s_{i}\in\{\pm\}}}
CE(I_{i}^{s_{i}})_{\bullet})=I_{\operatorname*{tr}}/[I_{\operatorname*{tr}}
,\mathfrak{g}]\\
\left. ^{\wedge}E_{1}^{n+1,1}\right. & =\operatorname*{Hom}\nolimits_{k}
(I_{\operatorname*{tr}}/[I_{\operatorname*{tr}},\mathfrak{g}],k)\qquad
\text{and}\qquad\left. ^{\wedge}E_{1}^{0,n+1}\right. =H^{n+1}(\mathfrak{g}
,k)\text{.}
\end{align*}
\begin{definition}
[\cite{MR565095}]\label{BT_Def_TateBeilinsonAbstractResidueMaps}Let
$(A,(I_{i}^{\pm}),\tau)$ be an $n$-fold cubically decomposed algebra over a
field $k$ and $\mathfrak{g}:=A_{Lie}$ its Lie algebra. Define
\[
\operatorname*{res}\nolimits_{\ast}:H_{n+1}(\mathfrak{g},k)\rightarrow
k\qquad\operatorname*{res}\nolimits_{\ast}:=\tau\circ\rho_{1}\circ
(d_{n+1})^{-1}\circ\rho_{2}
\]
and
\[
\operatorname*{res}\nolimits^{\ast}:k\rightarrow H^{n+1}(\mathfrak{g}
,k)\quad\operatorname*{res}\nolimits^{\ast}(1):=(\wp_{1}\circ(d^{n+1}
)^{-1}\circ\wp_{2})\tau\text{,}
\]
where for $\operatorname*{res}\nolimits^{\ast}$ we read $\tau$ as an element
of $E_{1}^{n+1,1}$. We will call $\phi:=\operatorname*{res}\nolimits^{\ast
}(1)$ the \emph{Tate extension class}.
\end{definition}
In the case $n=1$ it would also be justified to name this cohomology class
after\ Kac-Peterson \cite{MR619827}; it also appears in the works of the
Japanese school, e.g. \cite{MR723457}.
\begin{remark}
It follows from the construction of $\operatorname*{res}\nolimits_{\ast}$,
$\operatorname*{res}\nolimits^{\ast}$ that
\begin{equation}
\operatorname*{res}\nolimits^{\ast}(\alpha)(X_{0}\wedge\ldots\wedge
X_{n})=\alpha\operatorname*{res}\nolimits_{\ast}X_{0}\wedge\ldots\wedge
X_{n}\text{.} \label{lBT_39}
\end{equation}
\end{remark}
Now we would like to compute these maps explicitly. Clearly, the most elusive
map in the construction is the differential $d_{n+1}$ (resp. $d^{n+1}$). We
can render it explicit using Lemma \ref{BT_PropExplicitDifferentialInSpecSeq}
.\ref{lemma_specseq_homotopypart} as soon as we have an explicit contracting
homotopy available. However, it seems to be quite difficult to construct such
a homotopy for the complex $\left. ^{\wedge}T^{\bullet}\right. $. On the
other hand, we \textit{do} have such a contracting homotopy for $\left.
^{\otimes}T^{\bullet}\right. $ by Lemma \ref{BT_Prop_EstablishKeyCubeComplex}
and its corollary. Luckily for us, these complexes are closely connected. We
may apply Lemma \ref{BT_PropExplicitDifferentialInSpecSeq} also to $S_{q}
^{p}:=\left. ^{\otimes}T_{q-1}^{p}\right. $; this time denote the resulting
spectral sequence by $\left. ^{\otimes}E_{\bullet,\bullet}^{\bullet}\right.
$. We easily compute
\begin{align*}
\left. ^{\otimes}E_{0,n+1}^{1}\right. & =H_{n+1}(\left. ^{\otimes
}T_{\bullet-1}^{0}\right. )=H_{n}(C(\mathfrak{g})_{\bullet})\cong
H_{n}(\mathfrak{g},\mathfrak{g})\\
\left. ^{\otimes}E_{n+1,1}^{1}\right. & =H_{1}(\left. ^{\otimes
}T_{\bullet-1}^{n+1}\right. )=H_{0}(C(
{\textstyle\bigcap\nolimits_{i=1,\ldots,n}}
{\textstyle\bigcap\nolimits_{s_{i}\in\{\pm\}}}
I_{i}^{s_{i}})_{\bullet})=I_{\operatorname*{tr}}/[I_{\operatorname*{tr}}
,\mathfrak{g}]\\
\left. ^{\otimes}E_{1}^{n+1,1}\right. & =\operatorname*{Hom}
\nolimits_{k}(I_{\operatorname*{tr}}/[I_{\operatorname*{tr}},\mathfrak{g}
],k)\qquad\text{and}\qquad\left. ^{\otimes}E_{1}^{0,n+1}\right.
=H^{n}(\mathfrak{g},\mathfrak{g}^{\ast})\text{.}
\end{align*}
We note that some groups even agree with their $\left. ^{\wedge}T_{q}
^{p}\right. $-counterparts, as we had already observed in eq.
\ref{lBTrev_curious_identities}.
\begin{definition}
Write $\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right.
:H_{n}(\mathfrak{g},\mathfrak{g})\rightarrow k$ and $\left. ^{\otimes
}\operatorname*{res}\nolimits^{\ast}(1)\right. \in H^{n}(\mathfrak{g}
,\mathfrak{g}^{\ast})$ for the counterparts of $\left. \operatorname*{res}
\nolimits_{\ast}\right. ,\left. \operatorname*{res}\nolimits^{\ast}\right.
$ in Def. \ref{BT_Def_TateBeilinsonAbstractResidueMaps} using $\left.
^{\otimes}E\right. $ instead of $\left. ^{\wedge}E\right. $.
\end{definition}
\begin{lemma}
[Compatibility]\label{BTrev_ComputeWedgeResViaTensorRes}The morphism of
bicomplexes $\left. ^{\otimes}T_{\bullet}^{\bullet}\right. \overset
{I}{\longrightarrow}\left. ^{\wedge}T_{\bullet+1}^{\bullet}\right. $ induces
a commutative diagram
\[
\xymatrix{ H_{n}(\mathfrak{g},\mathfrak{g}) \ar[d] \ar[r] \ar@/^2pc/[rrr]^{\emph{comes with contracting homotopy}} & {\left. ^{\otimes }E^{n+1}_{0,n+1}\right.} \ar[d] & {\left. ^{\otimes }E^{n+1}_{n+1,1}\right.} \ar[d] \ar[l]^{d_{n+1}}_{\cong} \ar[r] & H_{0}(\mathfrak{g},\mathfrak{g}) \ar[d]^{\cong} \\ H_{n+1}(\mathfrak{g},k) \ar[r] \ar@/_2pc/[rrr]_{\emph{Beilinson's residue}} & {\left. ^{\wedge }E^{n+1}_{0,n+1}\right.} & {\left. ^{\wedge }E^{n+1}_{n+1,1}\right.} \ar[l]_{d_{n+1}}^{\cong} \ar[r] & H_{1}(\mathfrak{g},k). }
\]
\end{lemma}
\begin{proof}
We had already observed in Lemma \ref{BTrev_LemmaConstructTWedgeComplex} that
the morphisms $I$ induce a morphism of bicomplexes. The spectral sequences
$\left. ^{\otimes}E_{\bullet,\bullet}^{\bullet}\right. $ and $\left.
^{\wedge}E_{\bullet,\bullet}^{\bullet}\right. $ both arise from Lemma
\ref{BT_PropExplicitDifferentialInSpecSeq}, so by the functoriality of the
construction we get an induced morphism of spectral sequences. In particular,
all squares
\[
\xymatrix{ {\left. ^{\otimes }E^{r}_{p,q}\right.} \ar[r]^-{d_{r}} \ar[d] & {\left. ^{\otimes }E^{r}_{p-r,q+r-1}\right.} \ar[d] \\ {\left. ^{\wedge }E^{r}_{p,q}\right.} \ar[r]_-{d_{r}} & {\left. ^{\wedge }E^{r}_{p-r,q+r-1}\right.} }
\]
commute, giving the middle square in our claim. The same applies to the edge
maps, giving the outer squares.
\end{proof}
Absolutely analogously we obtain a cohomological counterpart:
\[
\xymatrix{
H^{1}(\mathfrak{g},k) \ar[r] \ar[d]_{\cong} &
H^{n+1}(\mathfrak{g},k) \ar[d] \\
H^{0}(\mathfrak{g},\mathfrak{g}^{\ast }) \ar[r] &
H^{n}(\mathfrak{g},\mathfrak{g}^{\ast }),
}
\]
where we have a contracting homotopy for the lower row. We leave the details
of this formulation to the reader.
\section{\label{section_ConcreteFormalism}Concrete Formalism}
Let $(A,(I_{i}^{\pm}),\tau)$ be an $n$-fold cubically decomposed algebra over
a field $k$. In \S \ref{section_CubeComplexII} we have constructed a canonical
morphism
\[
\begin{array}
[c]{cccc}
\operatorname*{res}\nolimits_{\ast}: & H_{n+1}(\mathfrak{g},k) &
\longrightarrow & k\\
& \uparrow & & \\
& H_{n}(\mathfrak{g},\mathfrak{g})\text{,} & &
\end{array}
\]
where $\mathfrak{g}:=A_{Lie}$ is the Lie algebra associated to $A$. By Lemma
\ref{BTrev_ComputeWedgeResViaTensorRes}, its values on the image of
$H_{n}(\mathfrak{g},\mathfrak{g})\rightarrow H_{n+1}(\mathfrak{g},k)$ can be
computed via $\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. $.
In this section we will obtain an explicit formula for the latter morphism.
Given the definition of $\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast
}\right. $, Lemma \ref{BT_PropExplicitDifferentialInSpecSeq}
.\ref{lemma_specseq_homotopypart} tells us that it can be given explicitly in
terms of differentials of the ordinary Chevalley-Eilenberg complexes
$C(-)_{\bullet}$ (cf. \S \ref{section_CEComplexes}) and contracting homotopies
of the cube complex $N^{\bullet}$ (cf. Lemma
\ref{BT_Prop_EstablishKeyCubeComplex} and its corollary), namely
\begin{equation}
\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. =\tau\circ
\rho_{1}\circ(^{\otimes}d_{n+1})^{-1}\circ\rho_{2}=\tau\circ\rho_{1}\circ H
{\textstyle\prod\nolimits_{i=1,\ldots,n}}
(\delta_{i}H)\circ\rho_{2} \label{lBTA_20}
\end{equation}
via the spectral sequence $\left. ^{\otimes}E_{\bullet,\bullet}^{\bullet
}\right. $. The contracting homotopy $H$ depends on the choice of a good
system of idempotents, see Def. \ref{BT_Def_IdempotentsForA}. Different
choices will yield formulas that may look different, but as $\left.
^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. $ (just like $\left.
\operatorname*{res}\nolimits_{\ast}\right. $ itself) was defined entirely
independently of the choice of any idempotents, all such formulas actually
must agree.
Suppose a representative $\theta:=f_{0}\otimes f_{1}\wedge\ldots\wedge f_{n}$
with $f_{0},\ldots,f_{n}\in N^{0}$ is given (note that $N^{0}$ equals
$\mathfrak{g}$ as a left-$U\mathfrak{g}$-module by definition, so it is valid
to treat all $f_{i}$ on equal footing). We shall compute $\left. ^{\otimes
}\operatorname*{res}\nolimits_{\ast}\right. \theta$ in several steps,
starting with $\theta_{0,n}:=\rho_{2}\theta$, then following
\begin{equation}
\begin{tabular}
[c]{ccccccc|c}
& & & & & & $0$ & \\
& & & & & & $\mid$ & \\
& & & & $\theta_{1,n}$ & $\overset{H}{\longleftarrow}$ & $\theta_{0,n}$ &
$n$\\
& & & & $\vdots$ & & & $\vdots$\\
& & $\theta_{n,1}$ & $\overset{H}{\longleftarrow}$ & $\theta_{n-1,1}$ & & &
$1$\\
& & $\downarrow$ & & & & & \\
$\theta_{n+1,0}$ & $\overset{H}{\longleftarrow}$ & $\theta_{n,0}$ & & & &
& $0$\\\hline
$n+1$ & & $n$ & & $n-1$ & $\cdots$ & $0$ &
\end{tabular}
\ \ \qquad
\begin{array}
[c]{ccc}
& & q\\
& & \uparrow\\
p & \leftarrow & +
\end{array}
\label{BT_FigLiftingTheThetaElements}
\end{equation}
as prescribed by eq. \ref{lBTA_20}. This graphical arrangement elucidates the
position of the term of each step in the computation in the spectral sequence
from which eq. \ref{lBTA_20} originates, see Lemma
\ref{BT_PropExplicitDifferentialInSpecSeq}. However, for us each $\theta
_{\ast,\ast}$ will be an $E^{0}$-page representative of the respective
$E^{\ast}$-page term. Finally $\left. ^{\otimes}\operatorname*{res}
\nolimits_{\ast}\right. \theta=\tau\rho_{1}\theta_{n+1,0}$. We note that
$\rho_{1},\rho_{2}$ are just edge maps, i.e. an inclusion of a subobject and a
quotient surjection. Hence, as we work with explicit representatives anyway,
the operation of these maps is essentially invisible (e.g. in the quotient
case it just means that our representative generates a larger equivalence class).
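Before introducing the general notation, it may help to trace this zigzag by hand in the simplest case $n=1$ (the following is a direct unwinding of the definitions, in particular of eqs. \ref{lBT_CEComplexDifferential} and \ref{lBTA_14}): starting from $\theta_{0,1}=f_{0}\otimes f_{1}$ one finds
\[
\theta_{1,1}=\hat{H}f_{0}\otimes f_{1}=(P_{1}^{+}f_{0},-P_{1}^{-}f_{0})\otimes f_{1}\text{,}\qquad\theta_{1,0}=\delta\theta_{1,1}=(-[P_{1}^{+}f_{0},f_{1}],[P_{1}^{-}f_{0},f_{1}])
\]
and $\theta_{2,0}=H\theta_{1,0}=P_{1}^{+}[P_{1}^{-}f_{0},f_{1}]-P_{1}^{-}[P_{1}^{+}f_{0},f_{1}]\in I_{\operatorname*{tr}}$, whence $\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. \theta=\tau\left( P_{1}^{+}[P_{1}^{-}f_{0},f_{1}]-P_{1}^{-}[P_{1}^{+}f_{0},f_{1}]\right) $; for multiplication operators on $k[t,t^{-1}]$ this recovers, up to sign conventions, the classical residue pairing of Tate.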
We will need a convenient notation for elements of this complex.\newlin
\textit{(Notation A)} We will write $\theta_{p,q-p\mid s_{1}\ldots s_{n
}^{w_{1}\ldots w_{p}}\in N^{p}$ for the summands in any expression of the
shape
\begin{equation}
\theta_{p,q-p}=\sum_{\substack{w_{1}\ldots w_{p}\\\in\{1,\ldots,n\}}
}\sum_{s_{1}\ldots s_{n}}\theta_{p,q-p\mid s_{1}\ldots s_{n}}^{w_{1}\ldots
w_{p}}\otimes f_{1}\wedge\ldots\wedge\widehat{f_{w_{1}}}\wedge\ldots
\wedge\widehat{f_{w_{p}}}\wedge\ldots\wedge f_{n}\text{,} \label{lBTA_15}
\end{equation}
where
\begin{itemize}
\item $(p,q-p)$ denotes the location of the element in the bicomplex as in
fig. \ref{BT_FigLiftingTheThetaElements},
\item $s_{1},\ldots,s_{n}\in\{0,+,-\}$ denotes the component (= direct
summand) of $N^{p}$ as in eq. \ref{lBTA_12}, $f_{1},\ldots,f_{n}
\in\mathfrak{g}$,
\item the additional superscripts $w_{1},\ldots,w_{p}\in\{1,\ldots,n\}$ are
used to indicate the omission of wedge factors.
\end{itemize}
Note that the values $\theta_{p,q\mid s_{1}\ldots s_{n}}^{w_{1}\ldots w_{p}}$
are not necessarily uniquely determined since the individual wedge tails need
not be linearly independent.\newline\textit{(Notation B)} We also need a
shorthand for the summands in any expression of the shape
\begin{align}
\theta_{p,q-p-1} & =\sum_{\substack{w_{1}\ldots w_{p},w_{a},w_{b}
\\\in\{1,\ldots,n\}}}\sum_{s_{1}\ldots s_{n}}\theta_{p,q\mid s_{1}\ldots
s_{n}}^{w_{1}\ldots w_{p}\parallel w_{a},w_{b}}\label{lBTA_21}\\
& \otimes\lbrack f_{w_{a}},f_{w_{b}}]\wedge f_{1}\wedge\ldots\widehat
{f_{w_{1}}}\ldots\widehat{f_{w_{a}}}\ldots\widehat{f_{w_{b}}}\ldots
\widehat{f_{w_{p}}}\ldots\wedge f_{n}\text{.}\nonumber
\end{align}
Again $s_{1},\ldots,s_{n}$ denotes the component in $N^{p}$, $w_{1}
,\ldots,w_{p}$ omitted wedge factors. Moreover, $w_{a}$ and $w_{b}$ denote two
additional omitted wedge factors and simultaneously indicate that $[f_{w_{a}
},f_{w_{b}}]$ appears as an additional wedge factor. As for the previous
notation, the elements $\theta_{p,q\mid s_{1}\ldots s_{n}}^{w_{1}\ldots
w_{p}\parallel w_{a},w_{b}}\in N^{p}$ are not uniquely determined. We will
explain how these expressions arise soon.
\textit{Combinatorial Preparation:} We define for arbitrary $1\leq p\leq n$
and $w_{1},\ldots,w_{p}\in\{1,\ldots,n\}$ the `sign function' (a
generalization of the signum of a permutation)
\begin{equation}
\rho(w_{1},\ldots,w_{p}):=\left( -1\right) ^{\sum_{k=1}^{p}\sum_{j<k}
\delta_{w_{j}<w_{k}}}\text{.} \label{lBTA_19}
\end{equation}
By abuse of language we do not carry the value $p$ in the notation for $\rho$
as it will always be clear from the number of arguments which variant is used.
It is easy to see that $\rho(w_{1})=+1$ and $\rho(w_{1},w_{2})=(-1)^{\delta
_{w_{1}<w_{2}}}$. For $p=n$ we have
\begin{equation}
\rho(w_{1},\ldots,w_{n})=\operatorname*{sgn}\left(
\begin{array}
[c]{ccc}
1 & \cdots & n\\
w_{1} & \cdots & w_{n}
\end{array}
\right) \text{.} \label{lBTA_22}
\end{equation}
We shall need the inductive formula (which is easy to check by induction)
\begin{equation}
(-1)^{\#\{w_{i}\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+1}\}}\rho(w_{1}
,\ldots,w_{p})=\rho(w_{1},\ldots,w_{p+1})\text{.} \label{lBTA_18}
\end{equation}
\begin{proposition}
\label{BT_KeyLemmaFormula}Suppose $\theta:=f_{0}\otimes f_{1}\wedge
\ldots\wedge f_{n}$ with $f_{i}\in N^{0}=\mathfrak{g}$. Moreover, suppose
$P_{1}^{+},\ldots,P_{n}^{+}$ is a good system of idempotents as in Def.
\ref{BT_Def_IdempotentsForA}. Then for every $p\geq0$ the element
$\theta_{p+1,q}$ is of the shape as in eq. \ref{lBTA_15} and for $\gamma
_{1}\ldots\gamma_{n-p}\in\{+,-\}$ we have
\begin{align*}
\theta_{p+1,q\mid\gamma_{1}\ldots\gamma_{n-p}\underset{p}{\underbrace
{0\ldots0}}}^{w_{1}\ldots w_{p}} & =(-1)^{\sum_{u=1}^{p-1}(u+1)}\left(
-1\right) ^{w_{1}+\cdots+w_{p}}\rho(w_{1},\ldots,w_{p})\\
& \left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n-p}}P_{1}^{\gamma_{1}
}\cdots P_{n-p}^{\gamma_{n-p}}\\
&
{\textstyle\sum_{\gamma_{n-p+1}^{\ast}\ldots\gamma_{n}^{\ast}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{n-p+1}^{\ast}+\cdots+\gamma_{n}^{\ast}}\\
& \left( P_{n-p+1}^{\left( -\gamma_{n-p+1}^{\ast}\right) }
\operatorname*{ad}(f_{w_{p}})P_{n-p+1}^{\gamma_{n-p+1}^{\ast}}\right) \\
& \cdots\left( P_{n}^{\left( -\gamma_{n}^{\ast}\right) }\operatorname*{ad}
(f_{w_{1}})P_{n}^{\gamma_{n}^{\ast}}\right) f_{0}\text{.}
\end{align*}
Here $\rho(w_{1},\ldots,w_{p})$ is the sign function defined in eq.
\ref{lBTA_19}. For $p=0$ the expression $\rho(w_{1},\ldots,w_{p})$ and the
whole sum $(\Sigma_{\{\pm\}}(\cdots))$ in $(\Sigma_{\{\pm\}}(\cdots))f_{0}$
should be read as $+1$ (giving the right-hand side of eq. \ref{lBTA_29} below).
\end{proposition}
\begin{itemize}
\item Note that no terms of the shape as in eq. \ref{lBTA_21} appear. This is
not entirely obvious in view of the definition of $\delta^{\lbrack2]}$, see
eq. \ref{lBT_CEComplexDifferential}.
\item The formula does not compute $\theta_{p+1,q\mid s_{1}\ldots s_{n}
}^{w_{1}\ldots w_{p}}$ for arbitrary $s_{1}\ldots s_{n}$ of degree $p+1$. This
is due to the fact that we only have further use for the ones treated.
\item For $p\leq1$ read $\sum_{u=1}^{p-1}(u+1)$ as zero.
\end{itemize}
\begin{proof}
We prove this by induction. For $p=0$ the claim reads
\begin{equation}
\theta_{1,q\mid\gamma_{1}\ldots\gamma_{n}}=\left( -1\right) ^{\gamma
_{1}+\cdots+\gamma_{n}}P_{1}^{\gamma_{1}}\cdots P_{n}^{\gamma_{n}}f_{0}
\label{lBTA_29}
\end{equation}
and in view of eq. \ref{lBTA_14} this proves the claim in this case. Now we
proceed by induction. Assume the case $p$ is settled, i.e. in the notation of
eq. \ref{lBTA_15} $\theta_{p+1,q\mid\gamma_{1}\ldots\gamma_{n-p}\underset
{p}{\underbrace{0\ldots0}}}^{w_{1}\ldots w_{p}}$ is exactly as in our claim.
Next, we need to apply the differential $\delta_{q}=\delta_{q}^{[1]}
+\delta_{q}^{[2]}$ of the Chevalley-Eilenberg resolution, see eq.
\ref{lBT_CEComplexDifferential}. The contribution of $\delta_{q}^{[1]}$ will
be relevant, but for $\delta_{q}^{[2]}$ we shall see that (after applying the
next contracting homotopy) the contribution vanishes.\ We treat each
$\delta^{\lbrack i]}$, $i=1,2$ separately:\newline\textbf{(1)} Consider
$\delta_{q}^{[1]}$ in eq. \ref{lBT_CEComplexDifferential}. The sum $\Sigma
_{i}$ \textit{loc. cit.} maps components indexed by $w_{1},\ldots,w_{p}$ to
components of $\delta^{\lbrack1]}\theta_{p,q}$, indexed by $w_{1},\ldots
,w_{p}$ and an additional $w_{p+1}\in\{1,\ldots,n\}\setminus\{w_{1}
,\ldots,w_{p}\}$ -- they correspond to the summands of $\delta^{\lbrack
1]}\theta_{p,q}$ and to the additional omitted wedge factor respectively.
Moreover, the formula imposes signs $(-1)^{i+1}$, but here $i$ depends on the
numbering of the wedges $(\ldots\wedge\ldots\wedge\ldots)$. In the notation of
eq. \ref{lBTA_15} the subscript $j$ of $f_{j}$ does not necessarily indicate
the $f_{j}$ sits in the $j$-th wedge, due to the possible omission of wedge
factors $f_{w_{1}},\ldots,f_{w_{p}}$ on the left-hand side of it. To
compensate for that in the following computation the term $(-1)^{\#\{w_{i}
\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+1}\}}$ appears, sign-counting the
omission on the left of the new-to-be-omitted $w_{p+1}$ in the component of
$\delta^{\lbrack1]}\theta_{p+1,q}$. As $p$ remains constant, the indexing
$\gamma_{1}\ldots\gamma_{n-p}0\ldots0$ remains unaffected. We get for
$(\delta^{\lbrack1]}\theta_{p+1,q})_{p+1,q-1\mid\gamma_{1}\ldots\gamma
_{n-p}\underset{p}{\underbrace{0\ldots0}}}^{w_{1}\ldots w_{p}w_{p+1}}$ the
expression
\begin{align*}
& =(-1)^{\sum_{u=1}^{p-1}(u+1)}(-1)^{w_{p+1}+1}(-1)^{\#\{w_{i}\mid1\leq i\leq
p\text{ s.t. }w_{i}<w_{p+1}\}}\operatorname*{ad}(f_{w_{p+1}})\\
& \left( -1\right) ^{w_{1}+\cdots+w_{p}}\rho(w_{1},\ldots,w_{p})\\
& \left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n-p}}P_{1}^{\gamma_{1}
}\cdots P_{n-p}^{\gamma_{n-p}}\\
&
{\textstyle\sum_{\gamma_{n-p+1}^{\ast}\ldots\gamma_{n}^{\ast}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{n-p+1}^{\ast}+\cdots+\gamma_{n}^{\ast}}\\
& \left( P_{n-p+1}^{\left( -\gamma_{n-p+1}^{\ast}\right) }
\operatorname*{ad}(f_{w_{p}})P_{n-p+1}^{\gamma_{n-p+1}^{\ast}}\right)
\cdots\left( P_{n}^{\left( -\gamma_{n}^{\ast}\right) }\operatorname*{ad}
(f_{w_{1}})P_{n}^{\gamma_{n}^{\ast}}\right) f_{0}\text{.}
\end{align*}
Next, we need to apply the contracting homotopy $H:N^{p+1}\rightarrow N^{p+2}
$. Note that we have $p+1\geq1$, so eq. \ref{lBTA_17} applies. Note that for
indices $\gamma_{1}^{\dag}\ldots\gamma_{n-p-1}^{\dag}\underset{p+1}
{\underbrace{0\ldots0}}$ with $\gamma_{1}^{\dag}\ldots\gamma_{n-p-1}^{\dag}
\in\{\pm\}$ (i.e. indices of degree $p+2$, cf. eq. \ref{lBTA_12}) the index
$\gamma_{1}^{\dag}\ldots\gamma_{n-p-1}^{\dag}\underset{p}{\underbrace
{0\ldots0}}$ has degree $p+1$. The latter have been computed above. We obtain
for
\[
(H\delta^{\lbrack1]}\theta_{p+1,q})_{p+2,q-1\mid\gamma_{1}^{\dag}\ldots
\gamma_{n-p-1}^{\dag}\underset{p+1}{\underbrace{0\ldots0}}}^{w_{1}\ldots
w_{p}w_{p+1}}
\]
the expression
\begin{align*}
& =(-1)^{p}(-1)^{\gamma_{1}^{\dag}+\cdots+\gamma_{n-p-1}^{\dag}}P_{1}
^{\gamma_{1}^{\dag}}\cdots P_{n-p-1}^{\gamma_{n-p-1}^{\dag}}\\
&
{\textstyle\sum_{\gamma_{1},\ldots,\gamma_{(n-p-1)+1}\in\{\pm\}}}
(-1)^{\gamma_{1}+\cdots+\gamma_{n-p-1}}P_{(n-p-1)+1}^{-\gamma_{(n-p-1)+1}}\\
& (\delta\theta_{p+1,q})_{p+1,q-1\mid\gamma_{1}\cdots\gamma_{n-p}\underset
{p}{\underbrace{0\ldots0}}}^{w_{1}\ldots w_{p+1}}\text{.}
\end{align*}
In principle the first factor is $\left( -1\right) ^{\deg(\ldots
)}=(-1)^{p+2}$, but switching to $p$ preserves the correct sign. Next, we
expand this using our previous computation and obtain (by noting that many
signs are squares and thus $+1$)
\begin{align*}
& =(-1)^{\sum_{u=1}^{p-1}(u+1)}\left( -1\right) ^{p+1}\\
& (-1)^{\gamma_{1}^{\dag}+\cdots+\gamma_{n-p-1}^{\dag}}(-1)^{\#\{w_{i}
\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+1}\}}\\
& \left( -1\right) ^{w_{1}+\cdots+w_{p+1}}\rho(w_{1},\ldots,w_{p}
)P_{1}^{\gamma_{1}^{\dag}}\cdots P_{n-p-1}^{\gamma_{n-p-1}^{\dag}}
{\textstyle\sum_{\gamma_{n-p}\in\{\pm\}}}
(-1)^{\gamma_{n-p}}\\
& \left(
{\textstyle\sum_{\gamma_{1},\ldots,\gamma_{n-p-1}\in\{\pm\}}}
P_{1}^{\gamma_{1}}\cdots P_{n-p-1}^{\gamma_{n-p-1}}\right) P_{n-p}
^{-\gamma_{n-p}}\operatorname*{ad}(f_{w_{p+1}})P_{n-p}^{\gamma_{n-p}}\\
&
{\textstyle\sum_{\gamma_{n-p+1}^{\ast}\ldots\gamma_{n}^{\ast}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{n-p+1}^{\ast}+\cdots+\gamma_{n}^{\ast}}\\
& \left( P_{n-p+1}^{\left( -\gamma_{n-p+1}^{\ast}\right) }
\operatorname*{ad}(f_{w_{p}})P_{n-p+1}^{\gamma_{n-p+1}^{\ast}}\right)
\cdots\left( P_{n}^{\left( -\gamma_{n}^{\ast}\right) }\operatorname*{ad}
(f_{w_{1}})P_{n}^{\gamma_{n}^{\ast}}\right) f_{0}\text{.}
\end{align*}
The sum in parentheses is the identity since for all $i$ we have $P_{i}
^{+}+P_{i}^{-}=\mathbf{1}$ by Def. \ref{BT_Def_IdempotentsForA}. Up to the
naming of the indices, and after using eq. \ref{lBTA_18}, this is exactly our
claim in the case $p+1$ (and this is true despite the fact that we have only
considered $\delta^{\lbrack1]}$ so far -- because we shall next show that the
contribution from $H\circ\delta^{\lbrack2]}$ vanishes).\newline\textbf{(2)}
Consider $\delta_{q}^{[2]}$ in eq. \ref{lBT_CEComplexDifferential}. Using the
notation of eq. \ref{lBTA_15} we may write
\[
\theta_{p+1,q}=
{\textstyle\bigoplus\nolimits_{\deg(s_{1}\ldots s_{n})=p+1}}
{\textstyle\sum\nolimits_{\substack{w_{1}\ldots w_{p}\\\in\{1,\ldots
,n\}\text{,}\\\text{pairw. diff.}}}}
\theta_{p+1,q\mid s_{1}\ldots s_{n}}^{w_{1}\ldots w_{p}}\otimes f_{1}
\wedge\widehat{f_{w_{1}}}\ldots\widehat{f_{w_{p}}}\wedge f_{n}
\]
Therefore $\delta^{\lbrack2]}\theta_{p+1,q}$ equals
\begin{align*}
&
{\textstyle\bigoplus\nolimits_{\deg(s_{1}\ldots s_{n})=p+1}}
{\textstyle\sum\nolimits_{\substack{w_{1}\ldots w_{p}\\\in\{1,\ldots
,n\}\text{,}\\\text{pairw. diff.}}}}
{\textstyle\sum\nolimits_{\substack{w_{p+1}<w_{p+2}\\\in\{1,\ldots
,n\}\setminus\{w_{1},\ldots,w_{p}\}}}}
(-1)^{w_{p+1}+w_{p+2}}\\
& (-1)^{\#\{w_{i}\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+1}\}}
(-1)^{\#\{w_{i}\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+2}\}}\\
& \theta_{p+1,q\mid s_{1}\ldots s_{n}}^{w_{1}\ldots w_{p}}\otimes\lbrack
f_{w_{p+1}},f_{w_{p+2}}]\wedge f_{1}\wedge\widehat{f_{w_{1}}}\ldots
\widehat{f_{w_{p+1}}}\ldots\widehat{f_{w_{p+2}}}\ldots\widehat{f_{w_{p}}
}\wedge f_{n}\text{.}
\end{align*}
The two terms $(-1)^{\#\{w_{i}\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+1}\}}$
(and with $w_{i}<w_{p+2}$ respectively) appear since the original summand in
$\delta^{\lbrack2]}$ carries the sign $(-1)^{i+j}$, so we need to compute the
number of the wedge slot correctly, respecting the omitted wedge
factors;\ compare with the discussion in the first part of this proof. We
observe that the first wedge factor remains unchanged under $\delta
^{\lbrack2]}$. Hence, when we apply the contracting homotopy $H$ in this
induction step and in the next again, the summand will vanish thanks to
$H^{2}=0$, cf. eq. \ref{lBTA_35}. It will not do harm to verify this
explicitly: We use the notation of eq. \ref{lBTA_21} and write the above in
terms of
\begin{align*}
(\delta^{\lbrack2]}\theta_{p+1,q})_{p+1,q-1\mid s_{1}\ldots s_{n}}
^{w_{1}\ldots w_{p}\parallel w_{p+1},w_{p+2}} & =(-1)^{w_{p+1}+w_{p+2}
}(-1)^{\#\{w_{i}\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+1}\}}\\
& (-1)^{\#\{w_{i}\mid1\leq i\leq p\text{ s.t. }w_{i}<w_{p+2}\}}
\theta_{p+1,q\mid s_{1}\ldots s_{n}}^{w_{1}\ldots w_{p}}\text{.}
\end{align*}
Next, we apply $H:N^{p+1}\rightarrow N^{p+2}$ (see eq. \ref{lBTA_17}):\ Then
for indices $s_{1}\ldots s_{n}=\gamma_{1}^{\dag}\ldots\gamma_{n-p-1}^{\dag
}0\ldots0$ and $\gamma_{1}^{\dag}\ldots\gamma_{n-p-1}^{\dag}\in\{\pm\}$ (which
is of degree $p+2$) we obtain the expression
\begin{align*}
& (H\delta^{\lbrack2]}\theta_{p+1,q})_{p+2,q-1\mid\gamma_{1}^{\dag}
\ldots\gamma_{n-p-1}^{\dag}\underset{p+1}{\underbrace{0\ldots0}}}^{w_{1}\ldots
w_{p}\parallel w_{p+1},w_{p+2}}\\
& =P_{1}^{\gamma_{1}^{\dag}}\cdots P_{n-p-1}^{\gamma_{n-p-1}^{\dag}}
{\textstyle\sum_{\gamma_{1},\ldots,\gamma_{n-p}\in\{\pm\}}}
(-1)^{(\ldots)}P_{n-p}^{-\gamma_{n-p}}\theta_{p+1,q\mid\gamma_{1}\cdots
\gamma_{n-p}\underset{p}{\underbrace{0\ldots0}}}^{w_{1}\ldots w_{p}}\text{,}
\end{align*}
where we have plugged in our previous computation and started to disregard the
precise sign. We know the last term of this expression by our induction
hypothesis and therefore obtain
\begin{align*}
& =P_{1}^{\gamma_{1}^{\dag}}\cdots P_{n-p-1}^{\gamma_{n-p-1}^{\dag}}\\
&
{\textstyle\sum_{\gamma_{1},\ldots,\gamma_{n-p}\in\{\pm\}}}
{\textstyle\sum_{\gamma_{n-p+1}^{\ast}\ldots\gamma_{n}^{\ast}\in\{\pm\}}}
(-1)^{(\ldots)}\underline{P_{n-p}^{-\gamma_{n-p}}P_{1}^{\gamma_{1}}\cdots
P_{n-p}^{\gamma_{n-p}}}\\
& \left( P_{n-p+1}^{\left( -\gamma_{n-p+1}^{\ast}\right) }
\operatorname*{ad}(f_{w_{p}})P_{n-p+1}^{\gamma_{n-p+1}^{\ast}}\right)
\cdots\left( P_{n}^{\left( -\gamma_{n}^{\ast}\right) }\operatorname*{ad}
(f_{w_{1}})P_{n}^{\gamma_{n}^{\ast}}\right) f_{0}\text{.}
\end{align*}
As the $P_{1}^{+},\ldots,P_{n}^{+}$ commute pairwise, the same holds for all
$P_{1}^{\pm},\ldots,P_{n}^{\pm}$ (by Def. \ref{BT_Def_IdempotentsForA}). Thus,
the underlined expression can be rearranged to $P_{n-p}^{-\gamma_{n-p}}
P_{n-p}^{\gamma_{n-p}}\ldots$, but $P_{i}^{+}P_{i}^{-}=P_{i}^{+}
(\mathbf{1}-P_{i}^{+})=0$ as $P_{i}^{+}$ is an idempotent. The same for
$P_{i}^{-}P_{i}^{+}$. Hence, in all the indices $s_{1}\ldots s_{n}$ relevant
for our claim $H\delta^{\lbrack2]}\theta_{p+1,q}$ is zero.
\end{proof}
This readily implies the following key computation:
\begin{theorem}
[Main Theorem]\label{BT_MainThmInner}Let $(A,(I_{i}^{\pm}),\tau)$ be an
$n$-fold cubically decomposed algebra over a field $k$. Then
\begin{align*}
& \left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. (f_{0}\otimes
f_{1}\wedge\ldots\wedge f_{n})=-(-1)^{\frac{(n-1)n}{2}}\\
& \qquad\qquad\tau
{\textstyle\sum_{\pi\in\mathfrak{S}_{n}}}
\operatorname*{sgn}(\pi)
{\textstyle\sum_{\gamma_{1}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}(P_{1}^{-\gamma_{1}
}\operatorname*{ad}f_{\pi(1)}P_{1}^{\gamma_{1}})\\
& \qquad\qquad\cdots(P_{n}^{-\gamma_{n}}\operatorname*{ad}f_{\pi(n)}
P_{n}^{\gamma_{n}})f_{0}\text{,}
\end{align*}
where $P_{1}^{+},\ldots,P_{n}^{+}$ is any system of pairwise commuting good
idempotents in the sense of Def. \ref{BT_Def_IdempotentsForA} (the value does
not depend on the choice of the latter). Analogously
\[
(\left. ^{\otimes}\operatorname*{res}\nolimits^{\ast}\right. \varphi
)(f_{1}\wedge\ldots\wedge f_{n})(f_{0}):=\varphi\cdot\left. ^{\otimes
}\operatorname*{res}\nolimits_{\ast}\right. (f_{0}\otimes f_{1}\wedge
\ldots\wedge f_{n})
\]
for every $\varphi\in k$.
\end{theorem}
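For orientation, we record the smallest case explicitly (this is nothing but the specialization $n=1$ of the theorem, where $\mathfrak{S}_{1}$ is trivial and the sign prefactor equals $-1$):
\[
\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. (f_{0}\otimes
f_{1})=-\tau
{\textstyle\sum_{\gamma\in\{\pm\}}}
\left( -1\right) ^{\gamma}(P_{1}^{-\gamma}\operatorname*{ad}(f_{1}
)P_{1}^{\gamma})f_{0}\text{.}
\]
This is the shape that will be compared with the cocycle of Arbarello, de Concini and Kac at the end of \S \ref{BT_sect_applications_residue}.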
We remark that one can also write the above formula as
\begin{align*}
& \left. ^{\otimes}\operatorname*{res}\nolimits_{\ast}\right. (f_{0}\otimes
f_{1}\wedge\ldots\wedge f_{n})=-(-1)^{\frac{(n-1)n}{2}}\\
& \tau
{\textstyle\sum_{\pi\in\mathfrak{S}_{n}}}
\operatorname*{sgn}(\pi)
{\textstyle\sum_{\gamma_{1}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}(P_{1}^{-\gamma_{1}}
f_{\pi(1)}P_{1}^{\gamma_{1}})\cdots(P_{n}^{-\gamma_{n}}f_{\pi(n)}P_{n}
^{\gamma_{n}})f_{0}
\end{align*}
since for any expression $g$ we have
\begin{align}
P_{i}^{-\gamma_{i}}\operatorname*{ad}(f_{w})P_{i}^{\gamma_{i}}g &
=P_{i}^{-\gamma_{i}}[f_{w},P_{i}^{\gamma_{i}}g]=P_{i}^{-\gamma_{i}}f_{w}
P_{i}^{\gamma_{i}}g-P_{i}^{-\gamma_{i}}P_{i}^{\gamma_{i}}gf_{w}\label{lBTA_33}
\\
& =P_{i}^{-\gamma_{i}}f_{w}P_{i}^{\gamma_{i}}g\nonumber
\end{align}
since $P_{i}^{-\gamma_{i}}P_{i}^{\gamma_{i}}=(\mathbf{1}-P_{i}^{\gamma_{i}
})P_{i}^{\gamma_{i}}=0$ and $P_{i}^{\gamma_{i}}$ is an idempotent.
\begin{proof}
Use Prop. \ref{BT_KeyLemmaFormula} with $p=n$. Plugging these components into
the shorthand notation of eq. \ref{lBTA_15} we unwind for $\left. ^{\otimes
}\operatorname*{res}\nolimits_{\ast}\right. (f_{0}\otimes f_{1}\wedge
\ldots\wedge f_{n})$ the formula
\begin{align*}
& =-\tau\left( -1\right) ^{\frac{n^{2}+n}{2}}\sum_{\substack{w_{1}\ldots
w_{n}\\=\{1,\ldots,n\}}}\rho(w_{1},\ldots,w_{n})(-1)^{w_{1}+\cdots+w_{n}}\\
&
{\textstyle\sum_{\gamma_{1}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}(P_{1}^{-\gamma_{1}
}\operatorname*{ad}(f_{w_{n}})P_{1}^{\gamma_{1}})\cdots(P_{n}^{-\gamma_{n}
}\operatorname*{ad}(f_{w_{1}})P_{n}^{\gamma_{n}})f_{0}\text{.}
\end{align*}
We can clearly replace $w_{1},\ldots,w_{n}$ by a sum over all permutations of
$\{1,\ldots,n\}$. In order to obtain a nice formula (in the above formula the
$P_{i}$ appear in ascending order, while the $w_{i}$ appear in descending
order), we prefer to compose each permutation with the order-reversing
permutation $w_{i}:=\pi(n-i+1)$: Hence
\begin{align*}
& =-\tau\left( -1\right) ^{\frac{n^{2}+n}{2}}\sum_{\pi\in\mathfrak{S}_{n}
}\rho(\pi(n),\ldots,\pi(1))(-1)^{1+\cdots+n}\\
&
{\textstyle\sum_{\gamma_{1}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}(P_{1}^{-\gamma_{1}
}\operatorname*{ad}(f_{\pi(1)})P_{1}^{\gamma_{1}})\cdots(P_{n}^{-\gamma_{n}
}\operatorname*{ad}(f_{\pi(n)})P_{n}^{\gamma_{n}})f_{0}\text{.}
\end{align*}
To conclude, use eq. \ref{lBTA_22} and the (easy)\ fact that the
order-reversing permutation has signum $(-1)^{\frac{(n-1)n}{2}}$, giving the
sign of our claim.
\end{proof}
\begin{proof}
[Proof of Thms. \ref{intro_Thm1UniversalTateCocycle} \&
\ref{intro_Thm2CocycleFormula}]We define $\mathfrak{G}:=E^{n}(k)$, where $E$
is the functor defined in \S \ref{TATE_section_InfiniteMatrixAlgebras}. As
already discussed in \S \ref{TATE_section_InfiniteMatrixAlgebras} this
contains $k[t_{1}^{\pm},\ldots,t_{n}^{\pm}]$ as a Lie subalgebra, acting as
multiplication operators $x\mapsto f\cdot x$. It is also easily checked that
the differential operators $t_{1}^{s_{1}}\cdots t_{n}^{s_{n}}\partial_{t_{i}}$
can be written as infinite matrices. If $\mathfrak{g}$ is a \textit{finite}
-dimensional Lie algebra, observe that $\mathfrak{G}=E^{n}(k)$ and
$E^{n}(\operatorname*{End}_{k}(\mathfrak{g}))$ are actually isomorphic. If
$\mathfrak{g}$ is simple, it is centreless, so the adjoint representation
gives an embedding $\mathfrak{g}\hookrightarrow\operatorname*{End}
\nolimits_{k}(\mathfrak{g})$, and thus
\[
\mathfrak{g}[t_{1}^{\pm},\ldots,t_{n}^{\pm}]\hookrightarrow E^{n}
(\operatorname*{End}\nolimits_{k}(\mathfrak{g}))\simeq E^{n}(k)=\mathfrak{G}
\text{.}
\]
This shows that all Lie algebras in the claim are subalgebras of
$\mathfrak{G}$. As shown in \S \ref{TATE_section_InfiniteMatrixAlgebras},
$\mathfrak{G}$ is a cubically decomposed algebra, so we define $\phi$ as in
Def. \ref{BT_Def_TateBeilinsonAbstractResidueMaps}, $\phi:=\left.
\operatorname*{res}\nolimits^{\ast}(1)\right. $. Since we work with field
coefficients, the Universal Coefficient Theorem for Lie algebras tells us that
\[
H^{n+1}(\mathfrak{g},k)\cong H_{n+1}(\mathfrak{g},k)^{\ast}\text{,}
\]
i.e. knowing the values of a cocycle only on Lie cycles (instead of all of $
{\textstyle\bigwedge\nolimits^{\bullet}}
\mathfrak{g}$) determines the cocycle uniquely, $\left. \operatorname*{res}
\nolimits^{\ast}(1)\right. (\alpha)=\left. \operatorname*{res}
\nolimits_{\ast}\right. \alpha$. However, by Lemma
\ref{BTrev_ComputeWedgeResViaTensorRes} we may evaluate the cocycle on the
image of $I$ by using $\left. ^{\otimes}\operatorname*{res}\nolimits_{\ast
}\right. $ instead. Using Thm. \ref{BT_MainThmInner} we get an explicit
formula for $\left. ^{\otimes}\operatorname*{res}\nolimits^{\ast}(1)\right.
$, proving Thm. \ref{intro_Thm2CocycleFormula}. Using the explicit formula, it
is a direct computation to check that for $n=1$ the cocycle agrees with the
ones mentioned in the claim of Thm. \ref{intro_Thm1UniversalTateCocycle}.
\end{proof}
\section{\label{BT_sect_applications_residue}Application to the
Multidimensional Residue}
In this section we will show that the Lie cohomology class of Def.
\ref{BT_Def_TateBeilinsonAbstractResidueMaps} naturally gives the
multidimensional (Parshin) residue.
We work in the framework of multivariate Laurent polynomial rings over a field
$k$, see \S \ref{TATE_section_InfiniteMatrixAlgebras}. In other words, as our
cubically decomposed algebra we take an infinite matrix algebra $A=E^{n}(k)$
and $\mathfrak{g}=A_{Lie}$. Via eq. \ref{lBTA_30} it acts on the $k$-vector
space $k[t_{1}^{\pm},\ldots,t_{n}^{\pm}]$. The latter, now interpreted as a
ring, also embeds as a \textit{commutative} subalgebra into $A$. In order to
distinguish very clearly between the subalgebra of $A$ and the vector space it
acts on, we shall from now on write $k[\mathbf{t}_{1}^{\pm},\ldots
,\mathbf{t}_{n}^{\pm}]$ for the $k$-vector space. Thus, when we write $t_{i}$
we always refer to the associated multiplication operator $x\mapsto t_{i}\cdot
x$ in $A$, e.g. $t_{i}^{m}\cdot\mathbf{t}_{i}^{l}=\mathbf{t}_{i}^{m+l}
$.\newline Following \cite[Lemma 1(b)]{MR565095} we may introduce a (not quite
well-defined\footnote{It does not respect the relation $\mathrm{d}
(ab)=b\mathrm{d}a+a\mathrm{d}b$; this artifact already occurs in Beilinson's
paper \cite{MR565095}. However, this ambiguity dissolves after composing with
the residue (as in the theorem) and it is very convenient to treat this as
some sort of a map for the moment.}) `map'
\begin{equation}
\varkappa:\Omega_{k[t_{1}^{\pm},\ldots,t_{n}^{\pm}]/k}^{n}\rightarrow
H_{n+1}(\mathfrak{g},k)\qquad f_{0}\mathrm{d}f_{1}\wedge\ldots\wedge
\mathrm{d}f_{n}\mapsto f_{0}\wedge f_{1}\wedge\ldots\wedge f_{n}\text{.}
\label{lBT_36}
\end{equation}
As $k[t_{1}^{\pm},\ldots,t_{n}^{\pm}]$ is commutative, the $f_{i}$ commute
pairwise and thus $f_{0}\wedge\ldots\wedge f_{n}$ is indeed a Lie homology cycle.
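For instance, for $n=1$ the `map' $\varkappa$ sends $t_{1}^{-1}\mathrm{d}t_{1}$ to the cycle $t_{1}^{-1}\wedge t_{1}\in H_{2}(\mathfrak{g},k)$ (a direct specialization of eq. \ref{lBT_36}, recorded only for illustration).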
\begin{theorem}
\label{BT_PropDetFormulaForResidue}The morphism
\[
\operatorname*{res}\nolimits_{\ast}\circ\varkappa:\Omega_{k[t_{1}^{\pm}
,\ldots,t_{n}^{\pm}]/k}^{n}\longrightarrow k
\]
(with $\varkappa$ as in eq. \ref{lBT_36} and $\operatorname*{res}
\nolimits_{\ast}$ as in Def. \ref{BT_Def_TateBeilinsonAbstractResidueMaps})
for $c_{i,j}\in\mathbf{Z}$ is explicitly given by
\[
t_{1}^{c_{0,1}}\ldots t_{n}^{c_{0,n}}\mathrm{d}(t_{1}^{c_{1,1}}\ldots
t_{n}^{c_{1,n}})\wedge\ldots\wedge\mathrm{d}(t_{1}^{c_{n,1}}\ldots
t_{n}^{c_{n,n}})\mapsto-(-1)^{\frac{n^{2}+n}{2}}\det
\begin{pmatrix}
c_{1,1} & \cdots & c_{n,1}\\
\vdots & \ddots & \vdots\\
c_{1,n} & \cdots & c_{n,n}
\end{pmatrix}
\]
whenever $\sum_{p=0}^{n}c_{p,i}=0$ and is zero otherwise. In particular
$-(-1)^{\frac{n^{2}+n}{2}}(\operatorname*{res}\nolimits_{\ast}\circ\varkappa)$
is the conventional multidimensional (Parshin) residue.
\end{theorem}
The complicated sign $-(-1)^{\frac{n^{2}+n}{2}}$ should not concern us too
much; it is an artifact of homological algebra. Just by changing our sign
conventions for bicomplexes, we could easily switch to an overall opposite
sign. Letting $c_{i,j}=\delta_{i=j}$ for $i,j\in\{1,\ldots,n\}$ gives the
familiar
\[
-(-1)^{\frac{n^{2}+n}{2}}\operatorname*{res}\nolimits_{\ast}(at_{1}^{c_{0,1}
}\ldots t_{n}^{c_{0,n}}\wedge t_{1}\wedge\ldots\wedge t_{n})=\delta
_{c_{0,1}=-1}\cdots\delta_{c_{0,n}=-1}a
\]
for $a\in k$. In particular this assures us that the map $\operatorname*{res}
\nolimits_{\ast}$ gives the correct notion of residue: it is the
$(-1,\ldots,-1)$-coefficient of the Laurent expansion.
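As a further sanity check (again a direct specialization of the theorem, not needed in the sequel), take $n=2$ and the form $t_{1}^{-1}t_{2}^{-1}\,\mathrm{d}t_{1}\wedge\mathrm{d}t_{2}$: here $(c_{0,1},c_{0,2})=(-1,-1)$, $(c_{1,1},c_{1,2})=(1,0)$ and $(c_{2,1},c_{2,2})=(0,1)$, so $\sum_{p=0}^{2}c_{p,i}=0$ for $i=1,2$, the determinant equals $\det\left(\begin{smallmatrix}1 & 0\\ 0 & 1\end{smallmatrix}\right)=1$, and the normalized residue of $\frac{\mathrm{d}t_{1}\wedge\mathrm{d}t_{2}}{t_{1}t_{2}}$ is $1$, as it should be.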
\begin{proof}
After unwinding $\varkappa$ it remains to evaluate $\operatorname*{res}
\nolimits_{\ast}(f_{0}\wedge f_{1}\wedge\ldots\wedge f_{n})$ for $f_{i}
:=t_{1}^{c_{i,1}}\cdots t_{n}^{c_{i,n}}$ ($i=0,\ldots,n$). Clearly
$f_{0}\otimes f_{1}\wedge\ldots\wedge f_{n}$ is a cycle in $H_{n}
(\mathfrak{g},\mathfrak{g})$, and so by Lemma
\ref{BTrev_ComputeWedgeResViaTensorRes} we may use $\left. ^{\otimes
}\operatorname*{res}\nolimits_{\ast}\right. $ instead of $\operatorname*{res}
\nolimits_{\ast}$. Then Thm. \ref{BT_MainThmInner} reduces this to the matrix
trace
\begin{align}
& \operatorname*{res}\nolimits_{\ast}(f_{0}\wedge f_{1}\wedge\ldots\wedge
f_{n})=-(-1)^{\frac{(n-1)n}{2}}
{\textstyle\sum_{\pi\in\mathfrak{S}_{n}}}
\operatorname*{sgn}(\pi)\tau M_{\pi}\text{, where}\label{lBT_38}\\
& M_{\pi}:=
{\textstyle\sum_{\gamma_{1}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}(P_{1}^{-\gamma_{1}}
f_{\pi(1)}P_{1}^{\gamma_{1}})\cdots(P_{n}^{-\gamma_{n}}f_{\pi(n)}P_{n}
^{\gamma_{n}})f_{0}\text{.}\nonumber
\end{align}
For the evaluation of $\tau M_{\pi}$ fix a permutation $\pi$ and pick the
(pairwise commuting) system of idempotents given by
\begin{equation}
P_{j}^{+}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}
=\delta_{\lambda_{j}\geq0}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}}\text{.}\qquad\text{(with }\lambda_{1},\ldots,\lambda_{n}
\in\mathbf{Z}\text{)} \label{lBTA_32}
\end{equation}
Next, observe that the Laurent polynomial ring $W:=k[\mathbf{t}_{1}^{\pm
},\ldots,\mathbf{t}_{n}^{\pm}]$ is stable (i.e. $\phi W\subseteq W$) under the
endomorphisms $f_{0},\ldots,f_{n}$ and the idempotents $P_{i}^{\pm}$, and
therefore under $M_{\pi}$. Hence, it follows that it suffices to evaluate the
trace of $M_{\pi}$ on the $k$-vector subspace $k[\mathbf{t}_{1}^{\pm}
,\ldots,\mathbf{t}_{n}^{\pm}]$. We compute successively
\begin{align*}
f_{k}P_{j}^{+}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}
& =\delta_{\lambda_{j}\geq0}\mathbf{t}_{1}^{\lambda_{1}+c_{k,1}}
\cdots\mathbf{t}_{n}^{\lambda_{n}+c_{k,n}}\\
P_{j}^{-}f_{k}P_{j}^{+}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}} & =\delta_{0\leq\lambda_{j}<-c_{k,j}}\mathbf{t}
_{1}^{\lambda_{1}+c_{k,1}}\cdots\mathbf{t}_{n}^{\lambda_{n}+c_{k,n}}
\end{align*}
and analogously for $P_{j}^{+}f_{k}P_{j}^{-}$. We find
\begin{align}
&
{\textstyle\sum\nolimits_{\gamma_{j}\in\{\pm\}}}
(-1)^{\gamma_{j}}\left( P_{j}^{-\gamma_{j}}f_{k}P_{j}^{\gamma_{j}}\right)
\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}\label{lBT_28}\\
& \qquad=(\delta_{0\leq\lambda_{j}<-c_{k,j}}-\delta_{-c_{k,j}\leq\lambda
_{j}<0})\mathbf{t}_{1}^{\lambda_{1}+c_{k,1}}\cdots\mathbf{t}_{n}^{\lambda
_{n}+c_{k,n}}\text{.}\nonumber
\end{align}
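For illustration, a concrete instance of eq. \ref{lBT_28}: if $c_{k,j}=-2$, the bracket equals $+1$ precisely for $\lambda_{j}\in\{0,1\}$ and vanishes otherwise, while if $c_{k,j}=2$, it equals $-1$ precisely for $\lambda_{j}\in\{-2,-1\}$. This is the eigenvalue bookkeeping that the trace evaluation below rests on.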
Now we claim:\medskip
\begin{itemize}
\item Subclaim: \textit{Writing }$w_{i}:=\pi(i)$ \textit{we have}
\begin{align}
M_{\pi}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}} & =
{\textstyle\prod\limits_{i=1}^{n}}
(\delta_{0\leq\lambda_{i}+c_{0,i}+\sum_{p=i+1}^{n}c_{w_{p},i}<-c_{w_{i},i}
}\nonumber\\
& -\delta_{-c_{w_{i},i}\leq\lambda_{i}+c_{0,i}+\sum_{p=i+1}^{n}c_{w_{p},i}
<0})\nonumber\\
& \mathbf{t}_{1}^{\lambda_{1}+c_{0,1}+\sum_{p=1}^{n}c_{w_{p},1}}
\cdots\mathbf{t}_{n}^{\lambda_{n}+c_{0,n}+\sum_{p=1}^{n}c_{w_{p},n}}\text{.}
\label{lBT_37}
\end{align}
\medskip
\end{itemize}
(\textit{Proof:} Define for $i=1,\ldots,n+1$ the truncated sum
\[
M_{\pi}^{(i)}:=\left[
{\textstyle\sum_{\gamma_{i}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{i}+\cdots+\gamma_{n}}(P_{i}^{-\gamma_{i}}
f_{w_{i}}P_{i}^{\gamma_{i}})\cdots(P_{n}^{-\gamma_{n}}f_{w_{n}}P_{n}
^{\gamma_{n}})\right] f_{0}
\]
so that $M_{\pi}^{(1)}=M_{\pi}$ and $M_{\pi}^{(n+1)}=f_{0}$. We claim that
\begin{equation}
M_{\pi}^{(i)}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}
}=\alpha\mathbf{t}_{1}^{\lambda_{1}+c_{0,1}+\sum_{p=i}^{n}c_{w_{p},1}}
\cdots\mathbf{t}_{n}^{\lambda_{n}+c_{0,n}+\sum_{p=i}^{n}c_{w_{p},n}}
\label{lBT_29}
\end{equation}
for some factor $\alpha\in\{\pm1,0\}$. For $i=n+1$ this is clear since
$f_{0}=t_{1}^{c_{0,1}}\cdots t_{n}^{c_{0,n}}$, in particular $\alpha=1$.
Assuming this holds for $i+1$, for $i$ we get by using eq. \ref{lBT_28} (with
the appropriate values plugged in: $j:=i$ and $k:=w_{i}$, and $\lambda_{i}$ as
in eq. \ref{lBT_29})
\begin{align}
M_{\pi}^{(i)}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}
& =
{\textstyle\sum\nolimits_{\gamma_{i}\in\{\pm\}}}
(-1)^{\gamma_{i}}\left( P_{i}^{-\gamma_{i}}f_{w_{i}}P_{i}^{\gamma_{i}
}\right) M_{\pi}^{(i+1)}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}}\nonumber\\
& \mathbf{=}(\delta_{0\leq\lambda_{i}+c_{0,i}+\sum_{p=i+1}^{n}c_{w_{p}
,i}<-c_{w_{i},i}}-\delta_{-c_{w_{i},i}\leq\lambda_{i}+c_{0,i}+\sum_{p=i+1}
^{n}c_{w_{p},i}<0})\label{lBT_31}\\
& \alpha\mathbf{t}_{1}^{\lambda_{1}+c_{0,1}+\sum_{p=i+1}^{n}c_{w_{p}
,1}+c_{w_{i},1}}\cdots\mathbf{t}_{n}^{\lambda_{n}+c_{0,n}+\sum_{p=i+1}
^{n}c_{w_{p},n}+c_{w_{i},n}}\text{.}\nonumber
\end{align}
This proves our claim for all $i$ by induction. We observe that the pre-factor
$\alpha$ in each step just gets multiplied with the expression in eq.
\ref{lBT_31}, giving the product in our claim.\medskip)
Next, we need to evaluate the trace of $M_{\pi}$ as given in eq. \ref{lBT_37}.
The endomorphism is nilpotent unless
\begin{equation}
\forall i:c_{0,i}+
{\textstyle\sum\nolimits_{p=1}^{n}}
c_{w_{p},i}=0\text{.} \label{lBT_23}
\end{equation}
We remark that $w_{1},\ldots,w_{n}$ is just a permutation of $\{1,\ldots,n\}$,
so these conditions can be rewritten as $\sum_{p=0}^{n}c_{p,i}=0$. In the
nilpotent case the trace is clearly zero. Hence, we may assume we are in the
case where eq. \ref{lBT_23} holds. Using these equations and the useful
convention $w_{n+1}:=0$, our expression for $M_{\pi}$ simplifies to
\begin{align}
M_{\pi}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}} & =
{\textstyle\prod\limits_{i=1}^{n}}
(\delta_{0\leq\lambda_{i}+\sum_{p=i+1}^{n+1}c_{w_{p},i}<-c_{w_{i},i}
}\label{lBT_27}\\
& -\delta_{0\leq\lambda_{i}+c_{w_{i},i}+\sum_{p=i+1}^{n+1}c_{w_{p}
,i}<c_{w_{i},i}})\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda
_{n}}\text{.}\nonumber
\end{align}
The endomorphism $M_{\pi}$ is visibly diagonal of finite rank and we may
reduce the computation of the trace to a (finite-dimensional) stable vector
subspace. A finite subset of the $\mathbf{t}_{1}^{\lambda_{1}}\cdots
\mathbf{t}_{n}^{\lambda_{n}}$ ($\lambda_{1},\ldots,\lambda_{n}\in\mathbf{Z}$)
provides a basis. We see in eq. \ref{lBT_27} that $M_{\pi}$ acts diagonally on
these basis vectors with eigenvalues $\pm1$ or $0$. Moreover, for each $i$ we
either have $c_{w_{i},i}\geq0$ or $c_{w_{i},i}<0$, which shows that each
bracket of the shape $(\delta_{0\leq\lambda<-c}-\delta_{-c\leq\lambda<0})$ in
eq. \ref{lBT_27} either attains only values in $\{+1,0\}$ when we run through
all $\lambda_{1},\ldots,\lambda_{n}\in\mathbf{Z}$, or only values in
$\{-1,0\}$. This shows that we only need to count (with appropriate sign) the
non-zero eigenvalues of $M_{\pi}$ in order to evaluate the trace. Note that
our finite subset of $\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}}$ ($\lambda_{1},\ldots,\lambda_{n}\in\mathbf{Z}$) indexes a
basis, so we need to count the number of such basis vectors with non-zero
eigenvalue. We introduce the non-standard shorthand $\left\lfloor
x\right\rfloor :=\max(0,x)$. Inspecting eq. \ref{lBT_27} shows that when
running through $\lambda_{i}$ we have
\begin{itemize}
\item $\left\lfloor -c_{w_{i},i}\right\rfloor $ times the eigenvalue $+1$,
\item $\left\lfloor +c_{w_{i},i}\right\rfloor $ times the eigenvalue $-1$.
\end{itemize}
The value of a fixed bracket $(\delta_{0\leq\lambda<-c}-\delta_{-c\leq
\lambda<0})$ - when non-zero - is always either $+1$, or always $-1$. Thus,
the number of non-zero eigenvalues is simply the number of elements within the
hypercube such that each $\lambda_{i}$ lies within the range of length
$\left\lfloor \pm c_{w_{i},i}\right\rfloor $ counted above, and therefore
\[
\tau M_{\pi}=
{\textstyle\prod\nolimits_{i=1}^{n}}
(\left\lfloor -c_{w_{i},i}\right\rfloor -\left\lfloor +c_{w_{i},i}
\right\rfloor )=
{\textstyle\prod\nolimits_{i=1}^{n}}
(-c_{w_{i},i})=(-1)^{n}
{\textstyle\prod\nolimits_{i=1}^{n}}
c_{\pi(i),i}
\]
(because $\left\lfloor -a\right\rfloor -\left\lfloor a\right\rfloor =-a$ for
all $a\in\mathbf{Z}$). We plug this into eq. \ref{lBT_38} and recognize the
usual formula for the determinant. This finishes the proof.
\end{proof}
We are now ready to prove the remaining theorems from the introduction:
\begin{proof}
[Proof of Thms. \ref{Prop_MainResidueThm} \&
\ref{intro_Thm5CocycleAgreesWithBeilinsons}]We use Thm.
\ref{BT_PropDetFormulaForResidue} to obtain Thm. \ref{Prop_MainResidueThm}
.\ref{resthm_part2}. Then Thm. \ref{Prop_MainResidueThm}.\ref{resthm_part3}
follows as a special case. For Thm. \ref{Prop_MainResidueThm}
.\ref{resthm_part1} use the shorthands $\pi=P_{1}^{+}=P^{+}$ (following both
the notation of Arbarello, de Concini and Kac and ours). On the one hand we
compute
\begin{align*}
\lbrack\pi,f_{1}]f_{0} & =[P,f_{1}]f_{0}=Pf_{1}f_{0}-f_{1}Pf_{0}
=[Pf_{0},f_{1}]\\
& =(P^{+}+P^{-})[P^{+}f_{0},f_{1}]=P^{-}[P^{+}f_{0},f_{1}]+P^{+}[P^{+}
f_{0},f_{1}]
\end{align*}
and we have $[P^{+}f_{0},f_{1}]+[P^{-}f_{0},f_{1}]=[f_{0},f_{1}]=0$, so this
equals
\[
=P^{-}[P^{+}f_{0},f_{1}]-P^{+}[P^{-}f_{0},f_{1}]\text{.}
\]
On the other hand, we unwind
\begin{align*}
\operatorname*{res}f_{0}\mathrm{d}f_{1} & =\left( -1\right) ^{1}
\operatorname*{tr}
{\textstyle\sum_{\gamma_{1}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}}(P_{1}^{-\gamma_{1}}\operatorname*{ad}
(f_{\pi(1)})P_{1}^{\gamma_{1}})f_{0}\\
& =-P^{-}[f_{1},P_{1}^{+}f_{0}]+P^{+}[f_{1},P_{1}^{-}f_{0}]
\end{align*}
and these expressions clearly coincide. Finally Thm.
\ref{intro_Thm5CocycleAgreesWithBeilinsons} is true since we use the cocycle
defined in Def. \ref{BT_Def_TateBeilinsonAbstractResidueMaps}, i.e. it is
constructed exactly as stated in Thm.
\ref{intro_Thm5CocycleAgreesWithBeilinsons}.
\end{proof}
\section{\label{BT_sect_applications_multiloop}Application to Multiloop Lie
Algebras}
Suppose $k$ is a field and $\mathfrak{g}/k$ is a finite-dimensional centreless
Lie algebra (e.g. $\mathfrak{g}$ finite-dimensional, semisimple). Then the
adjoint representation $\operatorname*{ad}:\mathfrak{g}\hookrightarrow
\operatorname*{End}\nolimits_{k}(\mathfrak{g})$ is injective. Thus, we obtain
a Lie algebra inclusio
\[
i:\mathfrak{g}[\mathbf{t}_{1}^{\pm},\ldots,\mathbf{t}_{n}^{\pm}
]\hookrightarrow E^{n}(\operatorname*{End}\nolimits_{k}(\mathfrak{g}
))_{Lie}\text{,}
\]
where $E$ is the functor described in
\S \ref{TATE_section_InfiniteMatrixAlgebras} (the right-hand side is equipped
with the Lie bracket $[a,b]=ab-ba$ based on the associative algebra
structure). Thus, we have the pullback
\[
i^{\ast}:H^{n+1}(E^{n}(\operatorname*{End}\nolimits_{k}(\mathfrak{g}
))_{Lie},k)\rightarrow H^{n+1}(\mathfrak{g}[\mathbf{t}_{1}^{\pm}
,\ldots,\mathbf{t}_{n}^{\pm}],k)\text{,}
\]
which we may apply to the class $\operatorname*{res}\nolimits^{\ast}(1)$, see
Def. \ref{BT_Def_TateBeilinsonAbstractResidueMaps}.
\begin{theorem}
Suppose $k$ is a field and $\mathfrak{g}/k$ is a finite-dimensional centreless
Lie algebra. For $Y_{0},\ldots,Y_{n}\in\mathfrak{g}$ we call
\begin{equation}
B(Y_{0},\ldots,Y_{n}):=\operatorname*{tr}\nolimits_{\operatorname*{End}
_{k}(\mathfrak{g})}(\operatorname*{ad}(Y_{0})\operatorname*{ad}(Y_{1}
)\cdots\operatorname*{ad}(Y_{n})) \label{lTateLieCase_GeneralizedKillingForm}
\end{equation}
the `generalized Killing form'. For $n=1$ and if $\mathfrak{g}$ is semisimple,
this is the classical Killing form of $\mathfrak{g}$.
\begin{enumerate}
\item Then on all Lie cycles admitting a lift under $I$ as in eq.
\ref{lBT_revIIntro}, the pullback~$i^{\ast}\operatorname*{res}\nolimits^{\ast
}(1)\in H^{n+1}(\mathfrak{g}[\mathbf{t}_{1}^{\pm},\ldots,\mathbf{t}_{n}^{\pm
}],k)$ is explicitly given by
\begin{align*}
& (i^{\ast}\phi)(Y_{0}\mathbf{t}_{1}^{c_{0,1}}\cdots\mathbf{t}_{n}^{c_{0,n}
}\wedge\cdots\wedge Y_{n}\mathbf{t}_{1}^{c_{n,1}}\cdots\mathbf{t}_{n}
^{c_{n,n}})\\
& =-(-1)^{\frac{n^{2}+n}{2}}\sum_{\pi\in\mathfrak{S}_{n}}\operatorname*{sgn}
(\pi)B(Y_{\pi(1)},\ldots,Y_{\pi(n)},Y_{0})\prod\limits_{i=1}^{n}c_{\pi
(i),i}\text{.}
\end{align*}
whenever $\forall i\in\{1,\ldots,n\}:\sum_{p=0}^{n}c_{p,i}=0$ and zero otherwise.
\item If $\mathfrak{g}$ is finite-dimensional and semisimple and $n=1$, then
$i^{\ast}\operatorname*{res}\nolimits^{\ast}(1)\in H^{2}(\mathfrak{g}
[\mathbf{t}_{1}^{\pm}],k)$ is the universal central extension of the loop Lie
algebra $\mathfrak{g}[\mathbf{t}_{1},\mathbf{t}_{1}^{-1}]$ giving the
associated affine Lie algebra $\widehat{\mathfrak{g}}$ (without extending by a
derivation)
\[
0\longrightarrow k\left\langle c\right\rangle \longrightarrow\widehat
{\mathfrak{g}}\longrightarrow\mathfrak{g}[\mathbf{t}_{1},\mathbf{t}_{1}
^{-1}]\longrightarrow0\text{.}
\]
\end{enumerate}
\end{theorem}
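To fix ideas (an illustration only, not used in the proof): for $n=1$ and $\mathfrak{g}=\mathfrak{sl}_{2}(k)$, eq. \ref{lTateLieCase_GeneralizedKillingForm} is the classical Killing form and one finds $B(X,Y)=\operatorname*{tr}(\operatorname*{ad}(X)\operatorname*{ad}(Y))=4\operatorname*{tr}(XY)$.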
\begin{proof}
\textbf{(1)} According to Lemma \ref{BTrev_ComputeWedgeResViaTensorRes}, Thm.
\ref{BT_MainThmInner} and eq. \ref{lBT_39} the cocycle is explicitly given by
\begin{align*}
\operatorname*{res}\nolimits^{\ast}(1)(f_{0}\wedge\cdots\wedge f_{n}) &
=\left. ^{\otimes}\operatorname*{res}\nolimits^{\ast}\right. (1)(f_{0}
\otimes f_{1}\wedge\cdots\wedge f_{n})\\
& =\tau
{\textstyle\sum_{\pi\in\mathfrak{S}_{n}}}
\operatorname*{sgn}(\pi)M_{\pi}\text{, where}\\
M_{\pi} & =
{\textstyle\sum_{\gamma_{1}\ldots\gamma_{n}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{1}+\cdots+\gamma_{n}}\\
& (P_{1}^{-\gamma_{1}}f_{\pi(1)}P_{1}^{\gamma_{1}})\cdots(P_{n}^{-\gamma_{n}
}f_{\pi(n)}P_{n}^{\gamma_{n}})f_{0}\text{.}
\end{align*}
Note that $M_{\pi}\in E^{n}(\operatorname*{End}\nolimits_{k}(\mathfrak{g}))$.
As we consider the pullback of the cohomology class along $i:\mathfrak{g}
[t_{1}^{\pm},\ldots,t_{n}^{\pm}]\hookrightarrow E^{n}(\operatorname*{End}
\nolimits_{k}(\mathfrak{g}))_{Lie}$, it suffices to treat elements
$f_{i}:=Y_{i}t_{1}^{c_{i,1}}\cdots t_{n}^{c_{i,n}}$ with $c_{i,1}
,\ldots,c_{i,n}\in\mathbf{Z}$ (for $i=0,\ldots,n$) and $Y_{i}\in\mathfrak{g}$.
Note that by our embedding $i$ an element $f_{i}$ is mapped to the
endomorphism $\operatorname*{ad}(Y_{i})t_{1}^{c_{i,1}}\cdots t_{n}^{c_{i,n}}$
in $E^{n}(\operatorname*{End}\nolimits_{k}(\mathfrak{g}))$. Let $\pi
\in\mathfrak{S}_{n}$ be a fixed permutation. In order to compute the trace, it
suffices to study the action of $M_{\pi}$ on the basis elements $X\mathbf{t}
_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}$ of $\mathfrak{g}
[t_{1}^{\pm},\ldots,t_{n}^{\pm}]$, where $\lambda_{1},\ldots,\lambda_{n}
\in\mathbf{Z}$ and $X\in\mathfrak{g}$ runs through a basis of $\mathfrak{g}$.
We denote them with bold letters $\mathbf{t}_{i}$ instead of $t_{i}$ to
distinguish clearly between a basis element and $t_{i}$ as an endomorphism
$t_{i}:x\mapsto t_{i}\cdot x$ in $E^{n}(\operatorname*{End}\nolimits_{k}
(\mathfrak{g}))$. As in the proof of Thm. \ref{BT_PropDetFormulaForResidue} we
compute
\[
P_{j}^{-}f_{k}P_{j}^{+}X\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}}=\delta_{0\leq\lambda_{j}<-c_{k,j}}\operatorname*{ad}
(Y_{k})X\mathbf{t}_{1}^{\lambda_{1}+c_{k,1}}\cdots\mathbf{t}_{n}^{\lambda
_{n}+c_{k,n}}\text{.}
\]
and as a consequence we find
\begin{align*}
&
{\textstyle\sum\nolimits_{\gamma_{j}\in\{\pm\}}}
\left( -1\right) ^{\gamma_{j}}(P_{j}^{-\gamma_{j}}f_{k}P_{j}^{\gamma_{j}
})X\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}\\
& \qquad=(\delta_{0\leq\lambda_{j}<-c_{k,j}}-\delta_{-c_{k,j}\leq\lambda
_{j}<0})\operatorname*{ad}(Y_{k})X\mathbf{t}_{1}^{\lambda_{1}+c_{k,1}}
\cdots\mathbf{t}_{n}^{\lambda_{n}+c_{k,n}}\text{.}
\end{align*}
With an inductive computation entirely analogous to eq. \ref{lBT_37} we find
\begin{align*}
M_{\pi}X\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}} & =
{\textstyle\prod\nolimits_{i=1}^{n}}
(\delta_{0\leq\lambda_{i}+c_{0,i}+
{\textstyle\sum\nolimits_{p=i+1}^{n}}
c_{w_{p},i}<-c_{w_{i},i}}\\
& -\delta_{-c_{w_{i},i}\leq\lambda_{i}+c_{0,i}+
{\textstyle\sum\nolimits_{p=i+1}^{n}}
c_{w_{p},i}<0})\\
& \qquad\operatorname*{ad}(Y_{w_{1}})\cdots\operatorname*{ad}(Y_{w_{n}
})\operatorname*{ad}(Y_{0})X\\
& \mathbf{t}_{1}^{\lambda_{1}+\sum\nolimits_{p=0}^{n}c_{p,1}}\cdots
\mathbf{t}_{n}^{\lambda_{n}+\sum\nolimits_{p=0}^{n}c_{p,n}}\text{,}
\end{align*}
where $w_{i}:=\pi(i)$. Unless $\forall i:
{\textstyle\sum\nolimits_{p=0}^{n}}
c_{p,i}=0$ holds, $M_{\pi}$ is clearly nilpotent and thus has trace $\tau
M_{\pi}=0$. This condition is clearly independent of $\pi$, showing that
$(i^{\ast}\operatorname*{res}\nolimits^{\ast}(1))(f_{0}\wedge\cdots\wedge
f_{n})=0$ in this case. From now on assume $\forall i:
{\textstyle\sum\nolimits_{p=0}^{n}}
c_{p,i}=0$. Then $M_{\pi}$ respects the decomposition
\[
\mathfrak{g}[\mathbf{t}_{1}^{\pm},\ldots,\mathbf{t}_{n}^{\pm}]=
{\textstyle\coprod\nolimits_{\lambda_{1},\ldots,\lambda_{n}\in\mathbf{Z}^{n}}}
\mathfrak{g}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}_{n}^{\lambda_{n}}
\]
and therefore (as $\tau$ is essentially a trace) $\tau M_{\pi}=
{\textstyle\sum_{\lambda_{1},\ldots,\lambda_{n}}}
\tau M_{\pi}\mid_{\mathfrak{g}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}}}$.\ For each summand of the latter we obtain
\begin{align*}
\tau M_{\pi}\mid_{\mathfrak{g}\mathbf{t}_{1}^{\lambda_{1}}\cdots\mathbf{t}
_{n}^{\lambda_{n}}} & =
{\textstyle\prod\nolimits_{i=1}^{n}}
(\delta_{0\leq\lambda_{i}+c_{0,i}+
{\textstyle\sum\nolimits_{p=i+1}^{n}}
c_{w_{p},i}<-c_{w_{i},i}}\\
& \qquad-\delta_{-c_{w_{i},i}\leq\lambda_{i}+c_{0,i}+
{\textstyle\sum\nolimits_{p=i+1}^{n}}
c_{w_{p},i}<0})\\
& \operatorname*{tr}(\operatorname*{ad}(Y_{w_{1}})\cdots\operatorname*{ad}
(Y_{w_{n}})\operatorname*{ad}(Y_{0}))\text{.}
\end{align*}
The trace term is independent of $\lambda_{1},\ldots,\lambda_{n}$ (and in the
shape of eq. \ref{lTateLieCase_GeneralizedKillingForm}), so we may rewrite
$\tau M_{\pi}$ as
\begin{align*}
\tau M_{\pi} & =B(Y_{w_{1}},\ldots,Y_{w_{n}},Y_{0})
{\textstyle\sum\nolimits_{\lambda_{1},\ldots,\lambda_{n}}}
{\textstyle\prod\nolimits_{i=1}^{n}}
(\delta_{0\leq\lambda_{i}+c_{0,i}+
{\textstyle\sum\nolimits_{p=i+1}^{n}}
c_{w_{p},i}<-c_{w_{i},i}}\\
& -\delta_{-c_{w_{i},i}\leq\lambda_{i}+c_{0,i}+
{\textstyle\sum\nolimits_{p=i+1}^{n}}
c_{w_{p},i}<0})\text{.}
\end{align*}
For the evaluation of the sum $
{\textstyle\sum\nolimits_{\lambda_{1},\ldots,\lambda_{n}}}
$ we can apply the same eigenvalue count as in the proof of Thm.
\ref{BT_PropDetFormulaForResidue}.\ This time instead of counting eigenvalues,
we count non-zero summands. This yields
\[
\tau M_{\pi}=(-1)^{n}B(Y_{w_{1}},\ldots,Y_{w_{n}},Y_{0})
{\textstyle\prod\nolimits_{i=1}^{n}}
c_{w_{i},i}
\]
and thus our claim. \textbf{(2)} For $n=1$ we obtain
\[
(i^{\ast}\operatorname*{res}\nolimits^{\ast}(1))(Y_{0}\mathbf{t}_{1}^{c_{0,1}
}\wedge Y_{1}\mathbf{t}_{1}^{c_{1,1}})=-c_{1,1}\delta_{c_{0,1}+c_{1,1}
=0}B(Y_{1},Y_{0})\text{.}
\]
This is well-known to be the defining cocycle of the affine Lie algebra
$\widehat{\mathfrak{g}}$ (usually with a positive sign, but the class is only
well-defined up to non-zero scalar multiple anyway).
\end{proof}
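Concretely (a substitution included purely as a consistency check): writing $c_{0,1}=m$ and $c_{1,1}=-m$, the formula of \textbf{(2)} reads
\[
(i^{\ast}\operatorname*{res}\nolimits^{\ast}(1))(X\mathbf{t}_{1}^{m}\wedge
Y\mathbf{t}_{1}^{-m})=m\,B(Y,X)\text{,}
\]
which is, up to the sign convention just mentioned and the symmetry of the Killing form, the familiar cocycle $c(Xt^{m},Yt^{n})=m\,\delta_{m+n,0}\,\kappa(X,Y)$ defining $\widehat{\mathfrak{g}}$.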
The natural further cases of the Virasoro algebra as well as affine Kac-Moody
algebras (i.e. $\widehat{\mathfrak{g}}$ extended by derivations) will be
discussed elsewhere. The computations become more involved, but no further
ideas are needed.
\addcontentsline{toc}{chapter}{References}
\bibliographystyle{amsplain}
\section{Introduction}
In the study of infinite matrix representations of operators in $\mathcal{B}(\mathcal{H})$, and especially the structure of commutators, it is common and natural to split up a target operator $T$ into a sum of two (or finitely many) natural parts.
For example, every finite matrix is the sum of its upper triangular part and its lower triangular part (including the diagonal in either part as you choose).
Formally this obviously holds also for infinite matrices, but not in $\mathcal{B}(\mathcal{H})$.
That is, as is well-known, the upper or lower triangular part of a matrix representation for a bounded operator is not necessarily a bounded operator.
The Laurent operator with zero-diagonal matrix representation $\delim(){\frac{1}{i-j}}_{i \neq j}$ represents a bounded operator but its upper and lower triangular parts represent unbounded operators.
From this we can produce a compact operator whose upper triangular part is unbounded.
\begin{example}
\label{ex:bdd-upp-triangular-unbdd}
Consider the zero-diagonal Laurent matrix $\delim(){\frac{1}{i-j}}_{i \neq j}$, which corresponds to the Laurent multiplication operator $M_{\phi} \in \mathcal{B}(L^2(\mathbb{S}^1))$ where
\begin{equation*}
\phi(z) := \sum_{0 \neq n \in \mathbb{Z}} \frac{z^n}{n} = \sum_{n=1}^{\infty} \frac{z^n}{n} - \sum_{n=1}^{\infty} \frac{\conj{z}^n}{n} = \log(1-z) - \log(1-\conj{z}) = \log\delim()*{\frac{1-z}{\conj{1-z}}},
\end{equation*}
which is bounded since it is the principal logarithm of a unit modulus function, hence $\phi \in L^{\infty}(\mathbb{S}^1)$.
On the other hand, the upper triangular part $\Delta(M_{\phi})$ of $M_{\phi}$ corresponds to multiplication by $\log(1-z) \notin L^{\infty}(\mathbb{S}^1)$, and is therefore not a bounded operator.
Additionally, as is well-known, the same boundedness/unboundedness properties are shared by the corresponding Toeplitz operator $T_{\phi}$ and its upper triangular part $\Delta(T_{\phi})$.
Indeed, this follows from the fact that if $P \in \mathcal{B}(L^2(\mathbb{S}^1))$ is the projection onto the Hardy space $H^2$, then $P M_{\phi} P$ and $P^{\perp} M_{\phi} P^{\perp}$ are unitarily equivalent, and $P M_{\phi} P^{\perp} = P \Delta(M_{\phi}) P^{\perp}$ is bounded.
To produce a compact operator whose upper triangular part is unbounded, take successive corners $P_n T_{\phi} P_n$ where $P_n$ is the projection onto $\spans \delim\{\}{e_1,\ldots,e_n}$.
Since $\norm{P_n T_{\phi} P_n} \uparrow \norm{T_{\phi}}$ and $\norm{P_n \Delta(T_{\phi}) P_n} \uparrow \infty$, then $\bigoplus_{n=1}^{\infty} \frac{P_n T_{\phi} P_n}{\norm{P_n \Delta(T_{\phi}) P_n}^{1/2}}$ is compact and its upper triangular part is unbounded.
Similarly, $\bigoplus_{n=1}^{\infty} \frac{P_n T_{\phi} P_n}{\norm{P_n \Delta(T_{\phi}) P_n}}$ is compact but its upper triangular part is bounded and noncompact.
\end{example}
Focusing attention on $\mathcal{B}(\mathcal{H})$ ideals yields a fruitful area of study:
for a Hilbert--Schmidt operator, in any basis, any partition of the entries of its matrix representation has its parts again Hilbert--Schmidt.\footnotemark{}
This leads to a natural question for which the authors are unaware of the answer: is the Hilbert--Schmidt ideal the \emph{only} (nonzero) ideal with this property?
\footnotetext{%
Of course, for any ideal $\mathcal{I}$ contained within the Hilbert--Schmidt ideal $\mathcal{L}_2$, and any $T \in \mathcal{I}$, the upper triangular part $\Delta(T) \in \mathcal{L}_2$, but one may wonder if anything stronger can be said.
In the case of the trace-class ideal $\mathcal{L}_1$, Gohberg--Krein \cite[Theorem~III.2.1]{GK-1970} showed that $\Delta(T)$, in the terminology of \cite{DFWW-2004-AM}, lies in the arithmetic mean closure of the principal ideal generated by $\diag(\frac{1}{n})$.%
}
For the compact operators $\mathcal{K}(\mathcal{H})$, depending on the shape of the matrix parts for $T$, the problem of determining when its parts are in $\mathcal{K}(\mathcal{H})$ (i.e., ideal invariant) can be a little subtler.
Indeed, as noted in \Cref{ex:bdd-upp-triangular-unbdd}, the upper triangular part of a compact operator may not be compact (nay bounded);
on the other hand, it is well-known and elementary that the diagonal sequence $\delim(){d_n}$ of a compact operator converges to zero (i.e., $\diag \delim(){d_n}$ is compact), and the same holds for all the sub/super-diagonals as well.
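(Indeed, $d_n = \langle T e_n, e_n \rangle \to 0$ because $e_n \to 0$ weakly and $T$ is compact, so $T e_n \to 0$ in norm; the same argument handles each sub/super-diagonal.)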
In contrast, this fails for certain matrix representations for a finite rank operator;
that is, the diagonal of a finite rank operator may not be finite rank (e.g., $(\frac{1}{ij})_{i,j \ge 1}$ is rank-1 but its diagonal $\diag(\frac{1}{j^2}) \notin \mathcal{F}(\mathcal{H})$).
Here we study this question for general $\mathcal{B}(\mathcal{H})$-ideals: For an ideal $\mathcal{I}$ and all pairs $\delim\{\}{P_n}, \delim\{\}{Q_n}$ of sequences of mutually orthogonal projections, when are the generalized diagonals $\sum Q_n T P_n \in \mathcal{I}$ whenever $T \in \mathcal{I}$? (The canonical block diagonals are $\sum P_{n+k} T P_n$ and $\sum P_n T P_{n+k}$.)
We find this especially pertinent in our current search for commutator forms of compact operators \cite{PPW-2021}, growing out of \cite{BPW-2014-VLOT}; and, in view of the second author’s work with V. Kaftal \cite{KW-2011-IUMJ} on diagonal invariance for ideals, useful in recent discoveries by the second author with S. Petrovic and S. Patnaik \cite{PPW-2020-TmloVLOt} on their universal finite-block tridiagonalization for arbitrary $\mathcal{B}(\mathcal{H})$ operators and the consequent work on commutators \cite{PPW-2021}.
Evolution of questions:
\begin{enumerate}
\item For which $\mathcal{B}(\mathcal{H})$-ideals $\mathcal{I}$ does a tridiagonal operator $T$ have its three diagonal parts also in $\mathcal{I}$?
This question arose from the stronger question: for which tridiagonal operators $T \in \mathcal{K}(\mathcal{H})$ are the diagonals parts in $\delim<>{T}$?
\Cref{thm:bandable} guarantees the latter is always true, even for finite band operators.
\item The same questions but more generally for a block tridiagonal $T$ (see \Cref{def:block-decomposition}) and its three block diagonals (see \Cref{def:shift-representation}).
Again, \Cref{thm:bandable} guarantees this is always true, and likewise for finite block band operators.
That is, if
\(
T =
\begin{pNiceMatrix}[small,xdots/line-style=solid]
B & A & 0 & {} \\[-1em]
C & \Ddots & \Ddots & \Ddots \\[-1em]
0 & \Ddots & & {} \\[-1em]
& \Ddots & & {} \\
\end{pNiceMatrix}
\in \mathcal{I}
\),
then
\(
\begin{pNiceMatrix}[small,xdots/line-style=solid]
0 & A & 0 & {} \\[-1em]
0 & \Ddots & \Ddots & \Ddots \\[-1em]
0 & \Ddots & & {} \\[-1em]
& \Ddots & & {} \\
\end{pNiceMatrix}
\in \mathcal{I}
\),
and similarly for $B,C$.
\item A more general context: given two sequences of (separately) mutually orthogonal projections, $\delim\{\}{P_n}_{n=1}^{\infty}, \delim\{\}{Q_n}_{n=1}^{\infty}$, for $T \in \mathcal{I}$ what can be said about ideal membership for $\sum_{n=1}^{\infty} Q_n T P_n$?
In \Cref{thm:sum-off-diagonal-corners-am-closure} we establish that $\sum_{n=1}^{\infty} Q_n T P_n$ always lies in the arithmetic mean closure $\amclosure{\mathcal{I}}$ defined in \cite{DFWW-2004-AM} (see herein \cpageref{def:am-closed}).
This follows from a generalization (see \Cref{thm:fans-theorem-pinching}) of Fan's famous submajorization theorem \cite[Theorem~1]{Fan-1951-PNASUSA} concerning partial sums of diagonals of operators.
\end{enumerate}
Throughout the paper we will prefer bi-infinite sequences (i.e., indexed by $\mathbb{Z}$ instead of $\mathbb{N}$) of projections, but this is only to make the descriptions simpler;
we will not, however, use the term \term{bi-infinite} unless necessary for context.
The projections are allowed to be zero, so this is no restriction.
We first establish some terminology.
\begin{definition}
\label{def:block-decomposition}
A sequence $\delim\{\}{P_n}_{n \in \mathbb{Z}}$ of mutually orthogonal projections $P_n \in \mathcal{B}(\mathcal{H})$ for which $\sum P_n = I$ is a \term{block decomposition} and for $T \in \mathcal{B}(\mathcal{H})$, partitions it into a (bi-)infinite matrix of operators $T_{i,j} := P_i T P_j$.
We say that an operator $T$ is a \term{block band operator relative to $\delim\{\}{P_n}$} if there is some $M \ge 0$, called the \term{block bandwidth}, for which $T_{i,j} = 0$ whenever $\abs{i - j} > M$.
If $M=0$ (resp. $M=1$), we say $T$ is \emph{block diagonal (resp. block tridiagonal) relative to $\delim\{\}{P_n}$}.
Finally, in all the above definitions, if $\trace P_n \le 1$ for all $n \in \mathbb{Z}$, which, up to a choice of phase for each range vector, simply corresponds to a choice of orthonormal basis, then we omit the word ``block.''
In this case, the operators $T_{i,j}$ are scalars and $\delim(){T_{i,j}}$ is the matrix representation (again, up to a choice of phase for each vector) for $T$ relative to this basis.
If $\delim\{\}{Q_n}_{n \in \mathbb{Z}}$ is an (unrelated) block decomposition, the pair $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ still determines a (bi-)infinite matrix of operators $T_{i,j} = Q_i T P_j$, but this time there is an inherent asymmetry in that $(T^{*})_{i,j} \neq (T_{j,i})^{*}$.
In this case, the terms defined just above may be modified with the adjective ``asymmetric.''
\end{definition}
\begin{definition}
\label{def:shift-representation}
Suppose that $\delim\{\}{P_n}_{n \in \mathbb{Z}}$ is a block decomposition for an operator $T \in \mathcal{B}(\mathcal{H})$.
For each $k \in \mathbb{Z}$, we call
\begin{equation*}
T_k := \sum_{n \in \mathbb{Z}} T_{n,n+k} = \sum_{n \in \mathbb{Z}} P_n T P_{n+k}
\end{equation*}
the \term{$k^{\mathrm{th}}$ block diagonal} of $T$, which converges in the strong operator topology.
Visually, these operators may be described with the following diagram\footnotemark{}:
\begin{center}
\includegraphics{shift-decomposition-figure.pdf}
\end{center}
\footnotetext{%
For the case when the projections $P_n = 0$ for $n \in \mathbb{Z} \setminus \mathbb{N}$, the matrix below is uni-infinite.
This recovers uni-infinite matrix results from the bi-infinite approach we described in the paragraph preceding \Cref{def:block-decomposition}.%
}
We call the collection $\delim\{\}{T_k}_{k \in \mathbb{Z}}$ the \term{shift decomposition} of $T$ (relative to the block decomposition $\delim\{\}{P_n}_{n \in \mathbb{Z}}$).
The \term{asymmetric shift decomposition} $\delim\{\}{T_k}_{k \in \mathbb{Z}}$ relative to \emph{different} block decompositions $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ is given by
\begin{equation*}
T_k := \sum_{n \in \mathbb{Z}} Q_n T P_{n+k}.
\end{equation*}
We note for future reference that sums of the above form don't require the sequences of projections to sum to the identity in order to converge in the strong operator topology, only that each sequence consists of mutually orthogonal projections.
Moreover, it is elementary to show that when $T$ is compact, so is $T_k$ for all $k \in \mathbb{Z}$.
\end{definition}
\begin{remark}
\label{rem:shift-decomposition-explanation}
Although one has the formal equality $T = \sum_{k \in \mathbb{Z}} T_k$ in the sense that $T$ is uniquely determined by $\delim\{\}{T_k}_{k \in \mathbb{Z}}$, this sum doesn't necessarily converge even in the weak operator topology \cite{Mer-1985-PAMS}, hence it doesn't converge in any of the usual operator topologies.
If $\rank P_n = 1$ (and $Q_n = P_n$) for all $n \in \mathbb{Z}$ then $\sum_{k \in \mathbb{Z}} T_k$ does converge to $T$ in the \term{Bures topology}\footnotemark{} \cite{Bur-1971,Mer-1985-PAMS}.
On the other hand, if $T$ is a block band operator relative to this block decomposition, then convergence is irrelevant: $T = \sum_{k=-M}^M T_k$.
\footnotetext{%
The Bures topology on $B(\mathcal{H})$ is a locally convex topology constructed from the (rank-1) projections $P_n$ as follows.
Let $\mathcal{D} = \bigoplus_{n \in \mathbb{Z}} P_n B(\mathcal{H}) P_n$ be the algebra of diagonal matrices and $E : B(\mathcal{H}) \to \mathcal{D}$ the conditional expectation given by $T \mapsto T_0 := \sum_{n \in \mathbb{Z}} P_n T P_n$.
Then to each $\omega \in \ell_1 \cong \mathcal{D}_{*}$, associate the seminorm $T \mapsto \trace(\diag(\omega) E(T^{*}T)^{\frac{1}{2}})$, where $\diag : \ell_{\infty} \to \mathcal{D}$ is the natural *-isomorphism.
These seminorms generate the Bures topology.
}
The reason for our ``shift'' terminology in \Cref{def:shift-representation} is that if the block decomposition $\delim\{\}{P_n}_{n \in \mathbb{Z}}$ consists of rank-1 projections, then the operators $T_k$ have the form $T_k = U^k D_k$ where $D_k$ are diagonal operators and $U$ is the bilateral shift relative to any orthonormal basis corresponding to $\delim\{\}{P_n}_{n \in \mathbb{Z}}$.
\end{remark}
\begin{remark}
\label{rem:tridiagonalizability}
All compact selfadjoint operators are diagonalizable via the spectral theorem.
However, this is certainly not the case for arbitrary selfadjoint operators, the selfadjoint approximation theorem of Weyl--von Neumann notwithstanding.
Nevertheless, every selfadjoint operator with a cyclic vector is \emph{tri}diagonalizable;
for $T = T^{*}$ with cyclic vector $v$, apply Gram--Schmidt to the linearly independent spanning collection $\delim\{\}{T^n v}_{n=0}^{\infty}$ and then $T$ is tridiagonal in the resulting orthonormal basis.
Consequently, every selfadjoint operator is block diagonal with each nonzero block in the direct sum itself tridiagonal.
The second author, along with Patnaik and Petrovic \cite{PPW-2020-TmloVLOt,PPW-2021}, recently established that every bounded operator is \emph{block} tridiagonalizable, meaning $T = T_{-1} + T_0 + T_1$, hence block banded (with block bandwidth $1$) and with finite block sizes growing no faster than exponential.
\end{remark}
Our first main theorem is an algebraic equality of ideals for block band operators relative to some block decomposition.
\begin{theorem}
\label{thm:bandable}
Let $T \in \mathcal{B}(\mathcal{H})$ be an asymmetric block band operator of bandwidth $M$ relative to the block decompositions $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$, and let $\delim\{\}{T_k}_{k=-M}^M$ be the asymmetric shift decomposition of $T$.
Then the following ideal equality holds:
\begin{equation*}
\delim<>{T} = \sum_{k=-M}^M \delim<>{T_k}.
\end{equation*}
\end{theorem}
\begin{proof}
The proof is essentially due to the following observation:
if you zoom out and squint, then a band matrix looks diagonal.
That is, we exploit the relative thinness of the diagonal strip of support entries.
The ideal inclusion $\delim<>{T} \subseteq \sum_{k=-M}^M \delim<>{T_k}$ is obvious since $T = \sum_{k=-M}^M T_k$.
Therefore it suffices to prove $T_k \in \delim<>{T}$ for each $-M \le k \le M$.
Indeed, for $-M \le j,k \le M$ define projections $R_{k,j} := \sum_{n \in \mathbb{Z}} P_{n(2M+1) + j + k}$ and $S_j := \sum_{n \in \mathbb{Z}} Q_{n(2M+1) + j}$.
Then whenever $n \neq m$, $Q_{n(2M+1) + j} T P_{m(2M+1) + j + k} = 0$ since the bandwidth of $T$ is $M$ and
\begin{align*}
\abs{\delim()!{n(2M+1) + j} - \delim()!{m(2M+1) + j + k} }
&\ge \abs{n-m}(2M+1) - \abs{k} \\
&\ge (2M+1) - M > M.
\end{align*}
Therefore, for each $k,j$,
\begin{equation*}
S_j T R_{k,j} = \sum_{n \in \mathbb{Z}} Q_{n(2M+1) + j} T P_{n(2M+1) + j + k}
\end{equation*}
converges in the strong operator topology, and summing over $j$ yields
\begin{equation*}
\sum_{j=-M}^M S_j T R_{k,j} = \sum_{j=-M}^{M} \sum_{n \in \mathbb{Z}} Q_{n(2M+1) + j} T P_{n(2M+1) + j + k} = \sum_{n \in \mathbb{Z}} Q_n T P_{n+k} = T_k.
\end{equation*}
As a finite sum, the left-hand side is trivially in $\delim<>{T}$ and therefore so is each $k^{\mathrm{th}}$ generalized block diagonal $T_k$.
\end{proof}
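To make the construction concrete, consider the block tridiagonal case $M = 1$ with $k = 0$: the projections $S_j = \sum_{n \in \mathbb{Z}} Q_{3n+j}$ and $R_{0,j} = \sum_{n \in \mathbb{Z}} P_{3n+j}$, $j = -1,0,1$, split each block decomposition into three interleaved families, and $S_j T R_{0,j}$ retains precisely every third diagonal block $Q_{3n+j} T P_{3n+j}$ (the cross terms vanish since the retained block indices differ by at least $2M+1 = 3 > M$); summing over $j = -1,0,1$ recovers the main block diagonal $T_0$.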
Before establishing our second main theorem (\Cref{thm:sum-off-diagonal-corners-am-closure}), we acquaint the reader with the prerequisite ideas concerning Fan's theorem \cite[Theorem~1]{Fan-1951-PNASUSA}, Hardy--Littlewood submajorization, fundamentals of the theory of operator ideals and arithmetic mean closed ideals, all of which are intimately related.
For a single operator, Fan's submajorization theorem \cite[Theorem~1]{Fan-1951-PNASUSA} states that if the matrix representation for a compact operator $T \in \mathcal{K}(\mathcal{H})$ has diagonal sequence $\delim(){d_j}_{j \in J}$ (with any index set $J$), then
\begin{equation}
\label{eq:submajorization}
\sum_{n=1}^m \abs{d_n}^{*} \le \sum_{n=1}^m s_n(T) \quad\text{for all } m \in \mathbb{N},
\end{equation}
where $s(T) := \delim(){s_n(T)}_{n \in \mathbb{N}}$ denotes the (monotone) \term{singular value sequence} of $T$, and where $\delim(){\abs{d_n}^{*}}_{n \in \mathbb{N}}$ denotes the \term{monotonization}\footnotemark{} of the (possibly unordered) sequence $\delim(){\abs{d_j}}_{j \in J}$;
the monotonization is always an element of the convex cone $\cz^{*}$ of nonnegative nonincreasing sequences (indexed by $\mathbb{N}$) converging to zero, even when $\delim(){\abs{d_j}}_{j \in J}$ is indexed by another set $J$ different from $\mathbb{N}$.
The set of inequalities \eqref{eq:submajorization} may be encapsulated, for pairs of sequences in $\cz^{*}$, by saying that $\delim(){\abs{d_n}^{*}}$ is \term{submajorized} by $s(T)$, which is often denoted $\delim(){\abs{d_n}^{*}} \pprec s(T)$, although the precise notation for submajorization varies throughout the literature.
We remark the trivial fact that the submajorization order is finer than the usual pointwise order on $\cz^{*}$;
that is, $\delim(){a_n} \le \delim(){b_n}$ implies $\delim(){a_n} \pprec \delim(){b_n}$ for any $\delim(){a_n}, \delim(){b_n} \in \cz^{*}$.
\footnotetext{%
This is the measure-theoretic \term{nonincreasing rearrangement} relative to the counting measure on the index set, say $J$, of $\delim(){\abs{d_n}}$.
Associated to this, there is an injection (not necessarily a bijection) $\pi : \mathbb{N} \to J$ with $d^{-1}(\mathbb{C}\setminus\delim\{\}{0}) \subseteq \pi(\mathbb{N})$ such that $\abs{d_n}^{*} = \abs{d_{\pi(n)}}$.
This of course requires $0 \notin (d \circ \pi)(\mathbb{N})$ when $d^{-1}(\mathbb{C}\setminus\delim\{\}{0})$ is infinite since $\delim(){\abs{d_n}^{*}}$ is nonincreasing.
}
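To see that the converse implication fails, consider for instance $\delim(){a_n} = \delim(){\tfrac{1}{2}, \tfrac{1}{2}, 0, 0, \dots}$ and $\delim(){b_n} = \delim(){1, 0, 0, \dots}$: the partial sums satisfy $\sum_{n=1}^m a_n \le \sum_{n=1}^m b_n$ for every $m \in \mathbb{N}$, so $\delim(){a_n} \pprec \delim(){b_n}$, even though the pointwise inequality fails at $n = 2$.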
However, we view Fan's theorem in a slightly different way which is more amenable to our purposes.
In particular, consider the canonical trace-preserving conditional expectation\footnotemark{} $E : \mathcal{B}(\mathcal{H}) \to \mathcal{D}$ onto the masa (maximal abelian selfadjoint algebra) of diagonal operators relative to a fixed, but arbitrary, orthonormal basis.
Then the sequence $\delim(){\abs{d_n}^{*}}$ is simply $s(E(T))$, and in this language:
\footnotetext{%
For an inclusion of unital C*-algebras $\mathcal{B} \subseteq \mathcal{A}$ (with $1_{\mathcal{B}} = 1_{\mathcal{A}}$), a \term{conditional expectation of $\mathcal{A}$ onto $\mathcal{B}$} is a unital positive linear map $E : \mathcal{A} \to \mathcal{B}$ such that $E(bab') = bE(a)b'$ for all $a \in \mathcal{A}$ and $b,b' \in \mathcal{B}$.
A conditional expectation is called \term{faithful} if $a \ge 0$ and $E(a) = 0$ imply $a = 0$.
If $\mathcal{A}$ is a semifinite von Neumann algebra with a faithful normal semifinite trace $\tau$, then the expectation is said to be \term{trace-preserving} if $\tau(a) = \tau(E(a))$ for all $a \in \mathcal{A}_+$.%
}
\begin{theorem}[\protect{\cite[Theorem~1]{Fan-1951-PNASUSA}}]
\label{thm:fans-theorem}
If $T \in \mathcal{K}(\mathcal{H})$ and $E : \mathcal{B}(\mathcal{H}) \to \mathcal{D}$ is the canonical conditional expectation onto a masa of diagonal operators, then
\begin{equation*}
s(E(T)) \pprec s(T),
\end{equation*}
that is, $s(E(T))$ is submajorized by $s(T)$.
\end{theorem}
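A minimal illustration: for the rank-one projection $P$ acting as $\frac{1}{2}\left(\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ on the span of the first two basis vectors (and as $0$ elsewhere), one has $s(E(P)) = \delim(){\tfrac{1}{2}, \tfrac{1}{2}, 0, \dots}$ while $s(P) = \delim(){1, 0, 0, \dots}$, so the submajorization \eqref{eq:submajorization} is strict in the first entry but, as it must be since $E$ is trace-preserving, the total sums agree.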
The submajorization order features prominently in operator theory, but especially in the theory of diagonals of operators and in the related theory of operator ideals in $\mathcal{B}(\mathcal{H})$.
For the reader's convenience we briefly review the basics of ideal theory.
Let $\cz^{*}$ denote the convex cone of nonnegative nonincreasing sequences converging to zero.
To an ideal $\mathcal{I}$, Schatten \cite{Sch-1970}, in a manner quite similar to Calkin \cite{Cal-1941-AoM2}, associated the convex subcone $\Sigma(\mathcal{I}) := \delimpair\{{[m]\vert}\}{ s(T) \in \cz^{*} }{ T \in \mathcal{I} }$, called the \term{characteristic set} of $\mathcal{I}$, which satisfies the properties:
\begin{enumerate}
\item If $\delim(){a_n} \le \delim(){b_n}$ (pointwise) and $\delim(){b_n} \in \Sigma(\mathcal{I})$, then $\delim(){a_n} \in \Sigma(\mathcal{I})$;
that is, $\Sigma(\mathcal{I})$ is a \term{hereditary subcone} of $\cz^{*}$ with respect to the usual pointwise ordering.
\item If $\delim(){a_n} \in \Sigma(\mathcal{I})$, then $\delim(){a_{\delim\lceil\rceil{\frac{n}{2}}}} \in \Sigma(\mathcal{I})$;
that is, $\Sigma(\mathcal{I})$ is closed under \term{$2$-ampliations}.
\end{enumerate}
Likewise, if $S$ is a hereditary (with respect to the pointwise order) convex subcone of $\cz^{*}$ which is closed under $2$-ampliations, then $\mathcal{I}_S := \delimpair\{{[m]\vert}\}{ T \in \mathcal{K}(\mathcal{H}) }{ s(T) \in S }$ is an ideal of $\mathcal{B}(\mathcal{H})$.
Finally, the maps $S \mapsto \mathcal{I}_S$ and $\mathcal{I} \mapsto \Sigma(\mathcal{I})$ are inclusion-preserving inverses between the classes of $\mathcal{B}(\mathcal{H})$-ideals and characteristic subsets of $\cz^{*}$.
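For instance, $\Sigma(\mathcal{K}(\mathcal{H})) = \cz^{*}$ itself, the ideal $\mathcal{F}$ of finite rank operators has characteristic set the finitely supported sequences in $\cz^{*}$, and the Schatten ideal $\mathcal{L}_p$ corresponds to the $p$-summable sequences in $\cz^{*}$; each of these subcones is clearly hereditary and closed under $2$-ampliations.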
Ideals whose characteristic sets are also hereditary subcones with respect to the submajorization order (i.e., $B \in \mathcal{I}$ and $s(A) \pprec s(B)$ implies $A \in \mathcal{I}$) were introduced by Dykema, Figiel, Weiss and Wodzicki\fnmark{dfww} in \cite{DFWW-2004-AM} and are said to be \label{def:am-closed}\term{arithmetic mean closed}\footnotemark{} (abbreviated as \term{am-closed}).
Given an ideal $\mathcal{I}$, the smallest am-closed ideal containing $\mathcal{I}$ is called the am-closure, denoted $\amclosure{\mathcal{I}}$, and its characteristic set consists simply of the hereditary closure (with respect to the submajorization order) of $\Sigma(\mathcal{I})$.
That is,
\begin{equation*}
\Sigma\paren1{\amclosure{\mathcal{I}}} = \setb1{ \delim(){a_n} \in \cz^{*} }{ \exists \delim(){b_n} \in \Sigma(\mathcal{I}), \delim(){a_n} \pprec \delim(){b_n} }.
\end{equation*}
In general, ideals are not am-closed.
Indeed, the sequence $\delim(){1,0,0,\dots}$ corresponding to a rank-1 projection $P$ submajorizes any (nonnegative) sequence $\delim(){a_n}$ whose sum is at most $1$.
Consequently, if $T \in \mathcal{L}_1$, the trace class, then $s(T) \pprec s(\trace(\abs{T})P)$.
Therefore, since any ideal $\mathcal{I}$ contains the finite rank operators, if it is am-closed it must also contain the trace class $\mathcal{L}_1$.
Additionally, it is immediate that $\mathcal{L}_1$ is am-closed, making it the minimum am-closed ideal.
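In particular, the am-closure of the finite rank ideal is exactly $\mathcal{L}_1$: every trace-class singular value sequence is submajorized by the finitely supported sequence with its total sum placed in the first entry and, conversely, any sequence submajorized by a finitely supported sequence has bounded partial sums and is therefore summable.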
\footnotetext[\arabic{dfww}]{%
The description given \cite{DFWW-2004-AM} is not in terms of the submajorization order, but these two definitions are easily shown to be equivalent.
Instead, for an ideal $\mathcal{I}$, \cite{DFWW-2004-AM} defines the \term{arithmetic mean ideal} $\mathcal{I}_a$ and \term{pre-arithmetic mean ideal} ${}_a\mathcal{I}$ whose characteristic sets are given by
\begin{gather*}
\Sigma(\mathcal{I}_a) := \delimpair\{{[m]\vert}\}*{ \delim(){a_n} \in \cz^{*} }{ \exists \delim(){b_n} \in \Sigma(\mathcal{I}), a_n \le \frac{1}{n} \sum_{k=1}^n b_k } \\
\Sigma({}_a\mathcal{I}) := \delimpair\{{[m]\vert}\}*{ \delim(){a_n} \in \cz^{*} }{ \exists \delim(){b_n} \in \Sigma(\mathcal{I}), \frac{1}{n} \sum_{k=1}^n a_k \le b_n }
\end{gather*}
Then the \term{arithmetic mean closure} of $\mathcal{I}$ is $\amclosure{\mathcal{I}} := {}_a(\mathcal{I}_a)$, and $\mathcal{I}$ is called \term{am-closed} if $\mathcal{I} = \amclosure{\mathcal{I}}$.
This viewpoint also allows one to define the \term{arithmetic mean interior} $({}_a\mathcal{I})_a$, and one always has the inclusions ${}_a\mathcal{I} \subseteq ({}_a\mathcal{I})_a \subseteq \mathcal{I} \subseteq {}_a(\mathcal{I}_a) \subseteq \mathcal{I}_a$.
}
\footnotetext{%
Although am-closed ideals were introduced in this generality by \cite{DFWW-2004-AM}, they had been studied at least as early as \cite{GK-1969-ITTTOLNO,Rus-1969-FAA}, but only in the context of \term{symmetrically normed ideals}.
In the study of symmetrically normed ideals by Gohberg and Krein \cite{GK-1969-ITTTOLNO}, they only considered those which were already am-closed, but they did not have any terminology associated to this concept.
Around the same time, both Mityagin \cite{Mit-1964-IANSSM} and Russu \cite{Rus-1969-FAA} concerned themselves with the existence of so-called \term{intermediate} symmetrically normed ideals, which are necessarily not am-closed, or in the language of Russu, do not possess the \term{majorant property}.
In \cite{Rus-1969-FAA}, Russu also established that the majorant property is equivalent to the \term{interpolation property} studied by Mityagin \cite{Mit-1965-MSNS} and Calder\'on \cite{Cal-1966-SM}.
In the modern theory of symmetrically normed ideals, those which are am-closed (equivalently, have the majorant or interpolation properties), are said to be \term{fully symmetric}, but this term also implies the norm preserves the submajorization order.
For more information on fully symmetrically normed ideals and related topics, we refer the reader to \cite{LSZ-2013-STTAA}.%
}
Arithmetic mean closed ideals are important within the lattice of operator ideals not least for their connection to Fan's theorem, but also because of the following sort of converse due to the second author with Kaftal.
\begin{theorem}[\protect{\cite[Corollaries~4.4,~4.5]{KW-2011-IUMJ}}]
\label{thm:diagonal-invariance}
For an operator ideal $\mathcal{I}$, and the canonical conditional expectation $E : \mathcal{B}(\mathcal{H}) \to \mathcal{D}$ onto a masa of diagonal operators,
\begin{equation*}
E(\mathcal{I}) = \amclosure{\mathcal{I}} \cap \mathcal{D}.
\end{equation*}
Consequently, $\mathcal{I}$ is am-closed if and only if $E(\mathcal{I}) \subseteq \mathcal{I}$.
\end{theorem}
They used the term \term{diagonal invariance} to refer to $E(\mathcal{I}) \subseteq \mathcal{I}$, and so $\mathcal{I}$ is am-closed if and only if it is diagonally invariant.
The reader should note that the inclusion $E(\mathcal{I}) \subseteq \amclosure{\mathcal{I}} \cap \mathcal{D}$ is a direct consequence of Fan's theorem, when viewed through the lens of \Cref{thm:fans-theorem}, so the new content of \Cref{thm:diagonal-invariance} lies primarily in the reverse inclusion.
At this point, we note an important contrapositive consequence of \Cref{thm:bandable} and \Cref{thm:diagonal-invariance}.
Suppose $T$ is positive and $\delim<>{T}$ is not am-closed, then by \Cref{thm:diagonal-invariance} there is some basis in which the main diagonal of $T$ does not lie in $\delim<>{T}$, and therefore by \Cref{thm:bandable}, $T$ is not a band operator in this basis.
The next theorem, due originally to Gohberg--Krein \cite[Theorems~II.5.1 and III.4.2]{GK-1969-ITTTOLNO}, bootstraps \Cref{thm:fans-theorem} to apply to conditional expectations onto block diagonal algebras instead of simply diagonal masas.
We include this more modern proof both for completeness and to make the statement accord with that of \Cref{thm:fans-theorem}.
\begin{theorem}
\label{thm:fans-theorem-pinching}
Let $\mathcal{P} = \delim\{\}{P_n}_{n \in \mathbb{Z}}$ be a block decomposition and consider the associated conditional expectation $E_{\mathcal{P}} : \mathcal{B}(\mathcal{H}) \to \bigoplus_{n \in \mathbb{Z}} P_n \mathcal{B}(\mathcal{H}) P_n$ defined by $E_{\mathcal{P}}(T) := T_0 = \sum_{n \in \mathbb{Z}} P_n T P_n$.
If $T \in \mathcal{K}(\mathcal{H})$, then $s(E_{\mathcal{P}}(T))$ is submajorized by $s(T)$, i.e.,
\begin{equation*}
s(E_{\mathcal{P}}(T)) \pprec s(T).
\end{equation*}
Moreover, if $T \in \mathcal{I}$, then $E_{\mathcal{P}}(T) \in \amclosure{\mathcal{I}}$.
In addition, if $s(E_{\mathcal{P}}(T)) = s(T)$, then $E_{\mathcal{P}}(T) = T$.
\end{theorem}
\begin{proof}
Suppose that $\mathcal{D}$ is a diagonal masa contained in the algebra $\bigoplus_{n \in \mathbb{Z}} P_n \mathcal{B}(\mathcal{H}) P_n$, and let $E : \mathcal{B}(\mathcal{H}) \to \mathcal{D}$ be the associated canonical trace-preserving conditional expectation.
Because of the algebra inclusions, we see that $E \circ E_{\mathcal{P}} = E$.
Let $T \in \mathcal{K}(\mathcal{H})$ and consider $E_{\mathcal{P}}(T)$.
By applying the Schmidt decomposition to each $P_n T P_n$ one obtains partial isometries $U_n, V_n$ (the latter may even be chosen unitary) in $P_n \mathcal{B}(\mathcal{H}) P_n$ so that $U_n P_n T P_n V_n$ is a positive operator in $\mathcal{D}$.
Then $U := \bigoplus_{n \in \mathbb{Z}} U_n, V := \bigoplus_{n \in \mathbb{Z}} V_n$ are partial isometries for which $s(E(U E_{\mathcal{P}}(T) V)) = s(E_{\mathcal{P}}(T))$.
Then since $U,V \in \bigoplus_{n \in \mathbb{Z}} P_n \mathcal{B}(\mathcal{H}) P_n$ they commute with the conditional expectation $E_{\mathcal{P}}$ and hence
\begin{equation*}
s(E_{\mathcal{P}}(T)) = s(E(U E_{\mathcal{P}}(T) V)) = s(E(E_{\mathcal{P}}(UTV))) = s(E(UTV)).
\end{equation*}
By Fan's theorem (\Cref{thm:fans-theorem}), $s(E(UTV)) \pprec s(UTV) \le \norm{U} s(T) \norm{V} = s(T)$, and therefore $s(E_{\mathcal{P}}(T)) \pprec s(T)$.
Finally, this fact along with the definition of the arithmetic mean closure guarantees $T \in \mathcal{I}$ implies $E_{\mathcal{P}}(T) \in \amclosure{\mathcal{I}}$.
For the case of equality, now suppose that $s(E_{\mathcal{P}}(T)) = s(T)$.
Let $\delim\{\}{e_n}_{n \in \mathbb{N}}$ be an orthonormal sequence of eigenvectors of $E_{\mathcal{P}}(T)^{*} E_{\mathcal{P}}(T)$, each of which is inside one of the subspaces $P_j \mathcal{H}$, satisfying $E_{\mathcal{P}}(T)^{*} E_{\mathcal{P}}(T) e_n = s_n(T)^2 e_n$.
Then the projections $Q_n$ onto $\spans\delim\{\}{e_1,\ldots,e_n}$ commute with each $P_j$, and hence also with the expectation $E_{\mathcal{P}}$.
We note for later reference that
\begin{equation}
\label{eq:epTQnperp}
\norm{E_{\mathcal{P}}(T)Q_n^{\perp}}^2 = \norm{Q_n^{\perp}E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T) Q_n^{\perp}} \le s_{n+1}(E_{\mathcal{P}}(T))^2.
\end{equation}
Observe that for any operator $X$, because $P_j X^{*} P_j X P_j \le P_j X^{*} X P_j$,
\begin{equation}
\label{eq:epX}
E_{\mathcal{P}}(X)^{*}E_{\mathcal{P}}(X) = \sum_{j \in \mathbb{Z}} P_j X^{*} P_j X P_j \le \sum_{j \in \mathbb{Z}} P_j X^{*} X P_j = E_{\mathcal{P}}(X^{*}X),
\end{equation}
with equality if and only if $P_j X^{*} P_j^{\perp} X P_j = 0$ for all $j \in \mathbb{Z}$ if and only if $P_j^{\perp} X P_j = 0$ for all $j \in \mathbb{Z}$ if and only if $X = E_{\mathcal{P}}(X)$.
Applying \eqref{eq:epX} to $X = TQ_n$,
\begin{align*}
\sum_{j=1}^n s_j(E_{\mathcal{P}}(T))^2 &= \trace(Q_n E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T) Q_n) \\
&= \trace(E_{\mathcal{P}}(TQ_n)^{*} E_{\mathcal{P}}(TQ_n)) \\
&\le \trace(E_{\mathcal{P}}(Q_n T^{*} T Q_n)) \\
&= \trace(Q_n T^{*} T Q_n) \le \sum_{j=1}^n s_j(T)^2,
\end{align*}
where the last inequality follows from \Cref{thm:fans-theorem}.
We must have equality throughout since $s(E_{\mathcal{P}}(T)) = s(T)$.
Consequently, $TQ_n = E_{\mathcal{P}}(TQ_n) = E_{\mathcal{P}}(T)Q_n$ for all $n \in \mathbb{N}$ by the equality case of \eqref{eq:epX}.
By construction, $\norm{E_{\mathcal{P}}(T)Q_n^{\perp}} \to 0$ as $n \to \infty$, but we also claim
\begin{equation}
\label{eq:TQnperp}
\norm{TQ_n^{\perp}} \le s_{n+1}(T).
\end{equation}
Suppose not.
Then we could find some unit vector $x \in Q_n^{\perp} \mathcal{H}$ with $\delimpair<{[.],}>{T^{*}T x}{x} = \norm{T x}^2 > s_{n+1}(T)^2$, and therefore, for the projection $R = Q_n + (x \otimes x)$,
\begin{equation*}
\trace(RT^{*}TR) = \trace(Q_n T^{*}T Q_n) + \delimpair<{[.],}>{T^{*}T x}{x} > \sum_{j=1}^{n+1} s_j(T)^2,
\end{equation*}
contradicting the fact that, because $R$ is a projection of rank $n+1$, by \Cref{thm:fans-theorem}
\begin{equation*}
\trace(RT^{*}TR) \le \sum_{j=1}^{n+1} s_j(RT^{*}TR) \le \sum_{j=1}^{n+1} s_j(T)^2.
\end{equation*}
Finally, again noting that $TQ_n = E_{\mathcal{P}}(T)Q_n$,
\begin{equation*}
0 \le \norm{T - E_{\mathcal{P}}(T)} \le \norm{T - TQ_n} + \norm{E_{\mathcal{P}}(T)Q_n - E_{\mathcal{P}}(T)} = \norm{TQ_n^{\perp}} + \norm{E_{\mathcal{P}}(T)Q_n^{\perp}}.
\end{equation*}
Since $\norm{TQ_n^{\perp}} \le s_{n+1}(T)$ by \eqref{eq:TQnperp} and $\norm{E_{\mathcal{P}}(T)Q_n^{\perp}} \le s_{n+1}(E_{\mathcal{P}}(T))$ by \eqref{eq:epTQnperp}, the right-hand side converges to zero as $n \to \infty$.
Therefore, $\norm{T - E_{\mathcal{P}}(T)} = 0$ and hence $T = E_{\mathcal{P}}(T)$.
\end{proof}
\begin{remark}
\label{rem:T=ep(T)-hilbert-schmidt}
When $T$ is Hilbert--Schmidt, the proof that $s(E_{\mathcal{P}}(T)) = s(T)$ implies $E_{\mathcal{P}}(T) = T$ may be shortened considerably.
In particular, $T^{*}T, E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)$ are trace-class with $s(T^{*}T) = s(E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T))$ and so $\trace(T^{*}T) = \trace(E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T))$.
Since the expectation $E_{\mathcal{P}}$ is trace-preserving,
\begin{align*}
\trace(E_{\mathcal{P}}(T^{*}T) - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)) &= \trace(E_{\mathcal{P}}(T^{*}T - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T))) \\
&= \trace(T^{*}T - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)) = 0.
\end{align*}
Since $E_{\mathcal{P}}(T^{*}T) - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)$ is a positive operator by \eqref{eq:epX} and the trace is faithful, we must have $E_{\mathcal{P}}(T^{*}T) = E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)$, and hence $T = E_{\mathcal{P}}(T)$ by the equality case of \eqref{eq:epX}.
\end{remark}
\begin{remark}
Fan's theorem (\Cref{thm:fans-theorem}) is a special case of \Cref{thm:fans-theorem-pinching} by selecting the projections $P_n$ to have rank one, and therefore $E = E_{\mathcal{P}}$.
As we need \Cref{thm:fans-theorem} to prove \Cref{thm:fans-theorem-pinching}, this doesn't provide an independent proof of Fan's theorem.
\end{remark}
Our second main theorem says that there is nothing special about the main diagonal $T_0$: for all $k \in \mathbb{Z}$, $s(T_k) \pprec s(T)$.
Moreover, this holds even for \emph{asymmetric} shift decompositions.
\begin{theorem}
\label{thm:sum-off-diagonal-corners-am-closure}
Suppose that $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ are block decompositions and let $T \in \mathcal{K}(\mathcal{H})$ with asymmetric shift decomposition $\delim\{\}{T_k}_{k \in \mathbb{Z}}$.
Then $s(T_k) \pprec s(T)$.
Consequently, if $T$ lies in some ideal $\mathcal{I}$, then $T_k \in \amclosure{\mathcal{I}}$.
\end{theorem}
\begin{proof}
It suffices to prove the theorem for $T_0$ since $T_k$ is simply $T_0$ relative to the translated block decomposition pair $\delim\{\}{P_{n+k}}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$.
Each $Q_n T P_n$ has the polar decomposition $Q_n T P_n = U_n \abs{Q_n T P_n}$ where $U_n$ is a partial isometry\footnotemark{} with $Q_n U_n = U_n = U_n P_n$.
Then $U := \sum_{n \in \mathbb{Z}} U_n$ converges in the strong operator topology since the collections $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ each consist of mutually orthogonal projections, and hence $U$ is also a partial isometry.
Moreover,
\begin{equation*}
T_0^{*} T_0 = \delim()*{ \sum_{n \in \mathbb{Z}} P_n T^{*} Q_n } \delim()*{ \sum_{m \in \mathbb{Z}} Q_m T P_m } = \sum_{n \in \mathbb{Z}} \abs{Q_n T P_n}^2.
\end{equation*}
Since the operators $\abs{Q_n T P_n}^2$ are orthogonal (i.e., their products are zero), $\abs{T_0} = (T_0^{*}T_0)^{\frac{1}{2}} = \sum_{n \in \mathbb{Z}} \abs{Q_n T P_n}$.
Thus,
\begin{align*}
E_{\mathcal{P}}(U^{*}T) &= \sum_{n \in \mathbb{Z}} P_n U^{*} T P_n = \sum_{n \in \mathbb{Z}} \delim()*{ \sum_{m \in \mathbb{Z}} P_n U^{*}_m T P_n } \\
&= \sum_{n \in \mathbb{Z}} \delim()*{ \sum_{m \in \mathbb{Z}} P_n P_m U^{*}_m Q_m T P_n } = \sum_{n \in \mathbb{Z}} U^{*}_n Q_n T P_n \\
&= \sum_{n \in \mathbb{Z}} \abs{Q_n T P_n} = \abs{T_0}.
\end{align*}
Finally, by \Cref{thm:fans-theorem-pinching} and since $U^{*}$ is a contraction,
\begin{equation*}
s(T_0) = s(\abs{T_0}) = s(E_{\mathcal{P}}(U^{*}T)) \pprec s(U^{*}T) \le s(T).
\end{equation*}
Therefore, if $T \in \mathcal{I}$, then $T_0 \in \amclosure{\mathcal{I}}$ by definition.
\end{proof}
\footnotetext{That $Q_n U_n = U_n = U_n P_n$ follows from well-known facts (e.g., see \cite[Theorem~I.8.1]{Dav-1996}) when $U_n$ is taken to be the canonical unique partial isometry on $\mathcal{H}$ mapping $\closure{\range(\abs{Q_n T P_n})} \to \closure{\range(Q_n TP_n)}$ and noting also the range projection of $Q_n T P_n$ is dominated by $Q_n$ and the projection onto $\closure{\range(\abs{Q_n T P_n})} = \ker^{\perp}(Q_n T P_n)$ is dominated by $P_n$.}
\begin{remark}
In the previous theorem we assumed that $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ were block decompositions, but the condition that they sum to the identity is not actually necessary (the same proof given above still works).
Therefore, if $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ are sequences of mutually orthogonal projections then $s(\sum_{n \in \mathbb{Z}} Q_n T P_n) \pprec s(T)$;
consequently, if $T \in \mathcal{I}$ then we still have $\sum_{n \in \mathbb{Z}} Q_n T P_n \in \amclosure{\mathcal{I}}$.
\end{remark}
\section*{Acknowledgments}
The authors would like to thank Fedor Sukochev for providing insight into the history of fully symmetric ideals.
\printbibliography
\end{document}
|
1,108,101,564,346 | arxiv | \section{Introduction}
It is well known that magnetic ordering is an essentially quantum
phenomenon. According to the Bohr--van Leeuwen theorem (see, e.g.,
\cite{Vonsovskii}), the magnetization of a thermodynamically equilibrium
classical system of charged particles is zero even in presence of an external
magnetic field. Classical theories of magnetic properties were based on
certain assumptions going beyond the limits of classical physics (e.g., the
existence of stable micro-particles with nonzero magnetic moment assumed in
Langevin's theory of paramagnetism \cite{Vonsovskii}). The nature of magnetic
ordering was revealed only after the discovery of modern quantum mechanics in
the works of Heisenberg, Frenkel and Dorfman. In the 1930s, many remarkable results
were obtained within the microscopic quantum theory: Bloch \cite{Bloch30}
predicted the existence of magnons and low-temperature behavior of
magnetization; Bethe \cite{Bethe31} was able to construct the complete set of
excited states for a spin-$1\over2$ chain, including nonlinear soliton-type
excitations (spin complexes).
The `undivided rule' of the quantum
theory of magnetism lasted only till 1935, when in the well-known work Landau
and Lifshitz \cite{LL} formulated the equation describing the dynamics of
macroscopic magnetization of a ferromagnet (FM). When deriving the
Landau-Lifshitz (LL) equation, a quantum picture of magnetic ordering was
used, particularly, the exchange nature of spin interaction, but the LL
equation itself has the form of a classical equation for the magnetization
$\vec{M}$. Later on the basis of the LL equation the macroscopic theory of
magnetism was developed and enormous number of various phenomena were
described \cite{SW,BICG} (an overview of modern phenomenological theory of
magnetically ordered media can be also found in this book in the lecture by
V.G.Bar'yakhtar).
This lecture presents an introduction to the foundations of a new,
fast-developing topic in the physics of magnetism, Macroscopic Quantum
Tunnelling (MQT). Let us first address briefly the scope of problems belonging
to this field. MQT problems can be roughly divided into two main types. First
of all, there are phenomena connected with the underbarrier transition from a
metastable state, corresponding to a local minimum of the magnet energy, to a
stable one. Such effects were observed in low-temperature remagnetization
processes in small FM particles as well as in macroscopic samples (due to the
tunneling depinning of domain walls), see the recent review
\cite{TejadaZhang95}. Such phenomena of ``quantum escape'' are typical not
only for magnets, e.g., quantum depinning of vortices contributes
significantly to the energy losses in HTSC materials \cite{HTSC}.
Here we will concentrate on another type of phenomena, the so-called {\em
coherent MQT.\/} To illustrate their main feature, let us consider a small FM
particle with the easy axis along the $Oz$ direction. If the particle size is
small enough (much less than the domain wall thickness $\Delta_{0}$), the
particle is in a single-domain state, because the exchange interaction makes
the appearance of a state with magnetic inhomogeneities energetically
unfavorable. Then, from the point of view of classical physics, the ground
state of the particle is twofold degenerate. Those two states correspond to
two local minima of the anisotropy energy and are macroscopically different
since they have different values of macroscopic magnetization $\vec{M}=\pm
M_{0} \vec{e}_{z}$. The situation is the same as in the elementary mechanical
problem of a particle in two-well potential $U(x)$ having equivalent minima at
$x=\pm a$, see Fig.\ \ref{fig:two-well}. In classical mechanics the minimum of
energy corresponds to a particle located in one of the two local minima of the
potential.
However, from quantum mechanics textbooks it is well known that the actual
situation is {\em qualitatively\/} different: the particle is ``spread'' over
two wells, and the ground state is nondegenerate \cite{LL-QM}. One can expect
that the same should be true for a FM particle: its correct ground state will
be a superposition of ``up'' and ``down'' states, and the mean value of
magnetization will be zero. Such picture was first proposed by Chudnovsky
\cite{Chud79}; further calculations showed \cite{ChudGunther88} that such
effects are possible for FM particles with rather large number of spins (about
$10^{3}\div10^{4}$). The tunneling effects, according to the theoretical
estimates \cite{BarbaraChud90,KriveZaslavskii90}, should be even more
important for small particles of antiferromagnet (AFM); the effects of quantum
coherence in AFM particles were observed in Ref.\ \cite{Awschalom+92}.
Thus, an important feature of quantum mechanics, the possibility of underbarrier
transitions, can manifest itself in magnetic particles on a macroscopic
(strictly speaking, mesoscopic) scale. Maybe even more interesting is the
manifestation of another characteristic feature of quantum physics, viz.\ the
effects of quantum interference. Such effects arise in the problem of MQT in
magnetic nanostructures and can partially or completely suppress tunneling,
restoring the initial degeneracy of the ground state
\cite{Loss+92,DelftHenley92}. We wish to remark that the understanding that the
motion of particles along very different classical trajectories can ``sum up''
in some sense and yield an interference picture was one of the crucial points
in the development of quantum mechanics, and a considerable part of the
well-known discussion between Bohr and Einstein was devoted to this problem.
Besides the importance of the tunneling phenomena in magnets from the
fundamental point of view, they are potentially important for the future
magnetic devices working on a nanoscale.
In the present lecture we restrict ourselves to discussing the problems of
coherent MQT in various mesoscopic magnetic structures. The paper is organized
as follows: Sect.\ \ref{sec:basics} contains the elementary description of the
instanton formalism, traditionally used in the theoretical treatment of MQT
problems. Since the instanton approach, though being the most straightforward
one, is based on rather complicated mathematical formalism, we will discuss it
in parallel with simple and widely known semiclassical approximation of
quantum mechanics. The point is that those two approaches are equally adequate
for treating the problem of MQT in small particles, and the ``standard''
semiclassical calculations, easily reproducible by anybody who learned
foundations of quantum mechanics, may be helpful for understanding the
structure of the results derived within the instanton technique. Further, in
Sections \ref{sec:FM} and \ref{sec:AFM} we discuss the problem of MQT in
ferro- and antiferromagnetic small particles, with special attention to the
interference effects. For the description of AFM we use a simple but adequate
approach based on the equations for the dynamics of the antiferromagnetism
vector $\vec{l}$. This approach easily allows one to keep track of the actual
magnetic symmetry of the crystal; the symmetry is lowered when external
magnetic field is applied or when certain weak interactions, e.g., the
so-called Dzyaloshinskii-Moriya (DM) interaction, are taken into account,
which leads to quite nontrivial interference phenomena. Section \ref{sec:top}
is devoted to the analysis of coherent MQT in ``topological nanostructures,''
i.e.\ static inhomogeneous states of magnets with topologically nontrivial
distribution of magnetization; among the examples considered there are domain
walls in one-dimensional (1D) magnets \cite{ivkol94,ivkol95tun,ivkol96jetp},
magnetic vortices \cite{GalkinaIv95} and disclinations \cite{IvKir97} in 2D
antiferromagnets, and antiferromagnetic rings with odd number of spins
\cite{kireev}. For those problems, when the description of tunneling involves
multidimensional (space-time) instantons, there is no alternative to the
instanton approach and its use is decisive.
Finally, Section \ref{sec:summary} contains a brief summary and discussion of
several problems which are either left out of our consideration or unsolved.
\section{Basics of Tunneling: With and Without Instantons}
\label{sec:basics}
For the sake of the presentation completeness, let us recall briefly the main
concepts of the instanton technique, since we will extensively use them below.
In quantum field theory, the propagator, i.e., the probability amplitude
$P_{AB}$ of the transition from any given state with the field configuration
$\varphi_{A}(x)$ at $t=0$ to another state $\varphi_{B}(x)$ at $t=t_{0}$ is
determined by the path integral
\begin{equation}
\label{1to2}
P_{AB}=\langle \varphi_{A}|e^{-i \widehat{H}t_{0}/\hbar}|\varphi_{B}\rangle
=\int_{\varphi(x,0)=\varphi_{A}(x)}^{\varphi(x,t_{0})=\varphi_{B}(x)}
{\cal D}\varphi(x,t )\, \exp\big\{i{\cal A}[\varphi]/\hbar \big\}\,,
\end{equation}
where
\[
{\cal A}[\varphi]=\int_{0}^{t_{0}} dt\int dx\, {\cal L}[\varphi(x,t)]
\]
is the action functional. Here ${\cal L}$ is the Lagrangian density, and the
integration in (\ref{1to2}) goes over all space-time field configurations
$\varphi(x,t)$ satisfying the boundary conditions
$\varphi(x,0)=\varphi_{A}(x)$ and $\varphi(x,t_{0})=\varphi_{B}(x)$. (We leave
out the problem of a consistent definition of the measure ${\cal D} \varphi$
that arises for systems with infinitely many degrees of freedom, keeping in
mind that we are going to talk about the application of field theory to the
physics of spin systems on a discrete lattice, and thus all necessary
regularizations are provided by the lattice in a natural way.)
Instead of working with the propagator (\ref{1to2}) in usual Minkovsky's
space-time, it is convenient to make the Wick rotation $t\to i\tau$
(essentially this procedure is an analytical continuation in $t$), passing to
the Euclidean space-time. Then one has the Euclidean propagator
\[
P_{AB}^{\rm eucl}=\langle
\varphi_{A}|e^{-\widehat{H}\tau_{0}/\hbar}|\varphi_{B}\rangle =\int {\cal
D}\varphi\,\exp\big\{ -{\cal A}_{\rm eucl}/\hbar\big\} \,.
\]
The main contribution to the path integral comes from the global minimum of
the Euclidean action functional ${\cal A}_{\rm eucl}$. This minimum
corresponds to a trivial solution $\varphi=\varphi_{0}=\mbox{const}$, where
$\varphi_{0}$ determines the minimal energy of the system. However, if
several different values of $\varphi_{0}$ are possible, it is often important
to take into account the contribution from the {\em local} minima of the
Euclidean action as well. Such a local minimum can correspond, e.g., to a
trajectory $\varphi=\varphi_{\rm inst}(\tau)$ connecting two possible
$\varphi_{0}$ values; it is clear that the probability $P_{AB}$ will contain
the factor $\exp\{-{\cal A}_{\rm eucl}[\varphi_{\rm inst}]/\hbar\}$. Such a
contribution can be calculated in a semiclassical approximation and describes
effects which cannot be accessed by means of the perturbation theory.
We will illustrate the above arguments with the example of a simple
quantum-mechanical problem. Consider the motion of a particle of mass $m$ in a
symmetric two-well potential $U(x)$ of the type shown in Fig.\
\ref{fig:two-well}, with two equivalent minima at $x=\pm a$. Following
the popular choice \cite{inst}, we will assume this potential in the form
\begin{equation}
\label{fi4}
U(x)=\lambda (x^{2}-a^{2})^{2}\,,
\end{equation}
where the parameters $\lambda$ and $a$ determine the height and width of the
barrier between two wells. This model is described by the Lagrangian
\begin{equation}
\label{Lfi4}
L={m\over2}\left({dx\over dt}\right)^{2}-U(x)\,.
\end{equation}
After passing to the imaginary time, the Euclidean action is easily obtained
in the form
\begin{equation}
\label{fi4A}
{\cal A}_{\rm eucl}=\int_{0}^{\tau_{0}} d\tau\, \Big\{ {1\over2}m
\left({dx\over d\tau}\right)^{2} + U(x) \Big\}\,.
\end{equation}
The classical (global) minimum of this functional is reached at $x=a$
or $x=-a$. Equations of motion for the action (\ref{fi4A})
\[
m{d^{2}x\over d\tau^{2}}={dU\over dx}
\]
correspond to the particle moving in the potential $-U(x)$, so that
$x=\pm a$ are {\em maxima} of this effective potential, and there exist
classical low-energy trajectories connecting them. Such trajectories represent
{\em local} minima of the Euclidean action functional and are called {\em
instantons}. They can be easily found in implicit form,
\begin{equation}
\label{impl}
\int dx\, \left({m\over 2U(x)}\right)^{1/2}=\tau-\tau_{0}\,,
\end{equation}
where $\tau_{0}$ is an arbitrary parameter determining the ``centre'' of
instanton solution. For many potentials the integration can be performed
explicitly, e.g., in case of (\ref{fi4}) one obtains
\begin{equation}
\label{fi4inst}
x=\pm a\tanh[\omega_{0}(\tau-\tau_{0})/2]\,,
\end{equation}
where $\omega_{0}=(8\lambda a^{2}/m)^{1/2}$ is the frequency of linear oscillations
around one of classical minima. Euclidean action for the instanton trajectory
can be written as
\begin{equation}
\label{fi4A0}
{\cal A}_{0}=\int_{-a}^{+a} \sqrt{2mU(x)}\, dx\,.
\end{equation}
For the model (\ref{fi4}) one has ${\cal A}_{0}=4\sqrt{2}\,a^{3}\sqrt{\lambda m}/3$.
Thus, instantons are very much like solitons with the difference that they are
localized in time. Trajectories (\ref{fi4inst}) begin at $\tau\to-\infty$ in
one of the minima of $U(\varphi)$ and end at $\tau\to+\infty$ in the other
one; the contribution of those trajectories is responsible for the tunneling
splitting of the lowest energy level in the two-well potential. Indeed, the
tunneling level splitting is proportional to the matrix element $t_{12}$ of the
transition from one well to the other, and the probability amplitude of such a
transition is given by the path integral from $x=a$ to $x=-a$. It
is thus clear that the contribution of a single instanton to the transition
amplitude is proportional to $e^{-{\cal A}_{0}/\hbar}$.
\begin{figure}
\mbox{\hspace{6mm}\psfig{figure=twow.ps,width=110mm,angle=-90.}}
\caption{A two-well potential $U(x)$ with equivalent wells at $x=\pm a$.
Semiclassical treatment of tunneling
is possible if the amplitude of zero-point oscillations $\delta\ll a$.}
\label{fig:two-well}
\end{figure}
The full calculation of this amplitude, however, is more complicated and
should take into account not only the instanton trajectories but all
trajectories close to them. Further, the full variety of multiinstanton paths
which bring the particle from one well to the other should be taken into
account. If the problem is semiclassical, i.e.\ ${\cal A}_{0}/\hbar$ is large
and the probability of tunneling is small, integration over ``close''
trajectories can be described as an effect of small fluctuations above the
instanton solution. Even this, usually elementary, problem of integrating over
small (linear) fluctuations is nontrivial in case of instantons, because some
of those fluctuations do not change the action. Particularly, from
(\ref{fi4inst}) it is easy to see that changing the position of instanton
centre $\tau_{0}$ has no effect on ${\cal A}_{0}$. Such ``zero modes'' {\em
always\/} arise in instanton problems and their contribution requires a
special analysis. A detailed description of this technique would take us beyond
the scope of this lecture, and we refer the interested reader to textbooks and review
articles (see, e.g., \cite{rajaraman,inst}).
We will attempt to get the correct result for the probability amplitude
$P_{AB}$ by means of the ``traditional'' quantum mechanics (without use of
path integrals and instantons). First, let us note that, due to the symmetry
of the potential $U(-x)=U(x)$, the two lowest levels correspond to even and odd
eigenfunctions $\psi_{s}(x)$ and $\psi_{a}(x)$, with the energies $E_{s}$ and
$E_{a}$, respectively. Multiplying the Schr\"odinger equation for $\psi_{s}$
by $\psi_{a}$ and vice versa, then taking the difference of those two
equations and finally integrating over $x$ from $0$ to $\infty$, one obtains
the relation
\begin{equation}
\label{rel1}
(E_{a}-E_{s})\int_{0}^{\infty} \psi_{a}\psi_{s}\, dx=
{\hbar^{2}\over2m}\left[\psi_{s}{d\psi_{a}\over dx}\right]_{x=0}\,,
\end{equation}
which is {\em exact\/} and is nothing but a mere consequence of the symmetry
properties.
It is natural to try to use a semiclassical approximation. The semiclassical
result is given, e.g., in a popular textbook by Landau and Lifshitz
(\cite{LL-QM}, see the problem 3 after \S 50). According to that result,
$E_{a}-E_{s}=(\hbar\omega_{0}/\pi)\exp\{-{\cal A}_{0}'/\hbar\}$, where
$\omega_{0}=[k/m]^{1/2}$, $k\equiv(d^{2}U/dx^{2})_{x=a}$ and ${\cal
A}_{0}'=\int_{-a'}^{+a'}\big[2m\big(U(x)-E\big)\big]^{1/2}\,dx$; here $a'$ is the
turning point of the classical trajectory with energy $E$ (corresponding to a
non-split level) defined by the equation $U(a')=E$. However, this result is
not adequate for our problem, and it does not coincide with the result of
instanton calculation. The point is that, surprisingly, the problem of
tunneling from one classical ground state to another {\em is not
semiclassical:\/} semiclassical approximation cannot be directly applied to
the ground state wavefunction inside one well.
Therefore we will do as follows: let us represent the wavefunctions inside the
barrier region as symmetric and antisymmetric combinations of the WKB
exponents,
\begin{eqnarray}
\label{WKB}
&&\psi_{s}={C_{s}\over\sqrt{|p|}}\cosh\left\{{1\over\hbar}\int_{0}^{x}|p| dx
\right\}\,\nonumber\\
&&\psi_{a}={C_{a}\over\sqrt{|p|}}\sinh\left\{{1\over\hbar}\int_{0}^{x}|p| dx
\right\}\,
\end{eqnarray}
where $|p|=\sqrt{2m[U(x)-E]}$. Those wavefunctions can be used inside the
entire barrier region, except narrow intervals $|x\pm a|<\delta$ near the well
minima, where $\delta=(\hbar/m\omega_{0})^{1/2}$ is the amplitude of
zero-point fluctuations.
On the other hand, if the condition $a\gg\delta$ is satisfied, then for the
description of the wavefunction inside the well any reasonable potential
$U(x)$ can be replaced by the parabolic one, $U(x)\to (k/2)(x\pm a)^{2}$. Then
in ``non-semiclassical'' regions one may use well-known expression for the
ground state wavefunction of a harmonic oscillator,
\begin{equation}
\label{harm}
\psi\to (\pi\delta^{2})^{-1/4}\exp[-(x\pm a)^{2}/2\delta^{2}]\,.
\end{equation}
Thus, in the regions $a^{2}\gg (x\pm a)^{2}\gg\delta^{2}$ both the expressions
(\ref{WKB}) and (\ref{harm}) are valid. Then, normalization factors $C_{s,a}$
can be determined from the condition of matching (\ref{WKB}) and (\ref{harm})
in the two above-mentioned regions, and after that the integration in
(\ref{rel1}) can be performed explicitly. After some amount of algebra the
tunneling level splitting can be represented in the form
\begin{equation}
\label{tunnel-QM}
E_{a}-E_{s}=4\hbar\omega_{0}
\sqrt{2\over\pi}\exp\left\{\int_{0}^{a-\delta}dx\,\sqrt{U''(a)\over2U(x)}
\right\} \exp\left\{ -{1\over\hbar}{\cal A}_{0}\right\}\,,
\end{equation}
where the quantity ${\cal A}_{0}=\int_{-a}^{+a}dx\,\sqrt{2mU(x)}$ coincides
with the Euclidean action for the instanton trajectory.
One can see that the difference between the formula (\ref{tunnel-QM}) and the
usual semiclassical result consists in the pre-exponential factor containing
the integral of the type
$\int dx\, U^{-1/2}(x)$. It is clear that the main contribution to this
prefactor comes from the region $x\sim a$, where the integral can be
approximated as $\int^{a-\delta}dx/|a-x|$, so that it diverges logarithmically
at $\delta\to 0$. Thus for any potential $U(x)$ the prefactor can be
represented in the form $\widetilde{C}(a/\delta)$ or, equivalently, $(C{\cal
A}_{0}/\hbar)^{1/2}$. Here $C$ is a numerical constant of the order of unity,
it can be easily calculated for any given potential $U(x)$. So, finally we
arrive at the following universal formula:
\begin{equation}
\label{tunnel-inst}
E_{a}-E_{s}=4\hbar\omega_{0}\left({2C\over\pi}\right)^{1/2} \left({{\cal
A}_{0}\over\hbar}\right)^{1/2} \exp\left\{-{{\cal
A}_{0}\over\hbar}\right\}\, .
\end{equation}
For the model potentials $U=\lambda(x^{2}-a^{2})^{2}$ and $U=2U_{0}\sin^{2}x$ the
value of $C$ is equal to $\sqrt{3}$ and $\sqrt{2}$, respectively.
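The logarithmic buildup near the well can be made explicit: for the quartic potential (\ref{fi4}) one has $U''(a)=8\lambda a^{2}$, so that
\begin{equation*}
\int_{0}^{a-\delta}dx\,\sqrt{U''(a)\over2U(x)}=\int_{0}^{a-\delta}{2a\,dx\over a^{2}-x^{2}}
=\ln{2a-\delta\over\delta}\approx\ln{2a\over\delta}\,,
\end{equation*}
and the exponential of this integral in (\ref{tunnel-QM}) indeed reduces to a factor of the form $\widetilde{C}(a/\delta)$.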
The formulas (\ref{tunnel-QM},\ref{tunnel-inst}) give the desired result for
any two-well potential with sufficiently large barrier. The main feature of
this result is the presence of an exponentially small factor. The small
parameter of the MQT problem is $\hbar/{\cal A}_{0}$, which can be represented
as a ratio of the zero-point fluctuations amplitude to the distance between
wells, $(\hbar/{\cal A}_{0})\sim (\delta/a)^{2}$. The expression
$e^{-a^{2}/\delta^{2}}$ is non-analytical in the small parameter, and thus the
MQT phenomenon cannot be obtained in any order of the perturbation theory. We
wish to emphasize that the correct result is roughly $({\cal
A}_{0}/\hbar)^{1/2}$ times greater than that following from ``naive''
semiclassical formula. This large additional factor appears due to the
contribution from the regions close to the minima of the potential, where the
motion is not semiclassical. Let us try to understand this in the instanton
language.
As we mentioned before, the small exponential factor $\exp(-{\cal
A}_{0}/\hbar)$ arises immediately in the instanton approach; the main
problem is to compute the pre-exponential factor, which is determined by the
integration over all small deviations from the instanton solution. Those
deviations are of two types: real fluctuations of the instanton structure,
which increase the Euclidean action, and ``zero modes'' which correspond to
moving the instanton centre. It is rather clear that ``nonzero'' modes have a
characteristic energy of the order of $\hbar\omega_{0}$, and that the quantity
$\omega_{0}$ has nothing to do with the zero mode. Thus, it is obvious that
the factor $\hbar\omega_{0}$ arises from the integration over all ``nonzero''
modes, and the large factor $({\cal
A}_{0}/\hbar)^{1/2}$ arises due to the zero (in our case -- translational)
mode. Such a ``separation'' naturally arises in rigorous calculations
\cite{inst,rajaraman}.
It is remarkable that the above result can be generalized to the case of much
more complicated problems involving space-time instantons (which, as we will
see later, is important for the problem of MQT in topological
nanostructures). For any instanton {\em all\/} nonzero modes yield a factor
like $\hbar\omega_{0}$, and {\em each\/} of the zero modes yields the factor
$({\cal A}_{0}/\hbar)^{1/2}$ \cite{inst,rajaraman}, so that the final result
can be reconstructed practically without calculations (up to a numerical
factor of the order of unity).
To illustrate one more feature typical for tunneling problems, let us consider
another model \cite{rajaraman}: a particle of mass $m$ which
can move along the circle of radius $R$, so that its
coordinate is determined by a single angular variable $\varphi$,
$0\leq\varphi\leq 2\pi$, in the two-well potential
\begin{equation}
\label{pocU}
U(\varphi)=U_{0}(1-\cos 2\varphi)\,.
\end{equation}
The model is described by the following
Lagrangian:
\begin{equation}
\label{pocL}
L={1\over2}mR^{2} \left({d\varphi\over dt }\right)^{2} - U(\varphi)\,.
\end{equation}
The classical Lagrangian can be modified by adding an arbitrary full
derivative term, e.g.,
\begin{equation}
\label{pocTOP}
L\mapsto L+\gamma {d\varphi\over dt}\,,
\end{equation}
which of course does not change the corresponding {\em classical} equations of
motion. However, adding the full derivative (\ref{pocTOP}) changes the
definition of the canonical momentum conjugate to $\varphi$, which, as one can
easily check, leads to a considerable change in the Hamiltonian of the
corresponding {\em quantum-mechanical} system after canonical quantization:
for nonzero $\gamma$ the correct Hamiltonian would be
\begin{equation}
\label{pocH}
\widehat{H}={1\over2mR^{2}}\Big\{i\hbar{d\over d\varphi} + \gamma \Big\}^{2}
+ U(\varphi)\,.
\end{equation}
Thus, there is no one-to-one correspondence between classical and
quantum-mechanical systems: several quantum systems can have the same
classical system as a classical limit.
For this model problem the instanton trajectories can be written down
explicitly:
\begin{eqnarray}
\label{pocI}
&& \cos\varphi=\sigma_{i} \tanh[\omega(\tau-\tau_{i})]\,,\\
&& \omega=(4U_{0}/mR^{2})^{1/2}\,. \nonumber
\end{eqnarray}
where $\tau_{i}$ is an arbitrary parameter determining the instanton position
on the imaginary time axis and $\sigma_{i}=\pm1$ is the topological charge
distinguishing instantons and
antiinstantons; the instanton action is finite and is
given by ${\cal A}_{0}=4(mR^{2}U_{0})^{1/2}$.
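This value follows at once from energy conservation on the zero-energy instanton trajectory, ${1\over2}mR^{2}(d\varphi/d\tau)^{2}=U(\varphi)$, which gives
\begin{equation*}
{\cal A}_{0}=\int_{0}^{\pi}d\varphi\,\sqrt{2mR^{2}U(\varphi)}
=2(mR^{2}U_{0})^{1/2}\int_{0}^{\pi}\sin\varphi\,d\varphi=4(mR^{2}U_{0})^{1/2}\,.
\end{equation*}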
The importance of the full derivative term (\ref{pocTOP}) can be most easily
understood in terms of instantons. Indeed, let us consider the tunneling
amplitude $P_{12}$ from the $\varphi=0$ well to $\varphi=\pi$ one: it is clear
that the contribution to this amplitude is made equally by instantons (with
$\varphi$ changing from $0$ to $\pi$) and antiinstantons (with $\varphi$
changing from $0$ to $-\pi$). However, the term (\ref{pocTOP}) becomes an {\em
imaginary\/} part of the Euclidean action and leads to the additional factor
$e^{i\pi\gamma/\hbar}$ associated with the instanton contribution and a
similar factor $e^{-i\pi\gamma/\hbar}$ for antiinstanton paths. Thus, the
resulting transition amplitude for nonzero $\gamma$ is modified as follows:
\begin{equation}
\label{ampltop}
P_{12}= [ P_{12}]_{\gamma=0}\cos(\pi\gamma/\hbar)\,,
\end{equation}
where $ [P_{12}]_{\gamma=0} \propto \omega \left({{\cal
A}_{0}/\hbar}\right)^{1/2}e^{-{{\cal A}_{0}/\hbar}}$, according to the general
result described above. One can see that for half-integer $\gamma/\hbar$ the
interference of instanton and antiinstanton paths is {\em destructive,\/} so
that at $\gamma=\pm{\hbar\over2},\pm{3\hbar \over2},\ldots$ the tunneling
between two wells is {\em completely suppressed.\/} This effect is essentially
{\em topological\/} because the topological charge appears in the answer: the
contribution of configurations with different topological charge is
different. The same result can be obtained directly by solving the
Schr\"odinger equation with the Hamiltonian (\ref{pocH}): for half-integer
$\gamma/\hbar$ it can be mapped to the Mathieu equation with {\em
antiperiodic\/} boundary conditions, and the corresponding energy levels are
known to be doubly degenerate \cite{batemanIII}, which also means absence of
tunneling.
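The mapping is elementary: the substitution $\psi(\varphi)=e^{i\gamma\varphi/\hbar}\chi(\varphi)$ removes $\gamma$ from the Hamiltonian (\ref{pocH}), while the single-valuedness of $\psi$ translates into the twisted boundary condition $\chi(\varphi+2\pi)=e^{-2\pi i\gamma/\hbar}\chi(\varphi)$, which is antiperiodic precisely when $\gamma/\hbar$ is half-integer.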
\section{Field-Theoretical Description of a Small Ferromagnetic Particle}
\label{sec:FM}
In this section we consider the basic technique of field-theoretical
description for spin systems on the simplest example, namely a nanoparticle of
a ferromagnetic material. Such an object may be viewed as a zero-dimensional
magnetic system, because at very low temperature all spins in the particle can
be considered as pointing in the same direction.
It is worthwhile to consider first the dynamics of a single spin $S$. In order
to obtain the effective Lagrangian describing the spin dynamics, it is
convenient to use a coherent state path-integral approach (see, e.g., the
excellent textbook by Fradkin \cite{fradkin}). Let us introduce a set of
generalized coherent states \cite{perelomov86}
\begin{equation}
|\vec{n}\rangle=\exp\{ i\theta(\vec{n}\times
\widehat{\vec{z}})\widehat{\vec{S}}\} |m=S\rangle
\label{one-site}
\end{equation}
parameterized by the unit vector $\vec{n}(\theta,\varphi)$. Here
$\widehat{\vec{z}}$ is a unit vector pointing along the $z$ axis, and
$|m\rangle$ denotes a spin-$S$ state with $S^{z}=m$. They form a
non-orthogonal `overcomplete' basis so that the following property, usually
called a resolution of unity,
holds:
\begin{equation}
\int {\cal D}\vec{n}\,|\vec{n}\rangle \langle\vec{n}|=1\,,
\label{complete}
\end{equation}
another useful property is that quantum average of $\widehat{\vec{S}}$ on
those coherent states is the same as of classical vector of length $S$:
\[
\langle \vec{n}| \widehat{\vec{S}} |\vec{n}\rangle = S\vec{n} \,.
\]
In the case of $S={1\over2}$ those coherent states have a very simple form and are the
most general single-spin wavefunctions:
\[
|\vec{n}\rangle=\cos(\theta/2)|\uparrow\rangle
+\sin(\theta/2)e^{i\varphi}|\downarrow\rangle\;.
\]
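For these states the above property is verified directly: $\langle\vec{n}|\widehat{S}_{z}|\vec{n}\rangle={1\over2}\big[\cos^{2}(\theta/2)-\sin^{2}(\theta/2)\big]={1\over2}\cos\theta$ and $\langle\vec{n}|\widehat{S}_{x}+i\widehat{S}_{y}|\vec{n}\rangle=\cos(\theta/2)\sin(\theta/2)e^{i\varphi}={1\over2}\sin\theta\,e^{i\varphi}$, so that indeed $\langle\vec{n}|\widehat{\vec{S}}|\vec{n}\rangle={1\over2}\vec{n}$.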
We again start from the formula for propagator (\ref{1to2}) which is
essentially a definition of the effective Lagrangian. Slicing the time
interval $[0;t_{0}]$ into infinitely small pieces $\Delta t=t_{0}/N$, and
successively using the identity (\ref{complete}), one can rewrite this
propagator in $\vec{n}$-representation as
\begin{eqnarray}
P_{AB}
&=&\lim_{N\to\infty}\int d\vec{n}_0 d\vec{n}_1
\cdots d\vec{n}_N \langle A|\vec{n}_{0}\rangle
\langle \vec{n}_N|B \rangle \nonumber\\
&\times& \prod_{k=0}^{N-1}
\langle\vec{n}_k| e^{-i\widehat{H}\Delta t/\hbar}
|\vec{n}_{k+1}\rangle\;.
\label{trotter}
\end{eqnarray}
Passing to the function $\vec{n}(t)$ of the continuous variable $t$, one ends
up with the coherent state path integral (\ref{1to2}) where the action ${\cal
A}$ is determined by the effective Lagrangian
\begin{equation}
L_{\rm eff}={1\over2}i\hbar \left\{ \langle\partial_t\vec{n}|
\vec{n}\rangle -
\langle\vec{n}| \partial_t
\vec{n}\rangle\right\} -
\langle\vec{n}|\widehat{H}|\vec{n}\rangle\;.
\label{efflagr}
\end{equation}
It can be shown that the dynamical part of this Lagrangian has the form
\begin{equation}
\label{berry}
\hbar S(1-\cos\theta){d\varphi\over dt}\,;
\end{equation}
for arbitrary $S$ this calculation requires some algebra, but for the simplest
case $S={1\over2}$ it is straightforward. The expression (\ref{berry}) is
nothing but the Berry phase \cite{berry84} for adiabatic motion of a single
spin.
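For $S={1\over2}$ this is indeed a one-line computation with the spinor written above: $\langle\vec{n}|\partial_{t}\vec{n}\rangle=i\sin^{2}(\theta/2)\,{d\varphi\over dt}$ (the $\theta$-dependent contributions cancel), so the dynamical part of (\ref{efflagr}) equals $\hbar\sin^{2}(\theta/2)\,{d\varphi\over dt}={\hbar\over2}(1-\cos\theta){d\varphi\over dt}$, which is (\ref{berry}) with $S={1\over2}$.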
It should be remarked that the presence of the full derivative term $\hbar
S(d\varphi/dt)$ is rather nontrivial and allows one to capture subtle
differences between integer and half-integer spins, as we will see below. For
example, consider a single spin $S$ in some crystal-field potential, with the
effective Hamiltonian
\begin{equation}
\label{singspinH}
\widehat{H}=KS_{z}^{2} -K'S_{x}^{2}\,,
\end{equation}
where $K,K'>0$ and the easy-plane anisotropy $K$ is much stronger than the
in-plane anisotropy $K'$. The Lagrangian is
\begin{equation}
\label{singspinL}
L=\hbar S(1-\cos\theta){d\varphi\over dt} -KS^{2}\cos^{2}\theta
+K'S^{2}\sin^{2}\theta\cos^{2}\varphi\,.
\end{equation}
There are two equivalent classical minima of the potential at
$\theta={\pi\over2}$, $\varphi=0$ and $\theta={\pi\over2}$, $\varphi=\pi$.
Paths with $\theta\approx \pi/2$ make the main contribution to the
tunneling amplitude, so that we can approximately set
$\theta={\pi\over2}+\vartheta$, $\vartheta\ll 1$, and expand in $\vartheta$ up
to quadratic terms in the Lagrangian; in the term proportional to
$\vartheta^{2}$ the $K'$ contribution may be neglected as small comparing to
the contribution of $K$. After that, the ``slave'' variable $\vartheta$ can be
excluded from the Lagrangian (``integrated out'' of the path integral) because
the corresponding equation of motion $\delta L/\delta\vartheta=0$ allows one to
express $\vartheta$ through $\varphi$ explicitly:
\begin{equation}
\label{slave}
\vartheta={\hbar \over2KS} {d\varphi\over dt}\,.
\end{equation}
Substituting this solution into the original Lagrangian (\ref{singspinL}), one
obtains the effective Lagrangian depending on $\varphi$ only:
\begin{equation}
\label{singspinLeff}
L_{\rm eff}= \hbar S{d\varphi\over dt} +
{\hbar^{2}\over4K}\left({d\varphi\over
dt}\right)^{2}+K'S^{2}\cos^{2}\varphi\,.
\end{equation}
We see that we end up with the Lagrangian of a particle on a circle from the
previous section, with the topological term $\gamma=\hbar S$. For each path
where $\varphi$ changes from $0$ to $\pi$ there is a corresponding
antiinstanton path with $\varphi$ changing from $0$ to $-\pi$, and those paths
contribute to the tunneling amplitude with phase factors $e^{i\pi S}$ and
$e^{-i\pi S}$. For half-integer $S$ those contributions precisely cancel each
other, making the tunneling impossible. This is exactly in line with the
well-known Kramers theorem, which states that in the absence of an external
magnetic field all energy levels of a system with half-integer total spin should be
twofold degenerate. One can also straightforwardly check that for a single
spin in magnetic field, i.e., for $\widehat{H}=g\mu_{B}H\widehat{S}_{z}$, the
correct energy levels can be obtained only with the full derivative term taken
into account.
Now we are prepared enough, finally, to consider the problem of tunneling in
a small ferromagnetic particle consisting of $N$ spin-$S$ spins. If we assume
that ferromagnetic exchange interaction is so strong that we may consider all
spins as having the same direction, then we come to the ``giant spin'' model
where the entire particle is described as a quantum-mechanical
(``zero-dimensional'') system with only two degrees of freedom $\theta$ and
$\varphi$. In fact, we should postulate that in our path integral, when
integrating over the coherent state configurations $\prod_{i=1}^{N} \otimes
|\vec{n}_{i}\rangle$, the main contribution comes from the subspace with all
$N$ vectors $\vec{n}_{i}$ replaced by the same vector
$\vec{n}(\theta,\varphi)$, and we take into account only configurations from
this subspace. Assuming that the crystal-field anisotropy has the form
(\ref{singspinH}), we come to essentially the same effective Lagrangian
(\ref{singspinLeff}), and the only difference is that Eq.\
(\ref{singspinLeff}) should now be multiplied by the total number of spins
$N$. The tunnel splitting of the ground state level, according to Eq.\
(\ref{ampltop}), is given by
\begin{equation}
\label{splitFM}
\Delta E = C (NS^{3})^{1/2}(KK'^{3})^{1/4} |\cos(\pi NS)|
\exp\left\{-NS(2K'/K)^{1/2}\right\}\,,
\end{equation}
where $C$ is a numerical constant of the order of $1$. A remarkable property
of the result (\ref{splitFM}) is that presence of a large number $N$ in the
exponent can be to some extent compensated by smallness of the ratio
$K'/K$. However, when the in-plane anisotropy $K'\to0$, the splitting vanishes
(this reflects the fact that in uniaxial case tunneling is impossible because
of the conservation of the corresponding projection of the total spin; the
same is true for $K\to0$). Another remarkable feature is that for
half-integer $S$ the finite splitting can be observed only in particles with
even number of spins $N$; since in any statistical ensemble $N$ fluctuates a
bit, this roughly means that only one half of all particles gives a nonzero
contribution.
Statistical fluctuations of $N$ have another, more painful consequence: since
$N$ stays in the exponent, even small fluctuations of the total number of
spins in the particle lead to large fluctuations of the splitting. Moreover,
since $N$ scales as the third power of the linear size $L$, small fluctuations
of $L$ will be considerably enhanced in $N$. This may be crucial if one tries
to detect the splitting by means of some resonance technique: the initially
weak signal would be even more weakened by the strong broadening of the
resonance peak. Actually, many factors can prevent one from observing the
tunneling resonance, e.g., relaxation, temperature effects, etc. Here we will
not at all touch the problem of relaxation because of its complexity; instead
of that we refer the interested reader to the review
\cite{CaldeiraLeggett83}. Taking into account the finite temperature effects
is also nontrivial, particularly because it requires changing the procedure of
taking averages in the path integral: statistical averages should be taken
simultaneously with quantum-mechanical ones. Roughly (and without taking into
account the temperature dependence of relaxation mechanisms) the effects of
finite temperature can be estimated with the help of the concept of a
characteristic temperature $T_{c}$ below which the effects of quantum
tunneling prevail over thermal transitions. A rough estimate for $T_{c}$ is
obtained from the comparison of the relative strength of two exponential
factors: the thermal exponent $e^{-\Delta U/T}$ and the tunneling exponent $e^{-{\cal
A}_{0}/\hbar }$, where $\Delta U$ is the height of the barrier separating two
equivalent states and ${\cal A}_{0}$ is the corresponding instanton action;
this gives $T_{c}= \hbar \Delta U/{\cal A}_{0}$. It is easy to see that for
the ferromagnetic particle problem considered above
\begin{equation}
\label{Tfm}
T_{FM}=S(KK'/2)^{1/2}\,,
\end{equation}
i.e. the temperature of crossover from classical to quantum transitions is
in this case rather small since it is determined by weak (relativistic)
anisotropy interaction constants; for typical anisotropy values $T_{FM}$ is
about $0.1$~K.
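The estimate (\ref{Tfm}) follows directly from the above recipe (a sketch):
the barrier separating the two equivalent minima is $\Delta U\simeq NK'S^{2}$,
while the exponent in (\ref{splitFM}) gives ${\cal A}_{0}/\hbar=
NS(2K'/K)^{1/2}$, so that
\[
T_{FM}\simeq{\hbar\Delta U\over{\cal A}_{0}}
={NK'S^{2}\over NS(2K'/K)^{1/2}}=S\left({KK'\over2}\right)^{1/2}.
\]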
\section{Quantum Tunneling in a Small Antiferromagnetic
Particle\protect\footnote{Subsection \protect\ref{sec:DM} was
written together with Vadim Kireev.}}
\label{sec:AFM}
\subsection{Continuum field model of antiferromagnet}
\label{sec:AFMmodel}
The problem of continuum field description of antiferromagnet (AFM) is more
complicated but also much more interesting than a similar problem for
ferromagnet. Antiferromagnet contains at least two different ``sublattices''
whose magnetizations compensate each other in the equilibrium state. Thus,
when choosing the coherent state wavefunction in the form $|\Psi
\rangle=\prod_i|\vec{n}_i\rangle$ as described above, one cannot any more
consider $\vec{n}_{i}$ as a ``smooth'' function of the lattice site $i$. Let us
adopt the simplest two-sublattice model which, despite the fact that it may be
inadequate for a specific material, still allows one to demonstrate the
essential physics of antiferromagnetism. We assume that there are two
equivalent sublattices with magnetizations $\vec{M}_{1}(\vec{r})$ and $\vec{M}_{2}(\vec{r})$,
$|\vec{M}_{1}|=|\vec{M}_{2}|=M_{0}$. Then, when passing to the continuum
limit, one has to introduce smooth fields
$\vec{m}=(\vec{M}_{1}+\vec{M}_{2})/2M_{0}$ and
$\vec{l}=(\vec{M}_{1}-\vec{M}_{2})/2M_{0}$ describing net magnetization and
sublattice magnetization, respectively. They satisfy the constraints
$\vec{m}\vec{l}=0$, $\vec{m}^2+\vec{l}^2=1$, and we further assume that
$|\vec{m}|\ll |\vec{l}|$. The energy of AFM $W=\langle \widehat{H}\rangle$
then can be expressed as a functional of $\vec{m}$ and $\vec{l}$:
\begin{equation}
\label{W}
W[\vec{m},\vec{l}]=M_{0}^{2} \int dV\, \left\{ {1\over2}\delta \vec{m}^{2}
+{1\over2}\alpha(\nabla\vec{l})^{2}+w_{a}(\vec{l})
-{g\over M_{0}}(\vec{m}\cdot\vec{H})\right\} \,.
\end{equation}
Here the phenomenological constants $\delta$ and $\alpha$ describe homogeneous
and inhomogeneous exchange, respectively, $\vec{H}$ is the external magnetic
field, $g$ is the Lande factor, the function $w_{a}$ describes the energy of
magnetic anisotropy, and we use the notation
$(\nabla\vec{l})^{2}\equiv\sum_{i}(\partial\vec{l}/\partial x_{i})^{2}$. The
magnitude of sublattice magnetization $M_{0}=g\mu_{B}S/v_{0}$, where $\mu_{B}$
is the Bohr magneton, $S$ is the spin of a magnetic ion, and $v_{0}$ is the
volume of the magnetic elementary cell.
As we learned from the previous section, the correct Lagrangian, suitable for
the quantum-mechanical treatment, has the form
\begin{equation}
\label{Lafm}
L=\sum_{i} \hbar S \left\{ (1-\cos\theta_{1i}){d\varphi_{1i}\over dt}
+ (1-\cos\theta_{2i}){d\varphi_{2i}\over dt}\right\} -
W[\vec{m},\vec{l}]\,,
\end{equation}
where the angular variables $(\theta_{1i},\varphi_{1i})$ and
$(\theta_{2i},\varphi_{2i})$ determine the unit vectors describing the
orientation of spins in first and second sublattice, respectively. Note that
we have kept intact the summation sign in the dynamical part of (\ref{Lafm}):
the reason is that the explicit expression for the Berry phase in the
continuum limit strongly depends on the details of the magnetic elementary
cell structure (which dictates the correct definition of $\vec{m}$ and
$\vec{l}$ and the procedure of passing to the continuum limit).
Under the assumption that $|\vec{m}|\ll|\vec{l}|$, the magnetization $\vec{m}$
can be excluded from the Lagrangian (\ref{Lafm}), and one obtains the
effective Lagrangian depending only on $\vec{l}$; after that step $\vec{l}$
can be regarded as a unit vector, $\vec{l}^{2}=1$.
For example, in the simplest case of an antiferromagnet with only two
(equivalent) atoms in elementary magnetic cell the dynamic part of the
Lagrangian (\ref{Lafm}) can be written as
\begin{equation}
\label{Berry-ml}
\int dV 2\hbar S\vec{m}\cdot(\vec{l}\times \partial\vec{l}/\partial t)\,,
\end{equation}
and the density of the effective Lagrangian takes the form
\begin{equation}
\label{LeffAFM}
{\cal L}=M_{0}^{2} \left\{ {\alpha\over 2c^{2}}
\left({\partial\vec{l}\over\partial t}\right)^{2}
-{\alpha\over2}(\nabla\vec{l})^{2} -\widetilde{w}_{a}(\vec{l})\right\}
+{4\over\gamma\delta}
\vec{H}\cdot\left(\vec{l}\times{\partial\vec{l}\over\partial t} \right)\,,
\end{equation}
where $\widetilde{w}_{a}$ is the anisotropy energy renormalized by the
magnetic field,
\begin{equation}
\label{wa}
\widetilde{w}_{a}=w_{a}+{2\over\delta M_{0}^{2}}(\vec{l}\cdot\vec{H})^{2}\,,
\end{equation}
$\gamma=g\mu_{B}/\hbar$ is the gyromagnetic ratio, and $c={1\over2}\gamma
M_{0}(\alpha\delta)^{1/2}$ is the limiting velocity of spin waves. Using
general phenomenological arguments, one can show \cite{andrmar80} that in case
of arbitrary collinear antiferromagnet the Lagrangian should have the form
similar to (\ref{LeffAFM}).
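To illustrate how the Lagrangian (\ref{LeffAFM}) arises, here is a sketch of
the exclusion of $\vec{m}$ for $\vec{H}=0$ (with the volume $v_{0}$ of the
magnetic elementary cell restored in (\ref{Berry-ml}), so that $2\hbar S/v_{0}
=2M_{0}/\gamma$): the variation $\delta L/\delta\vec{m}=0$ of the sum of
(\ref{Berry-ml}) and the homogeneous-exchange term ${1\over2}\delta M_{0}^{2}
\vec{m}^{2}$ of (\ref{W}) gives
\[
\vec{m}={2\over\gamma\delta M_{0}}\left(\vec{l}\times
{\partial\vec{l}\over\partial t}\right)\,,
\]
and substituting this back, with $(\vec{l}\times\partial_{t}\vec{l})^{2}=
(\partial_{t}\vec{l})^{2}$ for $\vec{l}^{2}=1$, one recovers the kinetic term
${2\over\gamma^{2}\delta}(\partial_{t}\vec{l})^{2}=M_{0}^{2}{\alpha\over2c^{2}}
(\partial_{t}\vec{l})^{2}$ of (\ref{LeffAFM}).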
Other, more complicated interactions can be present in Eq.\ (\ref{W}). In
some AFM materials (which are, strictly speaking, weak ferromagnets) the
so-called Dzyaloshinskii-Moriya (DM) interaction is possible. It can be
described by including the term $D_{ik}m_{i}l_{k}$ under the integration sign
in (\ref{W}), where $D_{ik}$ is some tensor (which is not necessarily
symmetric or antisymmetric). The origin of the DM interaction is rather
nontrivial, and there is a number of ``selection rules'' excluding the
possibility of its existence; in particular, the DM interaction cannot exist (i)
if there is an inversion center interchanging sublattices; (ii) if there is a
translation which interchanges sublattices, i.e. if the magnetic elementary
cell is larger than the elementary cell of the original crystal lattice. It
can be shown \cite{kik90} that presence of the DM interaction can be taken into
account by the substitution
\begin{equation}
\label{HDM}
\vec{H}\mapsto \vec{\widetilde{H}}=\vec{H}-{1\over2} M_{0}\vec{D}
\end{equation}
in the Lagrangian (\ref{LeffAFM}), where the components of vector $\vec{D}$
are defined as $D_{i}= D_{ik}l_{k}$.
If there exists a sublattice-interchanging inversion center, another invariant
may be present in (\ref{W}), namely
$\mu_{i}(\vec{m}\cdot\partial\vec{l}/\partial x_{i})$ (here $\mu_{i}$ are
certain exchange constants). It is very important for the physics of AFM in
one dimension, as we will see later.
\subsection{Spin tunneling in antiferromagnetic nanoparticle}
In case of a small particle one can consider $\vec{m}$ and $\vec{l}$ as being
uniform throughout the particle, i.e.\ as not having any space
dependence. Then, the Lagrangian (\ref{LeffAFM}) takes the form
\begin{eqnarray}
\label{LpartAFM}
L &=& {\hbar NS\over\gamma H_{e}} \left\{ \dot{\theta}^{2}+\sin^{2}\theta
\dot{\varphi}^{2}
+ 2\gamma \dot{\theta}\,(\widetilde{H}_{y}\cos\varphi
-\widetilde{H}_{x}\sin\varphi) \right.\\
&+&\left. 2\gamma\dot{\varphi}\,
[\widetilde{H}_{z}\sin^{2}\theta
-\sin\theta\cos\theta(\widetilde{H}_{y}\sin\varphi
+\widetilde{H}_{x}\cos\varphi)] \right\}- M_{0}^{2}\widetilde{w}_{a}
\,,\nonumber
\end{eqnarray}
where $N$ is the total number of magnetic elementary cells in the particle,
$H_{e}=\delta M_{0}/2$ is the exchange field, the dot denotes differentiation
with respect to time, and we used angular variables for the vector $\vec{l}$,
\[
l_{z}=\cos\theta,\qquad l_{x}+il_{y}=\sin\theta\,e^{i\varphi}\,.
\]
There is another possible effect, typical only for antiferromagnetic
particles: due to the boundary (surface) effects, the number of spins in two
sublattices can differ from each other. In that case the Lagrangian
(\ref{LpartAFM}) will contain the additional term
\begin{equation}
\label{noncomp}
\hbar \nu S (1-\cos\theta)\,\dot{\varphi}\,,
\end{equation}
which is essentially the Berry phase of $\nu$ non-compensated spins. Such a
sublattice decompensation in fact should be present in any ensemble of
nanoparticles, so that $\nu$ has certain statistical variation.
The full Lagrangian (\ref{LpartAFM}) is rather complicated, and for the sake
of clarity we will consider separately the effects of field and DM interaction.
\subsubsection{Tunneling in presence of external magnetic field}
Consider a small AFM particle with easy-axis anisotropy
\[
w_{a} = {1\over2}\beta(l_{y}^{2}+l_{z}^{2})
\]
in external magnetic field $H$ perpendicular to the easy axis.
Then the Euclidean action takes the form
\begin{eqnarray}
\label{AH}
{\cal A}_{\rm eucl}&=&-{\hbar NS\over \gamma H_{e}} \int d\tau \Bigg\{
\left({d\theta\over d\tau}\right)^{2}+\sin^{2}\theta\left({d\varphi \over
d\tau}\right)^{2} +2i\gamma H\sin^{2}\theta{d\varphi\over d\tau}
\nonumber\\
&+&\omega_{0}^{2}\left[\sin^{2}\theta\sin^{2}\varphi
+(1+\gamma^{2} H^{2}/\omega_{0}^{2})\cos^{2}\theta\right]\Bigg\} \\
&+&i\hbar\nu S\int d\tau (1-\cos\theta){d\varphi\over d\tau} \, \nonumber
\end{eqnarray}
where $\tau$ is the imaginary time, and $\omega_{0}={1\over2}\gamma
M_{0}(\delta\beta)^{1/2}$ is the characteristic magnon frequency
($\hbar\omega_{0}$ is the magnon gap).
There are two equivalent states $A$ and $B$ with opposite direction of
$\vec{l}$ along the easy axis $Ox$, and obviously the most preferable
instanton path is given by $\theta=\pi/2$, $\varphi=\varphi(\tau)$. The
instanton solution for $\varphi$ is the same as in case of particle on a
circle, and one-instanton action is
\begin{equation}
\label{AinstH}
{{\cal A}_{0}\over \hbar} = {NS\over\gamma H_{e}}\left( 4\omega_{0}\pm 2\pi i
\gamma H \right) \pm i\pi\nu S \,,
\end{equation}
where $\pm$ signs correspond to instantons and antiinstantons.
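For completeness, a sketch of this calculation: at $\theta=\pi/2$ the
instanton interpolating between $\varphi=0$ and $\varphi=\pi$ obeys the first
integral $d\varphi/d\tau=\pm\omega_{0}\sin\varphi$, i.e., $\cos\varphi=
\mp\tanh(\omega_{0}\tau)$, so that the real part of (\ref{AH}) is
\[
{\hbar NS\over\gamma H_{e}}\int d\tau\left[\left({d\varphi\over d\tau}
\right)^{2}+\omega_{0}^{2}\sin^{2}\varphi\right]
={2\hbar NS\omega_{0}\over\gamma H_{e}}\int_{0}^{\pi}\sin\varphi\,d\varphi
={4\hbar NS\omega_{0}\over\gamma H_{e}}\,,
\]
while the remaining terms are total derivatives and contribute their
prefactors times $\Delta\varphi=\pm\pi$, producing the imaginary parts in
(\ref{AinstH}).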
Thus, the tunneling amplitude $P_{AB}$ is proportional to
\begin{equation}
\label{PH}
\left({4NS\omega_{0}\over\gamma H_{e}}\right)^{1/2}
\exp\left\{-{4NS\omega_{0}\over\gamma H_{e}}\right\} \cos\left\{ \pi\nu S
+2\pi NS(H/H_{e})\right\}\,,
\end{equation}
and the corresponding magnitude of tunneling level splitting (proportional to
$|P_{AB}|$) oscillates with the period $\Delta H=(H_{e}/2NS)$ when changing
the external field. This period $\Delta H$ may be rather small, for typical
values of the exchange field $H_{e}\sim 10^{6}$~Oe and the number of spins in
the particle $N\sim 10^{3}\div10^{4}$ one obtains $\Delta H\sim
10^{2}\div10^{3}$~Oe. The effects of this type were studied in
\cite{DuanGarg94,GolyshevPopkov95}.
The result (\ref{PH}) illustrates also another remarkable
feature: in any experiment probing the response of the ensemble of AFM
nanoparticles at each $H$ there must be only one possible value of splitting
(i.e., only one peak in the low-frequency response) when the spin of magnetic
ions $S$ is integer; but if $S$ is half-integer then, since in any ensemble
$\nu$ arbitrarily takes even and odd values, for approximately one
half of all particles the phase of cosine in (\ref{PH}) is shifted by $\pi/2$,
and there should be {\em two} peaks at each $H$.
It is worthwhile to note that the real part of the one-instanton action, which
enters the exponent in (\ref{PH}), is proportional to $(K/J)^{1/2}$ (where $J$
and $K$ are the exchange and anisotropy constants) while the
corresponding quantity for ferromagnet, according to (\ref{splitFM}), does not
contain the exchange constant and is determined by the rhombicity
$(K'/K)^{1/2}$. One may conclude that tunneling in AFM particles is
easier than in FM; indeed, the characteristic crossover temperature below
which quantum effects dominate over thermal ones, for antiferromagnets is
\begin{equation}
\label{Tafm}
T_{AFM}\propto S(KJ)^{1/2}\,,
\end{equation}
which is much greater than for ferromagnets [cf. Eq.(\ref{Tfm})]; typically
$T_{AFM}$ is about $1\div3$~K.
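This can be seen from the same rough recipe as before (a sketch, with the
particle volume $V=Nv_{0}$): the barrier is $\Delta U\simeq{1\over2}\beta
M_{0}^{2}V$, and from (\ref{AinstH}) ${\cal A}_{0}/\hbar=4NS\omega_{0}/
(\gamma H_{e})=4NS(\beta/\delta)^{1/2}$, so that
\[
T_{AFM}\simeq{\hbar\Delta U\over{\cal A}_{0}}\simeq{\hbar\omega_{0}\over4}\,,
\]
i.e., $T_{AFM}$ is of the order of the magnon gap, which in microscopic terms
scales as $S(KJ)^{1/2}$.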
\subsubsection{Tunneling in presence of the DM interaction}
\label{sec:DM}
Consider the same small AFM particle from the previous subsection, but imagine
that the DM interaction in its simplest form is present, with the energy
given by
\begin{equation}
\label{wd}
w_{d}=d(m_{y}l_{z}-m_{z}l_{y})\,.
\end{equation}
Then the DM interaction leads to the contribution into the Lagrangian
(\ref{LpartAFM}) of the form
\begin{equation}
\label{Ld}
\Delta L_{d}={\hbar NS\over\gamma H_{e}}\cdot 2\gamma H_{D}{d\over
dt}(\sin\theta\cos\varphi)\,,
\end{equation}
where $H_{D}=dM_{0}$ is the so-called Dzyaloshinskii field. This term will
contribute to the imaginary part of the Euclidean action (\ref{AH}), and
as a result the cosine in (\ref{PH}) will be modified as
\begin{equation}
\label{Pd}
\cos\left\{ \pi\nu S
+2\pi NS (H/H_{e}) + 4NS (H_{D}/H_{e})\right\}\,.
\end{equation}
Thus, presence of the DM interaction alone also leads to effective change of
the Berry phase and lifts the degeneracy for odd $\nu$ and half-integer $S$.
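A sketch of the bookkeeping behind this shift: $\Delta L_{d}$ is a total
derivative, and along the instanton path $\sin\theta\cos\varphi$ changes from
$+1$ to $-1$ (at $\theta=\pi/2$, $\varphi:0\to\pi$), so the extra phase of the
amplitude is $(NS/\gamma H_{e})\cdot2\gamma H_{D}\cdot(\mp2)=
\mp4NS(H_{D}/H_{e})$, which is precisely the shift appearing in (\ref{Pd}).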
\section{Spin Tunneling in Topological Magnetic
Nanostructures\protect\footnote{Subsection \protect\ref{sec:rings} was
written together with Vadim Kireev.}}
\label{sec:top}
As we mentioned before, one of the most difficult experimental tasks when
trying to detect the resonance on tunnel-split levels in small particles is
to prepare an ensemble of particles with a very sharp size distribution: even
small fluctuations of size lead to large fluctuations of the tunneling
probability, since they enter the exponent. Preparing such an
ensemble requires high technologies and involves considerable difficulties.
One may think about some other, ``natural'' type of magnetic nanostructures to
observe spin tunneling phenomena in. One nice solution, which has actually
been used in experiment, is to use biologically produced nanoparticles
\cite{Awschalom+92}.
Another possible way, proposed in \cite{ivkol94,ivkol95tun,GalkinaIv95}, is to
use {\em topologically nontrivial magnetic structures:\/} kinks in quasi-1D
materials, vortices and disclinations in 2D, etc. Such objects have required
mesoscopic scale (e.g., the thickness of a domain wall is usually about 100
lattice constants) and, since their shape is determined by the material
constants, they are identical to a high extent (up to a possible inhomogeneity
of the sample).
Here we consider several possible scenarios of tunneling in
topological nanostructures and show that their use has a number of advantages.
\subsection{Tunneling in a Kink of 1D Antiferromagnet}
Consider a one-dimensional two-sublattice antiferromagnet with rhombic
anisotropy described by the Hamiltonian
\begin{equation}
\label{Hafm}
\widehat{H}=J\sum_{i} \vec{S}_{i}\vec{S}_{i+1} +\sum_{i}\big[
K_{1}(S_{i}^{z})^{2} +K_{2}(S_{i}^{y})^{2}\big]\,,
\end{equation}
where $i$ labels sites of the spin chain with the lattice constant $a$,
$K_{1}>K_{2}>0$ are the anisotropy constants (so that $Oz$ is the difficult
axis and $Ox$ is the easy axis), and $J$ is the exchange constant. For
passing to the continuum field description one may introduce vectors $\vec{m}$
and $\vec{l}$ as $\vec{m}_k= (\vec{n}_{2k+1}+\vec{n}_{2k})/2$ and $\vec{l}_k=
(\vec{n}_{2k+1}-\vec{n}_{2k})/2$, where $\vec{n}$ are the unit vectors
describing the direction of spins (the parameters of the corresponding
coherent states; see the discussion in Sect.\ \ref{sec:AFMmodel} above).
These fields live on the lattice with the double spacing $2a$, and it is easy
to see that the energy functional $W=\langle\widehat{H}\rangle$ contains the
term $\vec{m}\cdot\partial_{x}\vec{l}$. Using the equation $\delta
L/\delta\vec{m}=0$, one may express $\vec{m}$ through $\vec{l}$ and its
derivatives and exclude it from the Lagrangian. The effective Lagrangian
takes the following form:
\begin{equation}
L_{\rm eff}= \int {dx\over 2a} \left\{
{\hbar^{2}\over4J}
(\partial_{t}\vec{l})^{2} -JS^{2}a^{2}(\partial_{x}\vec{l})^{2}
-K_{1}S^{2}l_{z}^{2} -K_{2}S^{2}l_{y}^{2}\right\} +L_{\rm top} \,,
\label{Lafm1d}
\end{equation}
which represents a (1+1)-dimensional nonlinear $\sigma$-model with the
so-called topological term
\begin{equation}
\label{Ltop}
L_{\rm top}= {1\over2}\,\hbar S \int dx \,
\vec{l}\cdot(\partial_{x}\vec{l}\times\partial_t\vec{l}) \,.
\end{equation}
It is easy to trace the origin of this term: because of the presence of
$\vec{m}\cdot\partial_{x}\vec{l}$ in the energy, the expression for
$\vec{m}$ contains $\partial_{x}\vec{l}$ which after the substitution into the
Berry phase (\ref{berry}) yields the topological term. In agreement with
general phenomenological result (\ref{LeffAFM}), the Lagrangian (\ref{Lafm1d})
is Lorentz-invariant, with the limiting velocity $c=2JSa/\hbar$.
A stable kink solution corresponds to rotation of vector $\vec{l}$ in the easy
plane $(xy)$:
\begin{equation}
\label{kink}
l_{x}=\sigma'\tanh(x/\Delta),
\quad l_{y}={\sigma\over\cosh(x/\Delta)},\quad
l_{z}=0\,,
\end{equation}
where $\Delta=a(J/K_{2})^{1/2}$ is the characteristic kink thickness, and the
quantities $\sigma$ and $\sigma'$ may take the values $\pm1$. The topological
charge of the kink $\sigma'$ is determined by the boundary conditions and
cannot change in any thermal or tunneling processes. The situation is
different with the quantity $\sigma$ which determines the sign of $\vec{l}$
projection onto the ``intermediate'' axis $Oy$. Two states with $\sigma=\pm1$
are energetically equivalent; change of $\sigma$ is not forbidden by
any conservation laws and describes the reorientation of the macroscopic
number of spins $N\sim\Delta/a\gg1$ ``inside'' a kink, typically $N\sim
70\div100$.
\begin{figure}
\mbox{\hspace{6mm}\psfig{figure=2dinst.ps,width=110mm,angle=-90.}}
\caption{The structure of instanton solution for the problem of tunneling in
a kink of a 1D antiferromagnet. Arrows and circles denote projections of
vector $\vec{l}$ on the easy plane $(xy)$ and on the difficult axis $Oz$,
respectively. Vector $\vec{l}$ forms the angle of about $45^{\circ}$ with
the easy axis $Ox$ on thin solid curves, and with the difficult axis $Oz$
on the circle (the circle radius is approximately $r_{0}$).}
\label{fig:inst1}
\end{figure}
Again, tunneling between the kink states with $\sigma=\pm1$ can be studied
using the instanton formalism. In contrast to the case of a nanoparticle, here
the tunneling between two {\em inhomogeneous\/} states takes place, so that
nontrivial {\em space-time\/} instantons come into play. The instanton
solution $\vec{l}_{0}(x,\tau)$ is now {\em two-dimensional\/} and has the
following properties (see Fig.\ \ref{fig:inst1}):
\begin{eqnarray}
\label{inst-prop}
l_{x}\to\pm\sigma' && \mbox{at\ } x\to\pm\infty \nonumber\\
l_{y}\to\mp\sigma && \mbox{at\ } x=0,\;\tau\to\pm\infty\\
l_{z}=p=\pm1 && \mbox{at\ } x=0,\;\tau=0\,. \nonumber
\end{eqnarray}
Along any closed contour around the instanton center in the Euclidean plane
vector $\vec{l}$ rotates through the angle $2\pi \nu$ in the easy plane
$(xy)$, where $\nu=\sigma\sigma'=\pm1$. Thus, the instanton configuration has
the properties of a magnetic vortex and is characterized by two topological
charges \cite{affleck89rev,ivkol95rev}: vorticity $\nu$ and polarization
$p$. The instanton solution satisfies the equations
\begin{eqnarray}
\label{inst-eq}
&&\vec{\nabla}^2\theta +\sin\theta\cos\theta
[(1+\rho\sin^2\varphi)/\Delta^2-(\vec{\nabla}\varphi)^2] =0,\nonumber\\
&&\vec{\nabla}\cdot(\sin^2\theta\vec{\nabla}\varphi)
-(\rho/\Delta^2)\sin^2\theta \sin\varphi\cos\varphi=0,
\end{eqnarray}
where we have introduced the angular variables $l_{y}+il_{z}=\sin\theta
e^{i\varphi}$, $l_{x}=\cos\theta$, $\rho=(K_{1}-K_{2})/K_{2}$ is the
rhombicity parameter, and $\vec{\nabla}=(\partial/\partial
x_{1},\partial/\partial x_{2})$ is the Euclidean gradient,
$(x_{1},x_{2})\equiv(x,c\tau)$.
Several important properties of the instanton can be obtained without using
the explicit form of the solution. First of all, note that this instanton has
{\em two\/} zero modes which correspond to shifting the position of its centre
along the direction of $\tau$ and $x$ axes, respectively. The physical meaning
of the first mode is the same as for 1D instanton, and the second mode
corresponds to moving the kink center in real space (the kink position in
infinite 1D magnet is not fixed in our continuum model); however, if the kink
center is fixed due to some effects (e.g., because of pinning on the lattice,
or by boundary conditions), so that the eigenfrequency of its oscillations is
comparable with the characteristic magnon frequency, then only one
zero-frequency mode is present.
The Euclidean action ${\cal A}_{\rm eucl}$ can be represented in the form
\begin{eqnarray}
\label{Ainst}
&& {\cal A}_{\rm eucl}={1\over2}S\hbar F +i2\pi S\hbar Q,\quad\mbox{where}
\nonumber\\
&& F={1\over2}\int d^{2}x\,\big[(\vec{\nabla}\theta)^{2}
+\sin^{2}\theta(\vec{\nabla}\varphi)^{2}
+{1\over\Delta^{2}}\cos^{2}\theta\,(1+\rho\sin^{2}\varphi) \big] \nonumber\\
&& Q={1\over4\pi}\int d^{2}x\, \varepsilon_{\alpha\beta}\sin\theta
\partial_{\alpha}\theta \partial_{\beta}\varphi\,.
\end{eqnarray}
The imaginary part of the Euclidean action is in this case completely determined
by the topological term $L_{\rm top}$. The word ``topological'' now becomes
clear, because $Q$ is the homotopical index of mapping of the $(x_{1},x_{2})$
plane onto the sphere $\vec{l}^{2}=1$ (the Pontryagin index, or the winding
number). For uniform boundary conditions at infinity in the $(x_{1},x_{2})$
plane $Q$ can take only integer values, but in our case
$Q=-p\nu/2=\pm{1\over2}$ is half-integer, which is typical for vortices (see,
e.g., \cite{affleck89rev,ivkol95rev}). For a kink with given $\sigma'$ there
are two instanton solutions with the same vorticity $\nu$ and different
polarizations $p$. Thus, the tunneling amplitude is proportional to $\cos(\pi
S)$ and vanishes when the spin $S$ of magnetic ions is half-integer. However,
the degeneracy can be lifted in presence of external magnetic field or the DM
interaction, as we will see below.
We are not able to construct the exact solution of Eqs.\ (\ref{inst-eq}), but
the estimate of the tunneling amplitude in various limiting cases can be
obtained from approximate arguments. For $\rho\ll 1$ the characteristic space
scale of $\varphi$ variation $\Delta/\sqrt{\rho}$ is much greater than the
kink thickness $\Delta$, and the problem can be mapped to one with a finite
number of degrees of freedom (one may introduce the variable $\phi$ having the
meaning of the angle of deviation out of the easy plane ``inside a kink'', so
that the instanton solution can be sought in the form $\phi=\phi(\tau)$), then
it is easy to obtain \cite{ivkol95prl}
\begin{equation}
\label{F-lowrho}
F\simeq 4\rho^{1/2}\quad \mbox{at\ } \rho\ll1\, .
\end{equation}
In the opposite limiting case $\rho\gg 1$ one again has two different length
scales: the kink thickness $\Delta$ and the ``core'' radius
$r_{0}=\Delta(K_{2}/K_{1})^{1/2}$, $r_{0}\ll\Delta$. For $r\ll\Delta$ all
interactions except the exchange one can be neglected, and one may use the
``isotropic'' vortex solution
\begin{eqnarray}
\label{iso}
&& \theta=\theta_{0}(r),\quad \varphi=\nu\chi,\quad\nu=\pm1\,,\nonumber\\
&& {d^{2}\theta_{0}\over dr^{2}} +{1\over r}{d\theta_{0}\over dr}
+\left({1\over\Delta^{2}}
-{\nu^{2}\over r^{2}}\right)\sin\theta_{0}\cos\theta_{0} =0\,,
\end{eqnarray}
where $r=(x_{1}^{2}+x_{2}^{2})^{1/2}$, $\chi=\arctan(x_{2}/x_{1})$ are polar
coordinates in the $(x_{1},x_{2})$ plane. For $r\gg r_{0}$, i.e., far outside
the core, one can approximately assume that
\begin{equation}
\label{far}
\theta={\pi\over2},\quad
\vec{\nabla}^{2}\varphi={\rho\over2\Delta^{2}}\sin2\varphi\,.
\end{equation}
Within a wide range of $r$ (for $r_{0}\ll r\ll \Delta$) the solutions
(\ref{iso}) and (\ref{far}) can be regarded as coinciding, and the integrand
in $F$ is proportional to $1/r^{2}$.
Then, one may divide the integration domain into two parts: $r<R$ and $r>R$,
where $R$ is arbitrary in between $r_{0}$ and $\Delta$. For $r<R$ the solution
(\ref{iso}) may be used, yielding
$F_{r<R}=\pi\ln(\zeta R/r_{0})$ with $\zeta\simeq4.2$ \cite{KosVorMan83}. For
$r>R$, one can use a simple trial function approximately satisfying
(\ref{far}), e.g.,
\begin{equation}
\label{far-app}
\cos\varphi={x_{2}\over r}{1\over\cosh(x/\Delta)},\quad
\sin\varphi={x_{1}\over r}{1\over\cosh(x/\Delta)}\,,
\end{equation}
which yields $F_{r>R}=\pi\ln(\zeta'\Delta/R)$ with $\zeta'\simeq0.1$. Summing
up the two contributions, we obtain
\begin{equation}
\label{F-largerho}
F\simeq\pi\ln(0.42\Delta/r_{0})\quad\mbox{at\ } \rho\gg1\,.
\end{equation}
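Indeed, the arbitrary matching radius $R$ drops out of the sum:
\[
F=\pi\ln(\zeta R/r_{0})+\pi\ln(\zeta'\Delta/R)
=\pi\ln(\zeta\zeta'\,\Delta/r_{0})\,,\qquad
\zeta\zeta'\simeq4.2\times0.1\simeq0.42\,.
\]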
The tunnel splitting of the ``ground state'' level of the kink
\begin{equation}
\label{splitK}
\Gamma\propto
\hbar\omega_{l}\,(FS/2)^{n/2}e^{-(FS/2)}\,|\Phi|\,,
\end{equation}
where $\omega_{l}=2S(JK_{2}\rho)^{1/2}$ is the frequency of the out-of-plane
magnon localized at the kink, $\Phi$ is the factor determined by the imaginary
part of the Euclidean action [in the simplest model $\Phi=\cos(\pi S)$], and
$n$ is the number of zero modes which can be equal to $1$ or $2$ depending on
whether the kink position is fixed, see above. It is easy to estimate the
crossover temperature for the problem of tunneling in a kink, comparing the
exponent in (\ref{splitK}) with $e^{-U_{0}/T}$, where $U_{0}\simeq
2S^{2}(\sqrt{JK_{1}}-\sqrt{JK_{2}})$ is the barrier height; for $\rho\gg1$
(i.e., $K_{1}\gg K_{2}$) and $n=1$ one obtains
\begin{equation}
\label{Tkink}
T_{k}\propto {S(JK_{1})^{1/2}\over\ln(K_{1}/K_{2})}\,,
\end{equation}
which is only logarithmically smaller than the corresponding temperature for a
particle (\ref{Tafm}).
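A sketch of this estimate: for $K_{1}\gg K_{2}$ one has $U_{0}\simeq
2S^{2}(JK_{1})^{1/2}$, while $\Delta/r_{0}=(K_{1}/K_{2})^{1/2}$ in
(\ref{F-largerho}) gives, up to a constant under the logarithm,
\[
{FS\over2}\simeq{\pi S\over4}\ln(K_{1}/K_{2})\,,\qquad
T_{k}\simeq{U_{0}\over FS/2}\propto{S(JK_{1})^{1/2}\over\ln(K_{1}/K_{2})}\,.
\]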
Let us discuss now the behavior of the imaginary part of the Euclidean action
in case of deviations from the simplest model (\ref{Hafm}) for which the
tunneling is prohibited for half-integer $S$. The most simple observation is
that in a spin chain with alternated exchange interaction, when along the
chain the strength of exchange constant alternates as
$J_{1}J_{2}J_{1}J_{2}\cdots$, the topological term (\ref{Ltop}) acquires
additional factor $J_{1}/J_{2}$ (see, e.g.,
\cite{affleck85,affleck-book}), which leads to $\Phi=\cos(\pi S
J_{1}/J_{2})$ and allows tunneling for half-integer $S$. Another way to lift
the degeneracy at half-integer $S$ is to ``switch on'' the DM interaction or
external magnetic field.
Consider the same model (\ref{Hafm}) with the addition of a magnetic field
$\vec{H}$ applied in the easy plane $(xy)$. Presence of the field leads to the
additional contribution to the imaginary part of ${\cal A}_{\rm eucl}$
\begin{eqnarray}
\label{addH}
&& {\cal A}_{\rm eucl}\mapsto {\cal A}_{\rm eucl}+i\hbar Q'\,,\nonumber\\
&& Q'={2S\over a}{H\over H_{e}} \int \vec{n}\cdot
\left(\vec{l}\times{\partial\vec{l}\over\partial x_{2}}\right) d^{2}x \,,
\end{eqnarray}
where $\vec{n}\equiv \vec{H}/H$. The mixed product in (\ref{addH}) can be
rewritten in angular variables as
\[
-\sin\theta\cos\theta\, (n_{x}\cos\varphi+n_{y}\sin\varphi)\,
{\partial\varphi\over \partial x_{2}} +(n_{y}\cos\varphi-n_{x}\sin\varphi)\,
{\partial\theta\over \partial x_{2}}\,.
\]
One may note that $\sin\theta\cos\theta$ and $\partial\theta/\partial x_{2}$
significantly differ from zero
only in the vortex core, and thus the isotropic vortex solution (\ref{iso})
may be used for the calculation of $Q'$. After integration we obtain
\begin{eqnarray}
\label{addQ}
&& Q'=2S{H\over H_{e}}{\Delta\over a} p(A n_{x}+\nu B n_{y})\,,
\nonumber\\
&& A=\int_{0}^{\infty}(dr/\Delta)\sin\theta_{0}\cos\theta_{0},\quad
B=\int_{0}^{\infty}(dr/\Delta)r(d\theta_{0}/dr)\,,
\end{eqnarray}
where $p$ and $\nu$, as earlier, denote the polarization and vorticity of the
instanton solution, and $A,B$ are numerical constants (recall that, according
to (\ref{iso}), the isotropic solution $\theta_{0}$ may depend only on
$r/\Delta$). After performing the summation in $p,\nu$, and with the account
taken of the contribution $Q$ coming from the topological term, the factor
$\Phi$ in (\ref{splitK}) will be modified as
\begin{equation}
\label{PhiH}
\Phi\mapsto\Phi_{H}=\cos\left(2ASn_{x}{H\over H_{e}}{\Delta\over a}\right)
\cos\left(\pi S + 2BSn_{y}{H\over H_{e}}{\Delta\over a}\right)\,,
\end{equation}
which means that for the given geometry only the field component perpendicular
to the easy axis lifts the degeneracy existing for half-integer $S$. Similarly
to the case of a small AFM particle, the tunneling amplitude is an oscillating
function of the external magnetic field $H$, but here the situation is more
complicated because the period of oscillations depends on the field
orientation.
\subsection{Tunneling in Antiferromagnetic Rings with Odd Number of
Spins}
\label{sec:rings}
\begin{figure}
\mbox{\hspace{6mm}\psfig{figure=disloc.ps,width=110mm,angle=-90.}}
\caption{A ``ring'' around the core of dislocation in two-dimensional
antiferromagnet. The dislocation is shown with a dashed line.}
\label{fig:ring}
\end{figure}
Another example of a magnetic nanostructure is a {\em ring\/} formed by
magnetic atoms; such rings may occur in a dislocation core of a 2D crystal as
shown in Fig.\ \ref{fig:ring}, and the characteristic feature of this object
is that the number of atoms in the ring is {\em odd}. Here we consider only
{\em antiferromagnetic\/} rings. In terms of the vector $\vec{l}$ such a ring
is a spin disclination. Let us assume that the magnetic anisotropy is of the
easy-plane type, and all spins lie in the $(xy)$ plane,
\[
\vec{S}_{i}=(-1)^{i}(\vec{e}_{x}\cos\varphi_{i}+\vec{e}_{y}\sin\varphi_{i})\,,
\]
where $\vec{e}_{x,y}$ are the unit vectors along $x,y$. Then there are two
energetically equivalent states of the ring, with $\varphi_{i}=\chi_{i}/2$ and
$\varphi_{i}=-\chi_{i}/2$, where $\chi_{i}$ is the azimuthal coordinate of the
$i$-th spin (let us assume that the ring is a circle of radius $R$). It is
possible to construct the instanton solution which links the two states; in
terms of $\vec{l}$ it can be written as
\begin{eqnarray*}
l_{x}=\cos{\chi\over2},\quad l_{y}=\sin{\chi\over2}\cos\psi,\quad
l_{z}=\sin{\chi\over2}\sin\psi\,, \\
\cos\psi=\pm\tanh(\omega_{0}\tau),\quad
\omega_{0}\simeq {1\over2}\gamma M_{0}(\beta\delta)^{1/2}\,.
\end{eqnarray*}
Calculation shows \cite{kireev} that the tunneling amplitude is proportional
to
\begin{equation}
\label{splitR}
\cos(\pi S)\exp\{-\pi S R/\Delta \},\quad \Delta=(\alpha/\beta)^{1/2}\,,
\end{equation}
i.e., the probability of tunneling is sufficiently large if the radius of the
ring is smaller than the characteristic thickness of the domain wall $\Delta$
(usually $\Delta\sim100$\AA). Again, the tunneling is suppressed for
half-integer $S$, and this can be changed with the help of external magnetic
field. More detailed analysis \cite{kireev} shows that the field $\vec{H}$
should be applied in the easy plane in order to lift the degeneracy, then the
cosine in (\ref{splitR}) will change into
\[
\cos\left(\pi S +\pi^{2}S{H\over4H_{e}}{R\over a}\right)\,,
\]
where $a$ is of the order of the lattice constant. For weak fields the above
expression describes just the Zeeman splitting of the ground state level of a
ring (recall that due to the odd number of spins the ring always has an
uncompensated total spin if $S$ is half-integer).
\subsection{Tunneling in a Magnetic Vortex of 2D Antiferromagnet}
One more example of a magnetic topologically nontrivial structure is {\em
magnetic vortex\/} in quasi-2D easy-plane antiferromagnet. Consider the system
described by the Hamiltonian
\begin{equation}
\label{H2dafm}
\widehat{H}=J\sum_{\langle i,j\rangle} \vec{S}_{i}\cdot\vec{S}_{j}
+K\sum_{i} \big(S^{z}_{i}\big)^{2}\,
\end{equation}
where $K>0$ is the anisotropy constant, and $Oz$ is the difficult axis. In
terms of the angular variables for the antiferromagnetism vector,
$l_{z}=\cos\theta$, $l_{x}+il_{y}=\sin\theta\,e^{i\varphi}$, a vortex
corresponds to the solution
\begin{eqnarray}
\label{vort}
&& \theta=\theta_{0}^{\pm}(r),\quad \varphi=\nu\chi+\varphi_{0}\,,\\
&& \theta_{0}^{\pm}(\infty)=\pi/2,\quad \theta_{0}^{+}(0)=0,\quad
\theta_{0}^{-}(0)=\pi\,,
\end{eqnarray}
where $\theta_{0}$ satisfies the equation from the second line of Eq.\
(\ref{iso}), $x+iy=re^{i\chi}$, and the solutions $\theta_{0}^{+}$ and
$\theta_{0}^{-}$ have the same vorticity $\nu$ but different polarizations
$p=\cos\theta(0)=\pm1$. The vortex states with $p=\pm1$ are energetically
equivalent, and the transition between them corresponds to reorientation of a
macroscopic number of spins $N\sim (\Delta/a)^{2}$, where
$\Delta=a(J/4K)^{1/2}$ is the characteristic radius of the vortex core and
$a$ is the lattice constant. It is worthwhile to remark that such a transition
would be forbidden in ferromagnet because of the conservation of the
$z$-projection of the total spin $S^{z}$.
The instanton solution $\vec{l}(x,y,\tau)$ linking two vortex configurations
$\theta_{0}^{\pm}$ with $\nu=1$ when the imaginary time $\tau$ changes from
$-\infty$ to $+\infty$ is schematically shown in Fig.\ \ref{fig:inst2}. In the
3D Euclidean space $(x,y,\tau)$ it describes a topological configuration of
the hedgehog type and has a singularity at the origin. Such a singularity
means that in a small space region around the origin (roughly within the
distance of about $a$) one has to take into account the change of magnitude of
the sublattice magnetization: the length of the vector $\vec{l}$ has to change
so that $|\vec{l}(0,0,0)|=0$. In this case there are four zero modes, three of
them correspond to translations along $x$, $y$, $\tau$, and the fourth one
corresponds to changing the $\varphi_{0}$ angle. If the position and structure
of the vortex are fixed by some additional interactions, only one zero mode is
left.
\begin{figure}
\mbox{\hspace{6mm}\psfig{figure=3dinst.ps,width=110mm,angle=-90.}}
\caption{The structure of instanton solution for the problem of tunneling in
a vortex. At $\tau\to\pm\infty$ one has $p=-1$ and $p=+1$ vortices,
respectively. The sphere near the origin corresponds to the region where
a hedgehog-type solution is adequate. }
\label{fig:inst2}
\end{figure}
The Euclidean action derived from the Lagrangian of the $\sigma$-model has the
following form
\begin{equation}
\label{Avor}
{\cal A}_{E}= JS^{2}\int\! d\tau\!\int\! d^{2}x \left\{
{1\over c^{2}}\left({\partial\vec{l}\over\partial \tau}\right)^{2}
+(\vec{\nabla}\vec{l})^{2}- (\vec{\nabla}\vec{l}^{(0)})^{2}
+{1\over\Delta^{2}}\big[ l_{z}^{2}-(l_{z}^{(0)})^{2}\big]\right\} \,,
\end{equation}
where $\vec{l}^{(0)}$ describes the vortex solution (\ref{vort}) and $c$
denotes the limiting velocity $c=2JSa/\hbar$. Away from the singularity (for
$\rho\gg a$, $\rho\equiv (c^{2}\tau^{2}+x^{2}+y^{2})^{1/2}$) the condition
$\vec{l}^{2}=1$ holds, and the equations for $\theta,\varphi$ become
\begin{eqnarray}
\label{inst2-eq}
&&\vec{\nabla}^2\theta +\sin\theta\cos\theta
[1/\Delta^2-(\vec{\nabla}\varphi)^2] =0,\nonumber\\
&&\vec{\nabla}\cdot(\sin^2\theta\vec{\nabla}\varphi)=0\,.
\end{eqnarray}
In the region $a\ll \rho\ll\Delta$ this system has an exact centrally
symmetric solution of the hedgehog type:
\begin{equation}
\label{hedgehog}
\cos\theta={c\tau\over\rho}, \quad \tan\varphi={y\over x}\,.
\end{equation}
It can be shown that the contribution of the singularity itself is small and
can be neglected. Dividing the integration domain into two regions $\rho<R$
and $\rho>R$, where $R\ll\Delta$, one can see that the contribution of the
region of small distances $\rho<R$ to the Euclidean action is given by
\begin{equation}
\label{lowrho}
{\cal A}_{E}[\rho<R]=4\pi(JS^{2}/c)R\,.
\end{equation}
For estimating the contribution of the ``large'' distance region we use a
variational procedure with the trial function of the form
\begin{equation}
\label{trial}
\theta(x,y,\tau)=\pi/2 +F(c\tau)[\pi/2-\theta_{0}^{(+)}(r)]\,,
\end{equation}
where $F(c\tau)$ is a ``smeared step function:'' $F\to\pm1$ as
$\tau\to\pm\infty$ and the derivative of $F$ is nonzero in the region of the
thickness $\Delta_{1}$ around $\tau=0$. A simple estimate shows that the
resulting contribution of the region $\rho>R$ is described by
\begin{equation}
\label{farrho}
{\cal A}_{E}[\rho>R]=(2\pi JS^{2}/c)\big[
\xi_{1}\Delta_{1}\ln(\Delta/R)
+\xi_{2}\Delta_{1}+\xi_{3}\Delta^{2}/\Delta_{1} \big]\,,
\end{equation}
where $\xi_{1,2,3}$ are numerical constants of the order of unity. Summing up
(\ref{lowrho}) and (\ref{farrho}) and minimizing
${\cal A}_{E}$ with respect to $\Delta_{1}$ and $R$, we find
$\Delta_{1}\sim R\sim\Delta$. Thus, the total one-instanton Euclidean action
may be estimated as
\begin{equation}
\label{Ainst2}
{\cal A}_{0}=2\pi\xi JS^{2}\Delta/c=\xi\pi\hbar S\Delta/a\,,
\end{equation}
where $\xi\sim 1$.
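A sketch of the minimization: from the sum of (\ref{lowrho}) and
(\ref{farrho}), $\partial{\cal A}_{E}/\partial R=0$ gives $R=\xi_{1}\Delta_{1}/2$,
while $\partial{\cal A}_{E}/\partial\Delta_{1}=0$ gives $\Delta_{1}=
\Delta\left[\xi_{3}/(\xi_{1}\ln(\Delta/R)+\xi_{2})\right]^{1/2}$; up to
logarithmic factors both are of the order of $\Delta$, which leads to
(\ref{Ainst2}).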
Demanding that the tunneling exponent is not too large, e.g., ${\cal
A}_{0}/\hbar<20\div30$, we see that for $S=5/2$ this means $\Delta/a<3\div4$, which
is rather tight; the continuum field approach we used here formally requires
$\Delta\gg a$, but in practice it is still applicable for $\Delta/a\sim2\div3$
\cite{galkina+93}. The crossover temperature $T_{c}\sim S(JK)^{1/2}$ is not
small since it is proportional to $\sqrt{J}$.
\section{Summary, and What is left under the carpet.}
\label{sec:summary}
Let us mention briefly the problems which are closely related to the topic of
this paper but were left out of the discussion, and also those problems which are
not clear at present, in our opinion.
First of all we would like to remark that we did not touch at all {\em
microscopic\/} essentially quantum effects in magnets, e.g., the destruction,
predicted by Haldane, of (quasi-)long-range order in 1D antiferromagnets with
integer spin $S$, caused by quantum fluctuations.
interference are also important for this phenomenon, and its existence is
determined by the presence of topological term in the Lagrangian of
antiferromagnet (see, e.g., the reviews \cite{affleck89rev,affleck-book}). For
small $S$ and weak anisotropy the ground state of 1D antiferromagnetic system
can differ drastically from its classical prototype; e.g., the ground state of
a $S=1$ AFM ring is not sensitive to whether the number of spins is odd or
even and is always unique, and the ground state of a $S={1\over2}$ AFM ring
with odd number of spins is {\em fourfold\/} degenerate \cite{kireev}.
We also did not consider the contribution of tunneling-generated internal
soliton modes to the thermodynamics and response functions of 1D
antiferromagnets, which can lead to interesting effects (see
\cite{ivkol95prl,ivkol95rev,ivkol96jetp,ivkol97}).
Another problem which was ignored in our consideration is the role
of relaxation and thermal fluctuations of different origin. Even at low
temperature the interaction of spins with other crystal subsystems (lattice,
nuclear spins, etc.) may be very important, see
\cite{GargKim91,Garg93,Prokof'evStamp94}. It is clear that stochastic
influence on the dynamics of magnetization from thermal fluctuations leads to
decoherence and suppresses coherent tunneling. Description of this fundamental
problem in any detail goes far beyond the scope of the present lecture, and we
refer the reader to the review by Caldeira and Leggett
\cite{CaldeiraLeggett83}.
One more problem which is unclear from our point of view is a justification of
considering all spins in a small particle as moving coherently (a ``giant
spin'' approximation usually used in treating the MQT problems and also
adopted in the present paper). In fact, the only justification of this
approximation is energetical: if the particle size is much smaller than the
characteristic domain wall thickness, any inhomogeneous perturbation costs
much energy. On the other hand, for the Hamiltonian (\ref{singspinH}) neither
$\widehat{{\mathbf S}^{2}}$ nor $\widehat{S}^{z}$ are good quantum numbers,
which means presence of magnons (deviations from collinear order) in the
ground state.
Our lecture was devoted first of all to the fundamental aspects of MQT
considered as a beautiful physical phenomenon which is rather difficult to
observe. But technological development can lead to the situation when this
phenomenon will become practically important. The present tendency of
increasing the density of recording in the development of information storage
devices means decrease of the elementary magnetic scale corresponding to one
bit of information, and one may expect that quantum effects will determine the
``natural limit'' of miniaturization in future.
\section*{Acknowledgements}
Sections \ref{sec:DM} and \ref{sec:rings} were written together with V.~Kireev
whom the authors wish to thank for cooperation. This work was partially
supported by the grant 2.4/27 ``Tunnel'' from the Ukrainian Ministry of
Science.
\section{Introduction}
Recently, the theory of existence and uniqueness of solutions of linear and
nonlinear fractional differential equations has attracted the attention of
many authors; see, for example, \cite{agar1}-\cite{wang2} and the references
therein. Many physical systems are better described by integral
boundary conditions. Integral boundary conditions are encountered in various
applications such as population dynamics, blood flow models, chemical
engineering and cellular systems. Moreover, boundary value problems with
integral boundary conditions constitute a very interesting and important class
of problems. They include two-point, three-point, multi-point and nonlocal
boundary value problems as special cases. The existing literature mainly deals
with first order and second order boundary value problems and there are a few
papers on third order problems.
Shahed \cite{el} studied the existence and nonexistence of positive solutions
of the following nonlinear fractional two-point boundary value problem:
\begin{align*}
\mathfrak{D}_{0^{+}}^{\alpha}u(t)+\lambda a\left( t\right) f\left(
u(t)\right) & =0,\text{ \ }0<t<1;\text{ }2<\alpha<3,\\
u\left( 0\right) & =u^{\prime}\left( 0\right) =u^{\prime}\left(
1\right) =0,
\end{align*}
where $\mathfrak{D}_{0^{+}}^{\alpha}$ denotes the Caputo derivative of
fractional order $\alpha,$ $\lambda$ is a positive parameter and $a:\left(
0,1\right) \rightarrow\left[ 0,\infty\right) $ is a continuous function.
In \cite{ahmadntoy1}, Ahmad and Ntouyas studied a boundary value problem of
nonlinear fractional differential equations of order $\alpha\in\left(
2,3\right] $ with anti-periodic type integral boundary conditions
\begin{align*}
\mathfrak{D}_{0^{+}}^{\alpha}u(t) & =f\left( t,u(t)\right) ;\text{
\ }0<t<T;\text{ }2<\alpha\leq3,\\
u^{\left( j\right) }(0)-\lambda_{j}u^{\left( j\right) }(T) & =\mu_{j}
{\displaystyle\int\limits_{0}^{T}}
g_{j}(s,u\left( s\right) )ds,\ \ j=0,1,2,
\end{align*}
where $\mathfrak{D}_{0^{+}}^{\alpha}$ denotes the Caputo derivative of
fractional order $\alpha$, $u^{\left( j\right) }$ denotes $j$-th derivative
of $u$, $f,g_{0},g_{1},g_{2}:\left[ 0,T\right] \times\mathbb{R}\rightarrow\mathbb{R}$ are given continuous functions and $\lambda_{j},\mu
_{j}\in\mathbb{R}$ ($\lambda_{j}\neq1$). The same problem for fractional
differential inclusions is considered in \cite{ahmadntoy2}.
Ahmad and Nieto \cite{ahmad6} studied existence and uniqueness results for the
following general three point fractional boundary value problem involving a
nonlinear fractional differential equation of order $\alpha\in\left(
m-1,m\right] $
\begin{align*}
\mathfrak{D}_{0^{+}}^{\alpha}u(t) & =f\left( t,u(t)\right) ;\text{
\ }0<t<1,\ m\geq2,\\
u\left( 0\right) & =u^{\prime}\left( 0\right) =...=u^{\left(
m-2\right) }\left( 0\right) =0,\ \ u\left( 1\right) =\lambda u\left(
\eta\right) .
\end{align*}
However, very little work has been done on the case when the nonlinearity $f$
depends on the fractional derivative of the unknown function. Su and Zhang
\cite{su} and Rehman et al. \cite{rehman} studied the existence and uniqueness of
solutions for nonlinear two-point and three-point fractional
boundary value problems of this type.
In this paper, we investigate the existence (and uniqueness) of solutions for
the nonlinear fractional differential equation of order $\alpha\in\left(
2,3\right] $
\begin{equation}
\mathfrak{D}_{0^{+}}^{\alpha}u(t)=f\left( t,u(t),\mathfrak{D}_{0^{+}}^{\beta_{1}}u(t),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(t)\right) ;\text{ \ }0\leq
t\leq T;\text{ }2<\alpha\leq3 \label{p1}
\end{equation}
with the three point and integral boundary condition
\begin{equation}
\left\{
\begin{array}
[c]{l}
a_{0}u(0)+b_{0}u(T)=\lambda_{0}
{\displaystyle\int\limits_{0}^{T}}
g_{0}(s,u\left( s\right) )ds,\ \ \\
a_{1}\mathfrak{D}_{0^{+}}^{\beta_{1}}u(\eta)+b_{1}\mathfrak{D}_{0^{+}}^{\beta_{1}}u(T)=\lambda_{1}
{\displaystyle\int\limits_{0}^{T}}
g_{1}(s,u\left( s\right) )ds,\ \ \ 0<\beta_{1}\leq1,\ \ 0<\eta<T,\\
a_{2}\mathfrak{D}_{0^{+}}^{\beta_{2}}u(\eta)+b_{2}\mathfrak{D}_{0^{+}}^{\beta_{2}}u(T)=\lambda_{2}
{\displaystyle\int\limits_{0}^{T}}
g_{2}(s,u\left( s\right) )ds,\ \ \ \ 1<\beta_{2}\leq2,
\end{array}
\right. \label{p2}
\end{equation}
where $\mathfrak{D}_{0^{+}}^{\alpha}$ denotes the Caputo fractional derivative
of order $\alpha$, $f,g_{0},g_{1},g_{2}$ are continuous functions.
\section{Preliminaries}
Let us recall some basic definitions \cite{sam}-\cite{kilbas}.
\begin{definition}
\label{Def:1}The Riemann--Liouville fractional integral of order $\alpha$ for
a continuous function $f:[0,\infty)\rightarrow\mathbb{R}$ is defined as
\[
I_{0^{+}}^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int\limits_{0}^{t}
(t-s)^{\alpha-1}f(s)ds,\text{ \ }\alpha>0
\]
provided the integral exists.
\end{definition}
\begin{definition}
For an $n$-times continuously differentiable function $f:[0,\infty)\rightarrow
\mathbb{R}$ \ the Caputo derivative of fractional order $\alpha$ is defined as
\[
\mathfrak{D}_{0^{+}}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int
\limits_{0}^{t}(t-s)^{n-\alpha-1}f^{(n)}(s)ds;\text{ \ }n-1<\alpha
<n,\ n=\left[ \alpha\right] +1,
\]
where $\left[ \alpha\right] $ denotes the integral part of the real number
$\alpha.$
\end{definition}
\begin{lemma}
\label{Lem:1}Let $\alpha>0.$ Then the differential equation $\mathfrak{D}_{0^{+}}^{\alpha}f(t)=0$ has solution
\[
f(t)=k_{0}+k_{1}t+k_{2}t^{2}+...+k_{n-1}t^{n-1}
\]
and
\[
I_{0^{+}}^{\alpha}\mathfrak{D}_{0^{+}}^{\alpha}f(t)=f(t)+k_{0}+k_{1}t+k_{2}t^{2}+...+k_{n-1}t^{n-1},
\]
here $k_{i}\in\mathbb{R}$, $i=0,1,2,...,n-1$, $n=\left[ \alpha\right]
+1.$
\end{lemma}
The Caputo fractional derivative of order $n-1<\alpha<n$ of $t^{\gamma}$ is
given as
\begin{equation}
\mathfrak{D}_{0^{+}}^{\alpha}t^{\gamma}=\left\{
\begin{array}
[c]{l}
\dfrac{\Gamma\left( \gamma+1\right) }{\Gamma\left( \gamma-\alpha+1\right)
}t^{\gamma-\alpha},\ \ \gamma\in\mathbb{N}\ \ \text{and }\gamma\geq
n\ \ \text{or\ \ }\gamma\notin\mathbb{N}\text{\ \ and\ }\gamma>n-1,\\
0,\ \ \ \gamma\in\left\{ 0,1,...,n-1\right\} .
\end{array}
\right. \label{d1}
\end{equation}
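For instance, for $2<\alpha<3$ (so that $n=3$) formula (\ref{d1}) gives
$\mathfrak{D}_{0^{+}}^{\alpha}t^{2}=0$ and $\mathfrak{D}_{0^{+}}^{\alpha}
t^{3}=\dfrac{\Gamma(4)}{\Gamma(4-\alpha)}t^{3-\alpha}$; in particular, the
polynomial terms appearing in Lemma \ref{Lem:1} are annihilated by
$\mathfrak{D}_{0^{+}}^{\alpha}$.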
Assume that $a_{i},b_{i},\lambda_{i}\in\mathbb{R},$ $0<\eta<T,$ $\beta_{0}=0,$
$0<\beta_{1}\leq1$, $1<\beta_{2}\leq2$ and
\[
a_{0}+b_{0}\neq0,\ \ a_{1}\eta^{1-\beta_{1}}+b_{1}T^{1-\beta_{1}}\neq0,\ \ a_{i}\eta^{2-\beta_{i}}+b_{i}T^{2-\beta_{i}}\neq0.
\]
For convenience, we set
\begin{align*}
\mu^{\beta_{1}} & :=\frac{\Gamma(3-\beta_{1})}{2\left( a_{1}\eta^{2-\beta_{1}}+b_{1}T^{2-\beta_{1}}\right) },\ \mu^{\beta_{2}}:=\frac{\Gamma(3-\beta_{2})}{2\left( a_{2}\eta^{2-\beta_{2}}+b_{2}T^{2-\beta_{2}}\right) },\ \ \ \ \nu^{\beta_{1}}:=\frac{\Gamma(2-\beta_{1})}{a_{1}\eta^{1-\beta_{1}}+b_{1}T^{1-\beta_{1}}},\ \\
\omega_{0} & :=\dfrac{1}{a_{0}+b_{0}},\ \ \ \omega_{1}\left( t\right)
:=\nu^{\beta_{1}}\left( \dfrac{b_{0}}{a_{0}+b_{0}}T-t\right) ,\ \ \ \\
\omega_{2}\left( t\right) & :=\dfrac{b_{0}T^{2}}{a_{0}+b_{0}}\mu^{\beta_{2}}-\dfrac{b_{0}T}{a_{0}+b_{0}}\nu^{\beta_{1}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}+\nu^{\beta_{1}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}t-\mu^{\beta_{2}}t^{2}.
\end{align*}
\begin{lemma}
For any $f,g_{0},g_{1},g_{2}\in C\left( \left[ 0,T\right] ;\mathbb{R}\right) $, the unique solution of the fractional boundary value problem
\begin{gather}
\ \ \ \ \ \ \mathfrak{D}_{0^{+}}^{\alpha}u(t)=f(t);\text{ \ }0\leq t\leq
T,\text{ }2<\alpha\leq3,\label{e1}\\
\left\{
\begin{array}
[c]{l}
a_{0}u(0)+b_{0}u(T)=\lambda_{0}
{\displaystyle\int\limits_{0}^{T}}
g_{0}(s)ds,\\
a_{1}\mathfrak{D}_{0^{+}}^{\beta_{1}}u(\eta)+b_{1}\mathfrak{D}_{0^{+}}^{\beta_{1}}u(T)=\lambda_{1}
{\displaystyle\int\limits_{0}^{T}}
g_{1}(s)ds,\ \ \ \ 0<\eta<T,\ \ 0<\beta_{1}\leq1,\\
a_{2}\mathfrak{D}_{0^{+}}^{\beta_{2}}u(\eta)+b_{2}\mathfrak{D}_{0^{+}}^{\beta_{2}}u(T)=\lambda_{2}
{\displaystyle\int\limits_{0}^{T}}
g_{2}(s)ds,\ \ \ \ 1<\beta_{2}\leq2
\end{array}
\right. \ \label{e2}
\end{gather}
is given by
\begin{align*}
u\left( t\right) & =
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}f(s)ds-
{\displaystyle\sum\limits_{i=0}^{2}}
\omega_{i}\left( t\right) b_{i}
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}f(s)ds\\
& -
{\displaystyle\sum\limits_{i=1}^{2}}
\omega_{i}\left( t\right) a_{i}
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}f(s)ds+
{\displaystyle\sum\limits_{i=0}^{2}}
\omega_{i}\left( t\right) \lambda_{i}
{\displaystyle\int\limits_{0}^{T}}
g_{i}(s)ds.
\end{align*}
\end{lemma}
\begin{proof}
By Lemma \ref{Lem:1}, for $2<\alpha\leq3$ the general solution of the equation
$\mathfrak{D}_{0^{+}}^{\alpha}u(t)=f(t)$ can be written as
\begin{equation}
u(t)=\frac{1}{\Gamma(\alpha)}
{\displaystyle\int\limits_{0}^{t}}
(t-s)^{\alpha-1}f(s)ds-k_{0}-k_{1}t-k_{2}t^{2}, \label{ss1}
\end{equation}
where $k_{0},k_{1},k_{2}\in\mathbb{R}$ are arbitrary constants. Moreover, by
formula (\ref{d1}), the derivatives of orders $\beta_{1}$ and $\beta_{2}$ are as
follows:
\begin{align*}
\mathfrak{D}_{0^{+}}^{\beta_{1}}u(t) & =I^{\alpha-\beta_{1}}f(t)-k_{1}\frac{t^{1-\beta_{1}}}{\Gamma(2-\beta_{1})}-2k_{2}\frac{t^{2-\beta_{1}}}{\Gamma(3-\beta_{1})},\\
\mathfrak{D}_{0^{+}}^{\beta_{2}}u(t) & =I^{\alpha-\beta_{2}}f(t)-2k_{2}\frac{t^{2-\beta_{2}}}{\Gamma(3-\beta_{2})}.
\end{align*}
Using boundary conditions (\ref{e2}), we get the following algebraic system of
equations for $k_{0},k_{1},k_{2}$
\begin{align*}
-\left( a_{0}+b_{0}\right) k_{0}-b_{0}Tk_{1}-b_{0}T^{2}k_{2} &
=\lambda_{0}\int\limits_{0}^{T}g_{0}(s)ds-b_{0}I_{0^{+}}^{\alpha}f(T),\\
-\frac{a_{1}\eta^{1-\beta_{1}}+b_{1}T^{1-\beta_{1}}}{\Gamma(2-\beta_{1})}k_{1}-2\frac{a_{1}\eta^{2-\beta_{1}}+b_{1}T^{2-\beta_{1}}}{\Gamma(3-\beta_{1})}k_{2} & =\lambda_{1}\int\limits_{0}^{T}g_{1}(s)ds-a_{1}I_{0^{+}}^{\alpha-\beta_{1}}f(\eta)-b_{1}I_{0^{+}}^{\alpha-\beta_{1}}f(T),\\
-2\frac{a_{2}\eta^{2-\beta_{2}}+b_{2}T^{2-\beta_{2}}}{\Gamma(3-\beta_{2})}k_{2} & =\lambda_{2}\int\limits_{0}^{T}g_{2}(s)ds-a_{2}I_{0^{+}}^{\alpha-\beta_{2}}f(\eta)-b_{2}I_{0^{+}}^{\alpha-\beta_{2}}f(T).
\end{align*}
Solving the above system of equations for $k_{0},k_{1},k_{2}$, we get the
following
\begin{align*}
k_{2} & =b_{2}\mu^{\beta_{2}}I_{0^{+}}^{\alpha-\beta_{2}}f(T)+a_{2}\mu^{\beta_{2}}I_{0^{+}}^{\alpha-\beta_{2}}f(\eta)-\lambda_{2}\mu^{\beta_{2}}\int\limits_{0}^{T}g_{2}(s)ds,\\
k_{1} & =b_{1}\nu^{\beta_{1}}I_{0^{+}}^{\alpha-\beta_{1}}f(T)+a_{1}\nu^{\beta_{1}}I_{0^{+}}^{\alpha-\beta_{1}}f(\eta)-\lambda_{1}\nu^{\beta_{1}}\int\limits_{0}^{T}g_{1}(s)ds\\
& -b_{2}\nu^{\beta_{1}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}I_{0^{+}}^{\alpha-\beta_{2}}f(T)-a_{2}\nu^{\beta_{1}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}I_{0^{+}}^{\alpha-\beta_{2}}f(\eta)+\lambda_{2}\nu^{\beta_{1}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}\int\limits_{0}^{T}g_{2}(s)ds,\\
k_{0} & =\dfrac{b_{0}}{a_{0}+b_{0}}I_{0^{+}}^{\alpha}f(T)-\dfrac{\lambda_{0}}{a_{0}+b_{0}}\int\limits_{0}^{T}g_{0}(s)ds\\
& -\dfrac{b_{0}b_{1}\nu^{\beta_{1}}T}{a_{0}+b_{0}}I_{0^{+}}^{\alpha-\beta_{1}}f(T)-\dfrac{b_{0}a_{1}\nu^{\beta_{1}}T}{a_{0}+b_{0}}I_{0^{+}}^{\alpha-\beta_{1}}f(\eta)+\dfrac{b_{0}\lambda_{1}\nu^{\beta_{1}}T}{a_{0}+b_{0}}\int\limits_{0}^{T}g_{1}(s)ds\\
& +\dfrac{b_{0}b_{2}\nu^{\beta_{1}}T}{a_{0}+b_{0}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}I_{0^{+}}^{\alpha-\beta_{2}}f(T)+\dfrac{b_{0}a_{2}\nu^{\beta_{1}}T}{a_{0}+b_{0}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}I_{0^{+}}^{\alpha-\beta_{2}}f(\eta)-\dfrac{b_{0}\lambda_{2}\nu^{\beta_{1}}T}{a_{0}+b_{0}}\dfrac{\mu^{\beta_{2}}}{\mu^{\beta_{1}}}\int\limits_{0}^{T}g_{2}(s)ds\\
& -\dfrac{b_{0}b_{2}\mu^{\beta_{2}}T^{2}}{a_{0}+b_{0}}I_{0^{+}}^{\alpha-\beta_{2}}f(T)-\dfrac{b_{0}a_{2}\mu^{\beta_{2}}T^{2}}{a_{0}+b_{0}}I_{0^{+}}^{\alpha-\beta_{2}}f(\eta)+\dfrac{b_{0}\lambda_{2}\mu^{\beta_{2}}T^{2}}{a_{0}+b_{0}}\int\limits_{0}^{T}g_{2}(s)ds.
\end{align*}
Inserting $k_{0},k_{1},k_{2}$ into (\ref{ss1}) we get the desired
representation for the solution of (\ref{e1})-(\ref{e2}).
\end{proof}
\begin{remark}
\label{Rem:1}The Green function of the BVP is defined by
\[
G(t;s)=\left\{
\begin{array}
[c]{c}
-\dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}+G_{0}(t;s),\text{ \ \ }0\leq s\leq
t\leq T,\\
G_{0}(t;s),\text{ \ \ \ \ }0\leq t\leq s\leq T,
\end{array}
\right.
\]
where
\begin{align*}
G_{0}(t;s) & =
{\displaystyle\sum\limits_{i=0}^{2}}
\omega_{i}\left( t\right) b_{i}\dfrac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}+
{\displaystyle\sum\limits_{i=1}^{2}}
\omega_{i}\left( t\right) a_{i}\dfrac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}\chi_{\left( 0,\eta\right) }\left( s\right) ,\\
\chi_{\left( a,b\right) }\left( s\right) & :=\left\{
\begin{array}
[c]{c}
1,\ \ \ s\in\left( a,b\right) ,\\
0,\ \ \ s\notin\left( a,b\right) .
\end{array}
\right.
\end{align*}
\end{remark}
\begin{remark}
For $\alpha=3,\beta_{1}=1,\beta_{2}=2$ and $\eta=0,$ the Green function of
(\ref{e1})-(\ref{e2}) can be written as follows:
\[
G(t;s)=\left\{
\begin{array}
[c]{c}
-\dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}+G_{0}(t;s),\text{ \ \ }0\leq s\leq
t\leq T,\\
G_{0}(t;s),\text{ \ \ \ \ }0\leq t\leq s\leq T,
\end{array}
\right.
\]
where
\begin{align*}
& G_{0}(t;s)=\dfrac{b_{0}}{a_{0}+b_{0}}\dfrac{(T-s)^{\alpha-1}}{\Gamma(\alpha)}+\left( -\frac{b_{0}T}{a_{0}+b_{0}}\frac{b_{1}}{a_{1}+b_{1}}+\frac{b_{1}}{a_{1}+b_{1}}t\right) \dfrac{(T-s)^{\alpha-2}}{\Gamma(\alpha-1)}\\
& +\left( \frac{b_{0}}{a_{0}+b_{0}}\frac{b_{1}}{a_{1}+b_{1}}\frac{b_{2}}{a_{2}+b_{2}}T-\frac{b_{0}T^{2}}{a_{0}+b_{0}}\frac{b_{2}}{2\left( a_{2}+b_{2}\right) }-\frac{2b_{1}}{a_{1}+b_{1}}\frac{b_{2}}{2\left( a_{2}+b_{2}\right) }t+\frac{b_{2}}{2\left( a_{2}+b_{2}\right) }t^{2}\right) \dfrac{(T-s)^{\alpha-3}}{\Gamma(\alpha-2)}.
\end{align*}
Moreover, the case
\[
a_{0}=1,b_{0}=0,a_{1}=0,b_{1}=1,a_{2}=1,b_{2}=0
\]
is investigated in \cite{ait}. In this case
\[
G(t;s)=\left\{
\begin{array}
[c]{c}
-\dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}+t\dfrac{(T-s)^{\alpha-2}}{\Gamma(\alpha-1)},\text{ \ \ }0\leq s\leq t\leq T,\\
t\dfrac{(T-s)^{\alpha-2}}{\Gamma(\alpha-1)},\text{ \ \ \ \ }0\leq t\leq s\leq
T.
\end{array}
\right.
\]
\end{remark}
\section{Existence and uniqueness results}
In this section we state and prove an existence and uniqueness result for the
fractional BVP (\ref{p1})-(\ref{p2}) by using the Banach fixed-point theorem.
We study our problem in the space
\[
C_{\beta}\left( \left[ 0,T\right] ;\mathbb{R}\right) :=\left\{ v\in
C\left( \left[ 0,T\right] ;\mathbb{R}\right) :\mathfrak{D}_{0^{+}}^{\beta_{1}}v,\ \mathfrak{D}_{0^{+}}^{\beta_{2}}v\in C\left( \left[
0,T\right] ;\mathbb{R}\right) \right\}
\]
equipped with the norm
\[
\left\Vert v\right\Vert _{\beta}:=\left\Vert v\right\Vert _{C}+\left\Vert
\mathfrak{D}_{0^{+}}^{\beta_{1}}v\right\Vert _{C}+\left\Vert \mathfrak{D}_{0^{+}}^{\beta_{2}}v\right\Vert _{C},
\]
where $\left\Vert \cdot\right\Vert _{C}$ is the sup norm in $C\left( \left[
0,T\right] ;\mathbb{R}\right) $.
The following notations, formulae and estimates will be used throughout the
paper:
\begin{align*}
\mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{1}\left( t\right) & =-\dfrac{\nu^{\beta_{1}}t^{1-\beta_{1}}}{\Gamma(2-\beta_{1})},\ \ \ \mathfrak{D}_{0^{+}}^{\beta_{2}}\omega_{1}\left( t\right) =0,\\
\mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{2}\left( t\right) & =\dfrac{\nu^{\beta_{1}}\mu^{\beta_{2}}t^{1-\beta_{1}}}{\mu^{\beta_{1}}\Gamma(2-\beta_{1})}-2\dfrac{\mu^{\beta_{2}}t^{2-\beta_{1}}}{\Gamma(3-\beta_{1})},\ \ \mathfrak{D}_{0^{+}}^{\beta_{2}}\omega_{2}\left( t\right) =-2\dfrac{\mu^{\beta_{2}}t^{2-\beta_{2}}}{\Gamma(3-\beta_{2})},
\end{align*}
\begin{align*}
\left\vert \omega_{0}\right\vert & =\dfrac{1}{\left\vert a_{0}+b_{0}\right\vert }=:\rho_{0},\ \ \ \left\vert \omega_{1}\left( t\right) \right\vert \leq\left\vert \nu^{\beta_{1}}\right\vert \left( \left\vert \omega_{0}\right\vert \left\vert b_{0}\right\vert +1\right) T=:\rho_{1},\\
\left\vert \omega_{2}\left( t\right) \right\vert & \leq\dfrac{\left\vert b_{0}\right\vert \left\vert \mu^{\beta_{2}}\right\vert }{\left\vert a_{0}+b_{0}\right\vert }T^{2}+\dfrac{\left\vert b_{0}\right\vert \left\vert \nu^{\beta_{1}}\right\vert }{\left\vert a_{0}+b_{0}\right\vert }\dfrac{\left\vert \mu^{\beta_{2}}\right\vert }{\left\vert \mu^{\beta_{1}}\right\vert }T+\dfrac{\left\vert \nu^{\beta_{1}}\right\vert \left\vert \mu^{\beta_{2}}\right\vert }{\left\vert \mu^{\beta_{1}}\right\vert }T+\left\vert \mu^{\beta_{2}}\right\vert T^{2}=:\rho_{2},\\
\widetilde{\rho}_{0} & :=0,\ \ \left\vert \mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{1}\left( t\right) \right\vert \leq\dfrac{\left\vert \nu^{\beta_{1}}\right\vert T^{1-\beta_{1}}}{\Gamma(2-\beta_{1})}=:\widetilde{\rho}_{1},\ \ \ \left\vert \mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{2}\left( t\right) \right\vert \leq\dfrac{\left\vert \mu^{\beta_{2}}\right\vert \left\vert \nu^{\beta_{1}}\right\vert T^{1-\beta_{1}}}{\left\vert \mu^{\beta_{1}}\right\vert \Gamma(2-\beta_{1})}+2\dfrac{\left\vert \mu^{\beta_{2}}\right\vert T^{2-\beta_{1}}}{\Gamma(3-\beta_{1})}=:\widetilde{\rho}_{2},\\
\widehat{\rho}_{0} & :=\widehat{\rho}_{1}:=0,\ \ \ \left\vert \mathfrak{D}_{0^{+}}^{\beta_{2}}\omega_{2}\left( t\right) \right\vert \leq2\dfrac{\left\vert \mu^{\beta_{2}}\right\vert T^{2-\beta_{2}}}{\Gamma(3-\beta_{2})}=:\widehat{\rho}_{2},
\end{align*}
\begin{gather*}
\Delta_{0}:=\dfrac{T^{\alpha-\tau}}{\Gamma(\alpha)}\left( \frac{1-\tau}{\alpha-\tau}\right) ^{1-\tau}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left( \left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}+\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\right) \left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right) ^{1-\tau},\\
\Delta_{1}:=\dfrac{T^{\alpha-\beta_{1}-\tau}}{\Gamma(\alpha-\beta_{1})}\left( \frac{1-\tau}{\alpha-\beta_{1}-\tau}\right) ^{1-\tau}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left( \left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}+\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\right) \left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right) ^{1-\tau},\\
\Delta_{2}:=\dfrac{T^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}\left( \frac{1-\tau}{\alpha-\beta_{2}-\tau}\right) ^{1-\tau}+\widehat{\rho}_{2}\left( \left\vert b_{2}\right\vert \dfrac{T^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}+\left\vert a_{2}\right\vert \dfrac{\eta^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}\right) \left( \frac{1-\tau}{\alpha-\beta_{2}-\tau}\right) ^{1-\tau}.
\end{gather*}
\begin{theorem}
\label{Thm:uniq}Assume that:
\begin{enumerate}
\item[(H$_{1}$)] The function $f:\left[ 0,T\right] \times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ is jointly continuous.
\item[(H$_{2}$)] There exists a function $l_{f}\in L^{\frac{1}{\tau}}\left(
\left[ 0,T\right] ;\mathbb{R}^{+}\right) $ with $\tau\in\left(
0,\min(1,\alpha-\beta_{2})\right) $ such that
\[
\left\vert f\left( t,u_{1},u_{2},u_{3}\right) -f\left( t,v_{1},v_{2},v_{3}\right) \right\vert \leq l_{f}\left( t\right) \left( \left\vert
u_{1}-v_{1}\right\vert +\left\vert u_{2}-v_{2}\right\vert +\left\vert
u_{3}-v_{3}\right\vert \right) ,
\]
for each $\left( t,u_{1},u_{2},u_{3}\right) ,\left( t,v_{1},v_{2},v_{3}\right) \in\left[ 0,T\right] \times\mathbb{R}\times\mathbb{R}\times\mathbb{R}.$
\item[(H$_{3}$)] The functions $g_{i}:\left[ 0,T\right] \times\mathbb{R}\rightarrow\mathbb{R}$ are jointly continuous and there exist $l_{g_{i}}\in
L^{1}\left( \left[ 0,T\right] ,\mathbb{R}^{+}\right) $ such that
\[
\left\vert g_{i}\left( t,u\right) -g_{i}\left( t,v\right) \right\vert \leq
l_{g_{i}}\left( t\right) \left\vert u-v\right\vert ,\ i=0,1,2,
\]
for each $\left( t,u\right) ,\left( t,v\right) \in\left[ 0,T\right]
\times\mathbb{R}.$
\end{enumerate}
If
\begin{equation}
\left( \Delta_{0}+\Delta_{1}+\Delta_{2}\right) \left\Vert l_{f}\right\Vert
_{1/\tau}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert
_{1}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert _{1}+\widehat{\rho}_{2}\left\vert \lambda_{2}\right\vert
\left\Vert l_{g_{2}}\right\Vert _{1}<1, \label{cc1}
\end{equation}
then the problem (\ref{p1})-(\ref{p2}) has a unique solution on $\left[
0,T\right] $.
\end{theorem}
\begin{proof}
In order to transform the BVP (\ref{p1})-(\ref{p2}) into a fixed point
problem, we consider the operator $\mathfrak{F}:C_{\beta}\left( \left[
0,T\right] ;\mathbb{R}\right) \rightarrow C_{\beta}\left( \left[
0,T\right] ;\mathbb{R}\right) $ which is defined by
\begin{align}
\left( \mathfrak{F}u\right) \left( t\right) & =
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\omega_{i}\left( t\right) b_{i}
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\omega_{i}\left( t\right) a_{i}
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds-
{\displaystyle\sum\limits_{i=0}^{2}}
\omega_{i}\left( t\right) \lambda_{i}
{\displaystyle\int\limits_{0}^{T}}
g_{i}(s,u\left( s\right) )ds, \label{ff1}
\end{align}
and take its $\beta_{1}$-th and $\beta_{2}$-th fractional derivatives to get
\begin{align}
\mathfrak{D}_{0^{+}}^{\beta_{1}}\left( \mathfrak{F}u\right) \left( t\right) & =
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-\beta_{1}-1}}{\Gamma(\alpha-\beta_{1})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{i}\left( t\right) b_{i}
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{i}\left( t\right) a_{i}
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds-
{\displaystyle\sum\limits_{i=1}^{2}}
\mathfrak{D}_{0^{+}}^{\beta_{1}}\omega_{i}\left( t\right) \lambda_{i}
{\displaystyle\int\limits_{0}^{T}}
g_{i}(s,u\left( s\right) )ds, \label{ff2}
\end{align}
and
\begin{align}
\mathfrak{D}_{0^{+}}^{\beta_{2}}\left( \mathfrak{F}u\right) \left( t\right) & =
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-\beta_{2}-1}}{\Gamma(\alpha-\beta_{2})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds\nonumber\\
& +\mathfrak{D}_{0^{+}}^{\beta_{2}}\omega_{2}\left( t\right) b_{2}
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{2}-1}}{\Gamma(\alpha-\beta_{2})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds\nonumber\\
& +\mathfrak{D}_{0^{+}}^{\beta_{2}}\omega_{2}\left( t\right) a_{2}
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{2}-1}}{\Gamma(\alpha-\beta_{2})}f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))ds-\mathfrak{D}_{0^{+}}^{\beta_{2}}\omega_{2}\left( t\right) \lambda_{2}
{\displaystyle\int\limits_{0}^{T}}
g_{2}(s,u\left( s\right) )ds. \label{ff3}
\end{align}
Clearly, due to $f,g_{0},g_{1},g_{2}$ being jointly continuous, the
expressions (\ref{ff1})-(\ref{ff3}) are well defined. It is obvious that a
fixed point of the operator $\mathfrak{F}$ is a solution of the problem
(\ref{p1})-(\ref{p2}). To show the existence and uniqueness of the solution of
(\ref{e1})-(\ref{e2}) we use the Banach fixed point theorem. To this end, we
show that $\mathfrak{F}$ is a contraction:
\begin{align}
& \left\vert \left( \mathfrak{F}u\right) \left( t\right) -\left(
\mathfrak{F}v\right) \left( t\right) \right\vert \nonumber\\
& \leq
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\left\vert \omega_{i}\left( t\right) \right\vert \left\vert b_{i}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\left\vert \omega_{i}\left( t\right) \right\vert \left\vert a_{i}\right\vert
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\left\vert \omega_{i}\left( t\right) \right\vert \left\vert \lambda_{i}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\left\vert g_{i}(s,u\left( s\right) )-g_{i}(s,v\left( s\right) )\right\vert ds\nonumber\\
& \leq\left\Vert l_{f}\right\Vert _{1/\tau}\dfrac{T^{\alpha-\tau}}{\Gamma(\alpha)}\left( \frac{1-\tau}{\alpha-\tau}\right) ^{1-\tau}\left\Vert
u-v\right\Vert _{\beta}\nonumber\\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left[ \left\Vert l_{f}\right\Vert _{1/\tau}\left( \left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}+\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\right) \left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right) ^{1-\tau}+\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert _{1}\right] \left\Vert u-v\right\Vert _{\beta}\nonumber\\
& =\left( \Delta_{0}\left\Vert l_{f}\right\Vert _{1/\tau}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert
_{1}\right) \left\Vert u-v\right\Vert _{\beta}. \label{f1}
\end{align}
On the other hand,
\begin{align}
& \left\vert \mathfrak{D}_{0+}^{\beta_{1}}\left( \mathfrak{F}u\right)
\left( t\right) -\mathfrak{D}_{0+}^{\beta_{1}}\left( \mathfrak{F}v\right)
\left( t\right) \right\vert \nonumber\\
& \leq
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-\beta_{1}-1}}{\Gamma(\alpha-\beta_{1})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\left\vert \mathfrak{D}_{0+}^{\beta_{1}}\omega_{i}\left( t\right) \right\vert \left\vert b_{i}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\left\vert \mathfrak{D}_{0+}^{\beta_{1}}\omega_{i}\left( t\right) \right\vert \left\vert a_{i}\right\vert
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\left\vert \mathfrak{D}_{0+}^{\beta_{1}}\omega_{i}\left( t\right) \right\vert \left\vert \lambda_{i}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\left\vert g_{i}(s,u\left( s\right) )-g_{i}(s,v\left( s\right) )\right\vert ds\nonumber\\
& \leq\dfrac{T^{\alpha-\beta_{1}-\tau}}{\Gamma(\alpha-\beta_{1})}\left(
\frac{1-\tau}{\alpha-\beta_{1}-\tau}\right) ^{1-\tau}\left\Vert
l_{f}\right\Vert _{1/\tau}\left\Vert u-v\right\Vert _{\beta}\nonumber\\
& +
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left[ \left\Vert l_{f}\right\Vert _{1/\tau}\left(
\left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}+\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\right) \left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right) ^{1-\tau}+\left\vert \lambda_{i}\right\vert
\left\Vert l_{g_{i}}\right\Vert _{1}\right] \left\Vert u-v\right\Vert
_{\beta}\nonumber\\
& =\left( \Delta_{1}\left\Vert l_{f}\right\Vert _{1/\tau}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert _{1}\right) \left\Vert u-v\right\Vert _{\beta}. \label{f11}
\end{align}
Similarly,
\begin{align}
& \left\vert \mathfrak{D}_{0+}^{\beta_{2}}\left( \mathfrak{F}u\right)
\left( t\right) -\mathfrak{D}_{0+}^{\beta_{2}}\left( \mathfrak{F}v\right)
\left( t\right) \right\vert \nonumber\\
& \leq
{\displaystyle\int\limits_{0}^{t}}
\dfrac{(t-s)^{\alpha-\beta_{2}-1}}{\Gamma(\alpha-\beta_{2})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +\left\vert \mathfrak{D}_{0+}^{\beta_{2}}\omega_{2}\left( t\right) \right\vert \left\vert b_{2}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{2}-1}}{\Gamma(\alpha-\beta_{2})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +\left\vert \mathfrak{D}_{0+}^{\beta_{2}}\omega_{2}\left( t\right) \right\vert \left\vert a_{2}\right\vert
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{2}-1}}{\Gamma(\alpha-\beta_{2})}\left\vert f(s,u\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}u(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}u(s))-f(s,v\left( s\right) ,\mathfrak{D}_{0^{+}}^{\beta_{1}}v(s),\mathfrak{D}_{0^{+}}^{\beta_{2}}v(s))\right\vert ds\nonumber\\
& +\left\vert \mathfrak{D}_{0+}^{\beta_{2}}\omega_{2}\left( t\right) \right\vert \left\vert \lambda_{2}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\left\vert g_{2}(s,u\left( s\right) )-g_{2}(s,v\left( s\right) )\right\vert ds\nonumber\\
& \leq\dfrac{T^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}\left(
\frac{1-\tau}{\alpha-\beta_{2}-\tau}\right) ^{1-\tau}\left\Vert
l_{f}\right\Vert _{1/\tau}\left\Vert u-v\right\Vert _{\beta}\nonumber\\
& +\widehat{\rho}_{2}\left[ \left( \left\vert b_{2}\right\vert \frac{T^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}+\left\vert a_{2}\right\vert
\frac{\eta^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}\right) \left(
\frac{1-\tau}{\alpha-\beta_{2}-\tau}\right) ^{1-\tau}\left\Vert
l_{f}\right\Vert _{1/\tau}+\left\vert \lambda_{2}\right\vert
\left\Vert l_{g_{2}}\right\Vert _{1}\right] \left\Vert u-v\right\Vert _{\beta}\nonumber\\
& =\left( \Delta_{2}\left\Vert l_{f}\right\Vert _{1/\tau}+\widehat{\rho}_{2}\left\vert \lambda_{2}\right\vert \left\Vert l_{g_{2}}\right\Vert
_{1}\right) \left\Vert u-v\right\Vert _{\beta}. \label{f12}
\end{align}
Here, in the estimations (\ref{f1})-(\ref{f12}), we used the H\"{o}lder inequality:
\begin{align*}
{\displaystyle\int\limits_{0}^{t}}
l_{f}\left( s\right) \left( t-s\right) ^{\alpha-m-1}ds & \leq\left(
{\displaystyle\int\limits_{0}^{t}}
\left( l_{f}\left( s\right) \right) ^{\frac{1}{\tau}}ds\right) ^{\tau
}\left(
{\displaystyle\int\limits_{0}^{t}}
\left( \left( t-s\right) ^{\alpha-m-1}\right) ^{\frac{1}{1-\tau}}ds\right) ^{1-\tau}\\
& =\left\Vert l_{f}\right\Vert _{L^{1/\tau}}\left( \frac{1-\tau}{\alpha-m-\tau}\right) ^{1-\tau}t^{\alpha-m-\tau},\ \text{\ \ if }
0<\tau<\alpha-m.
\end{align*}
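The last equality can also be checked numerically. The following short Python
sketch (added by us for the reader's convenience; the parameter values are
arbitrary illustrations, and $l_{f}$ is taken constant for simplicity)
compares a direct quadrature of the left-hand side with the closed-form
right-hand side:
\begin{verbatim}
# Numerical sanity check of the Hoelder estimate (illustrative values only).
from scipy.integrate import quad

alpha, m, tau, t = 2.5, 1.2, 0.3, 1.0   # requires 0 < tau < alpha - m
l_f = 0.5                               # a constant l_f for simplicity

# left-hand side: int_0^t l_f * (t-s)^(alpha-m-1) ds
lhs, _ = quad(lambda s: l_f * (t - s) ** (alpha - m - 1.0), 0.0, t)

# right-hand side: ||l_f||_{L^{1/tau}} ((1-tau)/(alpha-m-tau))^(1-tau)
#                  * t^(alpha-m-tau)
norm_lf = quad(lambda s: l_f ** (1.0 / tau), 0.0, t)[0] ** tau
rhs = norm_lf * ((1.0 - tau) / (alpha - m - tau)) ** (1.0 - tau) \
      * t ** (alpha - m - tau)

print(lhs <= rhs)   # True; here lhs ~ 0.385 and rhs ~ 0.390
\end{verbatim}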
From (\ref{f1})-(\ref{f12}), it follows that
\begin{align*}
& \left\Vert \left( \mathfrak{F}u\right) -\left( \mathfrak{F}v\right)
\right\Vert _{\beta}\\
& \leq\left[ \left( \Delta_{0}+\Delta_{1}+\Delta_{2}\right) \left\Vert
l_{f}\right\Vert _{1/\tau}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert
_{1}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert _{1}+\widehat{\rho}_{2}\left\vert \lambda_{2}\right\vert
\left\Vert l_{g_{2}}\right\Vert _{1}\right] \left\Vert u-v\right\Vert
_{\beta}.
\end{align*}
Consequently, by (\ref{cc1}), $\mathfrak{F}$ is a contraction mapping. As a
consequence of the Banach fixed point theorem, we deduce that $\mathfrak{F}$
has a fixed point which is a solution of the problem (\ref{p1})-(\ref{p2}).
\end{proof}
\begin{remark}
In the assumption (H$_{2}$), if $l_{f}$ is a constant, then the condition
(\ref{cc1}) can be replaced by
\begin{align*}
& \dfrac{l_{f}T^{\alpha}}{\Gamma(\alpha+1)}+l_{f}
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left( \left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}}}{\Gamma(\alpha-\beta_{i}+1)}+\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}}}{\Gamma(\alpha-\beta_{i}+1)}\right) \\
& +\dfrac{l_{f}T^{\alpha-\beta_{1}}}{\Gamma(\alpha-\beta_{1}+1)}+l_{f}
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left( \left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}}}{\Gamma(\alpha-\beta_{i}+1)}+\left\vert a_{i}\right\vert
\dfrac{\eta^{\alpha-\beta_{i}}}{\Gamma(\alpha-\beta_{i}+1)}\right) \\
& +\dfrac{l_{f}T^{\alpha-\beta_{2}}}{\Gamma(\alpha-\beta_{2}+1)}+l_{f}\widehat{\rho}_{2}\left( \left\vert b_{2}\right\vert \dfrac
{T^{\alpha-\beta_{2}}}{\Gamma(\alpha-\beta_{2}+1)}+\left\vert a_{2}\right\vert
\dfrac{\eta^{\alpha-\beta_{2}}}{\Gamma(\alpha-\beta_{2}+1)}\right) \\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert
_{1}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert _{1}+\widehat{\rho}_{2}\left\vert \lambda_{2}\right\vert
\left\Vert l_{g_{2}}\right\Vert _{1}<1.
\end{align*}
\end{remark}
\section{Existence results}
To prove the existence of solutions of the BVP (\ref{p1})-(\ref{p2}), we recall
the following known nonlinear alternative.
\begin{theorem}
\label{Thm:na}(Nonlinear alternative) Let $X$ be a Banach space, let $B$ be a
closed, convex subset of $X$, let $W$ be an open subset of $B$ with $0\in W$.
Suppose that $F:\overline{W}\rightarrow B$ is a continuous and compact map.
Then either (a) $F$ has a fixed point in $\overline{W}$, or (b) there exist
$x\in\partial W$ (the boundary of $W$) and $\lambda\in\left( 0,1\right) $
with $x=\lambda F\left( x\right) .$
\end{theorem}
\begin{theorem}
\label{Thm:exis}Assume that:
\begin{enumerate}
\item[(H$_{4}$)] there exist non-decreasing functions $\varphi:\left[
0,\infty\right) \rightarrow\left[ 0,\infty\right) ,\ \psi_{i}:\left[ 0,\infty\right)
\rightarrow\left[ 0,\infty\right) $ and functions $l_{f}\in L^{\frac{1}{\tau}}\left( \left[ 0,T\right] ,\mathbb{R}^{+}\right) $, $l_{g_{i}}\in
L^{1}\left( \left[ 0,T\right] ,\mathbb{R}^{+}\right) $ with $\tau
\in\left( 0,\min(1,\alpha-\beta_{2})\right) $ such that
\begin{align*}
\left\vert f\left( t,u,v,w\right) \right\vert & \leq l_{f}\left(
t\right) \varphi\left( \left\vert u\right\vert +\left\vert v\right\vert
+\left\vert w\right\vert \right) ,\\
\left\vert g_{i}\left( t,u\right) \right\vert & \leq l_{g_{i}}\left(
t\right) \psi_{i}\left( \left\vert u\right\vert \right) ,
\end{align*}
$i=0,1,2$, for all $t\in\left[ 0,T\right] $ and $u,v,w\in\mathbb{R}$;
\item[(H$_{5}$)] there exists a constant $K>0$ such that
\[
\frac{K}{\varphi\left( K\right) \left\Vert l_{f}\right\Vert _{1/\tau}\left(
\Delta_{0}+\Delta_{1}+\Delta_{2}\right) +
{\displaystyle\sum\limits_{i=0}^{2}}
\left( \rho_{i}+\widetilde{\rho}_{i}+\widehat{\rho}_{i}\right) \left\vert
\lambda_{i}\right\vert \psi_{i}\left( K\right) \left\Vert l_{g_{i}}\right\Vert _{1}}>1.
\]
\end{enumerate}
Then the problem (\ref{p1})-(\ref{p2}) has at least one solution on $\left[
0,T\right] .$
\end{theorem}
\begin{proof}
Let $B_{r}:=\left\{ u\in C_{\beta}\left( \left[ 0,T\right] ;\mathbb{R}\right) :\left\Vert u\right\Vert _{\beta}\leq r\right\} $.
Step 1: We show that the operator $\mathfrak{F}:C_{\beta}\left( \left[
0,T\right] ;\mathbb{R}\right) \rightarrow C_{\beta}\left( \left[
0,T\right] ;\mathbb{R}\right) $ defined by (\ref{ff1}) maps $B_{r}$ into a
bounded set.
For each $u\in B_{r}$, we have
\begin{align*}
\left\vert \left( \mathfrak{F}u\right) \left( t\right) \right\vert &
\leq\dfrac{\varphi\left( r\right) }{\Gamma(\alpha)}
{\displaystyle\int\limits_{0}^{t}}
(t-s)^{\alpha-1}\left\vert l_{f}(s)\right\vert ds\\
& +\varphi\left( r\right)
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert b_{i}\right\vert \frac{1}{\Gamma(\alpha-\beta_{i})}
{\displaystyle\int\limits_{0}^{T}}
(T-s)^{\alpha-\beta_{i}-1}\left\vert l_{f}(s)\right\vert ds\\
& +\varphi\left( r\right)
{\displaystyle\sum\limits_{i=1}^{2}}
\rho_{i}\left\vert a_{i}\right\vert \frac{1}{\Gamma(\alpha-\beta_{i})}
{\displaystyle\int\limits_{0}^{\eta}}
(\eta-s)^{\alpha-\beta_{i}-1}\left\vert l_{f}(s)\right\vert ds\\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \psi_{i}\left( r\right)
{\displaystyle\int\limits_{0}^{T}}
\left\vert l_{g_{i}}(s)\right\vert ds.
\end{align*}
By the H\"{o}lder inequality, we have
\begin{align*}
\left\vert \left( \mathfrak{F}u\right) \left( t\right) \right\vert &
\leq\varphi\left( r\right) \left\Vert l_{f}\right\Vert _{1/\tau}\left(
\dfrac{T^{\alpha-\tau}}{\Gamma(\alpha)}\left( \frac{1-\tau}{\alpha-\tau
}\right) ^{1-\tau}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right)
^{1-\tau}\right. \\
& +\left.
{\displaystyle\sum\limits_{i=1}^{2}}
\rho_{i}\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\left( \frac{1-\tau}{\alpha-\beta_{i}-\tau
}\right) ^{1-\tau}\right) +
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \psi_{i}\left( r\right) \left\Vert
l_{g_{i}}\right\Vert _{1}\\
& =\varphi\left( r\right) \left\Vert l_{f}\right\Vert _{1/\tau}\Delta_{0}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \psi_{i}\left( r\right) \left\Vert
l_{g_{i}}\right\Vert _{1}.
\end{align*}
In a similar manner,
\begin{align*}
\left\vert \mathfrak{D}_{0+}^{\beta_{1}}\left( \mathfrak{F}u\right) \left(
t\right) \right\vert & \leq\varphi\left( r\right) \left\Vert
l_{f}\right\Vert _{1/\tau}\left( \dfrac{T^{\alpha-\beta_{1}-\tau}}{\Gamma(\alpha-\beta_{1})}\left( \frac{1-\tau}{\alpha-\beta_{1}-\tau}\right)
^{1-\tau}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert b_{i}\right\vert \dfrac{T^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right) ^{1-\tau}\right. \\
& +\left.
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert a_{i}\right\vert \dfrac{\eta^{\alpha-\beta_{i}-\tau}}{\Gamma(\alpha-\beta_{i})}\left( \frac{1-\tau}{\alpha-\beta_{i}-\tau}\right) ^{1-\tau}\right) +
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \psi_{i}\left(
r\right) \left\Vert l_{g_{i}}\right\Vert _{1}\\
& =\varphi\left( r\right) \left\Vert l_{f}\right\Vert _{1/\tau}\Delta_{1}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \psi_{i}\left(
r\right) \left\Vert l_{g_{i}}\right\Vert _{1},
\end{align*}
and
\begin{align*}
\left\vert \mathfrak{D}_{0+}^{\beta_{2}}\left( \mathfrak{F}u\right) \left(
t\right) \right\vert & \leq\varphi\left( r\right) \left\Vert
l_{f}\right\Vert _{1/\tau}\left( \dfrac{T^{\alpha-\beta_{2}-\tau}}{\Gamma(\alpha-\beta_{2})}\left( \frac{1-\tau}{\alpha-\beta_{2}-\tau}\right)
^{1-\tau}+\widehat{\rho}_{2}\frac{T^{\alpha-\beta_{2}-\tau}\left\vert
b_{2}\right\vert }{\Gamma(\alpha-\beta_{2})}\left( \frac{1-\tau}{\alpha
-\beta_{2}-\tau}\right) ^{1-\tau}\right. \\
& +\left. \widehat{\rho}_{2}\frac{\eta^{\alpha-\beta_{2}-\tau}\left\vert
a_{2}\right\vert }{\Gamma(\alpha-\beta_{2})}\left( \frac{1-\tau}{\alpha
-\beta_{2}-\tau}\right) ^{1-\tau}\right) +\widehat{\rho}_{2}\left\vert
\lambda_{2}\right\vert \psi_{2}\left( r\right) \left\Vert l_{g_{2}}\right\Vert _{1}\\
& =\varphi\left( r\right) \left\Vert l_{f}\right\Vert _{1/\tau}\Delta
_{2}+\widehat{\rho}_{2}\left\vert \lambda_{2}\right\vert \psi_{2}\left(
r\right) \left\Vert l_{g_{2}}\right\Vert _{1}.
\end{align*}
Thus
\[
\left\Vert \mathfrak{F}u\right\Vert _{\beta}\leq\varphi\left( r\right) \left\Vert l_{f}\right\Vert _{1/\tau}\left(
\Delta_{0}+\Delta_{1}+\Delta_{2}\right) +
{\displaystyle\sum\limits_{i=0}^{2}}
\left( \rho_{i}+\widetilde{\rho}_{i}+\widehat{\rho}_{i}\right) \left\vert
\lambda_{i}\right\vert \psi_{i}\left( r\right) \left\Vert l_{g_{i}}\right\Vert _{1}.
\]
Step 2: The families $\left\{ \mathfrak{F}u:u\in B_{r}\right\} $, $\left\{
\mathfrak{D}_{0+}^{\beta_{1}}\left( \mathfrak{F}u\right) :u\in
B_{r}\right\} $, $\left\{ \mathfrak{D}_{0+}^{\beta_{2}}\left(
\mathfrak{F}u\right) :u\in B_{r}\right\} $ are equicontinuous.
Because of the continuity of $\omega_{i}\left( t\right) $ and assumption
(H$_{4}$), we have
\begin{align*}
\left\vert \left( \mathfrak{F}u\right) \left( t_{2}\right) -\left(
\mathfrak{F}u\right) \left( t_{1}\right) \right\vert & \leq\dfrac
{1}{\Gamma(\alpha)}\varphi\left( r\right) \int_{t_{1}}^{t_{2}}(t_{2}-s)^{\alpha-1}l_{f}\left( s\right) ds\\
& +\dfrac{1}{\Gamma(\alpha)}\varphi\left( r\right) \int_{0}^{t_{1}}\left(
(t_{2}-s)^{\alpha-1}-(t_{1}-s)^{\alpha-1}\right) l_{f}\left( s\right) ds\\
& +\varphi\left( r\right)
{\displaystyle\sum\limits_{i=0}^{2}}
\left\vert \omega_{i}\left( t_{2}\right) -\omega_{i}\left( t_{1}\right)
\right\vert \left\vert b_{i}\right\vert
{\displaystyle\int\limits_{0}^{T}}
\frac{(T-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}l_{f}\left(
s\right) ds\\
& +\varphi\left( r\right)
{\displaystyle\sum\limits_{i=1}^{2}}
\left\vert \omega_{i}\left( t_{2}\right) -\omega_{i}\left( t_{1}\right)
\right\vert \left\vert a_{i}\right\vert
{\displaystyle\int\limits_{0}^{\eta}}
\frac{(\eta-s)^{\alpha-\beta_{i}-1}}{\Gamma(\alpha-\beta_{i})}l_{f}\left(
s\right) ds\\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\left\vert \omega_{i}\left( t_{2}\right) -\omega_{i}\left( t_{1}\right)
\right\vert \left\vert \lambda_{i}\right\vert \psi_{i}\left( r\right)
\left\Vert l_{g_{i}}\right\Vert _{1}\\
& \rightarrow0\ \ \ \ \ \text{as\ \ \ }t_{2}\rightarrow t_{1}.
\end{align*}
Therefore, $\left\{ \mathfrak{F}u:u\in B_{r}\right\} $ is equicontinuous.
Similarly, we may prove that $\left\{ \mathfrak{D}_{0+}^{\beta_{1}}\left(
\mathfrak{F}u\right) :u\in B_{r}\right\} $ and $\left\{ \mathfrak{D}_{0+}^{\beta_{2}}\left( \mathfrak{F}u\right) :u\in B_{r}\right\} $ are equicontinuous.
Hence, by the Arzel\`{a}--Ascoli theorem, the sets $\left\{ \mathfrak{F}u:u\in
B_{r}\right\} $, $\left\{ \mathfrak{D}_{0+}^{\beta_{1}}\left(
\mathfrak{F}u\right) :u\in B_{r}\right\} ,$ $\left\{ \mathfrak{D}_{0+}^{\beta_{2}}\left( \mathfrak{F}u\right) :u\in B_{r}\right\} $ are
relatively compact in $C\left( \left[ 0,T\right] ;\mathbb{R}\right) .$
Therefore, $\mathfrak{F}\left( B_{r}\right) $ is a relatively compact subset
of $C_{\beta}\left( \left[ 0,T\right] ;\mathbb{R}\right) .$ Consequently,
the operator $\mathfrak{F}$ is compact.
Step 3: $\mathfrak{F}$ has a fixed point in $\overline{W}=\left\{ u\in C_{\beta
}\left( \left[ 0,T\right] ;\mathbb{R}\right) :\left\Vert u\right\Vert
_{\beta}<K\right\} .$
We let $u=\lambda\left( \mathfrak{F}u\right) $ for some $0<\lambda<1$. Then
\begin{align*}
\left\Vert u\right\Vert _{\beta} & =\left\Vert \lambda\left( \mathfrak{F}u\right) \right\Vert _{\beta}\leq\varphi\left( \left\Vert u\right\Vert
_{\beta}\right) \left\Vert l_{f}\right\Vert _{1/\tau}\left( \Delta
_{0}+\Delta_{1}+\Delta_{2}\right) \\
& +
{\displaystyle\sum\limits_{i=0}^{2}}
\left( \rho_{i}+\widetilde{\rho}_{i}+\widehat{\rho}_{i}\right) \left\vert
\lambda_{i}\right\vert \psi_{i}\left( \left\Vert u\right\Vert _{\beta
}\right) \left\Vert l_{g_{i}}\right\Vert _{1}.
\end{align*}
In other words,
\[
\frac{\left\Vert u\right\Vert _{\beta}}{\varphi\left( \left\Vert u\right\Vert
_{\beta}\right) \left\Vert l_{f}\right\Vert _{1/\tau}\left( \Delta
_{0}+\Delta_{1}+\Delta_{2}\right) +
{\displaystyle\sum\limits_{i=0}^{2}}
\left( \rho_{i}+\widetilde{\rho}_{i}+\widehat{\rho}_{i}\right) \left\vert
\lambda_{i}\right\vert \psi_{i}\left( \left\Vert u\right\Vert _{\beta
}\right) \left\Vert l_{g_{i}}\right\Vert _{1}}\leq1.
\]
By assumption (H$_{5}$), there exists $K>0$ such that
$\left\Vert u\right\Vert _{\beta}\neq K$. The operator $\mathfrak{F}:\overline{W}\rightarrow C_{\beta}\left( \left[ 0,T\right] ;\mathbb{R}\right) $ is continuous and compact. From Theorem \ref{Thm:na}, we can deduce
that $\mathfrak{F}$ has a fixed point in $\overline{W}$.
\end{proof}
\begin{remark}
Notice that analogues of Theorems \ref{Thm:uniq} and \ref{Thm:exis} for the
case $f(t,u,v,w)=f(t,u)$ were considered in \cite{ahmadntoy2}. Thus our
results are a generalization of those in \cite{ahmadntoy2}.
\end{remark}
\begin{remark}
Since the number $\left( \alpha-\beta_{2}-1\right) $ can be negative, the
function $(T-s)^{\alpha-\beta_{2}-1}$ need not belong to $L^{\infty}\left( \left[
0,T\right] ,\mathbb{R}\right) $. That is why in Theorems \ref{Thm:uniq} and
\ref{Thm:exis} it is assumed that $l_{f}\in L^{\frac{1}{\tau}}$ with $\tau
\in(0,\min(1,\alpha-\beta_{2}))$.
\end{remark}
\section{Examples}
\textbf{Example 1.} Consider the following boundary value problem for a
fractional differential equation:
\begin{equation}
\left\{
\begin{array}
[c]{c}
\mathfrak{D}_{0^{+}}^{5/2}u\left( t\right) =l_{f}\left( \dfrac{\left\vert
u\left( t\right) \right\vert }{1+\left\vert u\left( t\right) \right\vert
}+\dfrac{\left\vert \mathfrak{D}_{0^{+}}^{1/2}u\left( t\right) \right\vert
}{1+\left\vert \mathfrak{D}_{0^{+}}^{1/2}u\left( t\right) \right\vert }+\tan^{-1}\left( \mathfrak{D}_{0^{+}}^{3/2}u\left( t\right) \right)
\right) ,\ \ \ 0\leq t\leq1,\\
u\left( 0\right) +u\left( 1\right) =
{\displaystyle\int_{0}^{1}}
\dfrac{u\left( s\right) }{\left( 1+s\right) ^{2}}ds,\\
\mathfrak{D}_{0^{+}}^{1/2}u\left( \frac{1}{10}\right) +\mathfrak{D}_{0^{+}}^{1/2}u\left( 1\right) =\dfrac{1}{2}
{\displaystyle\int_{0}^{1}}
\left( \dfrac{e^{s}u\left( s\right) }{1+2e^{s}}+\dfrac{1}{2}\right) ds,\\
\mathfrak{D}_{0^{+}}^{3/2}u\left( \frac{1}{10}\right) +\mathfrak{D}_{0^{+}}^{3/2}u\left( 1\right) =\dfrac{1}{3}
{\displaystyle\int_{0}^{1}}
\left( \dfrac{u\left( s\right) }{1+e^{s}}+\dfrac{3}{4}\right) ds.
\end{array}
\right. \label{ex1}
\end{equation}
Here,
\begin{align*}
\alpha & =5/2,\ \beta_{1}=1/2,\ \beta_{2}=3/2,\ T=1,\ a_{0}=b_{0}=a_{1}=b_{1}=a_{2}=b_{2}=1,\\
\eta & =\frac{1}{10},\ \ \lambda_{0}=1,\ \ \lambda_{1}=\frac{1}{2},\ \ \lambda_{2}=\frac{1}{3},\ \ l_{g_{0}}=l_{g_{1}}=l_{g_{2}}=1,
\end{align*}
and
\begin{align*}
f\left( t,u,v,w\right) & :=l_{f}\left( \dfrac{\left\vert u\right\vert }{1+\left\vert u\right\vert }+\dfrac{\left\vert v\right\vert }{1+\left\vert v\right\vert }+\tan^{-1}\left( w\right) \right) ,\\
g_{0}\left( t,u\right) & :=\dfrac{u}{\left( 1+t\right) ^{2}},\ \ \ g_{1}\left( t,u\right) :=\dfrac{e^{t}u}{1+2e^{t}}+\frac{1}{2},\\
g_{2}\left( t,u\right) & :=\dfrac{u}{1+e^{t}}+\frac{3}{4}.
\end{align*}
Since $1.77<\Gamma(\frac{1}{2})<1.78$, $0.88<\Gamma(\frac{3}{2})<0.89$, $1.32<\Gamma(\frac{5}{2})<1.33$ and $3.32<\Gamma(\frac{7}{2})<3.33$, simple
calculations show that
\begin{align*}
\Delta_{0} & =2.34,\ \ \Delta_{1}=0.19,\ \ \Delta_{2}=0.15,\\
\rho_{0} & =0.5,\ \ \ \rho_{1}=1.01,\ \ \ \rho_{2}=1.2,\\
\tilde{\rho}_{0} & =0,\ \ \tilde{\rho}_{1}=0.76,\ \ \ \tilde{\rho}_{2}=0.9,\\
\hat{\rho}_{0} & =\hat{\rho}_{1}=0,\text{ \ }\hat{\rho}_{2}=0.51.
\end{align*}
Furthermore,
\begin{align*}
& \left( \Delta_{0}+\Delta_{1}+\Delta_{2}\right) \left\Vert l_{f}\right\Vert _{1/\tau}+
{\displaystyle\sum\limits_{i=0}^{2}}
\rho_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert
_{1}+
{\displaystyle\sum\limits_{i=1}^{2}}
\widetilde{\rho}_{i}\left\vert \lambda_{i}\right\vert \left\Vert l_{g_{i}}\right\Vert _{1}+\widehat{\rho}_{2}\left\vert \lambda_{2}\right\vert
\left\Vert l_{g_{2}}\right\Vert _{1}\\
& <2.7l_{f}+0.75.
\end{align*}
Therefore, if we choose
\[
l_{f}<\frac{0.25}{2.7},
\]
then the left-hand side of (\ref{cc1}) is less than $1$. Thus, all the
assumptions of Theorem \ref{Thm:uniq} are satisfied. Hence, the
problem (\ref{ex1}) has a unique solution on $[0,1]$.
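For completeness, the arithmetic behind this choice can be re-traced with a
few lines of Python (our addition; it merely re-uses the approximate constants
listed above):
\begin{verbatim}
# Re-trace the contraction bound of Example 1 from the constants above.
delta_sum = 2.34 + 0.19 + 0.15   # Delta_0 + Delta_1 + Delta_2 = 2.68 < 2.7
g_terms = 0.75                   # estimate of the lambda/g-part from the text
l_f_max = (1.0 - g_terms) / 2.7  # largest admissible constant l_f

print(delta_sum, l_f_max)        # 2.68  ~0.0926 (= 0.25/2.7, as in the text)
\end{verbatim}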
\textbf{Example 2.} Consider the following boundary value problem for a
fractional differential equation:
\begin{equation}
\left\{
\begin{array}
[c]{c}
\mathfrak{D}_{0^{+}}^{5/2}u(t)=\dfrac{\left\vert u\left( t\right)
\right\vert ^{3}}{9(\left\vert u\left( t\right) \right\vert ^{3}+3)}+\dfrac{\left\vert \sin\mathfrak{D}_{0^{+}}^{1/2}u\left( t\right)
\right\vert }{9(\left\vert \sin\mathfrak{D}_{0^{+}}^{1/2}u\left( t\right)
\right\vert +1)}+\dfrac{1}{12},\ \ t\in\left[ 0,1\right] ,\\
u(0)+u(1)=
{\displaystyle\int\limits_{0}^{1}}
\dfrac{u(s)}{3(1+s)^{2}}ds,\\
\mathfrak{D}_{0^{+}}^{1/2}u\left( \frac{1}{10}\right) +\mathfrak{D}_{0^{+}}^{1/2}u\left( 1\right) =\dfrac{1}{2}
{\displaystyle\int\limits_{0}^{1}}
\dfrac{e^{s}u(s)}{3(1+e^{s})^{2}}ds,\\
\mathfrak{D}_{0^{+}}^{3/2}u\left( \frac{1}{10}\right) +\mathfrak{D}_{0^{+}}^{3/2}u\left( 1\right) =\dfrac{1}{3}
{\displaystyle\int\limits_{0}^{1}}
\dfrac{u(s)}{3(1+e^{s})^{2}}ds,
\end{array}
\right. \label{ex2}
\end{equation}
where $f$ is given by
\[
f(t,u,v,w)=\frac{\left\vert u\right\vert ^{3}}{9(\left\vert u\right\vert
^{3}+3)}+\frac{\left\vert \sin v\right\vert }{9(\left\vert \sin v\right\vert
+1)}+\frac{1}{12}.
\]
We have
\[
\left\vert f(t,u,v,w)\right\vert \leq\frac{\left\vert u\right\vert ^{3}}{9(\left\vert u\right\vert ^{3}+3)}+\frac{\left\vert \sin v\right\vert
}{9(\left\vert \sin v\right\vert +1)}+\frac{1}{12}\leq\frac{1}{9}+\frac{1}{9}+\frac{1}{12}=\frac{11}{36},\text{
\ \ \ \ }u,v,w\in\mathbb{R}.
\]
Thus
\[
\left\Vert f\right\Vert \leq\frac{11}{36}=l_{f}(t)\varphi\left( K\right)
,\ \ \ \ \text{with \ \ }l_{f}(t)=\dfrac{1}{3},\ \varphi\left( K\right)
=\dfrac{11}{12}.
\]
Moreover,
\begin{align*}
\alpha & =5/2,\ \beta_{1}=1/2,\ \beta_{2}=3/2,\ T=1,\ a_{0}=b_{0}=a_{1}=b_{1}=a_{2}=b_{2}=1,\\
\eta & =\frac{1}{10},\ \ \lambda_{0}=1,\ \ \lambda_{1}=\frac{1}{2},\ \ \lambda_{2}=\frac{1}{3},\ \ l_{g_{0}}=l_{g_{1}}=l_{g_{2}}=\frac{1}{3},
\end{align*}
\begin{align*}
\Delta_{0} & =2.34,\ \ \Delta_{1}=0.19,\ \ \Delta_{2}=0.15,\\
\rho_{0} & =0.5,\ \ \ \rho_{1}=1.01,\ \ \ \rho_{2}=1.2,\\
\tilde{\rho}_{0} & =0,\ \ \tilde{\rho}_{1}=0.76,\ \ \ \tilde{\rho}_{2}=0.9,\\
\hat{\rho}_{0} & =\hat{\rho}_{1}=0,\text{ \ }\hat{\rho}_{2}=0.51,
\end{align*}
and
\[
g_{0}\left( t,u\right) :=\dfrac{u}{3(1+t)^{2}},\ \ \ g_{1}\left(
t,u\right) :=\dfrac{e^{t}u}{3(1+e^{t})^{2}},\ \ g_{2}\left( t,u\right)
:=\dfrac{u}{3(1+e^{t})^{2}},\ \ \psi_{i}\left( K\right) =K.
\]
From the condition (H$_{5}$),
\[
\frac{K}{\varphi\left( K\right) \left\Vert l_{f}\right\Vert _{1/\tau}\left(
\Delta_{0}+\Delta_{1}+\Delta_{2}\right) +
{\displaystyle\sum\limits_{i=0}^{2}}
\left( \rho_{i}+\widetilde{\rho}_{i}+\widehat{\rho}_{i}\right) \left\vert
\lambda_{i}\right\vert \psi_{i}\left( K\right) \left\Vert l_{g_{i}}\right\Vert _{1}}>1,
\]
we find that
\[
K>9.8.
\]
Thus, all the conditions of Theorem \ref{Thm:exis} are satisfied. So, there
exists at least one solution of problem (\ref{ex2}) on $\left[ 0,1\right] $.
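The elementary bound $\left\vert f\right\vert \leq11/36$ used above is easy to
confirm exactly, for example with Python's rational arithmetic (a small
sketch of ours):
\begin{verbatim}
# Confirm 1/9 + 1/9 + 1/12 = 11/36 = (1/3) * (11/12), i.e. l_f * phi(K).
from fractions import Fraction as F

bound = F(1, 9) + F(1, 9) + F(1, 12)
print(bound)                           # 11/36
print(bound == F(1, 3) * F(11, 12))    # True
\end{verbatim}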
\section{Introduction}
One of the most important and explicit indicators of stellar youth are circumstellar disks, which seem to be a universal feature of the star-forming process. The existence of a stellar companion close to a disk-bearing star, however, has a complex impact on the evolution of a disk: it will be truncated to about a third of the binary separation, may become eccentric and warped, and the material in the disk may be heated and dynamically stirred \citep{art94,kle08,fra10,nel00}. Truncation, in particular, can explain observations of young low-mass binary components that have a systematically lower disk frequency than single stars with otherwise identical properties \citep{bou06,mon07,cie09}. Material that is usually transported inwards from the outer parts of the disk is missing and thus cannot replenish the inner disk and extend its lifetime, as is assumed for single-star disks.
Primordial circumstellar disks are not only indicators of the formation of the star itself, but also contain the material for the formation of planetary systems. If disks in binaries evolve differently from those in single stars, then the properties of the population of planets found in binaries can reflect those differences. Today, more than 40 planets are known to orbit one of the components of a binary system, with binary separations of mostly $\ge$30\,AU \citep{mug09,egg10}. Interestingly, systems in which each of the two components hosts its own planet are disproportionately rarely observed -- to the knowledge of the authors, no such system has been published so far. Although not free from observational selection effects, this might indicate that there are differences in the evolution of the initial circumstellar material around the individual components of a binary system.
Another open issue is universality throughout the Galaxy.
Disk evolution is not determined entirely by the interaction of a disk with its host star or a stellar binary companion. The irradiation by nearby hot sources and the gravitational environment play an important role \citep{sca01}. While previous studies of the signatures of disks in binaries were mostly focused on nearby, loose, stellar associations such as Taurus and Ophiuchus \citep{whi01,har03,pra03,duc04,pat08,cie09}, there is evidence that most field stars are instead born in dense high-mass clusters such as the Orion Nebula Cluster \citep[ONC;][]{lad03}.
As the nearest \citep[414$\pm$7\,pc;][]{men07} young \citep[$\sim$1\,Myr;][]{hil97} high-mass star forming cluster, the ONC has been targeted by many studies owing to its abundant circumstellar disk \citep[e.g.][]{hil98,lad00,da_10} and binary content \citep{pro94,pad97,pet98,sim99,khl06,rei07}, although a detailed survey of disks around binary components has yet to be performed. In this paper, we present the first spatially resolved near-infrared spectroscopic observations of a large sample of binaries in a clustered star-forming environment -- the Orion Nebula Cluster -- to investigate its circumstellar disk content and the binary component properties.
\section{Observations and data reduction}\label{sec:observations}
\subsection{Sample definition}
We observed 20 visual binaries in the ONC from the binary census of \citet{pet98a} and \citet{khl06}. Data for six additional binaries were taken from observations by \citet[\emph{in prep.}]{cor12}. All targets are listed in Table~\ref{tab:observations}.
\begin{table*}
\caption{Targets and observations}
\label{tab:observations}
\begin{center}
\begin{tabular}{lcccccc}
\hline\hline\\[-2ex]
&
dist. to &
&
\underline{I}maging or &
Imaging &
&
Date \\
Name\tablefootmark{a} &
$\theta^1$\,Ori\,C [$^\prime$]&
Obs. with&
\underline{S}pectroscopy &
Filters\tablefootmark{b} &
SB\tablefootmark{c} &
(UT) \\[0.5ex]
\hline\\[-2ex]
\object{[AD95]\,1468} & 7.18 & NACO & I\&S &$JH$ & no & Feb 09, 2005 \\
\object{[AD95]\,2380} & 6.92 & NACO & I\&S &$JH$ & no & Jan 06, 2005 \\
\object{JW\,235} & 6.85 & NACO & I\&S &$JH$ & no & Dec 19, 2004 \\
\object{JW\,260} & 9.00 & NACO & I\&S &$JH$ & SB1 & Dec 19, 2004 \\
\object{JW\,519} & 0.80 & NACO & I &$JHK_\mathrm{s}$ & & Feb 07 \& 09, 2005 \\
\object{JW\,553} & 0.52 & NACO & S & & & Dec 07, 2004 \\
\object{JW\,566} & 7.17 & NACO & I\&S &$JH$ & & Dec 08, 2004 \& Feb 17, 2005\\
\object{JW\,598} & 0.71 & NACO & S & & & Dec 08, 2004 \\
\object{JW\,648} & 1.00 & NACO & I\&S &$JHK_\mathrm{s}$ & & Feb 07 \& 17, 2005 \\
\object{JW\,681} & 1.25 & NACO & S & & no & Jan 06, 2005 \\
\object{JW\,687} & 2.06 & NACO & I\&S &$JH$ & & Feb 09, 2005 \& Dec 07, 2004\\
\object{JW\,765} & 14.09 & NACO & I &$JH$ & no & Feb 07, 2005 \\
\object{JW\,876} & 14.45 & NACO & S & & no & Jan 08, 2005 \\
\object{JW\,959} & 7.98 & NACO & S & & no & Jan 08, 2005 \\
\object{JW\,974} & 14.96 & NACO & I &$JH$ & SB2? & Nov 19, 2004 \\
\object{[HC2000]\,73} & 2.30 & NACO & S & & & Jan 05, 2005 \\
\object{TCC\,15} & 0.50 & NACO & S & & & Dec 29, 2004 \\
\object{TCC\,52} & 0.46 & NACO & S & & & Dec 08, 2004 \\
\object{TCC\,55} & 0.49 & NACO & S & & & Jan 10 \& 11, 2005 \\
\object{TCC\,97} & 0.42 & NACO & I &$JHK_\mathrm{s}$ & & Feb 07, 2005 \\
\object{JW\,63}\tablefootmark{d} & 8.95 & GEMINI & I\&S &$JHKL^\prime$ & & Feb 16 \& 19, 2008 \\
\object{JW\,128}\tablefootmark{d} & 6.10 & GEMINI & I\&S &$JHKL^\prime$ & & Feb 17 \& 19, 2008 \\
\object{JW\,176}\tablefootmark{d} & 4.67 & GEMINI & I\&S &$JHKL^\prime$ & no & Feb 24 \& Mar 7, 2008 \\
\object{JW\,391}\tablefootmark{d} & 2.81 & GEMINI & I\&S &$JHKL^\prime$ & & Feb 19, 2008 \\
\object{JW\,709}\tablefootmark{d} & 3.53 & GEMINI & I\&S &$JHKL^\prime$ & & Feb 18 \& 20, 2008 \\
\object{JW\,867}\tablefootmark{d} & 5.79 & GEMINI & I\&S &$JHKL^\prime$ & no & Feb 23, 2008 \\
\hline
\end{tabular}
\end{center}
\vspace{-3ex}
\tablefoot{
\tablefoottext{a} Identifiers are suitable for use with the simbad database ({\tt http://simbad.harvard.edu/simbad/}). References: \emph{AD95}: \citealt{ali95}; \emph{JW}: \citealt{jon88}; \emph{TCC}: \citealt{mcc94}; \emph{HC2000}: \citealt{hil00}.
\tablefoottext{b} Photometry for this study was obtained in these filters and has been listed in Table~\ref{tab:systemparameters}. If fewer filters than the complete set of $JHK_\mathrm{s}$ has been indicated here, photometry has been added to Table~\ref{tab:systemparameters} from literature sources when available.
\tablefoottext{c} Spectroscopic binary status.
References: \citet{tob09,frs08}.
\tablefoottext{d} From \citet[\emph{in prep.}]{cor12}.
}
\end{table*}
The projected separations range from 0.25 to 1.1\,arcsec, which corresponds to roughly 100--400\,AU at the distance of the ONC. Magnitude differences of the binary components range from 0.1 to $\sim$3\,mag in $H$ and $K_\mathrm{s}$-band.
All of our targets are members or very likely members of the ONC, which were mostly identified based on their proper motion \citep[and references therein]{hil97}. Targets without proper motion measurements ([HC2000]\,73, TCC\,15, TCC\,55) or those with low proper-motion membership probability (JW\,235, JW\,566, JW\,876) were confirmed to be young stars, thus likely members based on their X-ray activity \citep{hil00,rei07,get05}. No further information is available for two targets, [AD95]\,1468 and [AD95]\,2380, but our spectroscopy shows late spectral types at moderate luminosity and extinction for these and all other targets, ruling out foreground and background stars.
Common proper motion with the ONC and signs of youth combined with small angular separation of the components render it likely that all binaries are gravitationally bound. However, chance alignment of unrelated members of the Orion complex cannot be excluded.
A possible example might be TCC\,15, whose secondary shows almost no photospheric features but strong Br$\gamma$ and \ion{He}{I} emission, suggesting a highly veiled nature of this component. This makes it a candidate member of the Orion BN/KL region, which is in the line of sight of the ONC but slightly further away ($\sim$450\,pc) and probably younger \citep{men07}. However, the chance of finding an unrelated stellar component at the separation measured for this binary (1\farcs02) is only $W$=8\% given a projected density of $\sim$0.03\,stars/arcsec$^2$ in the center of the ONC \citep{pet98}.
In the following, we treat TCC\,15 as a physical binary.
We also searched the literature for possible spectroscopic pairs among our binary components and found that among the 11 binaries that had been surveyed \citep{tob09,frs08}, two are spectroscopic binary candidates (see Table~\ref{tab:observations}). Both had already been excluded from our statistical analysis for other reasons (see discussion in Sect.~\ref{sec:brgammavsnirexcess} for JW\,260; JW\,974 was not observed with spectroscopy). Spectroscopic binarity cannot be excluded for the other targets, since no published spectroscopic binary surveys exist and our observations are of too low spectral resolution to detect spectroscopic pairs. In the following, we treat these binary components as single stars.
Table~\ref{tab:observations} lists all targets including the dates of their observation, the instrument used, and the observational mode. Furthermore, we provide the projected distance to the massive and bright $\theta^1$\,Ori\,C system, which is at the center of the ONC.
\subsection{NACO/VLT} Twenty of our 26 targets were observed with the NAOS-CONICA instrument \citep[NACO;][]{len03,rou03} at UT4/VLT in the time from November 2004 to February 2005. Of those, 16 were observed in spectroscopic mode and 11 in CONICA imaging mode (see Table~\ref{tab:observations}). All observations were executed using the adaptive optics system NAOS where the targets themselves could be used as natural guide stars. Depending on the brightness of a target, either an infrared or visual wavefront sensor was used.
\subsubsection{NACO Imaging}\label{sec:nacoimagingreduction} We employed the S13 camera of NACO with a pixel scale of 13.26\,mas/pixel and 13\farcs6$\times$13\farcs6 field of view. Each imaged target binary was observed in $J$ and $H$ filters and three of these additionally in $K_\mathrm{s}$. Observations of the same target in different filters were obtained consecutively to minimize the effects of variability.
The FWHMs of the observations are typically 0\farcs{}075 in $J$, 0\farcs{}065 in $H$, and 0\farcs{}068 in $K_\mathrm{s}$-band. All observations were made in a two-offset dither pattern with a 5\arcsec\ pointing offset to allow for sky subtraction with total integration times varying between 60\,s and 360\,s per target and filter, depending on the brightness of the target components.
Imaging data were reduced with custom IRAF and IDL routines according to the following procedure. A low amplitude ($<10$\,ADU), roughly sinusoidal horizontal noise pattern--probably 50\,Hz pick-up noise at read-out--was removed from all raw images by subtracting a row-median (omitting the central region containing the signal) from each column. We cropped each image to a 400$\times$400 pixel ($\equiv 5\farcs3\times 5\farcs3$) subregion of the original 1024$\times$1024 pix preserving all flux from each target multiple. For each target and filter there were at least two dithered images that underwent the same corrections, which were then used for sky subtraction. Bad pixel correction and flat fielding were applied, the latter using lamp flats taken with the NACO internal calibration unit. All reduced images per object and filter were then aligned and averaged. The fully reduced images in $H$-band are presented in Fig.~\ref{fig:imaging}.
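As an illustration of the destriping step, a schematic \texttt{numpy} version is given below; this is our simplified re-implementation, not the original pipeline code, and the hard-coded column range of the signal mask is a placeholder:
\begin{verbatim}
# Schematic destriping (not the original pipeline): subtract from each image
# row the median of that row, computed with the central signal region masked.
import numpy as np

def destripe(img, sig_lo=400, sig_hi=624):
    masked = img.astype(float).copy()
    masked[:, sig_lo:sig_hi] = np.nan        # omit columns containing signal
    row_med = np.nanmedian(masked, axis=1)   # one additive offset per row
    return img - row_med[:, None]

# usage: clean = destripe(raw_frame)         # raw_frame: 1024 x 1024 array
\end{verbatim}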
\begin{figure*}
\centering
\includegraphics[angle=0,width=0.98\textwidth]{fig01.ps}
\caption{$H$-band images of all targets observed with NACO imaging. The intensity scale is linear and adapted to best depict both components.}
\label{fig:imaging}
\end{figure*}
\subsubsection{NACO Spectroscopy} NACO was used in grism-spectroscopy setup using the S27 camera and an 86\,mas wide slit. The wavelength coverage is $2.02$--$2.53$\,$\mu${}m with a spectral resolution of $R$$\sim$$1400$ and a resolution of 27\,mas/pix in the spatial direction. The slit was aligned with the binary separation vector to simultaneously obtain spectra of both stellar components. All binaries were observed in an ABBA nodding pattern with a 12\arcsec\ nod throw, performing several nod cycles for the faintest targets. Total exposure times per target were 120$-$2600\,s. Spectroscopic standards of B spectral type were observed with the same camera setup, close in time and at similar airmass to enable us to remove telluric features. These standard spectra were reduced and extracted in the same way as the target exposures.
Spectroscopic exposures were reduced with IDL and IRAF routines including flat fielding, sky subtraction, and bad pixel removal. Lamp flats taken with an internal flat screen were fitted along the dispersion axis and divided. The extraction and wavelength calibration of the reduced target and standard exposures were performed using the \emph{apextract} and \emph{dispcor} packages in \emph{IRAF}. The two-dimensional spectrum was traced with a fourth order polynomial and extracted through averaging over a 10 pixel-wide ($\sim$0\farcs27) aperture centered on the $\sim$4 pixel-wide (FWHM) trace of each target component. The subsequent wavelength calibration uses exposures of an argon arc lamp extracted in the same traces. All nodding exposures of the same target component were then aligned and averaged. The final extracted and calibrated spectra span a wavelength range of 20320--25440\,\AA\ with a resolution of 5\,\AA/pixel.
To allow for accurate telluric-line removal, intrinsic spectral features of the standard stars had to be removed. These were the \mbox{Brackett-$\gamma$} (Br$\gamma$, 21665\,\AA) and the \ion{He}{I} (21126\,\AA) absorption lines, where the latter was only observed for spectral types B0 and B1. The removal of in particular the Br$\gamma$ line is crucial because we aim to measure the equivalent widths of Br$\gamma$ emission in the target spectra. Therefore, the Br$\gamma$ line was carefully modeled according to the following scheme. Two telluric lines blend into the blue and red wings of Br$\gamma$ at the spectral resolution of our observations. We modeled these tellurics by averaging the fluxes in the Br$\gamma$ region (before telluric removal) of those target component spectra that showed neither absorption nor emission in Br$\gamma$. The telluric standard flux was divided by the thus generated local telluric model, allowing us to fit the remaining Br$\gamma$ absorption line with a Moffat-profile and remove its signature. The cleaned standard spectra were then divided by a blackbody curve of temperature $T_\mathrm{eff}$ according to their spectral type \citep{cox00} to obtain a pure telluric spectrum, convolved with the instrumental response of NACO.
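The division by a blackbody can be written compactly; the following snippet (ours, with cgs constants and an assumed $T_\mathrm{eff}$ purely for illustration) evaluates the Planck function $B_\lambda$ used to flatten the standard-star continuum:
\begin{verbatim}
# Planck function B_lambda in cgs units, for flattening a telluric standard.
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16     # erg s, cm/s, erg/K

def planck_lambda(wave_angstrom, t_eff):
    lam = wave_angstrom * 1.0e-8              # Angstrom -> cm
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * t_eff))

# e.g. telluric = standard_flux / planck_lambda(wave, 30000.0)  # early B star
\end{verbatim}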
We observed a wavelength-dependent mismatch of the wavelength calibrations output from \emph{IRAF/dispcor} between the target stars and the corresponding telluric standards of up to about one pixel. To guarantee a good match of the positions of the telluric features, we used the tellurics themselves to fine-tune the wavelength calibration of the standard spectra. A customized \emph{IDL} routine computes the local wavelength difference by means of cross correlation and corrects the mismatch accordingly. Since the tellurics are the strongest features in all our spectra, this method results in a good match between the telluric features of the target and reference, and telluric features could be removed reliably.
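The local cross-correlation step can be sketched as follows (our minimal version with parabolic sub-pixel refinement; windowing and edge handling are simplified, and the wrap-around of \texttt{np.roll} is acceptable only for small lags within a window):
\begin{verbatim}
# Estimate the local shift (in pixels) between a target spectrum and its
# telluric standard inside a small window, via cross-correlation.
import numpy as np

def local_shift(spec, ref, max_lag=3):
    s = (spec - spec.mean()) / spec.std()
    r = (ref - ref.mean()) / ref.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.sum(s * np.roll(r, k)) for k in lags])
    i = int(np.argmax(cc))
    if 0 < i < len(cc) - 1:                   # parabolic peak interpolation
        denom = cc[i - 1] - 2.0 * cc[i] + cc[i + 1]
        return lags[i] + 0.5 * (cc[i - 1] - cc[i + 1]) / denom
    return float(lags[i])
\end{verbatim}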
Flux uncertainties in the derived spectra as needed for the $\chi^2$ minimization method described in \S\ref{sec:fitting} were estimated from the fully reduced and extracted target spectra by performing local computation of standard deviations. The final reduced and extracted component spectra contain a noisy (signal-to-noise ratio (S/N) smaller than 20/pix) region redwards of $\sim$\,25000\,\AA\ caused by the low atmospheric transmission at these wavelengths. The region with $\lambda> 25120$\,\AA\ was excluded from any further evaluation. The set of reduced and extracted spectra consists of one spectrum for each target component in the spectral range of 20320\,\AA\ -- 25120\,\AA\ with a resolution of 5\,\AA/pixel. All final reduced spectra are displayed in Fig.~\ref{fig:spectroscopy}. The positions of the most prominent absorption and emission features are overplotted and listed in Table~\ref{tab:features}.
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=1.0\textwidth]{fig02a.ps}
\caption{Spectra of the primary (top spectrum in each panel) and secondary component (bottom spectrum) of all targets observed with NACO spectroscopy (Gemini/NIFS spectra are displayed in \citet[\emph{in prep.}]{cor12}). Primary spectra are normalized at 2.2\,$\mu$m, and the secondaries are arbitrarily offset. The position of the most prominent lines in Table~\ref{tab:features} are indicated.}
\label{fig:spectroscopy}
\end{center}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=1.0\textwidth]{fig02b.ps}
\caption{\it Continued}
\end{center}
\end{figure*}
\begin{table}
\caption{Spectral features identified in the observed spectra}
\label{tab:features}
\begin{center}
\begin{tabular}{lccl}
\hline\hline\\[-2ex]
\multicolumn{1}{c}{$\lambda_\mathrm{c}$} &
\multicolumn{1}{c}{Width} &
&
\\
\multicolumn{1}{c}{[\AA]} &
\multicolumn{1}{c}{[\AA]} &
\multicolumn{1}{c}{Species} &
\multicolumn{1}{c}{Transition}\\[0.5ex]
\hline\\[-2ex]
20338.0 & & H$_2$ & $\nu = (1$--$0)\,S(2)$ \\
20587.0 & & \ion{He}{I} & $2p^1P^o$--$2s^1S$ \\
21066.6 & & \ion{Mg}{I} & $4f^3F^o_{2,3,4}$--$7g^3G^o_{3,4,5}$\\
21098.8 & & \ion{Al}{I} & $4p^2P^o_{1/2}$--$5s^2S_{1/2}$ \\
21169.6 & & \ion{Al}{I} & $4p^2P^o_{3/2}$--$5s^2S_{1/2}$ \\
21218.0 & & H$_2$ & $\nu = (1$--$0)\,S(1)$ \\
21661.2 & 56 & \ion{H}{I} & $n = 7-4\,\,(\mathrm{Br}\gamma)$ \\
21785.7 & & \ion{Si}{I} & \\
21825.7 & & \ion{Si}{I} & \\
21903.4 & & \ion{Ti}{I} & $a^5P_2$--$z^5D^o_3$ \\
22062.4 & 116 & \ion{Na}{I} & $4p^2P^o_{3/2}$--$4s^2S_{1/2}$ \\
22089.7 & * & \ion{Na}{I} & $4p^2P^o_{1/2}$--$4s^2S_{1/2}$ \\
22614.1 & 91 & \ion{Ca}{I} & $4f^3F^o_2$--$4d^3D_1$ \\
22631.1 & * & \ion{Ca}{I} & $4f^3F^o_3$--$4d^3D_2$ \\
22657.3 & * & \ion{Ca}{I} & $4f^3F^o_4$--$4d^3D_3$ \\
22814.1 & & \ion{Mg}{I} & $4d^3D_{3,2,1}$--$6f^3F^o_{2,3,4}$ \\
22935.3 & 170 & \element[][12]{CO} & $\nu=(2$--$0)$ band head \\
23226.9 & 170 & \element[][12]{CO} & $\nu=(3$--$1)$ band head \\
23354.8 & & \ion{Na}{I} & $4p^2P^o_{1/2}$--$4d^2D_{3/2}$ \\
23385.5 & & \ion{Na}{I} & $4p^2P^o_{3/2}$--$4d^2D_{5/2}$ \\
23524.6 & 170 & \element[][12]{CO} & $\nu=(4$--$2)$ band head \\
23829.5 & 170 & \element[][12]{CO} & $\nu=(5$--$3)$ band head \\[0.5ex]
\hline
\end{tabular}
\end{center}
\vspace{-3ex}
\tablefoot{
Identified features in our $K$-band spectra. Integration widths are listed here for all lines that have equivalent widths measured in this paper. Lines marked with asterisks blend into the line with the next shortest wavelength (the respective previous line in the list); these lines were integrated together. The transition information is composed from \citet{pra03} and \citet{kle86}.
}
\end{table}
\subsection{NIFS-NIRI/Gemini North imaging and spectroscopy}
The observations with NIRI photometry and NIFS spectroscopy at Gemini North are described in \citet[\emph{in prep.}]{cor12}.
We used the reduced and extracted, but not yet telluric-corrected, spectra of \citet[\emph{in prep.}]{cor12} and performed the telluric correction with the same method as for our NACO observations. Furthermore, since the NIFS spectra have a higher spectral resolution (R$\sim$5000) than both our NACO observations and the template spectra, we smoothed the NIFS spectra with a Gaussian kernel to a resolution of R$\sim$1400. These steps guarantee that a coherent evaluation of all data is possible.
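The smoothing step amounts to a convolution with a Gaussian whose width follows from adding the two resolution elements in quadrature. The following is a minimal sketch under the assumption of a uniform wavelength grid; the function name and the exact kernel construction used for the NIFS data are our choices.
\begin{verbatim}
import numpy as np

def degrade_resolution(wave, flux, r_in=5000.0, r_out=1400.0):
    # FWHMs add in quadrature: the smoothing kernel has
    # FWHM = lambda * sqrt(1/R_out^2 - 1/R_in^2).
    lam = np.median(wave)
    fwhm = lam * np.sqrt(1.0 / r_out**2 - 1.0 / r_in**2)
    sigma_pix = fwhm / 2.3548 / np.diff(wave).mean()
    half = int(4.0 * sigma_pix) + 1
    x = np.arange(-half, half + 1)
    kern = np.exp(-0.5 * (x / sigma_pix)**2)
    kern /= kern.sum()
    return np.convolve(flux, kern, mode="same")
\end{verbatim}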
The six target spectra observed with NIFS that were included in this study will be presented in \citet[\emph{in prep.}]{cor12}.
\section{Results}\label{sec:results}
\subsection{Photometry and astrometry}\label{sec:photometry}
Relative aperture photometry for all targets observed with NACO was obtained by applying the PHOT task in the IRAF DAOPHOT package to each of the reduced binary star images. The aperture radius was varied from 2 to 20 pixels to find a possible convergence of the magnitude difference of primary and secondary. The differential photometry of most binaries converged for aperture sizes of 3 to 6 pixels allowing determination of their magnitude differences with uncertainties of $\Delta\mathrm{mag}\lesssim0.03$. For the binaries that did not converge but followed a monotonic decrease in their magnitude difference with aperture size, we assigned a value of $\Delta\mathrm{mag}$ by averaging the results for apertures of sizes between 3 and 6 pixels. The uncertainty was estimated according to the slope of each individual curve.
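As a sketch of this convergence test (not the IRAF PHOT task itself; the positions, radii, and the assumption of a sky-subtracted image are ours), the differential magnitude as a function of aperture radius can be computed as follows.
\begin{verbatim}
import numpy as np

def delta_mag_curve(img, pos_a, pos_b, radii=range(2, 21)):
    # Sky-subtracted image assumed; positions are (x, y) pixel centers.
    yy, xx = np.indices(img.shape)
    dmag = []
    for r in radii:
        in_a = (xx - pos_a[0])**2 + (yy - pos_a[1])**2 <= r * r
        in_b = (xx - pos_b[0])**2 + (yy - pos_b[1])**2 <= r * r
        dmag.append(-2.5 * np.log10(img[in_b].sum() / img[in_a].sum()))
    return np.array(dmag)  # flat over r ~ 3-6 px indicates convergence
\end{verbatim}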
The DAOFIND task in DAOPHOT returned sufficiently accurate astrometry for all targets. The pixel data were transformed into physical angles and separations using the pixel scale of 0.013260\,arcsec/pixel and the rotation offset of $0^\circ$. The resulting relative photometry and astrometry are listed in Table~\ref{tab:systemparameters}.
\begin{table*}
\caption{Relative photometry and astrometry of the observed binaries\tablefootmark{\dag}}
\label{tab:systemparameters}
\begin{center}
\begin{tabular}{lr@{\,$\pm$\,}lr@{\,$\pm$\,}lr@{\,$\pm$\,}lr@{\,$\pm$\,}lcccc}
\hline\hline\\[-2ex]
&
\multicolumn{2}{c}{$\Delta J$} &
\multicolumn{2}{c}{$\Delta H$} &
\multicolumn{2}{c}{$\Delta K_\mathrm{s}$} &
\multicolumn{2}{c}{$\Delta K$} &
sep\tablefootmark{a} &
PA\tablefootmark{b} &
&
\\
Name &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
[\arcsec] &
[$^\circ$] &
Ref. \\[0.5ex]
\hline\\[-2ex]
{[AD95]\,1468} & 0.75 & 0.10 & 0.66 & 0.02 & 0.10 & 0.05 & \multicolumn{2}{c}{} & 1.08 & 76.9 & T,1 \\
{[AD95]\,2380} &\multicolumn{2}{c}{$\gtrsim$2.5\tablefootmark{c}}& 2.62 & 0.03 & 2.99 & 0.15 & \multicolumn{2}{c}{} & 0.59 & 77.6 & T,1 \\
{JW\,235} & 0.46 & 0.02 & 0.10 & 0.02 & 0.47 & 0.15 & \multicolumn{2}{c}{} & 0.35 & 163.6 & T,1 \\
{JW\,260} & 0.53 & 0.02 & 0.40 & 0.02 & 0.17 & 0.05 & \multicolumn{2}{c}{} & 0.35 & 292.2 & T,1 \\
{JW\,519}\tablefootmark{e} & 2.8 & 0.2 & 2.8 & 0.2 & 2.6 & 0.1 & \multicolumn{2}{c}{} & 0.36 & 204.3 & T \\
{JW\,553} & 2.1 & 0.3 & 3.2 & 0.3 & 3.19 & 0.10 & \multicolumn{2}{c}{} & 0.384$\pm$0.004 & 248.1$\pm$0.3 & 2 \\
{JW\,566} & 0.20 & 0.03 & 0.39 & 0.02 & 0.78 & 0.02 & \multicolumn{2}{c}{} & 0.86 & 33.8 & 5 \\
{JW\,598} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{3.19} & 0.9 & & 4 \\
{JW\,648} & 1.12 & 0.03 & 1.16 & 0.03 & 1.23 & 0.02 & \multicolumn{2}{c}{} & 0.68 & 278.7 & T \\
{JW\,681} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{1.53} & 1.09 & 214 & 3,4 \\
{JW\,687} & 1.63 & 0.05 & 1.15 & 0.07\tablefootmark{d}& 0.53 & 0.07 & \multicolumn{2}{c}{} & 0.47 & 232.6 & T \\
{JW\,765} & 0.08 & 0.02 & 0.12 & 0.01 & 0.06 & 0.10 & \multicolumn{2}{c}{} & 0.33 & 16.5 & T,1 \\
{JW\,876} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & 0.50 & 0.05 & \multicolumn{2}{c}{} & 0.49 & & 1 \\
{JW\,959} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & 0.07 & 0.03 & \multicolumn{2}{c}{} & 0.34 & & 1 \\
{JW\,974} & 1.16 & 0.04 & 1.26 & 0.04 & 1.41 & 0.05 & \multicolumn{2}{c}{} & 0.32 & 128.7 & T,1 \\
{[HC2000]\,73} & 0.48 & 0.09 & 1.13 & 0.06 & \multicolumn{2}{c}{} & 1.52 & 0.07 & 0.71 & 266.2$\pm$0.5 & 4 \\
{TCC\,15} & 2.5 & 0.3 & 3.3 & 0.3 & 3.65 & 0.1 & \multicolumn{2}{c}{} & 1.022$\pm$0.004 & 288.2$\pm$0.3 & 2 \\
{TCC\,52} & 1.61 & 0.03 & 1.56 & 0.03 & 1.64 & 0.02 & \multicolumn{2}{c}{} & 0.52 & 39.24 & 5 \\
{TCC\,55} & 1.0 & 0.1\tablefootmark{f}& 2.2 & 0.3 & 1.26 & 0.10 & \multicolumn{2}{c}{} & 0.256$\pm$0.004 & 153.1$\pm$0.3 & 2 \\
{TCC\,97} & 1.8 & 0.15 & 1.8 & 0.15 & 1.4 & 0.2 & \multicolumn{2}{c}{} & 0.88 & 98.5 & T \\
\hline
\end{tabular}
\end{center}
\vspace{-3ex}
\tablefoot{
\tablefoottext{\dag} Reproduced here are only the targets observed with NACO. The photometry of the six additional targets observed with Gemini/NIRI can be found in \citet[\emph{in prep.}]{cor12}.
\tablefoottext{a} Uncertainty in the separation: $\Delta\mathrm{sep}=0\farcs01$ unless otherwise noted.
\tablefoottext{b} Position angle uncertainty: $\Delta\mathrm{PA}=0.5^\circ$ unless otherwise noted.
\tablefoottext{c} The companion to {[AD95]\,2380}A is detected with less than 3$\sigma$ significance in $J$-band. The number quoted here is a lower limit.
\tablefoottext{d} This number is an average of two independent measurements that differ by 0.16 magnitudes.
\tablefoottext{e} The elongated shape of {JW\,519}'s primary suggests the primary to be a binary itself. The photometry for {JW\,519}, however, is based on the assumption of a single central object, since separate components cannot be identified.
\tablefoottext{f} Reanalysis of MAD data published in \citet{bou08}.
\tablebib{
(T) This paper; (1) \citealt{khl06}; (2) \citealt{bou08}; (3) \citealt{rei07}; (4) \citealt{pet98a}; (5) ESO archival data \mbox{074.C-0637(A)}. Reduction and photometry are described in Sects.~\ref{sec:nacoimagingreduction} and \ref{sec:photometry}.
}
}
\end{table*}
The procedure was successful for all binaries observed with NACO, although we found peculiarities for two targets. The image of the brighter component of {JW\,519} has an elongated shape (see Fig.~\ref{fig:imaging}) that is not seen in the companion point spread function (PSF). We therefore conclude that we have found evidence of an unresolved third component in the {JW\,519} system. Unfortunately, the close separation of the system does not allow us to determine any of the separate parameters of the individual components. The photometry of {JW\,519} in Tables~\ref{tab:systemparameters} and \ref{tab:componentmagnitudes}, as well as all further evaluation, therefore treats this likely triple as a binary.
The primary of {TCC\,97} is surrounded by a proplyd that was first identified by \citet{ode93}. Since this feature is detected in our images, our aperture photometry averages only the smallest useful apertures of 3--5 pixel radius (instead of 3--6) to exclude as much of the disk flux as possible.
A special photometry routine was applied to the target binaries {JW\,553}, {[HC2000]\,73}, and {TCC\,55}, which used photometry of several reference stars in the same exposure. In particular for {JW\,553} and {TCC\,55}, this procedure leads to more accurate photometry in the environment of strong nebulosity and high stellar density close to the cluster center. The photometry of {JW\,553} and {TCC\,55} uses fully reduced $J$, $H$, and $K$-band mosaics of the Trapezium region \citep{bou08} observed with the MCAO demonstrator \emph{MAD} \citep{mar07}. A reference PSF was computed from 4--8 stars within $\sim$9\arcsec\ of the target binary. With this PSF, we obtained instrumental photometry for all stars in the vicinity of the target using the IRAF daophot package. Observed apparent magnitudes of the reference stars together with our measured relative magnitudes were then used to derive the apparent magnitudes of the target binary components. Similarly, PSF photometry of {[HC2000]\,73} was derived using ADONIS \citep{pet98a} exposures making use of the PSF and apparent magnitude of the nearby star $\theta^2$\,Ori\,A \citep{mue02}.
Most photometry in this paper was obtained in the NACO and 2MASS $JHK_\mathrm{s}$ filter systems. Since these two filter sets are very similar, no transformations had to be applied. Some photometry, however, was performed in different filters. We therefore checked the compatibility of the 2MASS $K_\mathrm{s}$ filter with the ADONIS and MAD $K$ filters that were used for some of the system and relative magnitudes (see Tables~\ref{tab:systemparameters} and \ref{tab:integratedphotometry}).
Comparing 39 young stars in the ONC (with similar properties to the target sample) for which both ADONIS $K$ and $K_\mathrm{s}$ photometry are available \citep{pet98a}, we found an average offset of only 0.01$\pm$0.23\,mag between $K$ and $K_\mathrm{s}$. The Gemini/NIRI $K$-band photometry of late-type stars is compatible with that of the $K_\mathrm{s}$-band with $K-K_\mathrm{s}=0\pm0.05$\,mag \citep{dae07}. We therefore did not apply any corrections to our $K$-band magnitudes. Some of the relations used in Sect.~\ref{sec:AVCTTS} (i.e.\ the dwarf, giant, and CTTs loci) were transformed to the 2MASS system using the relations in \citet{car01}. In the following, we thus assume that all photometry is compatible with the 2MASS filter system.
The relative photometric and astrometric results are given in Table~\ref{tab:systemparameters}. Together with literature values for the integrated photometry in Table~\ref{tab:integratedphotometry}, component magnitudes were derived, which are listed in Table~\ref{tab:componentmagnitudes}.
\begin{table*}
\caption{Non-resolved photometry of the observed binaries}
\label{tab:integratedphotometry}
\begin{center}
\begin{tabular}{lr@{\,$\pm$\,}lr@{\,$\pm$\,}lr@{\,$\pm$\,}lr@{\,$\pm$\,}lc}
\hline\hline\\[-2ex]
&
\multicolumn{2}{c}{$J^\mathrm{sys}$} &
\multicolumn{2}{c}{$H^\mathrm{sys}$} &
\multicolumn{2}{c}{$K_\mathrm{s}^\mathrm{sys}$} &
\multicolumn{2}{c}{$K^\mathrm{sys}$} &
\\
Name &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
Ref. \\[0.5ex]
\hline\\[-2ex]
{[AD95]\,1468} & 13.92 & 0.11 & 12.48 & 0.18 & 11.23 & 0.08 & \multicolumn{2}{c}{} & 3\tablefootmark{a}\\
{[AD95]\,2380} & 13.9 & 0.02 & 11.39 & 0.03 & 9.88 & 0.02 & \multicolumn{2}{c}{} & 3\tablefootmark{a}\\
{JW\,235} & 12.10 & 0.15 & 11.11 & 0.15 & 10.50 & 0.15 & \multicolumn{2}{c}{} & 2 \\
{JW\,260} & 8.19 & 0.15 & 7.60 & 0.15 & 7.23 & 0.15 & \multicolumn{2}{c}{} & 2 \\
{JW\,519} & 12.07 & 0.01 & 11.07 & 0.01 & 10.61 & 0.01 & \multicolumn{2}{c}{} & 1 \\
{JW\,553} & 10.56 & 0.10 & 9.49 & 0.05 & \multicolumn{2}{c}{} & 9.05 & 0.10 & \tablefootmark{b} \\
{JW\,566} & 11.49 & 0.04 & 9.97 & 0.05 & 8.86 & 0.03 & \multicolumn{2}{c}{} & 3\tablefootmark{a}\\
{JW\,598} & 10.87 & 0.04 & 9.57 & 0.05 & 8.93 & 0.11 & \multicolumn{2}{c}{} & 2 \\
{JW\,648} & 10.97 & 0.03 & 9.91 & 0.02 & 9.28 & 0.01 & \multicolumn{2}{c}{} & 1 \\
{JW\,681} & 12.78 & 0.01 & 11.82 & 0.01 & 11.07 & 0.11 & \multicolumn{2}{c}{} & 1 \\
{JW\,687} & 11.94 & 0.01 & 10.50 & 0.01 & 9.70 & 0.03 & \multicolumn{2}{c}{} & 1 \\
{JW\,765} & 11.76 & 0.15 & 11.04 & 0.15 & 10.81 & 0.15 & \multicolumn{2}{c}{} & 2 \\
{JW\,876} & 9.31 & 0.02 & 8.45 & 0.03 & 8.03 & 0.02 & \multicolumn{2}{c}{} & 3\tablefootmark{a}\\
{JW\,959} & 9.36 & 0.02 & 8.84 & 0.03 & 8.63 & 0.02 & \multicolumn{2}{c}{} & 3\tablefootmark{a}\\
{JW\,974} & 12.42 & 0.02 & 11.78 & 0.03 & 11.41 & 0.02 & \multicolumn{2}{c}{} & 3\tablefootmark{a}\\
{[HC2000]\,73} & 12.59 & 0.05 & 11.72 & 0.03 & 10.99 & 0.03 & \multicolumn{2}{c}{} & \tablefootmark{b} \\
{TCC\,15} & 12.96 & 0.03 & 11.14 & 0.01 & 10.25 & 0.01 & \multicolumn{2}{c}{} & 1 \\
{TCC\,52} & 8.64 & 0.04 & 7.56 & 0.04 & 6.72 & 0.04 & \multicolumn{2}{c}{} & 2 \\
{TCC\,55} & 15.10 & 0.14 & 13.27 & 0.14 & \multicolumn{2}{c}{} & 11.15 & 0.14 & \tablefootmark{b} \\
{TCC\,97} & 13.13 & 0.03 & 12.44 & 0.02 & 11.77 & 0.01 & \multicolumn{2}{c}{} & 1 \\
\hline
\end{tabular}
\end{center}
\vspace{-3ex}
\tablefoot{
\tablefoottext{a} If no other reference could be found \emph{and} if the distance to $\theta^1$\,Ori\,C is larger than $5\arcmin$, 2MASS values were used.
\tablefoottext{b} System magnitudes were derived from the component magnitudes (Table~\ref{tab:componentmagnitudes}).
\tablebib{
(1) \citealt{mue02}; (2) \citealt{car01a}; (3) 2MASS, \citealt{cut03}.
}
}
\end{table*}
\begin{table}
\caption{Individual component apparent magnitudes\tablefootmark{\dag}}
\label{tab:componentmagnitudes}
\begin{center}
\begin{tabular}{lcr@{$\,\pm\,$}lr@{$\,\pm\,$}lr@{$\,\pm\,$}lr@{$\,\pm\,$}l}
\hline\hline\\[-2ex]
&
&
\multicolumn{2}{c}{$J$} &
\multicolumn{2}{c}{$H$} &
\multicolumn{2}{c}{$K_\mathrm{s}$} \\
\multicolumn{1}{c}{Name} &
&
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} &
\multicolumn{2}{c}{[mag]} \\[0.5ex]
\hline\\[-2ex]
{}[AD95]\,1468 & A & 14.36 & 0.11 & 12.95 & 0.18 & 11.93 & 0.08 \\
& B & 15.11 & 0.13 & 13.61 & 0.18 & 12.03 & 0.08 \\[0.5ex]
{}[AD95]\,2380 & A & \multicolumn{2}{c}{$\cdots$} & 11.48 & 0.03 & 9.95 & 0.02 \\
& B & \multicolumn{2}{c}{$\cdots$} & 14.10 & 0.04 & 12.94 & 0.14 \\[0.5ex]
JW\,235 & A & 12.65 & 0.15 & 11.81 & 0.15 & 11.04 & 0.16 \\
& B & 13.11 & 0.15 & 11.91 & 0.15 & 11.51 & 0.18 \\[0.5ex]
JW\,260 & A & 8.71 & 0.15 & 8.17 & 0.15 & 7.90 & 0.15 \\
& B & 9.24 & 0.15 & 8.57 & 0.15 & 8.07 & 0.15 \\[0.5ex]
JW\,519 & A & 12.15 & 0.02 & 11.15 & 0.02 & 10.70 & 0.01 \\
& B & 14.95 & 0.19 & 13.95 & 0.19 & 13.30 & 0.09 \\[0.5ex]
JW\,553 & A & 10.66 & 0.12 & 9.55 & 0.06 & 9.11 & 0.04\tablefootmark{a}\\
& B & 13.18 & 0.09 & 12.70 & 0.06 & 12.21 & 0.13\tablefootmark{a}\\[0.5ex]
JW\,566 & A & 12.15 & 0.04 & 10.54 & 0.05 & 9.29 & 0.03 \\
& B & 12.35 & 0.04 & 10.93 & 0.05 & 10.07 & 0.03 \\[0.5ex]
JW\,648 & A & 11.30 & 0.03 & 10.23 & 0.02 & 9.58 & 0.01 \\
& B & 12.42 & 0.04 & 11.39 & 0.03 & 10.81 & 0.02 \\[0.5ex]
JW\,687 & A & 12.16 & 0.01 & 10.82 & 0.02 & 10.22 & 0.04 \\
& B & 13.79 & 0.04 & 11.97 & 0.05 & 10.75 & 0.05 \\[0.5ex]
JW\,765 & A & 12.47 & 0.15 & 11.73 & 0.15 & 11.53 & 0.16 \\
& B & 12.55 & 0.15 & 11.85 & 0.15 & 11.59 & 0.16 \\[0.5ex]
JW\,876 & A & \multicolumn{2}{c}{$\cdots$} & \multicolumn{2}{c}{$\cdots$} & 8.56 & 0.03 \\
& B & \multicolumn{2}{c}{$\cdots$} & \multicolumn{2}{c}{$\cdots$} & 9.06 & 0.04 \\[0.5ex]
JW\,959 & A & \multicolumn{2}{c}{$\cdots$} & \multicolumn{2}{c}{$\cdots$} & 9.35 & 0.02 \\
& B & \multicolumn{2}{c}{$\cdots$} & \multicolumn{2}{c}{$\cdots$} & 9.42 & 0.03 \\[0.5ex]
JW\,974 & A & 12.74 & 0.02 & 12.08 & 0.03 & 11.67 & 0.02 \\
& B & 13.90 & 0.04 & 13.34 & 0.04 & 13.08 & 0.04 \\[0.5ex]
{}[HC2000]\,73 & A & 13.13 & 0.06 & 12.05 & 0.03 & 11.23 & 0.03 \\
& B & 13.61 & 0.07 & 13.18 & 0.05 & 12.75 & 0.06 \\[0.5ex]
TCC\,15 & A & 13.06 & 0.04 & 11.19 & 0.02 & 10.29 & 0.01 \\
& B & 15.56 & 0.27 & 14.49 & 0.29 & 13.94 & 0.10 \\[0.5ex]
TCC\,52 & A & 8.86 & 0.04 & 7.79 & 0.04 & 6.94 & 0.04 \\
& B & 10.47 & 0.05 & 9.35 & 0.05 & 8.58 & 0.04 \\[0.5ex]
TCC\,55 & A & 14.45 & 0.08 & 12.67 & 0.17 & 10.96 & 0.03\tablefootmark{a}\\
& B & 15.45 & 0.10 & 14.84 & 0.31 & 12.22 & 0.08\tablefootmark{a}\\[0.5ex]
TCC\,97 & A & 13.32 & 0.04 & 12.63 & 0.03 & 12.03 & 0.04 \\
& B & 15.12 & 0.13 & 14.43 & 0.13 & 13.43 & 0.16 \\[0.5ex]
\hline
\end{tabular}
\end{center}
\vspace{-3ex}
\tablefoot{
\tablefoottext{\dag} The photometry of the six targets observed with Gemini/NIRI is listed in \citet[\emph{in prep.}]{cor12}.
\tablefoottext{a} These values are derived from $K$-band system magnitudes instead of $K_\mathrm{s}$ (see Table~\ref{tab:integratedphotometry}).
}
\end{table}
\subsection{Color-color diagram, extinctions, and color-excess}\label{sec:AVCTTS}
Using the component magnitudes of Table~\ref{tab:componentmagnitudes}, we composed a ($H$$-$$K_\mathrm{s}$)-($J$$-$$H$) color-color diagram (Fig.~\ref{fig:colorcolor}).
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig03.ps}
\caption{Color-color diagram of the target binary components. The CTTs locus \citep{mey97}, the dwarf and giant locus \citep{bes88}, and a reddening vector of 10\,mag length are overplotted, after conversion to the 2MASS photometric system \citep{car01}. Targets to the right of the dash-dotted line have an IR color excess.}
\label{fig:colorcolor}
\end{figure}
We compared our data with the loci of dwarfs and giants \citep{bes88} and the location of classical \mbox{T\,Tauri} stars \citep[CTTs locus;][]{mey97} that had both been converted to the 2MASS photometric system. Extinctions were derived by dereddening to the CTTs locus along the interstellar reddening vector \citep{coh81} and are listed in Table~\ref{tab:componentparameters}.
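The dereddening geometry can be sketched as the intersection of the reddening vector with the locus line. In the Python sketch below, the locus coefficients and the reddening-vector slope are illustrative placeholders in the spirit of \citet{mey97} and \citet{coh81}, not the exact converted 2MASS values used in our analysis; the visual extinction then follows from the derived color excess via the adopted reddening law.
\begin{verbatim}
def deredden_to_ctts(jh, hk, slope=1.7, a=0.58, b=0.52):
    # Intersect the reddening vector of the given slope through the
    # point (H-Ks, J-H) = (hk, jh) with the CTTs locus
    # (J-H) = a*(H-Ks) + b; returns the color excess E(H-Ks).
    hk0 = (b + slope * hk - jh) / (slope - a)
    return hk - hk0
\end{verbatim}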
Most targets are located in the region accessible from the dwarf and CTTs loci by adding extinction. However, three groups of targets have rather peculiar locations in the diagram: \emph{i)}~two of the targets with the smallest $H$$-$$K_\mathrm{s}$ values, the secondaries of {JW\,63} and {JW\,176}, have no intersection with either the dwarf or giant locus along the dereddening direction. \emph{ii)}~Some targets (the secondaries of {JW\,553}, {[HC2000]\,73}, and {TCC\,97}) lie significantly below the CTTs locus. Targets at this location in the color-color diagram have been observed before \citep[see e.g.\ Fig.~20 in][]{rob10}. \emph{iii)}~{TCC\,55}B is an outlier in the bottom right of the plot.
Most of these peculiar locations can be explained by the intrinsic photometric variability of young stars of $\sim$0.2\,mag \citep{car01a} and the fact that photometry was not taken simultaneously. Furthermore, colors are known to depend on the inclination of a possible disk \citep{rob06} -- a parameter that cannot be determined with our data. All peculiar objects were assigned an extinction of 0. Using expected dwarf colors from \citet{bes88}, we then derived the color excesses $E_{J-H}=(J-H)_\mathrm{obs}-(J-H)_0$ and $E_{H-K_\mathrm{s}}$ for all objects (Table~\ref{tab:componentparameters}).
We compared our extinction measurements with optical data from \citet{pro94}. Their sample contains six spatially resolved binaries that are also part of our sample (JW\,553, JW\,598, JW\,648, JW\,681, JW\,687, TCC\,15\footnote{\citet{pro94} list the brighter component in $V$ as the primary of the binary. We show, however, that it is of later spectral type than its companion. Accordingly, our designation swaps both components with respect to the \citeauthor{pro94} paper.}).
Using the observed $V\!-\!I$ colors and an estimate of $(V\!-\!I)_0$ derived from our measured effective temperatures
and the 1\,Myr stellar evolutionary model of \citet{bar98}, we calculated extinctions as $A_V \approx [(V-I) - (V-I)_0]/0.4$ \citep{bes88}.
The results are consistent with our near-infrared extinction measurements, usually to within $\sim$1.0\,mag, except for JW\,687\,A, where we find a substantially larger $A_V$ from the optical data than from our NIR measurements. This discrepancy, however, can be explained by the large veiling of this component, since optical veiling can reduce the measured $V\!-\!I$ color and thus the derived extinction.
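For illustration (with hypothetical numbers), a component with an observed $V\!-\!I = 2.4$ and a model color $(V\!-\!I)_0 = 1.6$ would yield $A_V \approx (2.4-1.6)/0.4 = 2.0$\,mag.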
\subsection{Spectral types and veiling}\label{sec:fitting}
Owing to the young age of the targets in our sample, the stars are still contracting, hence their surface gravities ($\log g$) are lower than those of main sequence stars. However, to find appropriate templates for spectral classification and the estimation of both the visual extinction $A_V$ and the near-infrared continuum excess in $K$-band (veiling, $r_K$), one needs templates with physical conditions as close as possible to those of the pre-main sequence stars in this sample. Since no comprehensive catalog of pre-main sequence spectra at a spectral resolution of $R=1400$ or higher exists, we compared our target spectra to those of dwarfs and giants using a method similar to that of \citet{pra03}. We measured the equivalent widths of the $T_\mathrm{eff}$- and $\log g$-sensitive photospheric features \ion{Na}{I}, \ion{Ca}{I}, \element[][12]{CO}(2-0), and \element[][12]{CO}(4-2) (for wavelengths see Table~\ref{tab:features}) for all target component spectra, as well as for dwarf and giant spectra from the IRTF spectral library \citep{ray09,cus05} covering spectral types F2 to M9. Fig.~\ref{fig:ew1} demonstrates that dwarf spectra are more suitable templates for our sample than giant stars in the same spectral range.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig04.ps}
\caption{\label{fig:ew1}\emph{Top:} The equivalent widths of \ion{Na}{I}+\ion{Ca}{I} versus CO[2-0]+CO[4-2] as an indicator of T$_\mathrm{eff}$ and $\log g$ for dwarf (open circles) and giant (filled circles) template spectra from the IRTF spectral library.
Spectral types are color-coded according to the legend in the upper left.
The red and blue lines guide the eye to the two relations. \emph{Bottom:} The same plot as in the top panel, including the derived dwarf and giant relations (red and blue curves), overplotted with the ONC target components. Open symbols show targets with high veiling $r_K>0.2$, filled symbols $r_K<0.2$.
Two targets, {TCC\,15}B and {[AD95]\,2380}B, are off bounds at high $W_\lambda$(\ion{Na}{I}+\ion{Ca}{I}).
}
\end{figure}
We note that veiling reduces the equivalent widths of spectral lines according to $W_\lambda^\mathrm{measured} = W_\lambda/(1+r_K)$ (see also Sect.~\ref{sec:WBr}), so points in Fig.~\ref{fig:ew1} move towards the origin if the veiling is high. Strictly speaking, one should therefore only compare targets with small $r_K$, which is only possible after the veiling has been determined using the templates. However, not only the $r_K<0.2$ targets but all targets follow the dwarf locus closely, indicating that dwarfs are suitable for the determination of the stellar parameters of our pre-main sequence stars. We also considered intermediate solutions between dwarfs and giants as possible templates. Despite the possibly closer match to the $\log g$ of pre-main sequence stars, no improvement in the resulting match with our target equivalent widths could be seen. Thus, to minimize additional noise sources, we used dwarf templates for the evaluation that we now describe.
Spectral types, spectroscopic extinctions, and veiling were simultaneously determined by a $\chi^2$ minimization method that modifies the template spectra according to
\begin{equation}\label{eq:F}
F^*(\lambda) = \left(\frac{F_\mathrm{phsph}(\lambda)}{c} + k\right)e^{-\tau_\lambda}
\end{equation}
\citep[cf.][]{pra03} with $\tau_\lambda=(0.522/\lambda)^{1.6}A_V^\mathrm{spec}$, where $F_\mathrm{phsph}$ is the photospheric flux of the templates and $k$ is the $K$-band excess in units of $F_\mathrm{phsph}(2.2\mu\mathrm{m})/c$, i.e. the excess over the photospheric flux at 2.2\,$\mu$m normalized with a constant $c$. We introduce the extinction variable $A_V^\mathrm{spec}$, since it is typically not identical to the photometric extinction $A_V$. The excess $k$ is assumed to not vary strongly with wavelength within the limits of $K$-band, i.e.\ it is
$k(\lambda)=\mathrm{const}$. Although some of the targets show evidence of a slightly stronger excess towards the red edge of the $K$-band, the resulting slightly poorer fit of the line depths at the long-wavelength end does not have a strong impact on the derived $k$, since it is determined from the best fit to the entire wavelength range of our spectra. For each template spectral type, the three variables $A_V^\mathrm{spec}$, $k$, and $c$ were varied in 120$-$160 steps each within a reasonable range of values, and we found a minimum value of
\begin{equation}
\chi^2 = \frac{1}{n-\mathrm{\it dof}-1}\sum_{i=1}^n \frac{(F_i - F^*_i)^2}{\Delta F_i^2}
\end{equation}
where $n$ is the number of pixels in the spectrum, $\mathrm{\it dof}=3$ the number of degrees of freedom, and $F_i$, $\Delta F_i$, and $F^*_i$ the flux in the $i$-th pixel of the target spectrum, its measurement error, and the modified template spectrum, respectively. The minimum of the $\chi^2$ distribution was compared for different spectral types, and the nine best-fitting solutions (spectral type along with the corresponding optimal combination of $A_V^\mathrm{spec}$, $k$, and $c$) were examined by eye. Spectral types were selected by comparing the overall shape of the continuum and the strength of several photospheric absorption lines of the modified (eq.~\ref{eq:F}) best-fit templates. Uncertainties in the spectral type were estimated from the range of spectra that could possibly fit the data. This resulted in a typical uncertainty of one or two subclasses. The uncertainty in $k$ was determined from the error ellipse in a three-parameter $\chi^2$-minimization at $\chi^2_\mathrm{min}+3.5$ \citep{wal96}.
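The grid search itself is conceptually simple. The brute-force Python sketch below is our illustration of eq.~(\ref{eq:F}); the grid contents, the wavelength array in microns, and the function name are assumptions.
\begin{verbatim}
import numpy as np

def chi2_fit(flux, err, template, wave_um, av_grid, k_grid, c_grid):
    # Brute-force search over the three free parameters of the model
    # F* = (F_phsph / c + k) * exp(-tau), tau = (0.522/lam)**1.6 * A_V.
    best_chi2, best_par = np.inf, None
    ndof = len(flux) - 3 - 1
    for av in av_grid:
        atten = np.exp(-(0.522 / wave_um)**1.6 * av)
        for k in k_grid:
            for c in c_grid:
                model = (template / c + k) * atten
                chi2 = np.sum((flux - model)**2 / err**2) / ndof
                if chi2 < best_chi2:
                    best_chi2, best_par = chi2, (av, k, c)
    return best_chi2, best_par
\end{verbatim}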
An example of the best-fit result for one of our target binaries is shown in Fig.~\ref{fig:bestfit}.
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig06.ps}
\caption{The result of the spectral template fitting for the primary and secondary components of JW\,876. The black curves show the spectra of the primary and secondary components, respectively, offset for clarity. The red curves are the corresponding best-fit models from the IRTF spectral library, modified according to eq.~(\ref{eq:F}) with extinction and veiling values from Table~\ref{tab:componentparameters}.}
\label{fig:bestfit}
\end{figure}
From the excess flux $k$, the $K$-band continuum excess $r_K$ is calculated to be
\begin{equation}
r_K = \frac{k}{F_\mathrm{phsph}(2.2\mu\mathrm{m})/c}\quad.
\end{equation}
The results from the spectral fitting are summarized in Table~\ref{tab:componentparameters}.
\addtocounter{table}{1}
Some spectra do not show any photospheric features and no estimation of the spectral type was possible. These target components are marked with ellipses in the spectral type-dependent columns of Table~\ref{tab:componentparameters}.
Despite the wide range of extinction values derived from the photometric and spectroscopic determinations, we did not force the $\chi^2$ minimization to match the photometric $A_V$ values from Sect.~\ref{sec:AVCTTS} for the following reason. In the $\chi^2$ fitting routine, the determination of the other parameters (spectral type, veiling, normalizing constant $c$) does not depend on the value chosen for $A_V$ as long as it has reasonable values. While spectral types are mainly determined by line ratios, veiling is sensitive to line depths; neither signature is strongly influenced by extinction. However, forcing $A_V$ to a particular value typically produces poor fits, hence $A_V$ was kept as a free parameter.
We only use the photometric extinctions listed in Table~\ref{tab:componentparameters} for further evaluation. Exceptions are {JW\,681}, {JW\,876}, and {JW\,959}, for which no photometric extinctions could be measured and our best estimates for $A_V$ come from spectroscopy, with an uncertainty determined from the fitting error ellipse in the same way as the uncertainty in $k$.
\subsection{Luminosity, effective temperature, and radius}
Luminosities $L_*$ of our target components were derived from bolometric magnitudes by applying bolometric corrections $BC\!_J$ \citep{har94} to the measured $J$-band magnitudes with a distance to Orion of $414\pm7$\,pc \citep{men07} and $J$-band extinctions of $A_J=0.27 A_V$ \citep{coh81}. The $J$-band was chosen to help us minimize the impact of hot circumstellar material, which mainly contributes flux at longer wavelengths ($K$ and $L$-band) i.e.\ closer to the maximum of the $T\sim1500$\,K blackbody emission from the inner dust rim \citep{mey97}. The resulting luminosities are listed in Table~\ref{tab:componentparameters}, along with effective temperatures $T_\mathrm{eff}$ derived from their spectral types and SpT--$T_\mathrm{eff}$ relations (earlier than M0: \citealt{sch82}; later or equal to M0: \citealt{luh03}). Luminosity uncertainties were propagated from the magnitude, extinction, and distance uncertainties but do not include any of the intrinsic variability of the targets since we cannot estimate the magnitude of the effect for any individual target component\footnote{The variability of a few of the target binaries was observed by \citet{car01a}, although they did not resolve the individual components.}. We estimate, however, an average impact of variability by varying $M_J$ by 0.2\,mag \citep{car01a} and rederiving the luminosity of each target. We observe a difference in the derived luminosities of up to 20\% with a sample median of 0.08\,dex. This is twice as large as the typical propagated random uncertainties.
Stellar radii $R$ were then calculated from $L_*$ and $T_\mathrm{eff}$.
\subsubsection{The HR diagram: Ages and masses}\label{sec:HRdiagram}
Effective temperatures and luminosities were used to derive ages and masses by comparison with evolutionary tracks from \citet{sie00}. Fig.~\ref{fig:HRD} shows the position on the HR-diagram of all components with measured $T_\mathrm{eff}$ and $L_*$.
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig07.ps}
\caption{HR-diagram with evolutionary tracks from \citet{sie00}. Open symbols are target components with high veiling $r_K>0.2$, filled symbols have $r_K\le0.2$. Typical uncertainties from random errors and the resulting uncertainty from 0.2\,mag intrinsic photometric variability \citep{car01a} are shown in the lower left. For the determination of the parameters (age \& mass, see Table \ref{tab:componentparameters}), more tracks and isochrones were used than shown in the plot; they were omitted for a clearer illustration.}
\label{fig:HRD}
\end{figure}
The derived masses are -- except for three targets -- below 1\,$M_\odot$, and ages are found to be in the range of $10^4$--$10^7$\,yr, with an average of 0.97\,Myr.
One object, the secondary component of [HC2000]\,73, has an estimated mass of $0.09\pm0.05$\,$M_\odot$, indicating that it is a possible substellar object.
The details of this binary are discussed in an accompanying paper \citep[\emph{in prep.}]{pet11}.
It is apparent that the targets classified as youngest are those with the highest veiling. Their high luminosities thus probably do not represent extreme youth but are rather an indication of hot circumstellar material contributing near-infrared flux even in the $J$-band. This is not properly accounted for when extracting ages from the HR diagram because the infrared excess in the $J$-band is unknown for our targets and therefore not subtracted from their brightness. A consequence is the apparent non-coevality of binary stars with at least one high-veiling component, as discussed in Sect.~\ref{sec:relativeExtinctions} and Fig.~\ref{fig:PrimaryAge-SecondaryAge}. Furthermore, apparently old ages can be caused by underestimated extinctions, making these targets appear underluminous and thus too old. However, since the evolutionary tracks for stars of a certain mass are almost vertical ($T_\mathrm{eff}\approx \mathrm{const}$) in this part of the HR diagram, the uncertainty in luminosity does not translate into an equally large uncertainty in mass.
\subsection{Accretion: $W($Br$\gamma)$, $L_\mathrm{acc}$, and $\dot{M}_\mathrm{acc}$}
The accretion activity of each target component was inferred from the Br$\gamma$ emission feature at 21665\,\AA. Our measurements are described in this section.
\subsubsection{Equivalent widths of Br$\gamma$ emission}\label{sec:WBr}
We measured the equivalent width of the Br$\gamma$ line
\begin{equation}\label{eq:WBr}
W_{\mathrm{Br}\gamma} = \int_{\mathrm{Br}\gamma}\frac{F_\lambda-F_{\mathrm{c}}}{F_{\mathrm{c}}}d\lambda
\end{equation}
in an interval of width 56\,\AA\ around the center of Br$\gamma$ (which was individually fit and recentered between 21654\,\AA\ and 21674\,\AA) for all target components. The integration interval was chosen to ensure that we include as much line flux as possible, while minimizing the influence from the continuum noise around the line at the spectral resolution of our observations. The continuum $F_\mathrm{c}$ was determined from a linear fit to the local pseudo-continuum in a region of width $130$\,\AA\ shortward and longward of the integration limits. The above definition of equivalent width (eq.~\ref{eq:WBr}) returns positive values for emission lines and is negative in case of absorption.
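A compact sketch of this measurement is given below. It is our illustration, omits the individual re-centering step, and assumes wavelength and flux as \texttt{numpy} arrays in \AA.
\begin{verbatim}
import numpy as np

def equiv_width(wave, flux, center=21665.0, width=56.0, cont=130.0):
    # Linear pseudo-continuum from windows on both sides of the line;
    # sign convention of the equation above: emission -> positive W.
    lo, hi = center - width / 2.0, center + width / 2.0
    side = ((wave > lo - cont) & (wave < lo)) | \
           ((wave > hi) & (wave < hi + cont))
    fc = np.polyval(np.polyfit(wave[side], flux[side], 1), wave)
    band = (wave >= lo) & (wave <= hi)
    return np.trapz((flux[band] - fc[band]) / fc[band], wave[band])
\end{verbatim}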
To assess the significance of Br$\gamma$ emission, we inferred the noise level at the position of Br$\gamma$ from the surrounding continuum. This allowed us to derive a measure for the probability of the presence of a gaseous accretion disk in a binary component (see Appendix~\ref{sec:app:statistics}).
The procedure of measuring equivalent widths was performed in two stages. In the first stage, the method was applied to the uncorrected spectra to serve as a measure of accretion disks (Appendix~\ref{sec:app:statistics}); here, we sought to assess the significance of the measured emission features and thus did not wish to introduce additional uncertainties by further modifying the spectra. However, since veiling reduces the strength of spectral features, the Br$\gamma$ equivalent width had to be corrected for $r_K$ to represent the actual emission from the accretion process. This required a second step of either calculating $W_\lambda = W_\lambda^\mathrm{measured}\cdot(1+r_K)$, as inferred from eqs.~(\ref{eq:F}) and (\ref{eq:WBr}), or applying eq.~(\ref{eq:F}) with $r_K$ values from Table~\ref{tab:componentparameters} to the reduced spectra (for which extinction does not change the equivalent width and can be assumed to be zero for this calculation). Both options returned very similar results. We chose to modify the spectra and remeasure $W_{\mathrm{Br}\gamma}$, to also obtain good estimates for the continuum flux noise, which does not necessarily transform in the same way. The results for this measurement of the actual equivalent widths of the Br$\gamma$ emission are listed in Table~\ref{tab:componentparameters}. Where $r_K$ was unknown, no correction was applied and the values in Table~\ref{tab:componentparameters} correspond to lower limits of $W_{\mathrm{Br}\gamma}$.
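As an illustrative example with hypothetical numbers, a line measured with $W_{\mathrm{Br}\gamma}^\mathrm{measured}=2.0$\,\AA\ in a spectrum veiled by $r_K=0.5$ corresponds to an intrinsic equivalent width of $2.0\times(1+0.5)=3.0$\,\AA.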
\subsubsection{Br$\gamma$ equivalent widths versus NIR excess}\label{sec:brgammavsnirexcess}
We compared the color excesses $E_{H-K_\mathrm{s}}$, which measure the existence of hot circumstellar material around each binary component \citep{cie05}, with our veiling-corrected Br$\gamma$ emission values. Fig.~\ref{fig:excessvsbrg} shows that, while all but one target with significant emission in Br$\gamma$ exhibit a NIR excess, the opposite is not true and many targets with NIR excess are found that do not show significant signs of hydrogen emission.
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig08.ps}
\caption{Veiling-corrected Br$\gamma$ equivalent width as a function of NIR excess in ($H$$-$$K_\mathrm{s}$) color. Circles represent primary, and diamonds secondary binary components. The detection limits of target components with insignificant equivalent widths are marked with arrows.}
\label{fig:excessvsbrg}
\end{figure}
This imbalance is well-known and discussed further in Sect.~\ref{sec:accretiondiskfraction}.
The figure shows some interesting features. There seems to be a slight systematic offset in the calculated excesses relative to the origin, as we see a number of targets clustering around $E_{H-K_\mathrm{s}}\approx-0.03$. Assuming that these are targets with no significant color excess, we can infer a small mismatch between the color scales of the theoretical and measured values used to derive $E_{H-K_\mathrm{s}}$. This is partly a consequence of using dwarfs for the theoretical colors $(H$$-$$K_\mathrm{s})_\mathrm{theor}$ instead of pre-main sequence stars. When using the colors of pre-main sequence stars derived by \citet{luh10}, we measured an average shift of 0.015\,mag towards redder $H$$-$$K_\mathrm{s}$ colors that moved the accumulation of low color-excess components towards the origin. Since this correction is small and the \citeauthor{luh10} spectral sequence does not cover all spectral types of our sample, we use color excesses derived from the dwarf colors. In addition, the anticipated systematic offset is negligible compared to the possible photometric variability of $\sim$0.2 magnitudes \citep{car01a}. Although variability does not change our (qualitative) conclusions, it might explain the positions of the targets with negative excess, both with and without significant Br$\gamma$ emission.
Targets in the bottom right quadrant display Br$\gamma$ in absorption; since we also detect a NIR emission excess, it is possible that Br$\gamma$ emission generated by accretion cancels part of the absorption feature. To estimate the true emission strength, one could measure and subtract the strength of Br$\gamma$ absorption in photospheric standards of the same spectral type. This concerns, however, only the components of JW\,260, a binary that was excluded from the discussion and conclusions owing to its earlier spectral type. We therefore omit a more thorough evaluation of the accretion state of this binary.
\subsubsection{Br$\gamma$ line luminosity and mass accretion rates}\label{sec:massaccretion}
For target components with full knowledge of extinction and veiling, we calculated Br$\gamma$ line luminosities and the mass accretion rate $\dot{M}_\mathrm{acc}$. The accretion luminosity was derived through
\begin{equation}\label{eq:Lacc}
\log(L_\mathrm{acc}) = (1.26\pm0.19)\log(L_{\mathrm{Br}\gamma}/L_\odot)+(4.43\pm0.79)
\end{equation}
\citep{muz98} where the Br$\gamma$ line luminosity is defined as
\begin{equation}
L_{\mathrm{Br}\gamma} = 4\pi r^2 \int_{\mathrm{Br}\gamma}(F_\lambda-F_{\mathrm{c}})\,d\lambda
\end{equation}
in the same integration limits as used for the equivalent width and adopting a distance to the ONC of $r=414\pm7$\,pc \citep{men07}. To measure the luminosity of the Br$\gamma$ line, the spectra must be flux-calibrated. This was achieved by comparing the $K_\mathrm{s}$-band photometry from Table~\ref{tab:componentmagnitudes} with synthetic photometry obtained by convolving our measured spectra with the 2MASS $K_\mathrm{s}$ filter curve and integrating with $zp(F_\lambda) = 4.283\times 10^{-7}$ for the zeropoint\footnote{http://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec6\_4a.html}. Since the $K_\mathrm{s}$ filter curve extends to slightly bluer wavelengths than our NACO spectra, we had to extrapolate the spectrum. To estimate the impact of our linear extrapolation on the integration result, we tried several extrapolations with different slopes (including unreasonably steep ones). The resulting variation in $L_{\mathrm{Br}\gamma}$ is small, mainly because the extrapolated part coincides with the steep edge of the filter curve, and was propagated as an additional uncertainty. After correcting the calibrated spectra for veiling and extinction, the data were multiplied with the filter curve, integrated over the $K_\mathrm{s}$-band, and converted to $L_\mathrm{acc}$ according to eq.~(\ref{eq:Lacc}). Uncertainties were estimated from error propagation of all involved parameters, including the extrapolation error and the empirical uncertainties in eq.~(\ref{eq:Lacc}). A source of additional uncertainty that we could not quantify in greater detail from our observations is variability. Since the photometry was not taken simultaneously with our spectral observations, the calibration of our spectra might suffer from additional uncertainty if these young targets were observed in different states of activity. Since we had no estimate of the size of this effect for an individual target, we did not introduce any correction, but wish to caution that individual accretion luminosities might be offset from their true values. However, assuming that the effect of variability on the accretion luminosities is random, the sample statistics should not be biased.
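For illustration, the calibration step can be sketched as follows. This is our simplification: the spectrum is truncated at its blue edge instead of linearly extrapolated, and the filter curve is assumed to be given as wavelength/throughput arrays, with the flux in the same $F_\lambda$ units as the zeropoint.
\begin{verbatim}
import numpy as np

def calibrate_to_ks(wave, flux, filt_wave, filt_thr, ks_mag,
                    zp=4.283e-7):
    # Synthetic Ks photometry: filter-weighted mean F_lambda compared
    # to the zeropoint, then rescale to the measured magnitude.
    t = np.interp(wave, filt_wave, filt_thr, left=0.0, right=0.0)
    f_mean = np.trapz(flux * t, wave) / np.trapz(t, wave)
    m_synth = -2.5 * np.log10(f_mean / zp)
    return flux * 10.0 ** (-0.4 * (ks_mag - m_synth))
\end{verbatim}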
The mass accretion rate was calculated according to \citet{gul98} as
\begin{equation}
\dot{M}_\mathrm{acc} = \frac{L_\mathrm{acc}\,R_*}{GM_*}\left(\frac{R_\mathrm{in}}{R_\mathrm{in}-R_*}\right)\quad,
\end{equation}
for a stellar radius $R_*$ and mass $M_*$ from Table~\ref{tab:componentparameters}, and the gravitational constant $G$, assuming that material falls onto the star from the inner rim of the disk at $R_\mathrm{in}\approx5R_*$. The accretion luminosities and mass accretion rates are listed in Table~\ref{tab:componentparameters}.
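Combining eq.~(\ref{eq:Lacc}) with the accretion-rate formula above, the numerical conversion can be sketched as follows (a hedged illustration in cgs units; the constants are rounded and the function names are ours).
\begin{verbatim}
import numpy as np

G_CGS = 6.674e-8                                  # cm^3 g^-1 s^-2
LSUN, RSUN, MSUN = 3.839e33, 6.957e10, 1.989e33   # erg/s, cm, g
YEAR = 3.156e7                                    # s

def l_acc(l_brg_lsun):
    # log L_acc = 1.26 log(L_Brg / Lsun) + 4.43  ->  L_acc in Lsun
    return 10.0 ** (1.26 * np.log10(l_brg_lsun) + 4.43)

def mdot_acc(l_acc_lsun, r_rsun, m_msun, rin_over_r=5.0):
    # Mdot = L_acc R* / (G M*) * R_in / (R_in - R*); returns Msun/yr.
    mdot = (l_acc_lsun * LSUN) * (r_rsun * RSUN) / (G_CGS * m_msun * MSUN)
    mdot *= rin_over_r / (rin_over_r - 1.0)
    return mdot / MSUN * YEAR
\end{verbatim}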
\subsubsection{Is the Br$\gamma$ emission generated by magnetospheric accretion?}
The main source of Br$\gamma$ emission in \mbox{T\,Tauri} stars is often assumed to be magnetospheric accretion \citep[e.g.][]{bec10}. However, mechanisms such as stellar winds, disk winds, outflows, or photoevaporation of the disk by a nearby high-mass star can also contribute to the Br$\gamma$ emission observed in low-mass stars \citep{har90,har95,eis10}. While the emission region of magnetospheric accretion should be located close to the stellar surface, the sources of most other mechanisms are expected to lie further away, at several stellar radii or even in the outer parts of the disk. Detecting a spatial displacement of the Br$\gamma$ emitting region from the position of the star would refute the possibility of magnetospheric accretion as the origin of the emission and render other explanations more likely.
To test this, we used the sky-subtracted raw frames of the spectral observations, showing the spectral traces of both components of each target binary. The orientation of the images is such that the dispersion direction is roughly aligned along the columns of the detector, while the spatial information is aligned along the rows. In each row, we fit two Gaussians to the two profiles of the component spectra, thus measuring the spatial location and width of the spectral profile of the targets in each wavelength bin of size $\sim$5\,\AA. Depending on the S/N of the individual observations, the locations of the trace center can be determined to an accuracy of 0.008--0.10 pixels, which, at the spatial resolution of the observations of $\sim$0.027\,arcsec/pix, corresponds to $\sim$0.1--1.1\,AU at the distance of the ONC.
We found no significant offset at $\lambda$(Br$\gamma$) from the rest of the trace for any of the targets in which we observed Br$\gamma$ in emission. Neither did we detect extended emission in excess of the width of the spectral trace at similar wavelengths. This indicates that the emission indeed originates from a small region close to the stellar surface and is likely to be the product of magnetospheric accretion. However, extended or displaced emission may well be generated mainly perpendicular to the slit. In this case, a displacement of the emission peak could be observed in the dispersion direction. Since we did not measure wavelength shifts of spectral features, we cannot exclude this possibility for any individual target component.
While the Br$\gamma$ emission seems to come from a region close to the star for all targets, we observed one target ({TCC\,55}) in which emission at $\lambda($H$_2) = 21218$\,\AA\ comes from an extended region around the star, rather than the star itself. We observed at least three diffuse H$_2$ emitting regions along the slit, each several pixels wide, one of them apparently surrounding the binary. We used $K$-band images from the MAD instrument \citep{pet08} to investigate the surrounding area for possible emission sources close to this target, which is located only 0\farcm49 from $\theta^1$\,Ori\,C. We identified bow shocks coinciding with the location of the H$_2$ features in the spectra. The diffuse emission around the binary itself might represent the remaining material of a proto-stellar envelope, which would be indicative of a young evolutionary state (i.e.\ Class\,I) of the binary.
\section{Discussion}\label{sec:discussion}
\subsection{Stellar parameters and sample biases}
To detect possible biases in the numbers derived from the components of our binary sample, we discuss the degree to which the populations of primaries and secondaries differ and whether they are typical of the ONC population.
\subsubsection{Spectral types}
It is known that the strengths of accretion parameters correlate with the mass of a star, which (considering the limited spread in age) can be represented by the typically more tightly constrained spectral type\footnote{That accretion signatures do correlate with spectral type can be seen from e.g. Table 7 in \citet{whi01}.}. In Fig.~\ref{fig:spthistogram}, we see that the spectral type distribution of the primaries peaks at slightly earlier spectral types of M0--M1 than the secondaries (M2--M3).
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig09.ps}
\caption{Number of target components per spectral type. Gray areas: primaries. Black outline: secondaries. The distributions of primaries and secondaries are only slightly different in spectral types.}
\label{fig:spthistogram}
\end{figure}
This difference is significant at the 98\% level, according to a Kolmogorov-Smirnov (K-S) test. However, both distributions are indistinguishable from the spectral types of the entire ONC population \citep{hil97}, with K-S probabilities of 65\% and 57\% that primaries and secondaries, respectively, are drawn from a different distribution. This means that both primaries and secondaries are `typical' members of the ONC, whereas the primary and secondary spectral type distributions deviate slightly from each other, and differences in the derived parameters (such as accretion rates and Br$\gamma$-emission strengths) can partly be attributed to the -- on average -- earlier spectral types of the primaries.
\subsubsection{Relative extinctions and ages}\label{sec:relativeExtinctions}
Interstellar extinction through embedding in the Orion molecular cloud expresses itself as a spatially variable source of extinction that is, nevertheless, very similar for all components of a stellar multiple. An additional source of extinction that can be very different even for components of the same binary can be caused by circumstellar material such as a (nearly edge-on) circumstellar disk, obscuration by the other component's disk, or a remaining dust envelope.
In Fig.~\ref{fig:AVprimary-AVsecondary}, we can identify and verify the impact of the different sources of extinction.
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig10.ps}
\caption{Extinction of the primary versus secondary components. Circles show targets with $A_\mathrm{V}$ determined from dereddening to the CTTs locus. Squares show the spectroscopically determined extinctions of {JW\,681}, {JW\,876}, and {JW\,959}, for which photometric extinctions could not be measured. Filled symbols indicate targets with both components having a low veiling $r_K<0.2$. The dashed line corresponds to equal extinctions.}
\label{fig:AVprimary-AVsecondary}
\end{figure}
Binaries composed of two low-veiling components have similar extinctions, consistent with a common level of embedding in the cloud, whereas binaries with high-veiling components do not display such a correlation. This might be due to dust material from a disk whose presence is revealed by the hot-continuum emission from magnetospheric accretion, i.e.\ veiling \citep[e.g.][]{bou07}. In particular, the magnitude of extinction is a strong function of the angle under which the system is observed, with close to edge-on disks causing a strong reddening of the NIR colors \citep{rob06}. However, our observations do not enable us to determine the inclinations of the circumstellar disks, hence conclusions about disk orientation and alignment cannot be drawn.
The stellar components within a binary are close to being coeval. Fig.~\ref{fig:PrimaryAge-SecondaryAge} shows a well-defined correlation of primary and secondary ages for those binaries with little or no veiling.
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig11.ps}
\caption{Relative ages for the primary and secondary components of all target binaries where both components could be placed in the HR diagram. Filled squares show targets where both components exhibit low veiling $r_K<0.2$. Targets with open squares have at least one component with strong veiling $r_K\ge0.2$, thus it is likely that the age estimation from the HR diagram is biased, since extra luminosity from accretion makes the targets appear brighter and thus younger.}
\label{fig:PrimaryAge-SecondaryAge}
\end{figure}
The five binaries with at least one component of $r_K\ge0.2$ are clearly located off the sequence of coeval binaries, which is probably due to a non-negligible amount of veiling in the $J$-band for targets with high $r_K$ values. In our present study, we did not attempt to derive accurate absolute ages for our sample, but testing for equal ages within binaries serves three purposes. \emph{i)} A sanity check: assuming that binary components do form reasonably close in time \citep{kra09}, we confirmed that our derived $T_\mathrm{eff}$ and $L_*$ were accurately determined, because they result in reasonable values when placed in an HR-diagram. We inferred that the derived parameters (e.g.\ $L_\mathrm{acc}$, $\dot{M}_\mathrm{acc}$) are of sufficient quality to support the conclusions of this paper. \emph{ii)} The derived parameters have no age dependence: since there is no systematic difference between the primary and secondary ages of the binaries, we can exclude any age dependence of the derived relative parameters between the populations of primaries and secondaries. \emph{iii)} Coevality is consistent with physical binarity.
While we derived ages $\tau=\log(\mathrm{age})$ that span a range of $5.5\lesssim\tau\lesssim6.5$ even for the well-behaved class of low-$r_K$ binaries, this can probably not be attributed to a real age spread. \citet{jef11} observed no difference in age between stars with and without disks in the ONC, indicating that the age spread must be shorter than the disk lifetime, i.e.\ significantly more confined than traditionally assumed. We conclude that not only observational uncertainties, but also an intrinsic scatter in the luminosities at constant age must be present. Despite the importance of this observation for the absolute ages of members of star forming regions, we assumed that our \emph{relative} age measurements apply, since the boundary conditions are very similar for both components of the same binary.
\subsection{Disk evolution around the components of visual ONC binaries}
The absolute and differential abundances of disk signatures in the components of our target binaries were found to be a function of binary parameters, as we discuss in the following sections.
\subsubsection{The disk fraction of binary components}\label{sec:accretiondiskfraction}
We derived the fraction of ONC binary components harboring an accretion disk. This number was compared to the single-star disk frequency in the ONC and to binary samples of other star forming regions to expose the effect of binarity and cluster environment on the evolution of circumstellar disks.
The probability density function of the disk frequency was derived in a Bayesian approach, as well as the probability of an individual binary component harboring an accretion disk (see Appendix~\ref{sec:app:statistics}; individual disk probabilities in Table~\ref{tab:componentparameters}). As the strength of disk signatures depends on spectral type \citep[e.g.][]{whi01}, we excluded the binary {JW\,260} from further evaluation of disk frequencies, since its spectral type is considerably earlier than that of the rest of the sample. We measured an accretion disk fraction of $F=35_{-8}^{+9}$\% in our sample of 42 spectroscopically observed binary components (Appendix~\ref{sec:app:statistics}).
Analyses of the Br$\gamma$ line are known to return a smaller fraction of accretors than the H$\alpha$ emission feature, which is typically used to decide whether an individual star can be classified as a classical \mbox{T\,Tauri} star. \citet{fol01} observed a number of classical \mbox{T\,Tauri} stars in Taurus-Auriga, measuring NIR emission-line strengths including Brackett-$\gamma$ signatures. Twenty-four of their targets are in the range of K3--M6 spectral types, comparable to our sample, and three of these show no emission in Br$\gamma$. Hence, a fraction of $f=0.125$ of all classical \mbox{T\,Tauri} targets would be misclassified as weak-line \mbox{T\,Tauri} stars from their Br$\gamma$ emission. We thus expect $f/(1-f)\times (F\cdot N)\approx2$ (with $N=42$ target components) classical \mbox{T\,Tauri} stars in our sample to be classified as non-accreting. The corrected fraction of classical \mbox{T\,Tauri} stars among binary star components in the ONC is thus $F_\mathrm{CTT}=40_{-9}^{+10}$\%. Since many studies refer to the fraction of H$\alpha$-detected classical \mbox{T\,Tauri} stars rather than the number of accretors from Br$\gamma$ emission, we use this corrected fraction when comparing with studies of accretion disk frequencies.
\citet{hil98} and \citet{frs08} found a frequency of accretion disks bearing single stars in the ONC of 50\% and 55\%, respectively. Both are marginally (1$\sigma$ and 1.5$\sigma$) larger than our measured fraction of $40_{-9}^{+10}$\%. While the \citeauthor{hil98} sample was derived from the $I_\mathrm{C}-K$ color instead of $H\alpha$ measurements, the \citeauthor{frs08} sample is biased towards classical \mbox{T\,Tauri} stars making their estimate an upper limit of disk frequency. Both of these findings prevent us from drawing firm conclusions about the difference between the disk frequencies of single stars and binaries. This evidence of a lower disk frequency around 100--400\,AU binary components will hence need future confirmation from observations using comparable diagnostics (preferably Br$\gamma$) in an unbiased comparison sample.
Similarly, we found evidence of an underrepresentation of dust disks in Orion binaries. Sixteen out of 27 target components (excluding JW\,260) with measured $H$$-$$K_\mathrm{s}$ colors show signs of a dust excess, that is, a fraction of 59$\pm$15\%, compared to the values of 55\%--90\% found for single stars in the ONC by \citet{hil98} and 80$\pm$8\% from \citet{lad03}. Since these numbers were derived using indicators other than the $H$$-$$K_\mathrm{s}$ excess, which is known to typically return a comparably small fraction of dust disks compared to e.g.\ $K$$-$$L$ \citep{hil05}, this evidence cannot be quantified in greater detail.
Our numbers suggest that there is a higher frequency of target components with inner dust disks (59$\pm$15\%) than accretion disks ($40_{-9}^{+10}$\%). This discrepancy agrees with observations of single stars in various star-forming regions, where \citet{fed10} concluded that accretion disks decay more rapidly than dust disks in a particular cluster.
The presence of dust and accretion disks around the binary components of our sample is thus consistent with \emph{i)} observations of single stars in a variety of young clusters that dust disks are more abundant than accretion disks in the same cluster and \emph{ii)} the expectation that disk lifetimes are shorter for disks in binary systems than for single stars with comparable properties. The latter is also theoretically motivated by the missing outer disk through dynamical truncation \citep{art94} and the resulting reduced feeding of the inner disk from the outer disk material \citep{mon07}. As expected, disk frequency is a function of binary separation (also see Sect.~\ref{sec:differentialdiskevolution}). For example, \citet{cie09} found the component disk frequency in tight $<$40\,AU binaries of Taurus-Auriga to be significantly lower, at less than one half of the single-star disk frequency. Both their wider binaries (40--400\,AU) and our sample (100--400\,AU), however, have disk frequencies lower but comparable to single stars, indicating that there is a separation dependent mechanism.
\subsubsection{Synchronized disk evolution in ONC binaries}\label{sec:synchronizedevolution}
We detected a significant overabundance of close, $\lesssim$200\,AU pairs of equal emission state, i.e.\ with both components accreting (CC)\footnote{To ease the reading of the binary categories, we use the common abbreviations 'C' for accreting components (referring to \emph{c\/}lassical \mbox{T\,Tauri} stars), and 'W' for non-accreting components (\emph{w\/}eak-line \mbox{T\,Tauri} stars), where in the designation of a binary pair (e.g.\ 'CW') the first position describes the primary (here 'C') and the second the secondary (here 'W') component state. We note, however, that we only refer to the presence of accretion as measured through Br$\gamma$, which is correlated with, but not equal to, the distinction between weak-line and classical \mbox{T\,Tauri} stars (see also the discussion in Sect.~\ref{sec:accretiondiskfraction}).} or both components showing no accretion (WW) signatures. This is apparent in Fig.~\ref{fig:sephistogram}, and a K-S test indicates a 99.5\% probability that the separation distribution of equal pairs (CC and WW) differs from that of mixed pairs (CW and WC).
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig13.ps}
\caption{Histograms of binary separation as a function of component accretion-type (the y-axis tickmarks indicate one binary each). Most of the close binaries are of type WW or CC, i.e.\ synchronized in their disk evolutionary state. }
\label{fig:sephistogram}
\end{figure}
To investigate a possible correlation between the synchrony of disks in ONC binaries and their separations, we split our sample into binaries with projected separations larger and smaller than 200\,AU. From the content of accreting and non-accreting components in the two separation bins, we predicted the average number of CC, WW, and mixed systems by random pairing, and compared it to our measured distribution. If there were no correlation between the evolution of both components of the same binary, the randomly paired sample of the same number of components (W and C) should be consistent with our observed sample. With 18 non-accreting components and 6 accretors in the sample of 12 binaries with separations $<$200\,AU, we expect an average of $\sim$1.2$\times$CC, $\sim$6.9$\times$WW, and $\sim$4.1 mixed systems, as predicted by a Monte Carlo simulation. However, we found 3$\pm$1$\times$CC, 9$\pm$0$\times$WW, and 0$^{+2}_{-0}$ mixed pairs\footnote{Uncertainties are derived from the possibility of a binary changing classification within its 1$\sigma$ limit of W$_{\mathrm{Br}\gamma}$, i.e.\ a CC might turn into a CW if its secondary is classified as C but with an equivalent width that is less than 1$\sigma$ away from W$_\mathrm{min}$.}. This is clearly incompatible with the prediction of random pairing. On the contrary, wide-separation binaries are well described by random pairing, with predicted values of 2.2$\times$CC, 4.1$\times$WW, and 4.8$\times$mixed, and measured values of 1$^{+1}_{-0}\times$CC, 3$^{+2}_{-1}\times$WW, and 7$^{+1}_{-2}\times$mixed.
\citet{whi01} observed a similar underdensity of mixed pairs among binaries with separations $<$210\,AU in a sample of 46 binaries in the Taurus-Auriga star-forming association. They attributed this synchronized evolution to the existence of a circumbinary reservoir that can replenish the circumprimary and circumsecondary disks roughly equally, as previously suggested by \citet{pra97}.
Can circumbinary disks of sufficient size survive in the ONC and thus be the cause for the synchronization of disk evolution? The typical size of a disk in the Trapezium region is below 200\,AU \citep{vic05} and only 3 of 149 analyzed systems -- the authors claim completeness for large disks $>$150\,AU at moderate extinctions -- were found to have disk sizes $>$400\,AU. However, circumbinary disks have inner radii of at least twice the binary separation \citep{art94}. This requires circumbinary disk sizes of more than 400\,AU in diameter for a binary with 100\,AU separation and even $\gtrsim$800\,AU circumbinary disks for 200\,AU binaries. Assuming that dynamical interactions \citep{olc06} and photoevaporation \citep{man09a} are the reason for disk truncation in the ONC, the observed size limits of single star disks should also apply to circumbinary matter. This implies that circumbinary disks of binary systems $>$100\,AU should not be largely abundant since they are typically truncated to radii below the dynamically induced inner hole radius. These considerations render it unlikely that stable circumbinary disks are the reason for disk-synchronization in the inner regions of the Orion Nebula cluster.
In agreement with this, none of the systems observed to have circumbinary material (e.g.\ GG\,Tau, \citealt{dut94}; V892\,Tau, \citealt{mon08}; Orion proplyd 124-132, \citealt{rob08}) have binary separations of more than 100\,AU. This points to the possible universality of the result. \citet{vic05} observed no trend in the disk sizes with distance to $\theta^1$\,Ori\,C out to $\sim$4\arcmin, which means that Orion disks are small, independent of their position in the inner $\sim$50\,arcmin$^2$ of the cluster. Furthermore, we would expect to see a larger ratio of CC binaries to WW binaries at large distances from $\theta^1$\,Ori\,C if circumbinary disks were present only in the outer parts of the cluster and transferred a significant amount of material to the individual stellar disks. The distribution of CC and WW binaries in our ONC sample does not, however, change with distance to $\theta^1$\,Ori\,C (Fig.~\ref{fig:disttocenter}).
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig15.ps}
\caption{Histograms of distance to $\theta^1$\,Ori\,C as a function of accretion type. The histogram shows no indication that binaries of any type are more likely at any distance. The vertical dashed line shows the radius \citep[460\arcsec;][]{rei07} inside which the ratio of wide binaries (0\farcs5--1\farcs5) to close binaries (0\farcs15--0\farcs5) drops considerably, probably owing to dynamical interaction.}
\label{fig:disttocenter}
\end{figure}
Finally, although in Taurus \citep{and07} more disks with large radii have been observed than in the ONC, probably owing to its weaker dynamical interactions and irradiation, the similarity of the observed parameters (such as the 200\,AU limit for synchronization) seems to suggest that the disk feeding mechanism in Taurus binaries is probably similar to that in Orion, i.e.\ not due to replenishment from circumbinary disks.
What other mechanism could synchronize the circumprimary and circumsecondary disks in $\lesssim$200\,AU systems? Since mass accretion rates are a function of stellar mass \citep{whi01} and disk truncation radii are similar in equal-mass systems \citep{art94}, synchronization of disk evolution might arise if close binary systems are preferentially equal-mass systems. For 13 binaries of the sample, we were able to derive masses of both binary components (see Fig.~\ref{fig:sep-q}).
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig14.ps}
\caption{Mass ratios of binaries as a function of projected binary separation, indicating whether both binary components are accreting (filled circles, CC), neither component is accreting (open circles, WW), or only one component shows signs of accretion (half-open circles, CW and WC). This plot only contains 13 targets, since masses could not be derived for all binary components in the sample. For some target binaries with mass ratios close to 1, the primary (as estimated from the NIR colors and listed in Table~\ref{tab:componentmagnitudes}) turned out to be the less massive component. The mass ratios of those targets were calculated as the inverse, leading to $q$-values $\lesssim$1.}
\label{fig:sep-q}
\end{figure}
For these, we observe that all (four) systems with mass ratios of q$>$0.8 and separations $<$200\,AU are of WW type, agreeing with the hypothesis of high mass ratios being the cause of synchronized disk evolution. The significance of this result, however, is low. There is a 21\% chance that the mass ratios of close ($<$200\,AU) systems are drawn from the same parent distribution as mass ratios of wider pairs (K-S test). A larger sample of spatially resolved spectroscopic observations of pre-main sequence binaries is needed to decide whether mass ratios are the main driver of the synchronization of binaries closer than 200\,AU.
\subsubsection{Differential disk evolution in binaries}\label{sec:differentialdiskevolution}
Mixed pairs with accreting (CW) and non-accreting primaries (WC) -- within the uncertainties -- are equally abundant: CW pairs appear 4$\pm$2 times, while WC are measured 3$\pm$1 times. Although not statistically significant, this is evidence against a strong preference for primaries to have longer lived disks than the less massive secondaries. However, longer lived disks around primaries are suggested by theory, since disks around secondaries are truncated to smaller radii \citep{art94} and dissipation times are predicted to scale like $R^{2-a}$, with $R$ the disk radius and $a$$\approx$1--1.5 \citep[and references therein]{mon07}. \citet{mon07} found that their measured overabundance of 14$\times$CW versus 6$\times$WC is consistent with this effect taking place, however, with other factors (i.e.\ initial disk conditions) having a more dominant impact on the lifetimes of circumprimary and circumsecondary disks than the differential scaling with $R$, which is only strong for binaries with low mass ratios $q$$\le$0.5. Our data agree with this proposed \emph{weak} correlation of the binary mass ratios with the abundance of CW binaries in Orion, though this result is limited in significance by the small number of mixed systems in our sample.
It is noteworthy that the existence of mixed pairs, together with their property of having wider separations (Fig.~\ref{fig:sephistogram}), can introduce difficulties in the interpretation of binary studies that do not resolve their targets into separate components. \citet{cie09} used NIR photometry of unresolved binaries with known separations from several star-forming regions (Taurus, $\rho$-Oph, Cha\,I, and Corona Australis) to infer a smaller separation in binaries with no accreting components than in accreting binaries. Besides the proposed shorter disk lifetimes around close binary components, there is an alternative interpretation of their data that they did not discuss. Since they did not resolve binaries into separate components, they were unable to distinguish CC, CW, and WC-type binaries, but merged them all into the category of having at least one disk. As we showed earlier, however, the separation distribution of CW and WC binaries differs significantly from that of equal-accretion binaries including CC (Fig.~\ref{fig:sephistogram}). When joining the three categories with at least one accreting component, the combined separations display a distribution with on average larger separations than the WW distribution. Only from our resolved population can we interpret this as a lack of mixed (CW+WC) pairs and not a lack of accreting (CW+WC+\emph{CC}) components in close binaries.
\subsubsection{Accretion luminosities and mass accretion rates}\label{sec:massaccretionrates}
Figs.~\ref{fig:accretionluminosityHistogram} and \ref{fig:massaccretion} show a luminosity histogram and component mass-accretion rates as a function of stellar mass, respectively.
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig16.ps}
\caption{Histogram of the accretion luminosities of the primaries (gray shaded area) and secondaries (hatched) calculated from the Br$\gamma$ emission. For comparison, the accretion luminosity distributions of single stars of Orion from \citet[dashed outline]{rob04} and \citet[dotted outline, scaled by a factor of 1/20 for clear comparability]{da_10} are overplotted, both limited to the same range of stellar masses as in our binary survey.
The distributions are slightly offset relative to each other to make them more visible.}
\label{fig:accretionluminosityHistogram}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,width=0.48\textwidth]{fig17.ps}
\caption{Mass accretion versus stellar mass for all significant emitters of the sample (filled symbols) and upper limits to all other targets with measured $\dot{M}_\mathrm{acc}$ and $M_*$ (open symbols). Primaries are marked with circles, secondaries with diamonds. Asterisks show the mass accretion rates of single stars in Orion \citep{rob04}, whereas plus signs and upper limits are binary components in Taurus \citep{whi01}.
}
\label{fig:massaccretion}
\end{figure}
In binaries with two accreting components, it is usually the more massive component that has the higher accretion luminosity.
Accordingly, we see a tendency for the subsample of primary components to have slightly higher relative accretion luminosities $L_\mathrm{acc}/L_*$ than the secondaries of our Orion binaries.
The derived accretion luminosities agree very well with single stars in the ONC \citep{rob04,da_10} for $\log(L_\mathrm{acc}/L_\odot)\gtrsim-1.5$, which is the average sensitivity limit of the accretion luminosities derived from our Br$\gamma$ observations.
Similarly, the mass accretion rates of the ONC \mbox{T\,Tauri} binary components as a function of stellar mass are (except for three outliers) comparable to those of single stars in the ONC. Fig.~\ref{fig:massaccretion} shows that Orion single stars occupy almost the same area in the $\log(\dot{M}_\mathrm{acc})$-$\log(M_*)$ diagram, with a tendency towards slightly lower accretion rates. This tendency is most likely an observational bias, since \citet{rob04} use $U$-band observations with the Hubble Space Telescope that are more sensitive to lower mass accretion rates than our Br$\gamma$ data.
We also overplot stellar components of Taurus binaries \citep{whi01}, which have similar mass accretion rates.
Comparable mass accretion rates of singles and binaries in the ONC and Taurus\footnote{Compare also singles in Taurus from \citet{muz98}, which are as well at similar accretion rates.} are not self-evident, considering the different disk populations of the ONC and Taurus. Since disk masses in binaries are lower than in single stars of the same star-forming region because of disk truncation \citep{art94} and disks in Orion are less massive than disks in Taurus \citep{man09a}, uniform accretion rates indicate either different disk lifetimes or variable efficiency for the replenishment of an existing disk. The hypothesis of shorter disk lifetimes would corroborate the evidence from Sect.~\ref{sec:synchronizedevolution} that we observe fewer disks around binary components than were measured for single stars.
Three binary components (TCC\,52\,A\&B, JW\,391\,A) have comparatively high mass accretion rates relative to the rest of the sample in Fig.~\ref{fig:massaccretion}. The responsible mechanism is obscure, since the two respective binaries have no remarkable properties in common relative to other targets. Their masses, separations, and distances to $\theta^1$\,Ori\,C are unremarkable.
While JW\,391\,A shows no peculiarity in luminosity, the two components of TCC\,52 are found to be the most luminous targets with respect to their mass. The derived very young ages and large radii might indicate an earlier evolutionary stage, i.e.\ class\,I, which would agree with TCC\,52\,A+B having higher mass accretion rates than older class\,II components \citep{rob06}.
Regardless of their properties, it remains possible that all three components were observed in a temporary state of high activity.
To assess whether disks in binaries need significant replenishment to survive in sufficient quantities, we estimated disk lifetimes $\tau_\mathrm{disk}=M_\mathrm{disk}/\dot{M}_\mathrm{acc}$ from the ratio of disk mass to mass accretion rate and compared them to the age of the star-forming region. To estimate $M_\mathrm{disk}$, we compiled upper limits to the total mass of dusty material around both binary components from millimeter observations in the literature (JW\,519, JW\,681, TCC\,52, TCC\,97, \citealt{man10}; TCC\,52, TCC\,55, \citealt{eis08}).
Only for TCC\,52 do we have available both mass accretion rates and the total mass of the surrounding (disk) material.
The two measurements of the total disk mass of TCC\,52 disagree at the 1$\sigma$ level: 0.0288$\pm$0.0029\,$M_\odot$ is derived by \citet{man10} and 0.042$\pm$0.009\,$M_\odot$ by \citet{eis08}. Nevertheless, we now illustrate that an ``order-of-magnitude'' estimate is possible when we assume a total disk mass of $\sim$0.03\,$M_\odot$.
Disk radii can be estimated from their dynamical truncation radii to be $\sim$0.38 and $\sim$0.3 times the binary separation for primary and secondary, respectively \citep{arm99}, considering our derived binary mass ratio of $q\approx0.65$ and $M_\mathrm{disk}\!\propto\!R_\mathrm{disk}$ \citep[e.g.][]{man10}.
This results in individual disk masses of $M_\mathrm{disk}^\mathrm{prim}\approx0.017\,M_\odot$ and $M_\mathrm{disk}^\mathrm{sec}\approx0.013\,M_\odot$. The derived disk lifetimes for these two target components with the highest mass accretion rates of our sample are $\tau_\mathrm{disk}^\mathrm{prim}\approx1.4\!\times\!10^4$\,yr and $\tau_\mathrm{disk}^\mathrm{sec}\approx8.7\!\times\!10^4$\,yr. Compared to the median age of the ONC binaries of 1\,Myr, derived from Table~\ref{tab:componentparameters}, this is very short. These high accretion rates could not have been sustained over the entire early evolution even if the disks were initially ten times more massive than we now observe. Unless both components were observed at a younger age than assumed, or in a short-lived above-average state of accretion, binary component disks would need substantial replenishment to sustain the strong accretion activity we detect.
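This ``order-of-magnitude'' estimate is simple enough to reproduce directly; the following minimal sketch does so in Python, where the two accretion rates are assumed illustrative values chosen to match the lifetimes quoted above:
\begin{verbatim}
# A minimal sketch of the disk-lifetime estimate for TCC 52; the two
# accretion rates below are assumed illustrative values, chosen to
# reproduce the lifetimes quoted in the text.
M_disk_total = 0.03            # assumed total disk mass [M_sun]
R_prim, R_sec = 0.38, 0.30     # truncation radii [binary separations]

# M_disk is proportional to R_disk, so split the total mass accordingly
M_prim = M_disk_total * R_prim / (R_prim + R_sec)   # ~0.017 M_sun
M_sec  = M_disk_total * R_sec  / (R_prim + R_sec)   # ~0.013 M_sun

Mdot_prim, Mdot_sec = 1.2e-6, 1.5e-7   # assumed rates [M_sun/yr]
print(M_prim / Mdot_prim)   # ~1.4e4 yr, far below the 1 Myr median age
print(M_sec  / Mdot_sec)    # ~8.7e4 yr
\end{verbatim}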
\subsection{Is the existence of an inner disk linked to planet formation in binaries?}\label{sec:planetformation}
Primordial disks like those detected around the binary components in this sample contain the basic material for the formation of planets. Hence, any peculiarities in the evolution of disks in these systems can leave their footprints on the population of planets in binaries, and it should be instructive to compare the properties of planets and disks around the individual components of binaries.
A recent census identified 40 planets in 35 multiple systems \citep[and references therein]{egg10}, almost all of which orbit the more massive component of the binary \citep{mug09}. An additional eight planets were claimed to reside in circumbinary ({\it P-type}) orbits (PSR\,B1620-26, \citealt{ras94}; HD\,202206, \citealt{cor05}; HW\,Vir, \citealt{lee09}; NN\,Ser, \citealt{beu10}; DP\,Leo, \citealt{qia10}; QS\,Vir, \citealt{qia10a}; HU Aqr, \citealt{qia11}; Kepler-16, \citealt{doy11}). Although some of these planets still need confirmation, we note that all latter candidate hosts are spectroscopic binaries with comparably small separations.
From this composite picture of planet occurrence in binaries ($\sim$50 planets in multiples, $\sim$2 around the less massive component, 8 circumbinary planets), one might suspect that (i) planet formation around the less massive components of binaries is suppressed and (ii) circumbinary planets are rare, but do exist. These observations might either be caused by selection effects, since spectroscopic binaries and fainter secondary stars are less often targeted by spectroscopic surveys, or be the consequence of peculiar disk evolution in binaries.
In an attempt to carefully evaluate systematic errors and biases, \citet{egg10} discovered an underdensity of planets around stars with close, 35--225\,AU stellar companions when compared to single stars with similar properties. This is not seen for wider binaries. The upper limit of 225\,AU noticeably coincides with our 200\,AU transition to synchronized disk evolution, which suggests a common origin of both effects. To test whether shorter lived disks in close binaries can explain the deficiency of planets in binaries of the same separation range, we derived the disk frequency in both subsamples. Binaries with separations $<$200\,AU have a slightly smaller fraction of disk-bearing components (34$_{-23}^{+47}$\%) than wide $>$200\,AU pairs (37$_{-27}^{+49}$\%). This difference is, however, not as pronounced as the 1.6--2.1$\sigma$ difference that \citet{egg10} observe for the frequency of planets in the close and wide samples. We concluded that either our sample is not large enough to reveal a significant difference or planet formation in binaries is not strongly correlated with the occurrence of accretion disks.
If the apparent paucity of planets orbiting the less massive components of binaries were not entirely due to selection effects, it could be a consequence of differential disk evolution, which we observe as mixed systems with only one accreting component. If the probability of a star eventually hosting a giant extrasolar planet were significantly correlated with the lifetime of its disk, we would expect mainly CW systems to evolve into circumprimary planetary systems, while WC-type binary-disk systems would preferably evolve into circumsecondary systems. However, we do not observe any significant difference in the abundance of CW versus WC systems (see Sect.~\ref{sec:differentialdiskevolution}). In the same way, we do not see any binaries in which both components are orbited by their individual planets, although the majority of disks evolve synchronously. In agreement with this result are the findings of \citet{jen03}, who found that disk masses around the primary are always higher than secondary disk masses in four \mbox{T\,Tauri} binaries, independent of their classification as CC, CW, WC, or WW. Again, a possible explanation could be that the evolution of the inner (accretion) disk is not strongly related to the formation of planets and that other factors dominate the planet formation process.
\section{Summary}\label{sec:summary}
We have presented high-spatial-resolution near-infrared spectroscopic and photometric observations of the individual components of 20 young, low-mass visual binaries in the Orion Nebula Cluster. The sample was complemented with similar observations of six additional targets from \citet[\emph{in prep.}]{cor12}. We have measured the relative positions, $JHK_\mathrm{s}$ photometry, and $K$-band spectra including the accretion-indicating Brackett-$\gamma$ feature in order to derive the projected binary separations as well as the absolute magnitude, spectral type, effective temperature, extinction, veiling, luminosity, and the probability of dust and accretion disks around each binary component. By placing the components into an HR diagram and comparing with pre-main sequence evolutionary tracks, we have estimated the individual age, mass, and radius, as well as the mass accretion rate for each individual component.
Putting the results into context with the star forming environment of the Orion Nebula Cluster and with other young low-mass binary studies, we conclude the following:
\begin{enumerate}
\item We have found evidence of a slightly lower frequency of circumstellar disks around the individual components of binaries compared to that around single stars of the ONC, in agreement with theory. We have measured a corrected accretion disk fraction of $40_{-9}^{+10}$\% for stars in multiple systems of the ONC, which is lower than the $\sim$50\% accretion disk fraction of single stars in the ONC found by \citet{hil98}. A similar result was found for dust disks, as indicated by NIR excess emission, although with lower significance. As observed for single stars in other clusters, binary components of the ONC have been more often found to contain dust disks than accretion signatures.
\item The evolution of disks around both components of a binary is correlated for binaries with separations of 200\,AU and below. This was inferred from our inability to detect any mixed pairs of accreting and non-accreting components with separations $<$200\,AU, and from the fact that mixed pairs exhibit significantly larger separations (99.5\% confidence) than pairs of two accreting or two non-accreting components. We have demonstrated that this synchronization is probably not caused by a feeding mechanism involving a circumbinary disk, but possibly instead by a preference of closer binaries to harbor equal-mass components.
\item Mixed pairs including an accreting primary and those with an accreting secondary have been observed to be almost equally abundant. In addition to the implication that mixed pairs are common, there is apparently no preference for the disk of either the more or the less massive binary component to dissolve first. This points to a \emph{weak} correlation between the binary mass ratio and the presence of a disk around either binary component.
\item We have found that the mass accretion rates of binary components in the ONC do not differ from the accretion rates of single stars and binary components in Orion and Taurus, respectively. Since disk masses and radii of primordial disks in the ONC are -- on average -- lower, this can potentially lead to shorter lifetimes of disks around binary components, in agreement with our finding of fewer dust and accretion disks around binary components than around single stars of the ONC.
\item We have measured no strong correlation between the existence of planets around the components of main-sequence binaries and the occurrence of accretion features measured around young binary stars in this paper. Although planets seem to be slightly suppressed in binaries of separations smaller than $\sim$200\,AU \citep[1.6--2.1$\sigma$ significance;][]{egg10}, in wider binaries they are not, and we have not found any equally large differences between the presence of disks in close and wide binary systems.
\end{enumerate}
\begin{acknowledgements}
We thank the referee for a very helpful review leading to a significantly improved paper.
SD would like to thank Thies Heidecke for help with the statistical analysis and Paula S. Teixeira for insightful discussions.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
In addition, it has used data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
\end{acknowledgements}
\begin{appendices}
\section{Rate of particle production in thermal medium}
At Betatron energies, the temperature of an excited nucleus, $T_0=3.8\sqrt{N}$, where $N$ refers to the number of excited nuclei (resonances), was related to the excitation energy $U=m_2/(m_1+m_2)^2\, E$, with $m_1$ ($m_2$) and $E$ being the mass of the projectile (target) nucleus and the kinetic energy, respectively. $T_0$ was measured as $\sim 10~$MeV. Koppe assumed that even the projectile ($\alpha$-particle) cannot remain stable \cite{hkoppe}. Therefore, pair production is likely \cite{hkoppe2}. ``\textit{Pair-degeneracy}'' \cite{hkoppe2} or ``\textit{vacuum dissociation}'' \cite{refffff2} was principally investigated through electron-positron pair production. Koppe assumed that the same considerations make it possible to apply this to meson-pair production \cite{hkoppe} and expected a very small number of produced mesons.
The electron (produced-particle) gas can be treated as cavity radiation with a definite energy density, relating the radiation loss to the cross-section $\sigma$ of the excited nucleus. The temporal evolution of the temperature should reflect the expansion of the interacting system. Since the speed of the electrons (produced particles) is very close to $c$, their energy flux differs from that of light-quantum radiation by the fermionic factor $7/8$. The rate of particle production was calculated as,
\begin{eqnarray}
\nu(T(t)) &=& \frac{m_b\, \sigma}{\pi^2\, \hbar^3}\, T(t)^2\, e^{-\frac{m_b\, c^2}{T(t)}}.
\end{eqnarray}
The integration then results in $ n=a (m_1+m_2) \, T_0 \, \exp(-m_b\, c^2/T_0)$,
where $a=0.031$. Substituting the given values of the energy and $T_0$, the number of mesons produced per unit time in $\alpha$--$A$ collisions at $380~$MeV was estimated as $\sim1.7\times10^{-4}$.
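For illustration, the rate formula can be evaluated numerically in natural units ($\hbar=c=1$); the produced-particle mass and cross-section below are assumptions chosen for demonstration only, not Koppe's original inputs:
\begin{verbatim}
import math

# Numerical sketch of nu(T) in natural units (energies in MeV);
# m_b (pion mass) and sigma are illustrative assumptions.
hbar_c = 197.327              # MeV fm, converts the cross-section
m_b = 139.57                  # assumed produced-particle mass [MeV]
sigma = 100.0 / hbar_c**2     # assumed 100 fm^2, expressed in MeV^-2

def nu(T):
    """Production rate at temperature T [MeV]."""
    return m_b * sigma / math.pi**2 * T**2 * math.exp(-m_b / T)

print(nu(10.0))   # strongly Boltzmann-suppressed for T_0 ~ 10 MeV
\end{verbatim}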
\section{Particle production and non-equilibrium particle distribution}
In ultra-relativistic nuclear collisions, deconfinement and/or chiral symmetry-restoration phase transitions are supposed to take place. To study the dynamics and velocity distribution of objects in such a thermal background, for instance transport properties in the quark-gluon plasma, the Fokker-Planck equation is a well-known tool. The statistical properties of an ensemble consisting of individual parton objects are given by the non-equilibrium single-particle distribution function $f$ \cite{Tawfik:2010kz,Tawfik:2010uh,Tawfik:2010pt}. The probability of finding an object in an infinitesimal region of phase space is directly proportional to the volume element and $f$. The latter is assumed to fulfil the Boltzmann-Vlasov (BV) master equation,
\begin{eqnarray} \label{eq:te1}
\dot f + \dot{\vec{x}} \cdot \nabla_x f + \dot{\vec{k}} \cdot \nabla_k f + \dot{\vec{q_c}} \cdot \nabla_{q_c} f &=& {\cal G} + {\cal L}.
\end{eqnarray}
The first term on the rhs, ${\cal G}$, represents the gain, i.e.\ the rate of production of particles with momentum $k + k_t$, which are conjectured to lose momentum $k_t$ due to reactions with the background. The second term, ${\cal L}$, represents the loss due to the scattering rate. The effective potential $U$ has to combine the well-known Coulomb potential, $U(x\rightarrow 0)\propto 1/x$, and the confined potential, $U(x\rightarrow \infty)\propto 0$. The standard position $\vec{x}$ and momentum $\vec{k}$ variables appear in the first two terms on the lhs. The third term represents the dynamics of the charge, where $\dot{\vec{k}}$ can be given by the field tensor. The fourth term reflects an extension of phase space to include the color charge $\vec{q}_c$.
Studying the stochastic behavior of a single object propagating under random noise, known as the Langevin approach, represents one way to solve this problem. A master equation, such as the linearised BV equation with the Landau soft-scattering approximation, would give another method. Koppe's work can be seen as a fundamental statistical approach for a qualitative estimation of the non-equilibrium rate of particle production.
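As a concrete illustration of the Langevin alternative, the sketch below integrates the momentum of a single heavy object subject to a constant drag coefficient and thermal noise fixed by the fluctuation-dissipation relation; all parameter values are illustrative assumptions:
\begin{verbatim}
import math, random

# Euler-Maruyama integration of a 1D Langevin equation,
# dp = -gamma*p*dt + noise, with <noise^2> = 2*D*dt and the
# fluctuation-dissipation relation D = gamma*M*T (assumed values).
M, T = 1500.0, 300.0        # mass and temperature [MeV]
gamma, dt = 0.005, 0.1      # drag [1/fm] and time step [fm]
D = gamma * M * T           # momentum-diffusion coefficient

p = 5000.0                  # initial momentum [MeV]
for _ in range(10000):
    p += -gamma * p * dt + random.gauss(0.0, math.sqrt(2.0 * D * dt))
# p has relaxed toward the thermal scale sqrt(M*T) ~ 670 MeV
print(p)
\end{verbatim}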
\section{Introduction}
The Ising model~\cite{Lenz1920,Ising1925} of ferromagnetism in crystals has
been the object of sustained scrutiny since its introduction nearly a century
ago, due to the rich phenomenology it produces from simple
dynamics~\cite{McCoyMaillard2012,Taroni2015}. The Ising model has also had a
far-reaching influence in domains ranging from protein
folding~\cite{TanakaScheraga1977} to social
science~\cite{Schelling1971,Stauffer2007}. Yet it has proven resistant to
analytical solution, except in special cases such as 1 or 2-dimensional
lattices with no external field. Indeed, solving the model in the general case
is known to be NP-hard~\cite{Barahona1982,UngerMoult1993,Istrail2000}. Hence
we largely depend on approximations or numerical simulations for understanding
its properties. Unfortunately, the na\"ive Metropolis algorithm suffers from
poor convergence at precisely the most interesting region of parameter space,
the critical point~\cite{Metropolis1953,Swendsen1987}. Wang and
Landau~\cite{Wang2001,Wang2001a} proposed a more efficient algorithm that
focuses on estimating the density of states. Once the density of states is
known, the system's partition function and related thermodynamic quantities can
be computed without further simulation. The original sequential Wang-Landau
method is not practical for large systems, because its convergence time
increases rapidly with the number of energy states. The state-of-the-art
replica-exchange framework~\cite{Vogel2013,Vogel2014} provides a parallel
algorithm to estimate the univariate density of states $g\left( k \right)$.
However, it is not clear how to apply this parallel scheme to estimate the
joint density of states $g\left( k,v \right)$, necessary for computing physical
quantities in the presence of an external field. Here we use insights from
Moore-Shannon network reliability~\cite{Moore1956191,Moore1956281} to construct
a new parallel scheme that bridges the gap between the Wang-Landau approach and
the estimation of joint density of states $g\left( k,v \right)$. The result is
an efficient estimation scheme for the partition function of the Ising model in
the presence of an external field that performs well even on large, irregular
networks.
The Ising model is defined on a graph $G\left(V,E\right)$ with vertex and edge
sets $V$ and $E$, respectively, by the Hamiltonian
\begin{equation}
H=-J\sum_{\left(i,j\right)\in E}\sigma_{i}\sigma_{j}+\mu B\sum_{i\in V}\sigma_{i},\label{eq:Hamiltonian}
\end{equation}
where $\sigma_{i}\in\left\{ -1,1\right\} $ represents the state of the vertex
$i$. $J$ is the coupling strength between neighboring vertices and $B$ is the
external field. The exact solution of the Ising model in one dimension does not
exhibit any critical phenomena. In the study of the order-disorder
transformation in alloys, Bragg and Williams~\cite{Bragg1934,Bragg1935} used a
mean-field approximation for the Hamiltonian in which each individual vertex
interacts with the mean state of the entire system. This is known as the
Bragg-Williams approximation or the zeroth approximation of the Ising
model~\cite{Pathria1996Book}. An analytic expression for the partition function
of a two-dimensional Ising model \textit{in the absence of an external field}
was given by Onsager~\cite{Onsager1944} and later derived rigorously by C.\ N.\
Yang~\cite{Yang1952}. In spite of great effort in the seven decades since, the
exact solution of the 2D Ising model in the presence of an external field
remains unknown.
A network's reliability is the probability it ``functions'' -- i.e., continues
to have a certain structural property -- even under random failures of its
components. It was proposed in 1956 by Moore and
Shannon~\cite{Moore1956191,Moore1956281} as a theoretical framework for
analyzing the trade-off between reliability and redundancy in telephone relay
networks. The desired structural property in that case, known as
``two-terminal'' reliability, is to have a communication path between a
specified source node and specified target node. Since then, a wide variety of
properties have been studied, for example: ``all-terminal'' reliability
requires the entire graph to be connected; ``attack-rate-$\alpha$'' reliability
requires the root-mean-square of component sizes is no less than $\alpha
N$~\cite{Youssef2013}. Network reliability can be expressed as a polynomial in
parameters of the dynamical system whose coefficients encode the interaction
network's structure. A reliability polynomial is the partition function of a
physical system~\cite{EssamTsallis86,welsh2000potts,beaudin2010little}, but it
emphasizes the role of an interaction network's structure rather than the form
of the interactions.
Specifically, the reliability of an interaction network $G\left(V,E\right)$ is
\begin{equation}
R\left(x;r,G\right)\equiv\sum_{s\in\mathcal{S}}r\left(s\right)p_{s}\left(x\right),
\end{equation} where $\mathcal{S}$ is the set of all subgraphs of
$G\left(V,E\right)$; $r\left(s\right)\in\left\{0,1\right\}$ is a binary
function indicating whether the subgraph $s$ has the desired property, i.e.
``two-terminal''; and $p_{s}\left(x\right)$ is the probability resulting in a
modified interaction subgraph $s$. The probability of picking a subgraph
$p_{s}\left(x\right)$ reflects random, independent edge failures in the network
with a failure rate $\left( 1-x
\right)\in\left[0,1\right]$~\cite{Moore1956191}. Hence, with
$M\equiv\left|E\right|$, $p_{s}\left(x\right)=x^{k}\left(1-x\right)^{M-k}$
where $k$ is the number of edges in the subgraph $s$.
If we group all $2^{M}$ subgraphs into $M$ equivalence classes by the number of
edges in the subgraphs, the reliability can be expressed as \begin{equation}
R\left(x;r,G\right)=\sum_{k=1}^{M}R_{k}\left( r,G
\right)x^{k}\left(1-x\right)^{M-k} \label{eq:Rk} \end{equation} where
$R_{k}$ is the number of subgraphs with $k$ edges that have the desired
property. As shown in Section~\ref{sec:ReliParty}, $R_k$ is equivalent to the
density of states in the Ising model.
Evaluating $R_k$ exactly is known to be \#P-complete.
In practice, however, $R_k$ can be estimated by $R_{k}=P_{k}{M \choose k}$,
where $P_{k}$ is the fraction of subgraphs with $k$ edges that have the
desired property, which can be estimated via sampling.
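As an illustration of this sampling strategy, the following minimal Python sketch estimates $P_k$ for two-terminal reliability on a 4-cycle by drawing $k$-edge subgraphs uniformly at random (the graph and terminals are illustrative choices):
\begin{verbatim}
import random
from collections import deque

# Estimate P_k for two-terminal reliability on a 4-cycle by sampling
# k-edge subgraphs uniformly; exact values here are 0, 1/3, 1, 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
source, target = 0, 2

def connected(sub):
    """BFS test: does the subgraph contain a source-target path?"""
    adj = {}
    for u, v in sub:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, queue = {source}, deque([source])
    while queue:
        for w in adj.get(queue.popleft(), []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return target in seen

def estimate_Pk(k, samples=20000):
    return sum(connected(random.sample(edges, k))
               for _ in range(samples)) / samples

for k in range(1, len(edges) + 1):
    print(k, estimate_Pk(k))
\end{verbatim}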
In summary, the reliability of a graph $G\left(V,E\right)$ with respect to a
certain binary criterion can be written as a polynomial:
\begin{equation}
R\left(x;r,G\right) = \sum_{k=1}^{M}P_{k}\left( r,G \right){M
\choose k}x^{k}\left(1-x\right)^{M-k} \label{eq:RxrG}
\end{equation}
Each term in the reliability polynomial, Eq.~(\ref{eq:RxrG}), contains two
independent factors: a \textit{structural} factor $P_{k}$ and a
\textit{dynamical} factor $x^{k}\left(1-x\right)^{M-k}$. The reason for
calling this factor ``dynamical'' will become apparent in
Section~\ref{sec:deltaAppr}. The structural factor depends only on the
topology of the graph $G$ and the reliability criterion $r$, whereas the
dynamical factor only depends on the parameter $x$ -- for given values of
$P_k$, the reliability $R$ is a function of $x$ alone.
This separation of \textit{dynamical} and \textit{structural} factors suggests
new, more efficient ways to simulate Ising models. In
Section~\ref{sec:ReliParty}, we will illustrate that the reliability $R\left( x
\right)$ is equivalent to the partition function $Z\left( \beta \right)$ of the
Ising model; and the ``failure rate'' $1-x$ actually corresponds to physical
quantities such as the temperature, the external field and the coupling
strength in the Ising model. In Section~\ref{sec:deltaAppr}, we use this
perspective to show that the Bragg-Williams approximation is given by the
first-order term in a principled approximation to the structural factor. In
Section~\ref{sec:joint}, we use this perspective to extend the Wang-Landau
method into an efficient parallel scheme for estimating the joint density of
states, which we demonstrate on a $32\times 32$ square lattice and a Cayley
tree.
\section{Network Reliability and Partition Function}\label{sec:ReliParty}
The Ising model assumes that the state of a site is binary, either ``spin-down''
($\sigma_{i}=-1$) or ``spin-up'' ($\sigma_{i}=1$), and that each site interacts
only with its nearest neighbors, with a coupling strength $J$. All sites are
exposed to a uniform external field $B$. The collection of all the sites'
states is called a ``microstate'' of the system. The Hamiltonian for the Ising
model on a graph $G\left(V,E\right)$ is shown in Eq.~(\ref{eq:Hamiltonian}).
The canonical partition function $Z\left(\beta,B,J\right)$ is given by the
summation of $\exp\left(-\beta H_{s}\right)$ over all possible microstates $s$:
$Z\left(\beta,B,J\right)=\sum_{s}e^{-\beta H_{s}}$, where
$\beta=\left(k_{B}T\right)^{-1}$ is the inverse temperature. In the
alternative expression of the reliability polynomial Eq.~(\ref{eq:RxrG}), the
summation over all subgraphs is organized into equivalence classes by the
number of edges in the subgraphs. Similarly, we can group all microstates into
equivalence classes (energy levels) determined by the number of adjacent sites
in opposite states (``discordant vertex pairs" or ``edges'') and the number of
spin-up sites. With $N\equiv\left|V\right|$, the partition function can be
expressed as:
\begin{equation}
Z\left(\beta,B,J\right)
= C
\sum_{k=0}^{M}\sum_{v=0}^{N}
g\left(k,v\right)e^{-2\beta\left(Jk+\mu Bv\right)}
\label{eq:ZBT}
\end{equation}
where $C\equiv e^{\beta\left(JM+\mu BN\right)}$ and $g\left(k,v\right)$ is the
number of microstates with $v$ spin-up vertices and $k$ discordant adjacent
vertex pairs (edges). Note that in the absence of an external field ($B=0$),
the sum over $v$ reduces to the univariate density of states $g\left( k
\right)$. Eq.~(\ref{eq:ZBT}) is a useful form for deriving a
``low-temperature'' expansion~\cite{Pathria1996Book}, in which only equivalence
classes with small $k$ and $v$ contribute. In analogy with the reliability
polynomial Eq.~(\ref{eq:RxrG}), each term in Eq.~(\ref{eq:ZBT}) can be factored
into two separate parts: \textit{structural} -- the number of microstates
$g\left(k,v\right)$ determined by the graph, and \textit{dynamical} -- the
physical quantities $\beta$, $J$ and $B$-- or \textit{thermal} to be more
precise in the Ising model context. Just as the structural factors $R_k(r, G)$
of the reliability $R\left( x; r, G \right)$ can be computed independently of
$x$, Eq.~\ref{eq:Rk}, $g\left( k,v \right)$ can be computed independently of
$\beta$, $B$ or $J$. Once we have $g\left( k,v \right)$, we can plug in any
value of physical quantities and compute the thermodynamic functions without
any further simulation. This is more efficient than the traditional Metropolis
methods. This observation has also been made by Wang and
Landau~\cite{Wang2001,Wang2001a}. By introducing the transformation
$x\left(B,\beta\right)\equiv\left(1+e^{2\beta\mu B}\right)^{-1}$ and
$y\left(J,\beta\right)\equiv\left(1+e^{2\beta J}\right)^{-1}$, we can express
the partition function $Z\left(\beta,B,J\right)$ as a bivariate reliability
polynomial $R\left(x,y;r,G\right)$ using the transformation $\beta\mu B
\equiv \frac{1}{2}\ln\frac{1-x}{x}$ and $\beta J
\equiv\frac{1}{2}\ln\frac{1-y}{y}$ (Appendix~\ref{apdx:A}):
\begin{eqnarray*}
Z\left(\beta,B,J\right) & \propto &
\sum_{v,k}g\left(k,v\right)x^{v}\left(1-x\right)^{N-v}y^{k}\left(1-y\right)^{M-k}\\
& = &
R\left(x,y;r,G\right)
\end{eqnarray*}
Note that the density of states $g\left( k,v \right)$ is equivalent to $R_k$,
the number of subgraphs satisfying a binary criterion, Eq.~\ref{eq:Rk}. We
call the corresponding reliability criterion $r$ \textit{Ising-feasibility}: a
subgraph $s$ is Ising-feasible if and only if it is possible to find an
assignment of spins to all vertices such that every pair of discordant vertices
connected by an edge in $G$ is also connected by an edge in $s$ and there is no
edge between any other pair of vertices. Fig.~\ref{fig:IsFeasible}a illustrates
an Ising-feasible microstate on a 4-by-4 square lattice;
Fig.~\ref{fig:IsFeasible}b, an infeasible one. Thus the Ising model's
partition function is a bivariate reliability polynomial with the special
Ising-feasibility criterion.
\begin{figure}
\includegraphics[width=0.6\columnwidth]{Fig1}
\protect\caption{\label{fig:IsFeasible}(a) An Ising-feasible configuration with
three spin-up vertices (red dots) and eight discordant spin pairs or edges
(solid line segments). (b) A configuration that is not Ising-feasible because
of the inconsistent edges (red line segments). Independently choosing edges and
spin-up vertices will rarely produce an Ising-feasible configuration, but any set
of randomly chosen spin-up vertices uniquely determines a set of edges.
}
\end{figure}
\section{$\delta$-Function Approximation}\label{sec:deltaAppr}
By definition, the structural factor $g\left( k,v \right)$ is independent of any of the
physical variables, $\beta$, $J$, or $B$. Solving the Ising model
numerically on any graph $G$ requires only estimating its joint density of
states $g\left( k,v \right)$.
Given the joint density of states, the partition function, and thus any
thermodynamic quantities, can easily be evaluated for any particular values of
$\beta$, $J$ and $B$. We use Monte Carlo sampling to estimate $g\left( k,v
\right)$. Because sampling vertices and edges independently rarely produces an
Ising-feasible configuration,
we randomly assign $v$ vertices to be in the spin-up state and then measure the
number of discordant node pairs (edges) $k$.
We then estimate the conditional probability $p\left(k|v\right)$
by the frequency of producing $k$ edges given $v$ spin-up vertices.
Because there are exactly ${N \choose v}$ ways to choose $v$ vertices, the
joint density of states $g\left(k,v\right)$ can be expressed as
$p\left(k|v\right){N \choose v}$. For example, on a 2D square lattice with
periodic boundary conditions,
when $v=1$ the only feasible microstates have $k = 4$. Therefore,
the number of microstates with 4 edges and 1 spin-up vertex is
$g\left(4,1\right)=1\cdot{N \choose 1}=N$.
Similarly, for $v=2$, $k=6$ when the two chosen vertices are neighbors and
$k=8$ when they are not. Assuming $N\geq4$, the corresponding
conditional probabilities $p\left(k=6|v=2\right)=\frac{4}{N-1}$
and $p\left(k=8|v=2\right)=\frac{N-5}{N-1}$. The
number of states $g\left(k=6,v=2\right)$ and $g\left(k=8,v=2\right)$ can be
calculated accordingly by multiplying ${N \choose 2}$.
These are the lowest order terms in the low temperature expansion.
In general, $p\left(k|v\right)$ is very difficult
to compute analytically.
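To make the sampling procedure concrete, the sketch below estimates $p\left(k|v\right)$ on a small periodic square lattice exactly as described above; the lattice size and sample count are illustrative:
\begin{verbatim}
import random
from collections import Counter

L, samples = 16, 20000     # illustrative lattice size, sample count
N = L * L

def neighbors(i):
    r, c = divmod(i, L)
    return (((r + 1) % L) * L + c, ((r - 1) % L) * L + c,
            r * L + (c + 1) % L, r * L + (c - 1) % L)

def discordant(up):
    # each discordant edge is counted once, from its spin-up endpoint
    return sum(1 for i in up for j in neighbors(i) if j not in up)

def p_k_given_v(v):
    hist = Counter()
    for _ in range(samples):
        hist[discordant(set(random.sample(range(N), v)))] += 1
    return {k: c / samples for k, c in sorted(hist.items())}

print(p_k_given_v(1))   # {4: 1.0}, i.e. g(4,1) = N
print(p_k_given_v(2))   # ~4/(N-1) at k=6, ~(N-5)/(N-1) at k=8
\end{verbatim}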
\begin{figure}
\includegraphics[width=\columnwidth]{Fig2}
\protect\caption{\label{fig:deltaApproxUandC}
(a) Conditional state distribution of $p\left(k|v\right)$ sampled by a na\"ive
Monte Carlo simulation on a 16-by-16 square lattice. The conditional probability
is normalized separately for each value of $v$. The color reflects the value of $p\left(k|v\right)$ on
a logarithmic scale. As $p\left(k|v\right)$ is symmetric about $v=N/2$, the simulation
is only done for $v\leq N/2$.
(b) The peaks of $p\left(k|v\right)$ for various lattices have the same functional form:
$y\left(x\right)=2x\left(1-x\right)$, where $x\equiv v/N$ and $y\equiv k/M$.
}
\end{figure}
An example of $p\left(k|v\right)$ sampled using a na\"ive
Monte Carlo method on a 16-by-16 square lattice is shown in
Fig.~\ref{fig:deltaApproxUandC}a. Note that, because $p\left( k|v \right)$ is
the \textit{conditional} density function, it is normalized separately for each value of $v$ so that $\sum_k p\left( k|v
\right) =1$. Also, for a 16-by-16 square lattice ($N=256$ and
$M=512$), the maximum of $k$ can be as great as 512.
This maximum is only achieved by a microstate in which spin-up and spin-down
sites strictly alternate. There are only two such states out of ${256 \choose
128}$ possible microstates with $v=128$. The na\"ive Monte Carlo method described
above can hardly be expected to sample microstates as rare as this.
Interestingly, as we explain in Section~\ref{sec:joint}, these rare microstates
can dominate the value of the joint density of states.
Empirically, the peaks of $p\left(k|v\right)$ lie on the curve
$\frac{k}{M}=2\frac{v}{N}\left(1-\frac{v}{N}\right)$. This functional
relationship seems independent of the system size $N$ or the coordination
number (mean degree) $q\equiv2M/N$ of the lattice,
Fig.~\ref{fig:deltaApproxUandC}b. A simple argument suggests why this is the case.
If the spin-up vertices are distributed uniformly across the lattice,
the probability that the neighbor of a spin-up vertex is spin-{\em down} is $1 - \frac{v}{N}$.
For $v$ spin-up vertices, each with $q$ neighbors, the expected number of discordant pairs is
thus $qv(1 - \frac{v}{N})$; with $M=qN/2$, this is exactly $\frac{k}{M}=2\frac{v}{N}\left(1-\frac{v}{N}\right)$.
As the system size goes to infinity, $N\rightarrow\infty$, the conditional
probability $p\left(k|v\right)$ becomes more sharply peaked at its center. We
can approximate $p\left(k|v\right)$ as a Kronecker $\delta$-function
$p\left(k|v\right)\simeq\delta\left(\frac{k}{M},2\frac{v}{N}\left(1-\frac{v}{N}\right)\right)$.
Inserting the $\delta$-function approximation for $p\left(k|v\right)$ in our expression for the partition function, Eq.~(\ref{eq:ZBT}), yields:
\begin{eqnarray}
Z\left(\zeta,\eta\right) & = & C\sum_{k,v}p\left(k|v\right){N \choose v}e^{-2\left(\zeta v+\eta k\right)}\notag\\
 & \simeq & C\sum_{k,v}\delta\left(k/M,y\left(v/N\right)\right){N \choose v}e^{-2\left(\zeta v+\eta k\right)}\notag\\
& \simeq & C\sum_{v}{N \choose v}e^{-2N\left(\zeta\frac{v}{N}+\frac{1}{2}\eta qy\left(v/N\right)\right)}
\label{eq:Zintegral}
\end{eqnarray}
where $\zeta\equiv\beta\mu B$, $\eta\equiv\beta J$ and $y\left( x
\right)=2x\left( 1-x \right)$.
This produces the Bragg-Williams mean-field approximation \cite{Bragg1934,Bragg1935}, where the interaction
term in the Hamiltonian $-J\sum_{\left(i,j\right)\in E}\sigma_{i}\sigma_{j}$ is
approximated as
$-J\left(\frac{1}{2}q\overline{\sigma}\right)\sum_{i}\sigma_{i}$, and
$\overline{\sigma}=\frac{1}{N}\sum_{i}\sigma_{i}$ is the average spin of the system.
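The $\delta$-approximated sum over $v$ can be evaluated numerically with a log-sum-exp trick, as in the sketch below; the system size, coordination number, and couplings are illustrative choices:
\begin{verbatim}
import math

# Evaluate the delta-approximated partition function (up to the
# prefactor C) via log-sum-exp; N, q, zeta, eta are illustrative.
N, q = 256, 4               # system size, coordination number
zeta, eta = 0.0, 0.5        # beta*mu*B and beta*J

def ln_term(v):
    x = v / N
    ln_binom = (math.lgamma(N + 1) - math.lgamma(v + 1)
                - math.lgamma(N - v + 1))
    # exponent -2N*(zeta*x + 0.5*eta*q*y(x)) with y(x) = 2x(1-x)
    return ln_binom - 2.0 * N * (zeta * x + eta * q * x * (1.0 - x))

m = max(ln_term(v) for v in range(N + 1))
lnZ = m + math.log(sum(math.exp(ln_term(v) - m)
                       for v in range(N + 1)))
print(lnZ)
\end{verbatim}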
The Bragg-Williams mean-field approach -- and hence Eq.~\ref{eq:Zintegral} --
incorrectly predicts that one-dimensional systems exhibit a critical point.
According to Eq.~\ref{eq:Zintegral}, the partition function depends on the
dimension of the system and the graph structure only through $q$, the
coordination number, where $q=2$ for a 1D lattice, $q=4$ for a 2D square
lattice and $q=6$ for a 2D triangular lattice. Moreover, its dependence on $q$
is only through the product $N\eta qy(x)$. If the external field is zero
($\zeta=0$), changing $q$ is equivalent to changing the system size $N$ or
coupling strength $\eta$. In other words, a 2D square lattice with size $N$
behaves the same as a 1D lattice with size $2N$ in this approximation, which is
physically incorrect. In Section~\ref{sec:joint} we explore the causes of this
failure and explain how to address it.
\section{Estimating the Density of States}\label{sec:joint}
Although, for a \textit{particular} $v$, it is reasonable to approximate
$p\left(k|v\right)$ as a $\delta$-function, critical phenomena are determined
by \textit{all} $p\left(k|v\right)$ synergistically. The Ising model is hard
to solve exactly because extremely rare events for one value of $v$ are as
important as the most common events for another value. To demonstrate this, we
first transform the conditional probability $p\left(k|v\right)$ to the number
of states $g\left(k,v\right) = p\left(k|v\right){N \choose v}$. Since the
binomial factor ${N \choose v}$ scales exponentially with $v$, it can dominate
the ratio $g\left(k,v_i\right) / g\left(k,v_j\right)$. \textit{Ceteris
paribus}, this makes contributions to $Z$ from the tails of $p(k|v_i)$
comparable to contributions from the peaks of $p\left(k|v_j\right)$. The joint
density of states of a 5-by-5 2D lattice is shown in Fig.~\ref{fig:gkv5by5}.
Consider $g\left(16,5\right)$, the number of microstates with $k=16$ discordant
neighbors when there are $v=5$ spins up. It corresponds to the \textit{peak}
of $p\left(k|5\right)$, and is roughly the same as $g(16,10)$, which is in the
\textit{tail} of $p\left(k|10\right)$. The na\"ive Monte Carlo method misses
the tail of $p\left(k|v\right)$, and is thus inaccurate.
\begin{figure}
\centering\includegraphics[width=0.6\columnwidth]{Fig3}
\protect\caption{\label{fig:gkv5by5}
The exact joint density of states $g\left(k,v\right)$ computed via exhaustive
enumeration on a 5-by-5 square lattice. Because a na\"ive Monte Carlo only
samples points near the \textit{peak} of each curve, and the tails of many curves are
as important as the peaks of others, it severely underestimates the univariate density of states $g\left( k
\right) \equiv \sum_v g\left(k,v\right)$ at $k=16$.
}
\end{figure}
Despite the failure of the na\"ive Monte Carlo method, the strategy of dividing
energy states into equivalence classes remains valuable. It separates the
estimation of the joint density of states $g\left(k,v\right)$ into $N/2$
independent estimations of univariate distributions $p\left(k|v\right)$, thus
enabling a novel parallel estimation scheme. And, each of $p\left(k|v\right)$
can be estimated using the improved Wang-Landau (WL)
algorithm\cite{Wang2001,Wang2001a}. The WL algorithm is a Markov-chain Monte
Carlo algorithm to obtain the univariate density of states $g\left(k\right)$
for the Ising model.
The WL algorithm is very similar to the
Metropolis-Hastings~\cite{Metropolis1953,HASTINGS1970} algorithm. However,
instead of \textit{assuming} the detailed balance condition, the WL algorithm
pursues its so-called ``flat'' histogram by sculpting the $g\left(k\right)$
gradually during the simulation. Therefore, the running time of the WL
algorithm largely depends on the number of energy states. As the number of
states in $g\left(k,v\right)$ is $O\left( N^2 \right)$, the
\textit{square} of the number of states in $g\left( k \right)$, the WL
algorithm takes a tremendous amount of time to converge when computing the
joint density of states~\cite{landau2004}. Each step in the random walk in WL
algorithm flips the spin of a random vertex, which inevitably changes both $v$
and $k$. Our modification of this algorithm is to constrain the random walk to
maintain $v$ invariant. For each $v$-spin subspace, we assign an independent
random walker. Therefore, the number of energy states is reduced to $O\left( N
\right)$ for each walker. Specifically, instead of randomly flipping the spin
of a vertex as is done in the WL random walk, each step of our random walk
exchanges the locations of a spin-up vertex and spin-down vertex. The rest of
the algorithm is as the same as the WL algorithm \cite{Wang2001},
Appendix~\ref{apdx:C}.
\begin{figure}
\centering\includegraphics[width=0.6\columnwidth]{Fig4}
\protect\caption{\label{fig:Time}
The running time for estimating the joint density of states $g\left( k,v \right)$
on 2D lattices of different sizes, from $N=8\times 8$ to $N=24\times 24$.
The sequential Wang-Landau algorithm (blue) needs to cover $O\left( N^2 \right)$ energy states,
and becomes impractical for large systems.
The spin-exchange WL algorithm (red) divides the energy states into $N/2$ energy slices,
and the running time is bounded by the energy slice with the largest number of states, at $v=N/2$.
This energy slice only contains $O\left( N \right)$ states.
By dividing the energy slice into 6 equal-sized, 75\% overlapping energy windows,
the running time is reduced even further (yellow).
}
\end{figure}
To demonstrate the efficiency of our algorithm, we compare the running
time on 2D lattices of different sizes, from $N=8\times 8$ to $N=24\times
24$, Fig.~\ref{fig:Time}. The number of energy states is proportional to $N^2$.
The running time for the sequential WL algorithm (blue) grows exponentially as the
number of energy states increases. It becomes impractical for large systems ($N>10^3$).
The spin-exchange method (red) splits the energy states into $N/2$
$v$-specific energy slices of different sizes.
The overall running time is bounded by that of the slice for which $v=N/2$, which contains the most energy states.
As the number of energy states in each slice is $O\left( N \right)$, the spin-exchange
method is much faster than the sequential WL algorithm.
Since each energy slice essentially is a univariate density, we can reduce the
computation time even further by dividing an energy slice into multiple overlapping energy windows.
We tested using six $75\%$ overlapping energy windows (yellow).
The running time test simply assumes independent random walkers in each window.
One can choose more sophisticated methods,
such as the replica-exchange scheme~\cite{Vogel2013,Vogel2014}.
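A minimal sketch of how such overlapping windows can be laid out is given
below; the boundaries are real-valued here and would be rounded to admissible
$k$ values in practice.
\begin{verbatim}
def energy_windows(k_min, k_max, n_win=6, overlap=0.75):
    # Split the k-range of one v-slice into n_win equal-sized windows,
    # consecutive windows shifted by (1 - overlap) of the window width.
    width = (k_max - k_min) / (1.0 + (n_win - 1) * (1.0 - overlap))
    step = width * (1.0 - overlap)
    return [(k_min + i * step, k_min + i * step + width)
            for i in range(n_win)]
\end{verbatim}
With the defaults, this matches the six $75\%$ overlapping windows used for
the yellow curve in Fig.~\ref{fig:Time} up to rounding.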
\begin{figure}
\centering\includegraphics[width=\columnwidth]{Fig5}
\protect\caption{\label{fig:UC2D}
(a) The joint density of states $g\left( k,v \right)$ of a 32-by-32 square lattice.
(b) The univariate density of states $g\left( k \right)\equiv \sum_v g\left(
k,v \right)$ estimated using a spin-exchange MCMC algorithm,
compared with the exact analytical result.
(c) The internal energy $U=-\partial \ln Z / \partial \beta$, and
(d) the heat capacity $C=\partial U / \partial T$
with coupling strength $J=0.5$. }
\end{figure}
We apply our algorithm on a 32-by-32 square lattice, with $\sim 0.5\times 10^6$
energy levels (equivalence classes) and more than $10^{308}$ microstates. Note
that, for the same system, the univariate density of states $g\left( k \right)$
only has $\sim 10^3$ energy levels. Fig.~\ref{fig:UC2D}a shows estimates for
the joint density of states. The simulation is performed using 300 cores
within two days. To verify this result, we compare its projection onto the
univariate density of state $g\left( k \right)\equiv\sum_v g\left( k,v \right)$
with the known analytical result~\cite{beale1996}. As shown in
Fig.~\ref{fig:UC2D}b, the agreement is very good. Given the joint density of
states, we can easily find the partition function $Z\left( \beta,B,J \right)$
using Eq.~(\ref{eq:ZBT}). Then, without additional simulations, any
thermodynamic functions can be obtained directly from the partition function,
such as the internal energy $U=-\partial \ln Z / \partial \beta$ and the heat
capacity $C=\partial U / \partial T$. Fig.~\ref{fig:UC2D}c and d show the
internal energy $U$ and heat capacity $C$ as a function of inverse temperature
$\beta$ and external field $B$ (assuming the magnetic susceptibility $\mu=1$)
at $J=0.5$. The heat capacity curve exhibits the correct critical point of
$k_BT/J=2.27$ at $B=0$. As the heat capacity is known to be very sensitive to
the density of states, in Fig.~\ref{fig:ApdxFig}a Appendix~\ref{apdx:D}, we
show that the heat capacity at $B=0$ from our simulation agrees with the one
from the analytic result.
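The post-processing from $g\left( k,v \right)$ to thermodynamic functions can
be done entirely offline. The Python sketch below assumes the standard mapping
$E\left( k,v \right)=2Jk-JM-B\left( 2v-N \right)$ for a lattice with $M$ edges
(to be replaced by the convention entering Eq.~(\ref{eq:ZBT}) if it differs),
and evaluates $U$ and $C$ from the Boltzmann weights using
$C=\beta^2\,\mathrm{Var}\left( E \right)$.
\begin{verbatim}
import numpy as np

def thermodynamics(log_g, n_edges, N, J=0.5, B=0.0,
                   betas=np.linspace(0.05, 1.0, 400)):
    # log_g[k, v] holds ln g(k, v); use -np.inf for inaccessible states.
    ks, vs = np.meshgrid(np.arange(log_g.shape[0]),
                         np.arange(log_g.shape[1]), indexing="ij")
    E = 2.0 * J * ks - J * n_edges - B * (2.0 * vs - N)
    U, C = [], []
    for beta in betas:
        w = log_g - beta * E     # ln of unnormalized Boltzmann weights
        w -= w.max()             # stabilize the exponentials
        p = np.exp(w)
        p /= p.sum()
        Em = (p * E).sum()
        E2 = (p * E * E).sum()
        U.append(Em)
        C.append(beta**2 * (E2 - Em * Em))   # C = beta^2 * Var(E)
    return np.array(U), np.array(C)
\end{verbatim}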
\begin{figure}
\centering\includegraphics[width=\columnwidth]{Fig6}
\protect\caption{\label{fig:Cayley}
(a) A Cayley tree with degree $d=3$ and number of shells $r=6$.
(b) The joint density of states $g\left( k,v \right)$ of the Cayley tree with $d=3$ and $r=8$ ($N=765$).
As $d=3$, there are many inaccessible states.
(c) The heat capacity of the Ising model on the Cayley tree. The red curve is from the exact solution at $B=0$.
}
\end{figure}
We also apply our algorithm on {\color{black} Cayley trees (a finite-size analogue to Bethe lattices)},
where the exact result of the Ising model for $B=0$ is
known~\cite{baxter1982,eggarter1974}. A Cayley tree has a central vertex and
every vertex (except leaves) has $d$ neighbors, Fig.~\ref{fig:Cayley}a. It is
defined by two parameters, the degree $d$ and the number of shells $r$. There
are $d \left( d-1 \right)^{\left( j-1 \right)}$ vertices in the $j$-th shell and
$d\left[ \left( d-1 \right)^r -1 \right]/\left( d-2 \right)$ vertices in total.
Thus the ratio of the number of leaves to the system size tends to $\left( d-2
\right)/\left( d-1 \right)$. The dimensionality
$\lim_{n\rightarrow\infty}\left( \ln c_n \right)/\ln n$,
where $c_n$ is the number of vertices within $n$ shells, is infinite. All these
characteristics make Cayley trees very different from a regular lattice and
very interesting to study. The simulation on a Cayley tree with $d=3$ and $r=8$
($N=765$) yields the joint density of states as shown in
Fig.~\ref{fig:Cayley}b. As $d=3$ in this particular Cayley tree, there are
inaccessible states (``holes'') in $g\left( k,v \right)$. The heat
capacity is shown in Fig.~\ref{fig:Cayley}c and a comparison with the analytic
result at $B=0$ is shown in Fig.~\ref{fig:ApdxFig}b in Appendix~\ref{apdx:D}.
{\color{black} Due to the difference in topologies, the heat capacity of the Ising model
on the Cayley tree is very different from that on the 2D square lattice.}
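A minimal sketch of the tree construction is given below; the resulting
adjacency list can be passed directly to the spin-exchange walker sketched
earlier.
\begin{verbatim}
def cayley_tree(d=3, r=8):
    # Adjacency list of a Cayley tree: the central vertex 0 has d
    # children, every later (non-leaf) vertex has d - 1 children,
    # out to shell r.
    neighbors = [[]]
    frontier = [0]
    for shell in range(1, r + 1):
        nxt = []
        for v in frontier:
            for _ in range(d if v == 0 else d - 1):
                u = len(neighbors)
                neighbors.append([v])
                neighbors[v].append(u)
                nxt.append(u)
        frontier = nxt
    return neighbors
\end{verbatim}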
The spin-exchange WL algorithm proposed above provides a unique and efficient
parallel scheme for computing the joint density of states of Ising models in
the presence of an external field. Essentially, this parallel scheme splits the
joint density of states $g\left(k,v \right)$ into $N/2$ conditional densities
$p\left( k|v \right)$.
\section{Conclusion}
Network reliability is a general framework for understanding the interplay of
network topology and network dynamics. Here we have used network reliability to
study a prototypical network dynamical system -- the Ising model. This
framework can be adapted to other network dynamics as well, by defining a
suitable feasibility criterion for microstates.
The network reliability perspective separates effects of network structure from
dynamics in the system's partition function. Based on this separation, we
introduced a $\delta$-function approximation for the density of states, which
leads to the Bragg-Williams approximation for the internal energy. We also
showed why a na\"ive Monte Carlo method is not accurate enough for estimating
the joint density of states. Finally, we introduced a novel parallel scheme
using a spin-exchange MCMC algorithm for estimating the joint density of
states. The scheme requires no inter-processor communication and can take
advantage of the replica-exchange parallel framework. We applied our method to
a periodic 32-by-32 square lattice, estimating its internal energy and heat
capacity as a function of both temperature and external magnetic field.
This work will make simulations of Ising-like dynamics on large, complex
networks feasible and efficient, and opens the door to studying the Ising model
in the presence of an external field.
An efficient algorithm makes it possible to study the effects of network structure
in systems that are too irregular to admit closed-form solutions. Furthermore,
as is suggested by Fig.~\ref{fig:UC2D}d, the nature of the phase transition
depends on the external field strength. Our approach enables studies of such
phenomena in large systems for the first time.
\begin{acknowledgments}
Research reported in this publication was supported by
the National Institute of General Medical Sciences of the National
Institutes of Health under Models of Infectious Disease Agent Study Grant
5U01GM070694-11, by the Defense Threat Reduction Agency under Grant
HDTRA1-11-1-0016 and by the National Science Foundation under Network
Science and Engineering Grant CNS-1011769. The content is solely the
responsibility of the authors and does not necessarily represent the
official views of the National Institutes of Health, the Department of
Defense or the National Science Foundation. We would like to
thank P. D. Beale for providing the code to compute exact univariate
density of states, and T. Vogel for discussing the replica-exchange
algorithm. We would also like to thank Y. Khorramzadeh, Z. Toroczkai,
M. Pleimling, U. T\"{a}uber and R. Zia for comments and suggestions.
\end{acknowledgments}
\section{Introduction}
This work considers large scale convex optimization problems that are defined over networks, and develops and analyzes distributed algorithms that are compatible with the communication constraints to solve them. Such optimization problems naturally arise in many engineering scenarios.
For example, problems such as estimation in sensor networks, distributed control of multi-agent systems, and resource allocation, can be formulated as distributed convex programs \cite{boyddistributed,nedicnetworktopology}. Advantages of distributed optimization over its centralized counterpart lie in that it offers a flexible and robust solution framework where only locally light computations and peer-to-peer communication are required to minimize a global objective function.
Owing to these wide-ranging applications, distributed multi-agent decision making has recently been studied extensively. In the literature, two types of distributed optimization problems are of particular interest, namely, optimization problems with \emph{coupled cost functions} or \emph{coupled constraints}.
They are essentially different in terms of the coupling sources that prevent decomposition of the original problem, thus making the design challenging.
For \emph{optimization problems with coupled costs}, early distributed optimization algorithms can be found in the seminal work \cite{tsitsiklisdistributed}, where multiple processors cooperatively minimize a common objective function by conducting gradient-based local iterations and exchanging information with other agents asynchronously.
Notable recent distributed optimization algorithms are reported in \cite{nedicachieving,yuanontheconvergence,xuconvergence,xuabregman,quharnessing,shiextra,shionthelinear,duchidualaveraging,hosseinionline}.
Technically speaking, these designs share the following two crucial steps. In the first step, one assigns local copies of the global decision variable to each node, such that each node has a local version of the optimization variable to work with, and imposes a consensus constraint on the local estimates to guarantee equivalence to the original problem. Then, a local iteration rule, together with an appropriate synchronization mechanism, is designed for updating the local estimates of the global minimizer.
Existing methods essentially differ from each other in terms of the design in the second step.
Regarding the local update of global variables, the algorithms reported in \cite{nedicachieving,yuanontheconvergence,xuconvergence,xuabregman,quharnessing,shiextra,shionthelinear} update local estimates about the minimizer by using primal methods, which directly generate points in the feasible set that is contained in the primal space of variables.
Typical primal methods include the projected subgradient method, where the minimizing sequences are generated by shifting the test point along the opposite directions of subgradients and conducting Euclidean projections in an iterative way;
for a detailed introduction to primal and dual methods, the reader is referred to the monograph \cite{nemirovskyproblem}.
In this class of methods, consensus among local estimates is usually enforced by averaging each agent's local estimate and the information received from its immediate neighbors at each round based on certain weight matrices, e.g., doubly stochastic matrices.
For unconstrained optimization problems with smooth objective functions, recent works make use of the dynamic average consensus scheme \cite{zhudiscretetime} to track the gradient of the overall objective, and consecutively move the local estimates along the opposite directions of the approximated overall gradients to achieve minimization.
Although the schemes developed in \cite{quharnessing,nedicachieving,xuconvergence} share similarities, the methodologies for establishing convergence properties are different from each other. For example, the authors in \cite{nedicachieving} and \cite{quharnessing} develop convergence results based on the small-gain theorem and linear system inequalities, respectively.
These schemes provably enjoy faster convergence rate and exact consensual minimization; please see \cite{quharnessing} for more details.
There are also some distributed optimization algorithms available in the literature \cite{duchidualaveraging,hosseinionline,shahrampourdistributed} where the local iteration rule works in the dual space, e.g., dual averaging \cite{nesterovprimal}. It is shown in \cite{duchidualaveraging} that minimizing the dual model of the objective function can alleviate some technical difficulties caused by the projection step used in primal methods.
It is worth noting that none of the aforementioned algorithms is guaranteed to generate \emph{a convergent sequence of test points}. Indeed, they only guarantee convergence of the objective function values at the {running average} along the local minimizing sequence, i.e., ergodic convergence properties. This essentially allows undesired jumps of the objective function values at some iterations, possibly threatening the stability of the distributed system. In centralized optimization, this problem may be mitigated by further considering the best test point achieved so far. This procedure, however, may not be implementable in distributed scenarios since testing the quality of a point requires knowledge of the global objective.
In another line of research, \emph{optimization problems with coupled constraints} have also recently been extensively studied in the literature \cite{nesterovdualsubgradient,nedicapproximate,falsonedualdecomposition,simonettoprimalrecovery,mateosdistributed,wangdistributedMPC,chatzipanagiotisanaugmented,notarnicolaconstraint,shersonontheduality,liangdistributed}. In this class of problems, each agent holds its own decision variable, objective function and constraints, and is coupled via global inequality constraints. A powerful methodology to this kind of problem is known as the Lagrangian relaxation that transforms the primal problem with shared constraints to the corresponding dual problem with coupled costs and solves it; see \cite{nedicapproximate,nesterovdualsubgradient,conejodecomposition} for more details.
It is worth mentioning that, in such a framework, having the optimal dual variable does not necessarily give us an optimal primal variable, as the dual objective is generally nonsmooth at the optimal dual point, and thus nontrivial primal recovery schemes are required \cite{gustavssonprimal}.
Examples of this method include dual decomposition and augmented Lagrangian methods (also known as the method of multipliers) \cite{chatzipanagiotisanaugmented}.
Note that standard dual decomposition \cite{nedicapproximate,nesterovdualsubgradient} requires a fusion center that is able to communicate with all other agents to collect necessary gradient information of the dual objective function.
To enable fully distributed implementation, the work in \cite{wangdistributedMPC} forms a double-loop algorithm that combines the accelerated gradient method and a finite time consensus scheme to tackle the dual problem.
{ Notably, the authors in \cite{liangdistributed} theoretically validate the use of a constant stepsize for the case where both the objectives and the functions characterizing the coupled constraint are smooth.}
In a nonsmooth scenario, recent work in \cite{notarnicolaconstraint} properly relaxes the constraint-coupled problem and explores the duality principle twice to design a distributed iteration scheme. It is shown in \cite{notarnicolaconstraint} that the primal variable converges without any averaging steps.
Alternatively, the authors in \cite{simonettoprimalrecovery,falsonedualdecomposition,mateosdistributed} resort to consensus-based distributed subgradient methods to solve the dual problem. In addition, the authors in \cite{shersonontheduality} propose to use the alternating direction method of multipliers (ADMM) and the primal-dual method of multipliers (PDMM) to solve the dual problem; however, they do not present convergence results.
The work in \cite{simonettoprimalrecovery} establishes a convergence rate for constant stepsizes, and \cite{mateosdistributed} for decaying stepsizes, both by virtue of the assumption that a Slater point exists and is known to all agents; this is somewhat restrictive, although a procedure for constructing a Slater point through extra negotiation between agents is provided in \cite{mateosdistributed}.
The framework considered in \cite{falsonedualdecomposition} relaxes this assumption, but the requirement on the stepsize is more restrictive, i.e., square-summable stepsizes, and no convergence rate is given.
It is worth mentioning that the consensus-based distributed subgradient methods used in \cite{simonettoprimalrecovery,mateosdistributed,falsonedualdecomposition} for solving the dual problem cannot generate a convergent sequence of dual variables; as explained above, only their running averages or the best dual variables achieved so far are guaranteed to converge.
As a consequence, the obtained dual variable sequence that plays an important role in allocating the coupled constraint resources does not necessarily stabilize the multi-agent system.
Note that the constraint-coupled optimization problems can be converted to problems with coupled costs by augmenting the local variable such that each agent becomes interested in a copy of the global variable. In doing so, the algorithms in \cite{nedicconstrainedconsensus,duchidualaveraging,shionthelinear} can be applied but with an increased communication and computation load.
This paper mainly contributes to this area in two aspects.
\begin{itemize}
\item The first contribution of this work is to provide a distributed subgradient method with double averaging (abbreviated as ${\rm DSA_2}$) for nonsmooth optimization that enjoys \emph{non-ergodic convergence properties}, i.e, the local minimizing sequence itself is convergent. The developed method is based on the centralized {subgradient method with double averaging} (${\rm SA_2}$) recently developed in \cite{nesterovquasimonotone,nesterovdualsubgradient}. However, the methodology for establishing convergence properties is significantly different from that in \cite{nesterovquasimonotone}, since in consensus-based distributed optimization one should carefully handle the inexact subgradient information and quantify the network effect caused by distributed implementation. Compared to existing distributed dual methods, e.g., distributed mirror descent \cite{shahrampourdistributed} and distributed dual averaging \cite{duchidualaveraging}, we further introduce an averaging step to the distributed minimization scheme and theoretically show that it is this extra averaging step that makes the sequence of local test points convergent. Since dual methods require a linear model to minimize at each round, the dynamic average consensus scheme is recruited to track the overall gradient as in \cite{quharnessing,xuconvergence} such that each agent maintains a local estimate of the global gradient to form the linear approximation of the global objective.
We establish an $O(\frac{1}{\sqrt{t}})$ convergence rate for the proposed strategy, which is known as the best achievable rate of convergence for subgradient methods. See Table \ref{overview} for a detailed comparison between the proposed algorithm and existing results.
\item
Extensions are made to solve large scale optimization problems with coupled functional constraints by combining ${\rm DSA_2}$ and dual decomposition.
By Lagrangian relaxation, the coupling in constraints in the primal problem is first transformed into that in objective functions of the dual problem.
Then a primal-dual sequence is constructed by solving the dual problem via ${\rm DSA_2}$ and using local estimates of the optimal dual to derive the corresponding primal variable.
A feature of this strategy is that agents only negotiate on dual variables but do not exchange information about local objective functions, constraints, and their optimal decisions, which can effectively help secure privacy among agents.
We theoretically show that both the dual objective error and the quadratic penalty for the coupled constraint admit $O(\frac{1}{\sqrt{t}})$ upper bounds, and the primal objective error vanishes asymptotically.
Numerical simulations and comparisons with state-of-the-art algorithms verify our theoretical findings.
\end{itemize}
\emph{Notation}: We denote by $\mathbb{R}$ the set of real numbers and $\mathbb{R}^m$ the $m$-dimensional Euclidean space.
In this space, we let $\lVert \cdot \rVert_p$ denote the $l_p$-norm operator, $\lVert \cdot \rVert_*$ the dual norm of $\lVert \cdot \rVert$, and $\langle \cdot,\cdot \rangle $ the inner product of two vectors. $0_m\in\mathbb{R}^m$ represents the vector of all zeros, and $\mathbf{1}$ stands for an $m$-dimensional all one column vector.
Notation `$\geq$' is element-wise when applied to vectors.
For a column vector $x\in\mathbb{R}^m$,
$x(i)$ denotes the $i$th element of vector $x$.
$\triangle_m =\{ x\in\mathbb{R}^m \lvert x\geq 0_m,\sum_{i=1}^{m}x(i)=1 \}$ represents the $m$-dimensional probability simplex.
Given an $m\times m$ matrix $A$, we denote its singular values by $\sigma_1(A)\geq \sigma_2(A)\geq \cdots \geq \sigma_m(A) \geq 0$.
A sequence $\{x_t\}_{t\geq 0}$ is said to have non-ergodic (ergodic) convergence rate $O(\cdot)$ if the rate is evaluated at the test point itself $x_t$ (a supporting running sequence $y_t=\frac{1}{t+1}\sum_{k=0}^{t}x_k$).
\begin{table*}
\caption{An overview of existing distributed optimization algorithms. }
\label{overview}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\hline
\multirow{2}{*}{Algorithms} &
\multirow{2}{*}{${\rm DSA_2}$}&
\multicolumn{4}{|c|}{\bf{Coupled costs}} &
\multicolumn{5}{|c|}{\bf{Coupled constraints}} \\
\cline{3-11}
& & \cite{duchidualaveraging} & \cite{nedicconstrainedconsensus} & \cite{quharnessing} & \cite{shiextra}& \cite{nedicapproximate} & \cite{nesterovdualsubgradient} & \cite{mateosdistributed}& \cite{falsonedualdecomposition}& {\cite{notarnicolaconstraint}} \\
\hline
\multirow{2}{*}{Assumptions} & \multicolumn{3}{|c|}{\multirow{2}{*}{Convex, Constrained,}} & \multicolumn{2}{|c|}{\multirow{2}{*}{Convex, Smooth,}} & \multicolumn{2}{|c|}{\multirow{2}{*}{Convex, Constrained}} & \multicolumn{3}{|c|}{\multirow{2}{*}{Convex,}} \\
&\multicolumn{3}{|c|}{Bounded subgradient} & \multicolumn{2}{|c|}{Unconstrained} & \multicolumn{2}{|c|}{Fusion center} & \multicolumn{3}{|c|}{Constrained}\\
\hline
Exactness & \multicolumn{5}{|c|}{Yes}& No & \multicolumn{4}{|c|}{Yes} \\
\hline
Iteration rule & \multicolumn{2}{|c|}{Dual methods} & \multicolumn{4}{|c|}{Primal methods} & Dual methods & \multicolumn{3}{|c|}{Primal methods} \\
\hline
\multirow{3}{*}{Convergence}& \multicolumn{4}{|c|}{Objective error} & {Fixed point residual} & \multicolumn{5}{|c|}{Objective error} \\
\cline{2-11}
&\multicolumn{1}{|c|}{\bf{Non-ergodic}}&\multicolumn{4}{|c|}{Ergodic} & \multirow{2}{*}{$O(\frac{1}{t})$} &\multicolumn{2}{|c|}{\multirow{2}{*}{$O(\frac{1}{\sqrt{t}})$}} & \multicolumn{2}{|c|}{\multirow{2}{*}{N/A}} \\
\cline{2-6}
&\multicolumn{1}{|c|}{$O(\frac{1}{\sqrt{t}})$}&\multicolumn{2}{|c|}{$O(\frac{\log(t)}{\sqrt{t}})$} & \multicolumn{2}{|c|}{$O(\frac{1}{t})$} & & \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} \\
\hline
\multirow{2}{*}{Remarks} & \multicolumn{10}{|c|}{{\multirow{2}{*}{Some distributed optimization algorithms based on augmented Lagrangian }}} \\
&\multicolumn{10}{|c|}{method{ \cite{shionthelinear,shersonontheduality}} and operator splitting \cite{xuabregman} are not included due to limited space.}\\
\hline
\end{tabular}
\end{table*}
\section{Problem Statement and Preliminaries}
This section formally presents the distributed optimization problem and preliminaries about the centralized ${\rm SA_2}$ \cite{nesterovquasimonotone}.
\subsection{Problem statement}
We consider a problem where $n$ agents connected via a network seek to collaboratively solve the following constrained optimization problem:
\begin{equation} \label{original_problem}
\min_{x\in \mathcal{X}} f(x) =\frac{1}{n}\sum_{i=1}^{n}f_i(x)
\end{equation}
where ${x}\in\mathbb{R}^m$ denotes the decision variable and $\mathcal{X}\subseteq \mathbb{R}^m$ the common closed convex constraint set.
Throughout this paper, we assume without loss of generality that $0_m\in\mathcal{X}$, since this can always be ensured by translating $\mathcal{X}$.
Each function $f_i: \mathcal{X}\rightarrow\mathbb{R}$ that is convex and possibly nonsmooth represents the local objective privately known to agent $i$. Suppose that problem in \eqref{original_problem} admits at least one optimal solution. We denote by ${x}^*$ one of the minimizers and $f(x^*)$ the minimal function value. For function $f_i$, we denote by $\triangledown f_i(x)$ its arbitrary subgradient at $x\in\mathcal{X}$ that satisfies
\begin{equation*}
f_i(y)\geq f_i(x)+\langle\triangledown f_i(x),y-x \rangle, \forall y\in\mathcal{X}.
\end{equation*}
The following assumption is made for the objective function.
\begin{assumption}\label{Lipschitz_continuity}
Each function $f_i$ is $L$-Lipschitz with respect to some norm $\lVert \cdot \rVert$, i.e.,
\begin{equation}
\lvert f_i(x)- f_i(y) \rvert \leq L \lVert x-y\rVert, \forall x,y\in\mathcal{X}.
\end{equation}
\end{assumption}
It is worth mentioning that the Lipschitz continuity assumed in Assumption \ref{Lipschitz_continuity} holds for many functions, e.g., any convex function on a closed domain or polyhedral function on an arbitrary domain.
A consequence of this assumption is that we have all the subgradients of $f_i(x),\forall x\in\mathcal{X}$ bounded in the dual norm{ \cite{duchidualaveraging}}, i.e.,
\begin{equation*}
\lVert \triangledown f_i(x)\rVert_* \leq L.
\end{equation*}
The communication network that connects the multi-agent system is modeled by an undirected and simple graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{1,\cdots,n\}$ denotes the set of agents and $\mathcal{E}\subseteq \mathcal{V}\times\mathcal {V}$ the set of edges that correspond to the communication channels between agents.
Note that this general graph imposes communication constraints on the working agents, that is, each agent can only communicate with its neighboring agents $ j\in \mathcal{N}_i=\{j\in \mathcal{V}|(i,j)\in \mathcal{E} \}$.
\subsection{Subgradient method with double averaging}
This subsection briefly reviews the ${\rm SA_2}$, based on which the proposed algorithms are developed. We begin by introducing a {prox-function} $d:\mathcal{X}\rightarrow\mathbb{R}$ that enjoys the following properties.
\begin{assumption}\label{prox_function}
1) $d(x)\geq 0, \forall x\in\mathcal{X}$ and $d(0_m)=0$;
2) $d(x)$ is $1$-strongly convex on $\mathcal{X}$ with respect to the same norm as in Assumption \ref{Lipschitz_continuity}, i.e.,
\begin{equation}
d(y)\geq d(x)+\langle\triangledown d(x), y-x \rangle+\frac{1}{2}\lVert y-x\rVert^2, \forall x,y\in\mathcal{X}.
\end{equation}
\end{assumption}
We remark that this assumption is standard in the sense that it can be easily achieved by a large group of functions. For instance, the quadratic function $d(x)=\frac{1}{2}\lVert x\rVert^2_2$ satisfies $d(0_m)=0$ and is $1$-strongly convex with respect to the $l_2$-norm, { and the entropic function $
d(x)=\sum_{i=1}^{m}x(i)\log x(i) -x(i)
$
is strongly convex with respect to the $l_1$-norm for $x$ in the $m$-dimensional probability simplex $\triangle_m$.}
${\rm SA_2}$ generates sequences of estimates about the minimizer and the corresponding subgradient, i.e., $\{x_t\}_{t\geq 0}$ and $\{\frac{1}{t+1}\sum_{k=0}^{t}\triangledown f(x_k)\}_{t\geq 0}$,
in an iterative way.
In particular, the algorithm at each time stamp $t$ performs the following iteration
\begin{subequations}\label{centralized_DA_2}
\begin{align}
\hat{x}_{t+1}&=\arg \min_{x\in\mathcal{X}}\Big\{\big\langle \sum_{k=0}^{t}\triangledown f(x_k), x \big\rangle +\gamma_td(x) \Big\}\label{local_optimization_centralized} \\
x_{t+1} &= \frac{t+1}{t+2}{x}_t+\frac{1}{t+2}\hat{x}_{t+1} =\frac{1}{t+2}\sum_{k=0}^{t+1}\hat{x}_{k}, \label{extra_averaging_step}
\end{align}
\end{subequations}
where $\gamma_t$ is a non-decreasing sequence of positive parameters. This scheme shares similarities with other dual methods \cite{duchidualaveraging,nesterovprimal} where the calculation of test points involves iteratively minimizing an averaged linear approximation of the objective function $f$.
However, directly minimizing a linear model of the objective may lead to oscillation. Thus in \eqref{local_optimization_centralized} a sum of the linear model and a weighted proximal function $d$ is minimized.
The averaging step in \eqref{extra_averaging_step} is a feature that cannot be found in other dual methods. Due to this feature, ${\rm SA_2}$ is able to produce a convergent minimizing sequence \cite{nesterovquasimonotone}.
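For illustration, a compact Python sketch of the scheme in
\eqref{centralized_DA_2} is given below, assuming the quadratic prox-function
$d(x)=\frac{1}{2}\lVert x\rVert_2^2$, for which the argmin in
\eqref{local_optimization_centralized} reduces to a Euclidean projection; the
subgradient oracle and the projection onto $\mathcal{X}$ are supplied by the
user.
\begin{verbatim}
import numpy as np

def sa2(subgrad, project, m, T=5000, gamma0=1.0):
    # Subgradient method with double averaging (quadratic prox).
    x = np.zeros(m)     # test point; 0_m is assumed to lie in X
    z = np.zeros(m)     # accumulated subgradients
    for t in range(T):
        z += subgrad(x)
        x_hat = project(-z / (gamma0 * np.sqrt(t + 1)))
        x = ((t + 1) * x + x_hat) / (t + 2)   # extra averaging step
    return x
\end{verbatim}
For instance, \texttt{project = lambda y: np.clip(y, -1.0, 1.0)} realizes a
box constraint.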
\section{Distributed Subgradient Algorithm with Double Averaging}
In this section, we develop ${\rm DSA_2}$ and show its connections with some existing results.
\subsection{Development of ${\rm DSA_2}$}
Recall that our objective is to minimize the composite function in \eqref{original_problem} with distributed computations being conducted at each vertex of a connected graph.
A classic technique to fulfill this task is to reformulate this problem as a consensus problem. That is, one assigns local copies of the global decision variable $x$ and the subgradient $\triangledown f(x)$ evaluated at the corresponding $x$, i.e., $x_{i}$ and $s_{i}$, to each agent $i$, and encodes agreement constraints on local estimates, i.e., $x_i=x_j,\forall i,j\in\mathcal{V}$, to ensure the equivalence to the original optimization problem.
In doing so, each agent has local versions of the global variable and the corresponding subgradient information to operate with.
We observe from \eqref{centralized_DA_2} that $\hat{x}_{t}$ directly depends on the subgradient accumulated over time and contributes to the local update of $x_{t}$.
This essentially implies that, to mimic the centralized minimization, the mechanism of updating $s_{i,t}$ to generate an accurate local estimate of the global subgradient is crucial.
Further, it may be intuitive to expect that if the disagreement between $s_{i,t}$ and the true subgradient converges then minimization of the global objective can be achieved.
To exploit this feature, we in this work employ the dynamic average consensus scheme \cite{zhudiscretetime} to track the subgradient of the aggregate objective function by using the local subgradient and the information gathered from immediate neighbors. More specifically, each agent properly weights the collected information at each iteration to generate an estimate of the global subgradient.
To model this process, we assign
a positive weight $p_{ij}$ to each communication link $(i,j)\in\mathcal{E}$ and leave $p_{ij}=0$ for other $(i,j)$ pairs.
We make the following standard assumption for the graph and the weight matrix $P=[p_{ij}]$.
\begin{assumption}\label{assumption_weight_matrix}
1) The graph $\mathcal{G}$ is connected; 2) $P$ has a strictly positive diagonal, i.e., $p_{ii}>0$;
3) $P$ is doubly stochastic, i.e., $P\mathbf{1}=\mathbf{1}$ and $\mathbf{1}^{\mathrm{T}}P=\mathbf{1}^{\mathrm{T}}$.
\end{assumption}
Note that Assumption \ref{assumption_weight_matrix} guarantees $\sigma_2(P)<1$, which can be illustrated as follows. Assumptions \ref{assumption_weight_matrix}-1 and \ref{assumption_weight_matrix}-2 make the matrix $P^{\mathrm{T}}P$ irreducible and primitive, respectively. This fact together with Assumption \ref{assumption_weight_matrix}-3 gives that $P^{\mathrm{T}}P$ has a unique Perron-Frobenius eigenvalue which is $1$, meaning that $\sigma_2(P)<1$.
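As a concrete example, a weight matrix satisfying Assumption
\ref{assumption_weight_matrix} can be built from local degree information
alone; the sketch below uses the well-known Metropolis--Hastings weights.
\begin{verbatim}
import numpy as np

def metropolis_weights(neighbors):
    # Symmetric, doubly stochastic P with strictly positive diagonal.
    n = len(neighbors)
    P = np.zeros((n, n))
    deg = [len(nb) for nb in neighbors]
    for i in range(n):
        for j in neighbors[i]:
            P[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        P[i, i] = 1.0 - P[i].sum()
    return P
\end{verbatim}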
We are now equipped to present the subgradient tracking scheme{ \cite{varagnolonewton}}:
\begin{equation}
s_{i,t+1}=\sum_{j\in\mathcal{N}_i\cup\{i\}}p_{ij}s_{j,t}+\triangledown f_i(x_{i,t+1})-\triangledown f_i(x_{i,t}) . \label{dynamic_average_consensus}
\end{equation}
Denote $
\overline{s}_t=\frac{1}{n}\sum_{i=1}^{n}{s}_{i,t}$, and
$g_t=\frac{1}{n}\sum_{i=1}^{n}\triangledown f_i(x_{i,t})
$.
A lemma known as the conservation property is recalled { (Lemma 3 in \cite{xuconvergence})}.
\newtheorem{lemma}{Lemma}
\begin{lemma} \label{conservation_property}
If $s_{i,0}=\triangledown f_i(x_{i,0}),i\in\mathcal{V}$, then
$
\overline{s}_{t+1}=g_{t+1}.
$
\end{lemma}
Then, each agent is able to perform the following
\begin{subequations}\label{local_iteration}
\begin{align}
\hat{x}_{i,t+1}&=\arg \min_{x\in\mathcal{X}}\Big\{\big\langle \sum_{k=0}^{t}s_{i,k}, x \big\rangle +\gamma_td(x)\Big \} \label{local_optimization}\\
x_{i,t+1} &= \frac{t+1}{t+2}{x}_{i,t}+\frac{1}{t+2}\hat{x}_{i,t+1}=\frac{1}{t+2}\sum_{k=0}^{t+1}\hat{x}_{i,k}, \label{local_averaging}
\end{align}
\end{subequations}
where $x_{i,t}$ denotes the estimated variable maintained by agent $i$ at time stamp $t$. It is worth mentioning that the difference between Eqs. \eqref{local_optimization_centralized} and \eqref{local_optimization} is that \eqref{local_optimization_centralized} uses exactly the accumulated subgradient of the global objective, i.e., $\triangledown f(x_t)$, while \eqref{local_optimization} uses an estimated one, i.e., $s_{i,t}$, due to the incomplete knowledge of each agent about the global objective.
The proposed ${\rm DSA_2}$ is detailed in Algorithm \ref{DSA_2}.
\begin{algorithm}
\begin{algorithmic}[1]
\caption{${\rm DSA_2}$}
\label{DSA_2}
\STATE Set $t=0, s_{i,0}=\triangledown f_i(x_{i,0})$, choose a non-decreasing sequence of positive parameters $\{\gamma_t\}_{t\geq 0}$.
\WHILE { { Convergence is not reached}}
\FOR{Each agent $i\in\mathcal{V}$ (in parallel)}
\STATE Receive $s_{j,t}, \forall j\in\mathcal{N}_{i}$;
\STATE Perform local computation in \eqref{local_iteration} and \eqref{dynamic_average_consensus};
\STATE Broadcast $s_{i,t+1}$ to $j\in\mathcal{N}_i$;
\ENDFOR
\STATE Set $t = t+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
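For reference, a compact Python sketch of Algorithm \ref{DSA_2} is given
below, again assuming the quadratic prox-function so that
\eqref{local_optimization} reduces to a Euclidean projection. The subgradient
oracles and the projection are supplied by the user, and the matrix product
\texttt{P @ s} stands in for the per-agent exchange with neighbors.
\begin{verbatim}
import numpy as np

def dsa2(subgrads, project, P, m, T=5000, gamma0=1.0):
    # subgrads[i](x) returns a subgradient of f_i at x;
    # P is the doubly stochastic weight matrix.
    n = P.shape[0]
    x = np.zeros((n, m))                              # local estimates
    g = np.stack([subgrads[i](x[i]) for i in range(n)])
    s = g.copy()                                      # s_{i,0}
    z = np.zeros((n, m))                              # sums of s_{i,k}
    for t in range(T):
        z += s
        gamma_t = gamma0 * np.sqrt(t + 1)
        x_hat = np.stack([project(-z[i] / gamma_t) for i in range(n)])
        x_new = ((t + 1) * x + x_hat) / (t + 2)       # double averaging
        g_new = np.stack([subgrads[i](x_new[i]) for i in range(n)])
        s = P @ s + g_new - g                         # subgradient tracking
        x, g = x_new, g_new
    return x
\end{verbatim}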
\newtheorem{remark}{Remark}
\begin{remark}
It is worth mentioning that in most of the existing results, the running average is merely used as a supporting sequence for convergence but not involved in the subproblems at each iteration. That is, they have a recursion rule similar to the following{ \cite{duchidualaveraging}}
\begin{equation*}
\begin{split}
\hat{x}_{i,t+1}&=\arg \min_{x\in\mathcal{X}}\Big\{\big\langle \sum_{k=0}^{t}s_{i,k}, x \big\rangle +\gamma_td(x) \Big\} \\
s_{i,t+1}&=\sum_{j=1}^{n}p_{ij}s_{j,t}+\triangledown f_i(\hat{x}_{i,t+1})-\triangledown f_i(\hat{x}_{i,t}),
\end{split}
\end{equation*}
and establish convergence for the supporting running sequence $ x_{i,t+1}= \frac{t+1}{t+2}{x}_{i,t}+\frac{1}{t+2}\hat{x}_{i,t+1}=\frac{1}{t+2}\sum_{k=0}^{t+1}\hat{x}_{i,k}$.
However, it is sometimes more desirable to have a convergent sequence of test points; consider, for example, the decentralized dual Lagrangian problem, where the local test point is further used to coordinate subproblems. In this work, we show that, by using the subgradient information evaluated at the running sequence $x_{i,t+1}$, i.e., Eq. \eqref{dynamic_average_consensus}, a convergent sequence of local test points can be obtained.
\end{remark}
\subsection{Relation to existing results}
As mentioned in the Introduction, existing distributed optimization methods contain two important parts, that is, an update rule for local estimates of the global variable and a properly designed consensus mechanism. In this subsection, we compare ${\rm DSA_2}$ with existing works in terms of these two components.
Regarding the local update of global variables, the proposed algorithm works in the dual space, i.e., each agent at each round generates a linear model of the local objective function and maps it back into the primal space by performing a projection-like operation, i.e., \eqref{local_optimization}.
Notable works including \cite{duchidualaveraging,hosseinionline} enjoy this feature too. The difference between them and the proposed one is that we further introduce an averaging step \eqref{local_averaging}
that allows us to generate {a convergent sequence of test points}.
Thanks to this distinct feature, the proposed algorithm further lends itself to consensus-based dual decomposition that is known as a powerful methodology to deal with large scale \emph{constraint-coupled} optimization problems, as we will show later.
We now turn to compare the consensus mechanisms.
Define $z_{i,t}=\sum_{k=0}^{t}s_{i,k}$. It can be seen that if $s_{i,0}=\triangledown f_i(x_{i,0})$
then the subgradient tracking scheme \eqref{dynamic_average_consensus} reduces to
\begin{equation*}
z_{i,t+1}=\sum_{j\in\mathcal{N}_i\cup\{i\}}p_{ij}z_{j,t}+\triangledown f_i(x_{i,t+1}),
\end{equation*}
which is the subgradient update rule in \cite{duchidualaveraging}.
In another line of research, local estimates about the minimizer are updated by using primal methods which directly generate a sequence of points in the feasible set that is contained in the primal space of variables \cite{nedicconstrainedconsensus,simonettoprimalrecovery}. Specifically, each agent at each round conducts the following
\begin{equation}\label{primal_method}
x_{i,t+1}= \mathcal{P}_{\mathcal{X}}\Big[ \sum_{j\in\mathcal{N}_i\cup\{i\}}p_{ij}x_{j,t}-\frac{1}{\gamma_{t}}\triangledown f_i(x_{i,t}) \Big],
\end{equation}
where $\mathcal{P}_{\mathcal{X}}\left[ y \right] =\arg \min_{x\in\mathcal{X}}\|y-x \|_2$ denotes the Euclidean projection of a vector $y$ on the set $\mathcal{X}$.
Note that there is a consensus seeking step before the projection in \eqref{primal_method}. For unconstrained optimization problems with smooth objective functions, recent works \cite{xuconvergence,quharnessing,nedicachieving} make use of the gradient tracking scheme \eqref{dynamic_average_consensus} and modify the rule to update local variables in \eqref{primal_method} as
\begin{equation*}
x_{i,t+1}= \sum_{j\in\mathcal{N}_i\cup\{i\}}p_{ij}x_{j,t}-\frac{1}{\gamma}s_{i,t},
\end{equation*}
where $\frac{1}{\gamma}$ denotes the constant stepsize.
We remark that, to the best of our knowledge, the proposed method is the first distributed optimization method for which a non-ergodic convergence rate in terms of the objective error is established.
\section{Convergence Properties Analysis}
\subsection{Basic convergence analysis}
In this subsection, we state the basic convergence results that reveal how the local estimate $x_{i}$ approaches the minimizer with the help of the global subgradient tracking scheme \eqref{dynamic_average_consensus}, highlighting the network effect due to distributed implementation.
Motivated by the literature regarding consensus-based distributed optimization, we set up an auxiliary sequence $\{y_t\}_{t\geq 0}$ that makes use of the {averaged subgradient} $g_t$ and conducts the following recursion
\begin{equation*}
\begin{split}
\hat{y}_0&=y_0 = 0_m\\
\hat{y}_{t+1}&=\arg \min_{x\in\mathcal{X}}\Big\{\big\langle \sum_{k=0}^{t}g_{k}, x \big\rangle +\gamma_td(x) \Big\} \\
y_{t+1} &= \frac{t+1}{t+2}{y}_{t}+\frac{1}{t+2}\hat{y}_{t+1}.
\end{split}
\end{equation*}
Note that we only impose specific initial conditions for the sequence $\{y_t\}_{t\geq 0}$; the initial guess $x_{i,0}$ for each agent can be arbitrary.
We present a slightly modified result in dual averaging { (Theorem 2 in \cite{nesterovprimal}, Lemma 3 in \cite{duchidualaveraging})}. Note that we let by convention that $\gamma_{-1}=\gamma_0$.
\begin{lemma} \label{auxiliary_sequence}
Suppose Assumptions \ref{Lipschitz_continuity}-\ref{prox_function} hold true. For any non-decreasing sequence $\{\gamma_t\}_{t\geq 0}$ of positive parameters, and $x\in\mathcal{X}$, we have
\begin{equation*}
\sum_{k=0}^{t}\langle g_k, \hat{y}_k-x \rangle\leq \frac{1}{2} \sum_{k=0}^{t}\frac{1}{\gamma_{k-1}}\lVert g_k\rVert_*^2+\gamma_t d(x).
\end{equation*}
\end{lemma}
To establish relations between the local estimate $\{x_t\}_{t\geq 0}$ and $\{y_t\}_{t\geq 0}$, we recall the following standard result in convex analysis { (Lemma 1 in \cite{nesterovprimal})}.
\begin{lemma} \label{gamma_continuity}
For any $u,v\in\mathbb{R}^m$ and $\gamma>0$, we have
\begin{equation*}
\begin{split}
&\big\lVert \arg \min_{x\in \mathcal{X}}\big\{ \langle u,x \rangle+\gamma d(x) \big\} -\arg \min_{x\in \mathcal{X}}\big\{ \langle v,x \rangle+\gamma d(x) \big\} \big\rVert \\
& \leq \frac{1}{\gamma} \lVert u-v \rVert_*.
\end{split}
\end{equation*}
\end{lemma}
\begin{lemma}\label{basic_convergence_thm}
Suppose Assumptions \ref{Lipschitz_continuity}-\ref{prox_function} hold true.
Let the sequences $\{x_{i,t}\}_{t\geq 0}$ and $\{s_{i,t}\}_{t\geq 0}$ be generated by Algorithm \ref{DSA_2}. For any $x\in\mathcal{X}$, we have
\begin{equation}\label{basic_convergence}
\begin{split}
&f(x_{i,t})-f(x)
\leq \frac{L}{t+1}\times\\
& \sum_{k=0}^{t} \frac{1}{\gamma_{k-1}} \Big(\big\lVert \sum_{l=0}^{k-1}( s_{i,l}-g_l ) \big\rVert_* +\frac{2}{n}\sum_{j=1}^{n}\big\lVert \sum_{l=0}^{k-1}( s_{j,l}-g_l ) \big\rVert_* \Big) \\
& + \frac{1}{2(t+1)
} \sum_{k=0}^{t}\frac{1}{\gamma_{k-1}}\lVert g_k\rVert_*^2+\frac{1}{t+1} \gamma_t d(x).
\end{split}
\end{equation}
\end{lemma}
\begin{pf}
For any $x\in\mathcal{X}$, we consider
\begin{equation}\label{ergodic_and_non}
\begin{split}
(t+1)&\sum_{j=1}^{n}\big(f_j(x_{j,t})-f_j(x)\big) = (t+1) \sum_{j=1}^{n}f_j(x_{j,t})\\
&-\sum_{k=0}^{t}\sum_{j=1}^{n}f_j(x_{j,k}) +\sum_{k=0}^{t}\sum_{j=1}^{n}\big(f_j(x_{j,k})-f_j(x)\big).
\end{split}
\end{equation}
By convexity of $f_i$, we have
\begin{equation}\label{convexity_1}
\begin{split}
&(t+1) \sum_{j=1}^{n}f_j(x_{j,t})-\sum_{k=0}^{t}\sum_{j=1}^{n}f_j(x_{j,k}) \\
=& t \sum_{j=1}^{n}f_j(x_{j,t})-\sum_{k=0}^{t-1}\sum_{j=1}^{n}f_j(x_{j,k}) \\
=& \sum_{j=1}^{n}\sum_{k=1}^{t} k \big(f_j(x_{j,k})-f_j(x_{j,k-1})\big) \\
\leq & \sum_{j=1}^{n}\sum_{k=1}^{t}k\langle\triangledown f_j(x_{j,k}), x_{j,k}-x_{j,k-1} \rangle
\end{split}
\end{equation}
and
\begin{equation}\label{convexity_2}
\begin{split}
\sum_{j=1}^{n}\big(f_j(x_{j,k})-f_j(x)\big)\leq \sum_{j=1}^{n}\langle \triangledown f_j(x_{j,k}), x_{j,k}-x \rangle.
\end{split}
\end{equation}
Plugging inequalities \eqref{convexity_1} and \eqref{convexity_2} into \eqref{ergodic_and_non} yields
\begin{equation*}
\begin{split}
&(t+1)\sum_{j=1}^{n}\big(f_j(x_{j,t})-f_j(x)\big) \\
\leq & \sum_{j=1}^{n}\Big( \sum_{k=1}^{t} \langle \triangledown f_j(x_{j,k}), (k+1)x_{j,k}-kx_{j,k-1}-x \rangle \\
& + \langle \triangledown f_j(x_{j,0}), x_{j,0}-x \rangle \Big),
\end{split}
\end{equation*}
which in conjunction with an equivalent expression of \eqref{local_averaging}
\begin{equation*}
(t+1)x_{j,t}=tx_{j,t-1}+\hat{x}_{j,t}
\end{equation*}
gives rise to
\begin{equation*}
(t+1)\sum_{j =1}^{n}\big(f_j(x_{j,t})-f_j(x)\big) \leq \sum_{j=1}^{n} \sum_{k=0}^{t} \langle \triangledown f_j(x_{j,k}), \hat{x}_{j,k}-x \rangle .
\end{equation*}
To approach the desired result using Lemma \ref{auxiliary_sequence}, we shall rewrite the above inequality as
\begin{equation} \label{crucial_inequality}
\begin{split}
&(t+1)\sum_{j=1}^{n}\big(f_j(x_{j,t})-f_j(x)\big)\\
\leq & \sum_{j=1}^{n} \sum_{k=0}^{t} \Big( \langle \triangledown f_j(x_{j,k}), \hat{x}_{j,k}-\hat{y}_{k} \rangle+
\langle \triangledown f_j(x_{j,k}), \hat{y}_{k}-x \rangle \Big) \\
=& \sum_{k=0}^{t} \Big( \sum_{j=1}^{n} \langle \triangledown f_j(x_{j,k}), \hat{x}_{j,k}-\hat{y}_{k} \rangle + n\langle g_k, \hat{y}_{k}-x \rangle \Big).
\end{split}
\end{equation}
With this relation in mind, we now turn to consider
\begin{equation}\label{identical_variable}
\begin{split}
&f(x_{i,t})-f(x) = f(x_{i,t})-f(y_t)+f(y_t)-f(x) \\
\leq& \frac{1}{n}\Big( \sum_{j=1}^{n} \big(f_j(y_t)-f_j(x_{j,t})\big) +\sum_{j=1}^{n}\big(f_j(x_{j,t})-f_j(x) \big)\Big)\\
&+ L\lVert x_{i,t}-y_t \rVert \\
\leq& L \Big(\lVert x_{i,t}-y_t \rVert +\frac{1}{n}\sum_{j=1}^{n}\lVert x_{j,t}-y_t \rVert \Big)\\ &+ \frac{1}{n} \sum_{j=1}^{n}\big(f_j(x_{j,t})-f_j(x) \big),
\end{split}
\end{equation}
where we use the $L$-Lipschitz continuity of $f_i$ to derive the first and second inequality. In light of \eqref{crucial_inequality}, we have
\begin{equation*}
\begin{split}
&f(x_{i,t})-f(x)\\
&\leq L \Big(\lVert x_{i,t}-y_t \rVert +\frac{1}{n}\sum_{j=1}^{n}\lVert x_{j,t}-y_t \rVert \Big) \\
& + \frac{1}{t+1} \sum_{k=0}^{t}\Big(\frac{1}{n} \sum_{j=1}^{n}\langle \triangledown f_j(x_{j,k}), \hat{x}_{j,k}-\hat{y}_k \rangle+\langle g_k, \hat{y}_k-x \rangle \Big).
\end{split}
\end{equation*}
By the fact that
$
y_t= \frac{1}{t+1}\Big(y_0+\sum_{k=1}^{t}\hat{y}_k\Big)= \frac{1}{t+1}\sum_{k=0}^{t}\hat{y}_k
$,
we obtain
\begin{equation*}
\begin{split}
&f(x_{i,t})-f(x)\\
\leq & \frac{L}{t+1} \sum_{k=0}^{t}\Big(\lVert \hat{x}_{i,k}-\hat{y}_k \rVert +\frac{1}{n}\sum_{j=1}^{n}\lVert \hat{x}_{j,k}-\hat{y}_k \rVert \Big) \\
+& \frac{1}{t+1} \sum_{k=0}^{t}\Big(\frac{1}{n} \sum_{j=1}^{n}\langle \triangledown f_j(x_{j,k}), \hat{x}_{j,k}-\hat{y}_k \rangle+\langle g_k, \hat{y}_k-x \rangle \Big)\\
\leq & \frac{L}{t+1} \sum_{k=0}^{t}\Big(\lVert \hat{x}_{i,k}-\hat{y}_k \rVert +\frac{2}{n}\sum_{j=1}^{n}\lVert \hat{x}_{j,k}-\hat{y}_k \rVert \Big ) \\
& + \frac{1}{t+1}\sum_{k=0}^{t} \langle g_k, \hat{y}_k-x \rangle.
\end{split}
\end{equation*}
It follows from Lemma \ref{auxiliary_sequence} that
\begin{equation*}
\begin{split}
&f(x_{i,t})-f(x)\\
\leq & \frac{L}{t+1} \sum_{k=0}^{t}\Big(\lVert \hat{x}_{i,k}-\hat{y}_k \rVert +\frac{2}{n}\sum_{j=1}^{n}\lVert \hat{x}_{j,k}-\hat{y}_k \rVert \Big) \\
& + \frac{1}{2(t+1)
} \sum_{k=0}^{t}\frac{1}{\gamma_{k-1}}\lVert g_k\rVert_*^2+\frac{1}{t+1} \gamma_t d(x).
\end{split}
\end{equation*}
Appealing to the $\frac{1}{\gamma}$-Lipschitz continuity of the projection operator in \eqref{local_optimization} (Lemma \ref{gamma_continuity}) allows us to obtain the desired result in \eqref{basic_convergence}. \QEDA
\end{pf}
\begin{remark}
Lemma \ref{basic_convergence_thm} highlights that, after $t$ steps of execution, the objective error $f(x_{i,t})-f(x^*)$ is bounded from above by a summation of four terms. The first two terms are due to the different estimates of the averaged subgradient of the global objective. The third and the fourth terms can be seen as the optimization error terms observed also in centralized nonsmooth optimization. The result suggests that, if the deviation $\big\lVert \sum_{l=0}^{k-1}( s_{i,l}-g_l ) \big\rVert_*$ is finite, i.e., $\lVert s_{i,l}-g_l \rVert_*$ decays fast enough, and $\gamma_{k}$ is properly chosen, then the objective error asymptotically converges to $0$. The assumptions for Lemma \ref{basic_convergence_thm} (Assumptions \ref{Lipschitz_continuity}-\ref{prox_function}) are not restrictive in the sense that the Lipschitz continuity holds for general convex functions defined on a closed domain and the requirements for the prox-function in Assumption \ref{prox_function} can also be easily met.
\end{remark}
{
\begin{remark}
The proposed algorithm needs a synchronized decreasing $\frac{1}{\gamma_t}$ to ensure convergence. The choice is made for technical reasons. Specifically, it has to be made identical for each agent to validate the use of the $\frac{1}{\gamma}$-Lipschitz continuity of the projection operator in \eqref{local_optimization}, and to be decaying to make the objective error convergent in light of Eq. \eqref{basic_convergence}. This requirement can be satisfied by a synchronization step before execution of the algorithm. We note that the work in \cite{xuconvergence} relaxes this requirement, where an unconstrained distributed optimization problem is considered.
\end{remark}}
\subsection{Disagreement analysis}
\begin{lemma}\label{disagreement}
Suppose Assumptions \ref{Lipschitz_continuity} and \ref{assumption_weight_matrix} hold. For the sequences $\{s_{i,t}\}_{t\geq 0}$ generated by the subgradient tracking scheme \eqref{dynamic_average_consensus}, we have
\begin{equation}\label{disagreement_bound}
\big\lVert \sum_{l=0}^{k}s_{i,l}-\sum_{l=0}^{k}g_{l}\big\rVert_*
\leq \frac{\sqrt{n}L}{1-\sigma_2(P)}+2L.
\end{equation}
\end{lemma}
\begin{pf}
Define
\begin{equation*}
\begin{split}
{s}_t=\begin{bmatrix}
s_{1,t}\\ \vdots \\ s_{n,t}
\end{bmatrix},
{\triangledown}_t=\begin{bmatrix}
{\triangledown}f_1(x_{1,t})\\ \vdots \\{\triangledown}f_n(x_{n,t})
\end{bmatrix}
\end{split}
\end{equation*} and rewrite the dynamics in \eqref{dynamic_average_consensus} for all $i$ in a compact form as
\begin{equation} \label{gradient_error}
s_{t+1}=Ps_t+\triangledown_{t+1}-\triangledown_{t}.
\end{equation}
Summing \eqref{gradient_error} over $t$ from $t=0$ to $k-1$ yields
$
\sum_{l=1}^{k}s_{l} =P\sum_{l=0}^{k-1}s_{l}-\triangledown_{0}+\triangledown_{k}
$.
Since $s_0=\triangledown_0$, we have
\begin{equation*}
\begin{split}
&\sum_{l=0}^{k}s_l =P\sum_{l=0}^{k-1}s_{l}+\triangledown_{k}.
\end{split}
\end{equation*}
To establish relation between $\sum_{l=0}^{k}s_l$ and the accumulated averaged subgradient, we subtract $\sum_{l=0}^{k}g_l$ on both sides and get
\begin{equation*}
\begin{split}
\sum_{t=0}^{k}s_{t}-\mathbf{1}\sum_{t=0}^{k}g_{t} =&P\sum_{t=0}^{k-1}s_{t}-\mathbf{1}\sum_{t=0}^{k-1}g_{t}-\mathbf{1}g_{k}+\triangledown_{k}.
\end{split}
\end{equation*}
By using $\big(P-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\big)(s_t-\mathbf{1}g_t)=Ps_t-\mathbf{1}g_t$, we obtain
\begin{equation*}
\begin{split}
\sum_{l=0}^{k}s_{l}&-\mathbf{1}\sum_{l=0}^{k}g_{l}\\
=&\big(P-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\big)\Big(\sum_{l=0}^{k-1}s_{l}-\mathbf{1}\sum_{l=0}^{k-1}g_{l}\Big)-\mathbf{1}g_{k}+\triangledown_{k}.
\end{split}
\end{equation*}
By recursion, we get
\begin{equation*}
\begin{split}
&\sum_{l=0}^{k}s_{l}-\mathbf{1}\sum_{l=0}^{k}g_{l}\\ =&\sum_{l=0}^{k-1}\big(P-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\big)^{k-l}(\triangledown_{l}-\mathbf{1}g_l)-\mathbf{1}g_{k}+\triangledown_{k} \\
=& \sum_{l=0}^{k-1}\big(P^{k-l}-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\big)\triangledown_{l}-\mathbf{1}g_{k}+\triangledown_{k},
\end{split}
\end{equation*}
implying that
\begin{equation*}
\begin{split}
&\sum_{l=0}^{k}s_{i,l}-\sum_{l=0}^{k}g_{l}\\
& = \sum_{l=0}^{k-1}\sum_{j=1}^{n}\big(P^{k-l}-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\big)_{ij}\triangledown f_j(x_{j,l})-g_{k}+\triangledown f_i(x_{i,k}).
\end{split}
\end{equation*}
Taking the dual norm on both sides gives rise to
\begin{equation*}
\begin{split}
\big\lVert &\sum_{l=0}^{k}s_{i,l}-\sum_{l=0}^{k}g_{l}\big\rVert_*\\
=& \sum_{l=0}^{k-1}\sum_{j=1}^{n}\big\lvert\big(P^{k-l}-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\big)_{ij}\big\rvert\lVert\triangledown f_j(x_{j,l})\rVert_*\\
&+\lVert\triangledown f_i(x_{i,k})-g_{k}\rVert_*
\leq \sum_{l=0}^{k-1}\big\lVert P^{k-l}e_i-\frac{\mathbf{1}}{n} \big\rVert_1L+2L,
\end{split}
\end{equation*}
where $e_i\in\mathbb{R}^n $ denotes the $i$-th standard basis vector.
Recall that for a stochastic matrix $P$ \cite{duchidualaveraging,hosseinionline} one has
$
\big\lVert P^{k-l}e-\frac{\mathbf{1}}{n} \big\rVert_1\leq \sqrt{n}\big\lVert P^{k-l}e-\frac{\mathbf{1}}{n} \big\rVert_2 \leq \sigma_2(P)^{k-l}\sqrt{n}
$
for all $e\in\triangle_n$.
The inequality in \eqref{disagreement_bound} then follows, thereby concluding the proof. \QEDA
\end{pf}
\subsection{Non-ergodic convergence rate}
We are now in a position to establish the non-ergodic convergence rate.
\begin{thm} \label{convergence_rate}
Suppose that $d(x^*)\leq R^2$ and $\gamma_{t}=\gamma \sqrt{t+1}$ where $\gamma> 0$, and Assumptions \ref{Lipschitz_continuity}-\ref{assumption_weight_matrix} hold. For the sequences $\{x_{i,t}\}_{t\geq 0}$ and $\{s_{i,t}\}_{t\geq 0}$ being generated by Algorithm \ref{DSA_2}, we have
\begin{equation}\label{convergence_rate_bound}
\begin{split}
&f(x_{i,t})-f(x^*)\\
\leq & \frac{1}{\sqrt{t+1}}\Big( \big( \frac{6L^2\sqrt{n}}{1-\sigma_2(P)}+13L^2\big)\frac{1}{\gamma} +{\gamma R^2}\Big) .
\end{split}
\end{equation}
\end{thm}
\begin{pf}
By invoking Lemma \ref{disagreement} and boundedness of $\lVert g_k \rVert_*$, we can obtain from the result in Lemma \ref{basic_convergence_thm} that
\begin{equation*}
\begin{split}
&f(x_{i,t})-f(x)\\
\leq & \frac{3L^2}{t+1} \big(\frac{\sqrt{n}}{1-\sigma_2(P)}+2\big) \sum_{k=0}^{t} \frac{1}{\gamma_{k-1}} \\
& + \frac{1}{2(t+1)
} \sum_{k=0}^{t}\frac{1}{\gamma_{k-1}}\lVert g_k\rVert_*^2+\frac{1}{t+1} \gamma_t d(x) \\
\leq & \frac{3L^2}{t+1} \big(\frac{\sqrt{n}}{1-\sigma_2(P)}+2\big) \sum_{k=0}^{t} \frac{1}{\gamma_{k-1}} \\
& + \frac{L^2}{2(t+1)
} \sum_{k=0}^{t}\frac{1}{\gamma_{k-1}}+\frac{1}{t+1} \gamma_t d(x).
\end{split}
\end{equation*}
Due to the fact that
\begin{equation*}
\begin{split}
&\sum_{k=0}^{t}\frac{1}{\gamma_{k-1}} = \frac{1}{\gamma_0}+\sum_{k=0}^{t-1}\frac{1}{\gamma_{k}} = \frac{1}{\gamma}+\frac{1}{\gamma}\sum_{k=0}^{t-1}\frac{1}{\sqrt{k+1}} \\
& \leq \frac{2}{\gamma}\sqrt{t+1},
\end{split}
\end{equation*}
we get
\begin{equation*}
\begin{split}
&f(x_{i,t})-f(x)\\
\leq & \big( \frac{6L^2\sqrt{n}}{1-\sigma_2(P)}+13L^2\big)\frac{1}{\gamma\sqrt{t+1}} +\frac{\gamma}{\sqrt{t+1}}d(x).
\end{split}
\end{equation*}
We arrive at the desired result in \eqref{convergence_rate_bound} by using the assumption that $d(x^*)\leq R^2$. \QEDA
\end{pf}
\begin{remark}
In Theorem \ref{convergence_rate}, we establish a non-ergodic convergence rate $O(1/\sqrt{t})$ for nonsmooth objective functions in terms of the objective error.
The result is slightly stronger than the ergodic guarantees in most existing works. To see this, we note
\begin{equation*}
f(\frac{1}{t}\sum_{k=1}^{t}x_{i,k})-f(x^*)\leq \frac{1}{t}\sum_{k=1}^{t}\big(f(x_{i,k})-f(x^*)\big) \leq O(1/\sqrt{t})
\end{equation*}
by convexity of $f$. More importantly, this essentially makes the proposed method applicable to decentralized dual Lagrangian problems where the local test point is further used to coordinate subproblems, as we shall see in the next section.
\end{remark}
Note that the result in Theorem \ref{convergence_rate} allows us to further derive the optimal choice of $\gamma$, i.e.,
$
\gamma = \frac{L}{R}\sqrt{\frac{6\sqrt{n}}{1-\sigma_2(P)}+13},
$
with which the convergence rate result becomes
\begin{equation*}
\begin{split}
f(x_{i,t})-f(x^*)
\leq \frac{2RL}{\sqrt{t+1}} \sqrt{\frac{6\sqrt{n}}{1-\sigma_2(P)}+13}.
\end{split}
\end{equation*}
It is shown in \eqref{convergence_rate_bound} that the asymptotical bound depends on $\sigma_2(P)$. This fact can be explored to achieve a tighter bound by following the methods in \cite{xiaodistributed} to minimize the spectral norm of $P-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}$. For special graphs such as paths and cycles with a specific structure of weight matrix $P$, $\sigma_2(P)$ can be explicitly identified to make the bound tighter \cite{duchidualaveraging}.
\section{Extension to Constraint-Coupled Distributed Optimization}\label{section5}
In this section, we further develop a ${\rm DSA_2}$-based dual decomposition strategy to solve optimization problems with coupled functional constraints.
Consider the following minimization problem
\begin{equation} \label{coupled_constraints}
\begin{split}
\min_{\{x_i\in \mathcal{X}_i\}_{i=1}^n} &\sum_{i=1}^{n}f_i(x_i) \\
{\rm subject \,\ to} \quad &\sum_{i=1}^{n}h_i(x_i)\leq 0_m,
\end{split}
\end{equation}
where $\mathcal{X}_i$ are compact and convex sets, and $f_i: \mathcal{X}_i\rightarrow\mathbb{R}$ and $h_i: \mathcal{X}_i\rightarrow\mathbb{R}^m$ are closed and convex functions.
We denote by $\{x_i^*\}_{i=1}^n$ one of the optimal solutions and $\sum_{i=1}^{n}f_i^*$ the minimal function value.
We can see from the above problem that the objective function and parts of the constraints, i.e., $\{x_i\in\mathcal{X}_i\}_{i=1}^n$, enjoy a separable structure, but the global constraint $\sum_{i=1}^{n}h_i(x_i)\leq 0_m$ cannot be trivially decomposed.
One powerful methodology to solve this problem is to alternatively consider the corresponding dual Lagrangian problem.
In doing so, the coupling in constraints can be transformed into that in objective functions, thus allowing us to solve it via the proposed ${\rm DSA_2}$.
The Lagrangian of \eqref{coupled_constraints} is
\begin{equation*}
\begin{split}
&L(\{x_i\}_{i=1}^n,\lambda)\\
=&\sum_{i=1}^{n}L_i({x}_i,\lambda)=\sum_{i=1}^{n}\big(f_i(x_i)+\langle \lambda, h_i(x_i) \rangle \big),
\end{split}
\end{equation*}
where $x_i\in\mathcal{X}_i$ and $\lambda\geq0_m$ represents the dual variable associated with the coupled constraint, and the dual Lagrangian problem is
\begin{equation*}
\max_{\lambda\geq0_m} \min_{\{x_i\in\mathcal{X}_i\}_{i=1}^n} \sum_{i=1}^{n}L_i(x_i,\lambda),
\end{equation*}
which is equivalent to
\begin{equation*}
\min_{\lambda\geq0_m}\sum_{i=1}^{n}\psi_i(\lambda) =\min_{\lambda\geq0_m} \max_{\{x_i\in\mathcal{X}_i\}_{i=1}^n} -\sum_{i=1}^{n}L_i(x_i,\lambda).
\end{equation*}
It is worth mentioning that the dual Lagrangian problem has the same structure as that in \eqref{original_problem} treated in previous sections.
To see this, we define
$
\psi_i(\lambda)= \max_{x_i\in\mathcal{X}_i}-L_i(x_i,\lambda)
$
and rewrite the dual Lagrangian problem as
\begin{equation} \label{dual_Lagrangian}
\min_{\lambda\geq0_m} \sum_{i=1}^{n}\psi_i(\lambda).
\end{equation}
It is assumed that the dual Lagrangian problem is solvable. This is true when a constraint qualification, e.g., Slater's condition, holds.
Let $\lambda ^*$ denote the optimal dual variable.
In the sequel, we invoke Algorithm \ref{DSA_2} to solve the dual problem in \eqref{dual_Lagrangian}. Specifically, we choose
$
d(\lambda)=\frac{1}{2}\lVert \lambda \rVert_2^2
$
as the prox-function of the feasible set. Detailed steps at each round for agents are summarized in Algorithm \ref{consensus-based_dual_decomposition} and explained as follows.
Each agent in Algorithm \ref{consensus-based_dual_decomposition} initializes the algorithm by setting $s_{i,0}=\triangledown \psi_i(\lambda_{i,0})$. This calls for a local maximization step, i.e., $x_i(\lambda_{i,0}) \in \arg \max_{x_i\in\mathcal{X}_i} \big\{-f_i(x_i)-\langle \lambda_{i,0}, h_i(x_i)\rangle \big\}$, to calculate $\triangledown \psi_i(\lambda_{i,0})$ according to Danskin's Theorem. For the initialization of $\lambda_{i,0}$, a reasonable choice is to set $\lambda_{i,0}=0_m$, so that $x_i(\lambda_{i,0}) \in \arg \max_{x_i\in\mathcal{X}_i} -f_i(x_i) $, which is a solution to \eqref{coupled_constraints} when the global constraint is removed. At each round, each agent makes use of ${\rm DSA_2}$ to minimize \eqref{dual_Lagrangian} by performing steps $6$, $7$ and $10$, which correspond to step $5$ in Algorithm \ref{DSA_2}. To derive $\triangledown \psi_i(\lambda_{i,t}) $, agents further conduct step $8$. Step $9$ can be seen as a primal recovery step, a common feature found in the literature on dual decomposition \cite{simonettoprimalrecovery,falsonedualdecomposition}. Such a step is needed mainly because the dual objective function at the optimum is generally nonsmooth, i.e., an optimal dual variable does not necessarily lead to an optimal primal solution \cite{nesterovdualsubgradient}.
\begin{algorithm}
\begin{algorithmic}[1]
\caption{${\rm DSA_2}$-based dual decomposition}
\label{consensus-based_dual_decomposition}
\STATE Set $t=0, s_{i,0}=\triangledown \psi_i(\lambda_{i,0})$, choose a non-decreasing sequence of positive parameters $\{\gamma_t\}_{t\geq 0}$.
\WHILE { { Convergence is not reached}}
\FOR{Each agent $i\in\mathcal{V}$ (in parallel)}
\STATE Receive $s_{j,t}, \forall j\in\mathcal{N}_{i}$;
\STATE Conduct
\STATE$\hat{\lambda}_{i,t+1}=\arg \min_{\lambda\geq 0_m}\big\{\big\langle \sum_{k=0}^{t}s_{i,k}, \lambda \big\rangle +\gamma_td(\lambda)\big \}$
\STATE$ \lambda_{i,t+1} = \frac{t+1}{t+2}{\lambda}_{i,t}+\frac{1}{t+2}\hat{\lambda}_{i,t+1}=\frac{1}{t+2}\sum_{k=0}^{t+1}\hat{\lambda}_{i,k}$
\STATE$
x_i(\lambda_{i,t+1})=\arg \max_{x_i\in\mathcal{X}_i} -L_i(x_i,\lambda_{i,t+1})
$
\STATE$
x_{i,t+1}= \frac{t+1}{t+2}{x}_{i,t}+\frac{1}{t+2}x_i(\lambda_{i,t+1})
$
\STATE$
s_{i,t+1}=\sum_{j\in\mathcal{N}_i\cup\{i\}}p_{ij}s_{j,t}+\triangledown \psi_i(\lambda_{i,t+1})-\triangledown \psi_i(\lambda_{i,t})
$;
\STATE Broadcast $s_{i,t+1}$;
\ENDFOR
\STATE Set $t = t+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
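To make the steps of Algorithm \ref{consensus-based_dual_decomposition} concrete, a minimal runnable Python sketch is given below. It is our own schematic illustration, not the authors' code: it takes $m=1$, uses the prox-function $d(\lambda)=\frac{1}{2}\lambda^2$ so that step $6$ has the closed form $\hat{\lambda}_{i,t+1}=\max\{0,-\sum_{k=0}^{t}s_{i,k}/\gamma_t\}$, and runs on placeholder local problems $f_i(x)=\frac{1}{2}(x-\theta_i)^2$ on $\mathcal{X}_i=[-10,10]$ with $h_i(x)=x-1$, chosen only so that the script executes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 10
theta = rng.uniform(0.0, 4.0, n)

def argmax_L(i, lam):
    # x_i(lam) in argmax_{x in X_i} {-f_i(x) - lam*h_i(x)}
    return np.clip(theta[i] - lam, -10.0, 10.0)

def h(i, x):
    return x - 1.0            # coupled constraint: sum_i (x_i - 1) <= 0

# Metropolis weights on a cycle (every node has degree 2)
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 1.0 / 3.0
    P[i, i] = 1.0 / 3.0

lam = np.zeros(n)                                   # lambda_{i,0} = 0
x = np.array([argmax_L(i, lam[i]) for i in range(n)])
g = np.array([-h(i, x[i]) for i in range(n)])       # grad psi_i(lambda_{i,0})
s = g.copy()                                        # s_{i,0}
z = s.copy()                                        # running sum of s_{i,k}
for t in range(2000):
    gamma_t = 0.2 * np.sqrt(t + 1)
    lam_hat = np.maximum(0.0, -z / gamma_t)         # step 6 (closed form)
    lam = ((t + 1) * lam + lam_hat) / (t + 2)       # step 7
    x_new = np.array([argmax_L(i, lam[i]) for i in range(n)])   # step 8
    x = ((t + 1) * x + x_new) / (t + 2)             # step 9: primal recovery
    g_new = np.array([-h(i, x_new[i]) for i in range(n)])
    s = P @ s + g_new - g                           # step 10: gradient tracking
    g, z = g_new, z + s

print("coupled constraint value:", x.sum() - n)     # should approach 0 (active)
print("disagreement in lambda:", np.ptp(lam))       # should approach 0
\end{verbatim}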
In the sequel, we establish convergence results for
the dual objective error $\sum_{j=1}^{n}\big(\psi_j(\lambda_{i,t})-\psi_j(\lambda^*)\big)$, the quadratic penalty for the coupled constraint $\big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2$, and the primal objective error $ \sum_{j=1}^{n} \big( f_j(x_{j,t}) -f_j^*\big) $
for the sequences of test points generated by Algorithm \ref{consensus-based_dual_decomposition} by extending the results derived in previous sections.
\begin{thm}\label{DSA_2_dual_decomposition}
Suppose Assumption \ref{assumption_weight_matrix} holds true.
Let the sequences $\{\lambda_{i,t}\}_{t\geq 0}$ and $\{x_{i,t}\}_{t\geq 0}$ be generated by Algorithm \ref{consensus-based_dual_decomposition}. If $\gamma_{t}=\gamma \sqrt{t+1}$, where $\gamma> 0$, and $\lVert \triangledown \psi_i(\lambda_{i,k}) \rVert_2$ is bounded from above, then the dual objective error satisfies
\begin{equation*}
\begin{split}
\sum_{j=1}^{n}\big(\psi_j(\lambda_{i,t})-\psi_j(\lambda^*) \big)
\leq \frac{2n\big(\frac{3\sqrt{n}}{1-\sigma_2(P)}+\frac{13}{2}\big)D}{\gamma \sqrt{t+1} }+\frac{\gamma\lVert \lambda^* \rVert_2^2}{2\sqrt{t+1}}
,
\end{split}
\end{equation*}
the quadratic penalty for the coupled constraint satisfies
\begin{equation*}
\begin{split}
\big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2\leq \frac{4n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{t+1}+\frac{2\gamma C}{\sqrt{t+1}},
\end{split}
\end{equation*}
and the primal objective error satisfies \begin{equation*}
\begin{split}
-\lVert \lambda^*\rVert_2& \sqrt{\frac{4n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{t+1}+\frac{2\gamma C}{\sqrt{t+1}} }\\
& \leq \sum_{j=1}^{n} \big( f_j(x_{j,t}) -f_j^*\big)
\leq \frac{2n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma \sqrt{t+1} },
\end{split}
\end{equation*}
where $D= \big(\max_{j\in\{1,\cdots,n\}}\lVert \triangledown \psi_j(\lambda_{j,k}) \rVert_2\big)^2$ and $C=\sum_{j=1}^{n} f_j^*- \min_{\{x_j\in \mathcal{X}_j\}_{j=1}^n}\sum_{j=1}^{n} f_j(x_{j})$ are constants.
\end{thm}
\begin{pf}
In light of \eqref{convexity_1}, we readily have
\begin{equation*}
\begin{split}
&(t+1) \sum_{j=1}^{n}\psi_j(\lambda_{j,t})-\sum_{k=0}^{t}\sum_{j=1}^{n}\psi_j(\lambda_{j,k}) \\
\leq & \sum_{j=1}^{n}\sum_{k=1}^{t}k\langle\triangledown \psi_j(\lambda_{j,k}), \lambda_{j,k}-\lambda_{j,k-1} \rangle .
\end{split}
\end{equation*}
By adding $\sum_{j=1}^{n}\sum_{k=0}^{t}\langle\triangledown \psi_j(\lambda_{j,k}), \lambda_{j,k}-\lambda \rangle, \forall \lambda\geq 0_m $ on both sides, we obtain
\begin{equation} \label{key_dual_lagrangian}
\begin{split}
&(t+1) \sum_{j=1}^{n}\psi_j(\lambda_{j,t})-\sum_{k=0}^{t}\sum_{j=1}^{n}\big( \langle\triangledown \psi_j(\lambda_{j,k}), \lambda -\lambda_{j,k}\rangle\\
&+\psi_j(\lambda_{j,k}) \big)\\
\leq& \sum_{j=1}^{n}\big(\sum_{k=1}^{t}\langle\triangledown \psi_j(\lambda_{j,k}), (k+1)\lambda_{j,k}-k\lambda_{j,k-1}-\lambda \rangle \\
&+ \langle \triangledown \psi_j(\lambda_{j,0}), \lambda_{j,0}-\lambda \rangle\big) \\
=& \sum_{j=1}^{n}\sum_{k=0}^{t}\langle\triangledown \psi_j(\lambda_{j,k}), \hat{\lambda}_{j,k}-\lambda \rangle,
\end{split}
\end{equation}
where we use the fact that $
(t+1)\lambda_{j,t}=t\lambda_{j,t-1}+\hat{\lambda}_{j,t}
$ to get the last equality.
According to Danskin's Theorem, we have
$
\triangledown \psi_i (\lambda)= -h_i\big(x_i(\lambda)\big) ,
$
where
$
x_i(\lambda) \in \arg \max_{x_i\in\mathcal{X}_i} \big\{-f_i(x_i)-\langle \lambda, h_i(x_i)\rangle \big\}.
$
Then by the definition of $\psi_j(\lambda)$, we are able to obtain
\begin{equation*}
\begin{split}
&\sum_{j=1}^{n}\big( \langle\triangledown \psi_j(\lambda_{j,k}), \lambda -\lambda_{j,k}\rangle+\psi_j(\lambda_{j,k}) \big)\\
=& \sum_{j=1}^{n} \Big( \big\langle -h_j\big(x_j(\lambda_{j,k})\big), \lambda-\lambda_{j,k} \big\rangle -f_j\big(x_j(\lambda_{j,k})\big)\\
&-\big\langle \lambda_{j,k},h_j\big(x_j(\lambda_{j,k})\big) \big\rangle\Big) \\
=& \sum_{j=1}^{n}\Big(-\big\langle h_j\big(x_j(\lambda_{j,k})\big),\lambda\big\rangle-f_j\big(x_j(\lambda_{j,k})\big) \Big).
\end{split}
\end{equation*}
Plugging the preceding relation into \eqref{key_dual_lagrangian} leads to
\begin{equation}\label{danskin_theorem}
\begin{split}
&\sum_{k=0}^{t}\sum_{j=1}^{n}\Big(f_j\big(x_j(\lambda_{j,k})\big)+\big\langle h_j\big(x_j(\lambda_{j,k})\big),\lambda\big\rangle\Big)\\
&+(t+1) \sum_{j=1}^{n}\psi_j(\lambda_{j,t})
\leq \sum_{j=1}^{n}\sum_{k=0}^{t}\langle\triangledown \psi_j(\lambda_{j,k}), \hat{\lambda}_{j,k}-\lambda \rangle.
\end{split}
\end{equation}
Recall by definition that $x_{j,t}=\frac{1}{t+1}\sum_{k=0}^{t}x_j(\lambda_{j,k})$. Then by convexity of $f_j$ and $h_j$, we obtain from \eqref{danskin_theorem} that
\begin{equation*}
\begin{split}
&(t+1) \sum_{j=1}^{n} \big( f_j(x_{j,t})+ \langle h_j(x_{j,t}), \lambda \rangle+\psi_j(\lambda_{j,t})\big)\\
\leq& \sum_{j=1}^{n}\sum_{k=0}^{t}\langle\triangledown \psi_j(\lambda_{j,k}), \hat{\lambda}_{j,k}-\lambda \rangle.
\end{split}
\end{equation*}
We now follow the same line as in Lemma \ref{basic_convergence_thm} and Lemma \ref{disagreement} to bound the right-hand side of the preceding inequality. Consider
\begin{equation*}
\begin{split}
&(t+1) \sum_{j=1}^{n} \big( f_j(x_{j,t})+\langle h_j(x_{j,t}),\lambda \rangle-(-\psi_j(\lambda_{j,t}))\big)\\
\leq & \sum_{k=0}^{t} \frac{n}{\gamma_{k-1} } (\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2})(\max_{j\in\{1,\cdots,n\}}\lVert \triangledown \psi_j(\lambda_{j,k}) \rVert_2)^2\\
&+ \gamma_{t} d(\lambda).
\end{split}
\end{equation*}
Rearranging the terms yields
\begin{equation*}
\begin{split}
& \sum_{j=1}^{n} \Big( f_j(x_{j,t}) -\big(-\psi_j(\lambda_{j,t})\big)\Big)\\
\leq & \frac{1}{t+1} \sum_{k=0}^{t} \frac{n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma_{k-1} } \\
& + \min_{\lambda\geq 0_m}\{ \frac{\gamma_{t}}{t+1} d(\lambda)- \langle \sum_{j=1}^{n}h_j(x_{j,t}), \lambda \rangle \} \\
= & \frac{1}{t+1} \sum_{k=0}^{t} \frac{n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma_{k-1} } - \frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2,
\end{split}
\end{equation*}
or, equivalently,
\begin{equation} \label{feasibility_measure}
\begin{split}
& \sum_{j=1}^{n} \Big( f_j(x_{j,t}) -\big(-\psi_j(\lambda_{j,t})\big)\Big) + \frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2 \\
&\leq \frac{1}{t+1} \sum_{k=0}^{t} \frac{n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma_{k-1} }\leq \frac{2n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma \sqrt{t+1} } .
\end{split}
\end{equation}
Following from the saddle point inequality, we obtain
\begin{equation}\label{saddle_point}
\sum_{j=1}^{n}f_j^*\leq \sum_{j=1}^{n}\big(f_j(x_{j,t})+\langle\lambda^*, h_j(x_{j,t})\rangle\big).
\end{equation}
Adding $\frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2$ and subtracting $ \sum_{j=1}^{n} \big(-\psi_j(\lambda_{j,t})\big)$ on both sides yield
\begin{equation*}
\begin{split}
&\sum_{j=1}^{n}\Big(f_j^* - \big(-\psi_j(\lambda_{j,t})\big)-\langle\lambda^*, h_j(x_{j,t})\rangle\Big)\\
&+\frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2 \\
\leq& \sum_{j=1}^{n}\Big(f_j(x_{j,t})- \big(-\psi_j(\lambda_{j,t})\big)\Big)\\
&+\frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2\\
\leq& \frac{2n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma \sqrt{t+1} }.
\end{split}
\end{equation*}
Since
\begin{equation*}
\begin{split}
&-\big\langle \lambda^*,\sum_{j=1}^{n}h_j(x_{j,t})\big\rangle+\frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2\\
&\geq \min_{z\in\mathbb{R}^m}\big\{ -\langle \lambda^*,z\rangle+\frac{t+1}{2\gamma_{t}} \lVert z_+ \rVert_2^2 \big\} \\
& = -\frac{\gamma_t}{2(t+1)}\lVert \lambda^* \rVert_2^2 = -\frac{\gamma\lVert \lambda^* \rVert_2^2}{2\sqrt{t+1}}
\end{split}
\end{equation*}
and $\sum_{j=1}^{n}f_j^*=\sum_{j=1}^{n}-\psi_j(\lambda^*)$, we have
\begin{equation*}
\begin{split}
\sum_{j=1}^{n}\big(\psi_j(\lambda_{j,t})-\psi_j(\lambda^*) \big)
\leq \frac{2n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma \sqrt{t+1} }+\frac{\gamma\lVert \lambda^* \rVert_2^2}{2\sqrt{t+1}}.
\end{split}
\end{equation*}
Following the similar reasoning in \eqref{identical_variable} to bound $\sum_{j=1}^{n}\big(\psi_j(\lambda_{j,t})-\psi_j(\lambda_{i,t}) \big) $, we further have
\begin{equation*}
\begin{split}
\sum_{j=1}^{n}\big(\psi_j(\lambda_{i,t})-\psi_j(\lambda^*) \big)
\leq \frac{2n\big(\frac{3\sqrt{n}}{1-\sigma_2(P)}+\frac{13}{2}\big)D}{\gamma \sqrt{t+1} }+\frac{\gamma\lVert \lambda^* \rVert_2^2}{2\sqrt{t+1}}
.
\end{split}
\end{equation*}
To establish the upper bound on the violation of the coupled constraint, we consider
$\forall \lambda\geq 0_m$,
\begin{equation}\label{weak_duality}
f_j^*\geq L_j(x_j^*,\lambda) \geq \min_{x_j\in\mathcal{X}_j} L_j(x_j,\lambda)=-\psi_j(\lambda),
\end{equation}
and therefore
\begin{equation*}
\begin{split}
& \frac{t+1}{2\gamma_{t}} \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2\\
& \leq \frac{\sum_{k=0}^{t} \frac{n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma_{k-1} } }{t+1} + \sum_{j=1}^{n} f_j^*- \min_{\{x_j\in \mathcal{X}_j\}_{j=1}^n}\sum_{j=1}^{n} f_j(x_{j}),
\end{split}
\end{equation*}
where \eqref{feasibility_measure} is used. This can be equivalently written as
\begin{equation*}
\begin{split}
& \big \lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2\\
&\leq \frac{2\gamma_t}{(t+1)^2} \Big(\sum_{k=0}^{t} \frac{n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma_{k-1} } \Big)+\frac{2\gamma_{t}C}{t+1}\\
& \leq \frac{4n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{t+1}+\frac{2\gamma C}{\sqrt{t+1}}.
\end{split}
\end{equation*}
By Eqs. \eqref{feasibility_measure} and \eqref{weak_duality}, we readily have
\begin{equation*}
\begin{split}
\sum_{j=1}^{n} \big( f_j(x_{j,t}) -f_j^*\big)\leq \frac{2n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{\gamma \sqrt{t+1} } .
\end{split}
\end{equation*}
Again, by the saddle point inequality \eqref{saddle_point},
the fact
\begin{equation*}
\big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+\geq \sum_{j=1}^{n}h_j(x_{j,t}),
\end{equation*}
and $\lambda^*\geq 0 $, one obtains
\begin{equation*}
\begin{split}
&\sum_{j=1}^{n} \big( f_j(x_{j,t}) -f_j^*\big)\geq -\lVert \lambda^*\rVert_2 \big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+\big\rVert_2
\\
& \geq -\lVert \lambda^*\rVert_2 \sqrt{\frac{4n\big(\frac{\sqrt{n}}{1-\sigma_2(P)}+\frac{5}{2}\big)D}{t+1}+\frac{2\gamma C}{\sqrt{t+1}} }.
\end{split}
\end{equation*}
This completes the proof.
\end{pf}
\begin{remark}
By Danskin's Theorem, we have $\triangledown \psi_i (\lambda)= -h_i\big(x_i(\lambda)\big) $. We know from the problem statement that $h_i(x_i)$ is convex and defined on a compact domain $\mathcal{X}_i$. Then it may not be restrictive to assume in Theorem \ref{DSA_2_dual_decomposition} that the subgradient of the dual objective is bounded. This assumption plays the same role as the Lipschitz continuity of the objective function used in Lemma \ref{basic_convergence_thm}.
\end{remark}
\section{Simulations}
In this section, we verify our theoretical findings by applying them to a setting where a networked multi-agent system of $n=50$ agents solves a distributed optimization problem with coupled nonlinear convex constraints, i.e.,
\begin{equation*}
\begin{split}
&\min_{x_i\in[0,1]} \sum_{i=1}^{50} c_ix_i \\
&{\rm subject \,\ to} \quad \sum_{i=1}^{50}-d_i\log(1+x_i)\leq -b.
\end{split}
\end{equation*}
Network-structured optimization problems of this form arise in, for example, plug-in electric vehicle charging and quality of service in wireless networks, and have also been used for numerical studies in \cite{mateosdistributed}.
To solve this problem by ${\rm DSA_2}$, we should first derive its Lagrangian
$
L(\{x_i\}_{i=1}^n,\lambda)
=\sum_{i=1}^{n}L_i({x}_i,\lambda)=\sum_{i=1}^{n}\Big(c_ix_i+\big\langle \lambda, \frac{b}{50}-d_i\log(1+x_i) \big\rangle \Big)
$
and consider the corresponding dual problem
$
\min_{\lambda\geq0} \sum_{i=1}^{n}\psi_i(\lambda),
$
where $\psi_i(\lambda)= \max_{x_i\in[0,1]}\big\{-c_ix_i-\big\langle \lambda, \frac{b}{50}-d_i\log(1+x_i) \big\rangle\big\}$,
as explained in Section \ref{section5}.
By Danskin's Theorem, we have
$
\triangledown \psi_i (\lambda)= - \frac{b}{50}+d_i\log\big(1+x_i(\lambda)\big) ,
$
where
$
x_i(\lambda) \in \arg \max_{x_i\in[0,1]} \Big\{-c_ix_i-\big\langle \lambda, \frac{b}{50}-d_i\log(1+x_i) \big\rangle \Big\}
$.
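For this example the inner maximization admits a closed form: the first-order condition $-c_i + \lambda d_i/(1+x)=0$ gives $x_i(\lambda)=\min\big\{\max\{\lambda d_i/c_i - 1,\, 0\},\, 1\big\}$. A short Python sketch (ours, under the parameter choices stated below) of the resulting local oracle is:
\begin{verbatim}
import numpy as np

def x_i(lam, c_i, d_i):
    # argmax_{x in [0,1]} {-c_i*x - lam*(b/50 - d_i*log(1+x))}
    return np.clip(lam * d_i / c_i - 1.0, 0.0, 1.0)

def grad_psi_i(lam, c_i, d_i, b=5.0, n=50):
    # Danskin: grad psi_i(lam) = -b/n + d_i*log(1 + x_i(lam))
    return -b / n + d_i * np.log1p(x_i(lam, c_i, d_i))
\end{verbatim}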
In this simulation, the parameters $c_i$ and $d_i$ for each agent $i\in\{1,\cdots,50\}$ are randomly chosen from a uniform distribution, and $b$ is set as $5$. We use the solver \emph{fmincon} with the interior point algorithm in the Optimization Toolbox to calculate the optimal solution and use it as a reference. The communication topology among agents is characterized by a fixed connected small world graph \cite{wattscollective}, and the weighting matrix $P$ is selected as the Metropolis constant edge weight matrix \cite{xiaodistributed}. The prox-function is chosen as $
d(\lambda)=\frac{1}{2}\lVert \lambda \rVert_2^2.
$
The non-decreasing sequence of parameters is chosen as $\gamma_{t}=0.2\sqrt{t+1}$.
The initial guesses $\lambda_{i,0}$ for all agents are set as $0$. In the simulation, the bound for the subgradient of the dual objective $\max_{j\in\{1,\cdots,n\}}\lVert \triangledown \psi_j(\lambda_{j,k}) \rVert_2$ is identified as $0.55$ and $\sigma_2(P)$ is calculated as $0.9788$.
$C$ is estimated as $27.2067$. The optimal dual variable returned by \emph{fmincon} is $0.6419$. Therefore, the theoretical bounds for the primal objective error $\big\lvert \sum_{j=1}^{n} \big( f_j(x_{j,t}) -f_j^*\big) \big\rvert$ and the quadratic penalty for the coupled constraint $\big\lVert \big(\sum_{j=1}^{n}h_j(x_{j,t})\big)_+ \big\rVert_2^2$ are $\max\{ \frac{5.0849\times10^4}{\sqrt{t+1}}, \frac{91.5467}{\sqrt{t+1}}+\frac{6.9856}{(t+1)^\frac{1}{4}}\}$ and $\frac{2.0340\times10^4}{t+1}+\frac{10.8827}{\sqrt{t+1}}$, respectively.
For comparison, we also simulate the consensus-based dual decomposition strategies recently reported in \cite{falsonedualdecomposition, mateosdistributed,simonettoprimalrecovery} in the same network environment. For \cite{falsonedualdecomposition}, according to the sufficient conditions for ensuring convergence developed therein, the stepsize is chosen as $\frac{10}{t+1}$.
For \cite{simonettoprimalrecovery}, we use a constant stepsize 0.05.
For \cite{mateosdistributed}, we derive the critical feasible consensus stepsize according to Proposition $4$ in \cite{mateosdistributed} as $\sigma = 0.1103$, and use the Slater vector $(\mathbf{1}, 0)$ to obtain the bound on the optimal dual set as $D=3.3130$. The stepsize is chosen by following the Doubling Trick scheme developed in \cite{mateosdistributed}. The initial guesses $\lambda_{i,0}$ for these two strategies are also set as $0$.
The simulation results are illustrated in Figs. \ref{cost_error} and \ref{cons_violation}. In particular,
Figs. \ref{cost_error} and \ref{cons_violation} depict the primal objective error and the quadratic penalty for the coupled constraint, respectively.
It is worth mentioning that, for the methods in \cite{falsonedualdecomposition, mateosdistributed,simonettoprimalrecovery}, the performance is evaluated over the running sequences of the primal variables to be compatible with the theoretical results therein. We can see that the algorithm in \cite{falsonedualdecomposition} suffers from the slowest convergence rate among the three, and the proposed algorithm shows a slightly faster convergence rate than that in \cite{mateosdistributed}.
This may be because the stepsizes for \cite{mateosdistributed} (Doubling Trick scheme) and the proposed one ($\frac{1}{\gamma_t}=\frac{5}{\sqrt{t+1}}$) are of order $\frac{1}{\sqrt{t}}$ while the stepsize for \cite{falsonedualdecomposition} is chosen to be of order $\frac{1}{t}$ to fulfill the sufficient conditions developed therein.
Due to using a constant stepsize, the method in \cite{simonettoprimalrecovery} does not provide exact convergence.
In Fig. \ref{cost_error}, we draw the absolute value of $ \sum_{i=1}^{50}c_ix_{i,t} -\sum_{i=1}^{50}c_ix^*_{i}$. The quantity $\sum_{i=1}^{50}c_ix_{i,t} -\sum_{i=1}^{50}c_ix^*_{i}$ can be negative or positive due to possible violation of the primal coupled constraint. When it jumps from negative to positive, the trajectory of $ \big\lvert \sum_{i=1}^{50}c_ix_{i,t} -\sum_{i=1}^{50}c_ix^*_{i}\big\rvert$ exhibits a peak.
This phenomenon is typically observed in dual Lagrangian problems.
We also note that the trajectories for the proposed algorithm are within the theoretically developed upper bounds.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.5in]{CostError_new.eps}
\caption{Trajectories of the primal objective error $\big\lvert \sum_{i=1}^{50}c_ix_{i,t} -\sum_{i=1}^{50}c_ix^*_{i}\big\rvert$.} \label{cost_error}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=3.5in]{ConsViolation_new.eps}
\caption{Trajectories of the quadratic penalty for the coupled constraint $\big\lVert\big(b-\sum_{i=1}^{50}d_i\log(1+x_{i,t})\big)_+\big\rVert_2^2$.} \label{cons_violation}
\end{figure}
\section{Conclusion}
In this paper, we have proposed a distributed subgradient method with double averaging, termed ${\rm DSA_2}$, for convex constrained optimization problems with aggregate nonsmooth objective functions in a multi-agent setting. In particular, the local iteration rule in ${\rm DSA_2}$ works in the dual space, that is, it involves at each round minimizing a local approximated dual model of the overall objective where the estimated first-order information of the objective is supplied by a dynamic average consensus scheme. A non-ergodic convergence rate of $O(\frac{1}{\sqrt{t}})$ in terms of objective function error has been established for ${\rm DSA_2}$. Furthermore, we have developed a ${\rm DSA_2}$-based dual decomposition strategy for solving distributed optimization problems with coupled functional constraints.
This is made possible by dualizing the coupled constraint via Lagrangian relaxation, thus allowing us to alternatively solve the dual Lagrangian problem where the coupling takes place in cost functions via ${\rm DSA_2}$.
The $O(\frac{1}{\sqrt{t}})$ convergence rate has been theoretically validated for both the dual objective error and the quadratic penalty for the coupled constraint.
Several simulations and comparisons have been performed to verify the advantages of the proposed methods.
The estimation of a characteristic after selection has been recognized as an important practical problem for many years. The problem arises naturally in multiple applications where one wishes to select a population from the available $k\, (\geq 2)$ populations and then estimate some characteristic (or parametric function) associated with the population selected by a fixed selection rule. For example, in modelling economic phenomena, an economist is often faced with the problem of choosing an economic model from $k\, (\geq 2)$ different models so as to incur the minimum loss of economic capital. After selecting the desired economic model, using a pre-specified selection procedure, the economist would like to have an estimate of the losses incurred by the selected model. In clinical research, after the selection of the most effective treatment from a choice of $k$ available treatments, a doctor may wish to have an estimate of the effectiveness of the selected treatment. The aforementioned problems are a continuation of the general formulation of the Ranking and Selection problems. Several inferential methods for statistical selection and estimation related to these problems have been developed by many authors; see
Cohen and Sackrowitz (1982), Misra and Dhariyal (1994), Misra and van der Meulen (2001), Vellaisamy and Punnen (2002), Stallard et al. (2008), Vellaisamy (2009), Misra and Arshad (2014), Arshad et al. (2015), Arshad and Misra (2015a, 2015b), Fuentes et al. (2018), Meena et al. (2018), Arshad and Abdalghani (2019).
The majority of prior studies on selection and estimation after selection have focused exclusively on a selected univariate population, and only a few papers have addressed a selected bivariate/multivariate population.
Some of the works devoted to the bivariate/multivariate case are due to
Amini and Nematollahi (2016) and Mohammadi and Towhidi (2017).
In particular, Mohammadi and Towhidi (2017) considered the estimation of a characteristic after selection from bivariate normal populations under a squared error loss function, and derived a Bayes estimator of a characteristic of the bivariate normal population selected by a natural selection rule. The authors also provided some admissibility and inadmissibility results. This paper continues the study of Mohammadi and Towhidi (2017) by considering the following loss function
\begin{equation} \label{loss1.1}
L (\delta , \theta) = e^{a \left(\delta - \theta \right) } - a (\delta - \theta) - 1 . ~ \ \ \delta \in \mathbb{D}, \ \theta \in \Theta,
\end{equation}
where $\delta$ is an estimator of the unknown parameter $\theta$, $a \neq 0$ is a shape parameter controlling the asymmetry of the loss function (\ref{loss1.1}), $\Theta$ denotes the parametric space, and $\mathbb{D}$ represents a class of estimators of $\theta$. The loss function in Equation (\ref{loss1.1}) is generally called an asymmetric linear exponential (LINEX) loss and is useful in situations where positive bias (overestimation) is considered more serious than negative bias (underestimation), or vice versa. Many researchers have used the above loss function; see, among others, Zellner (1986), Lu et al. (2013), Nematollahi and Jozani (2016), and Arshad and Abdalghani (in press).
The normal distribution is the most important and widely used probability model for many natural phenomena. For instance, variables such as psychological and educational test scores, blood pressure, and height approximately follow a normal distribution.
One generalization of the univariate normal distribution is the bivariate normal distribution.
Consider two independent populations $\pi_1 $ and $\pi_2$.
Let $\boldsymbol{Z}_i =(X_i, Y_i)^\intercal$ be a random vector associated with the bivariate normal population $\pi_i \equiv N (\boldsymbol{\theta}^{(i)}, \boldsymbol{\Sigma})$, where $\boldsymbol{\theta^{(i)}}= \left(\theta_x^{(i)}, \theta_y^{(i)} \right)^\intercal $ denotes the 2-dimensional unknown mean vector $(i=1,2)$, and $\boldsymbol{\Sigma} = \begin{bmatrix}
\sigma_{xx} & \sigma_{xy} \\
\sigma_{xy} & \sigma_{yy}
\end{bmatrix}$ denotes the common known positive-definite variance-covariance matrix.
Suppose that the $Y$-variate is a characteristic which is difficult (or expensive) to measure and whose mean is of interest, and the $X$-variate is an auxiliary characteristic which is easy (or inexpensive) to measure. Then, based on the available information on the $X$-variate, we wish to make some inferences about the corresponding $Y$-variate. For instance, $X$ may be the grade of an applicant on a particular test and $Y$ the grade on a future test. Then, based on the $X$-grade, we want to predict the behavior of the corresponding $Y$-grade.
Let $X_{(1)}$ and $X_{(2)}$ be the order statistics from $X_1$ and $X_2$.
Then, the $Y$-variate induced by the order statistic $X_{(i)}$ is called the concomitant of $X_{(i)}$ and is denoted by $Y_{[i]}$ ($i=1,2$). The bivariate population associated with $\max\{\theta_{x}^{(1)}, \theta_{x}^{(2)}\}$ is referred to as the better population. For selecting the better population, a natural selection rule $\boldsymbol{\psi}=(\psi_{1}, \psi_{2})$ selects the population associated with $X_{(2)}=\max(X_1,X_2)$, so that the natural selection rule $\boldsymbol{\psi}=(\psi_{1}, \psi_{2}) $ can be expressed as
\begin{equation} \label{sel-rule}
\psi_{1}(\boldsymbol{x})=\left \{ \begin{array}{ll} 1,
& \mbox{if} \ \ X_1 > X_2 \\
0, & \mbox{if} \ \ X_1 \leq X_2, \end{array} \right.
\end{equation}
and $\psi_{2}(\boldsymbol{x}) = 1-\psi_{1}(\boldsymbol{x})$. After a bivariate normal population is selected using the selection rule $\boldsymbol{\psi}$, given in (\ref{sel-rule}),
we are interested in the estimation of the second component of the mean vector associated with the selected population, which can be expressed as
\begin{align*} \label{parameter}
\theta_{\text{y}}^S (\boldsymbol{x})&= \theta_{y}^{(1)} \psi_{1}(\boldsymbol{x}) + \theta_{y}^{(2)} \psi_{2}(\boldsymbol{x}) \vspace{2mm}
\\
& = \left \{ \begin{array}{ll} \theta_{y}^{(1)},
& \mbox{if} \ \ X_1 > X_2 \vspace{2mm} \\
\theta_{y}^{(2)}, & \mbox{if} \ \ X_1 \leq X_2. \end{array} \right.
\end{align*}
Note that $\theta_{\text{y}}^S$ depends on the variables $X_i, \, i=1,2$, so it is a random parameter.
Our goal is to estimate $\theta_{\text{y}}^S$ using the loss function given in (\ref{loss1.1}).
Putter and Rubinstein (1968) have shown that an unbiased estimator of the mean after selection from univariate normal population does not exist.
Dahiya (1974) continued the study of Putter and Rubinstein (1968) by proposing several different estimators of the mean and investigated their corresponding bias and mean squared error. Later, Parsian and Farsipour (1999) considered two univariate normal populations having the same known variance but unknown means, using the loss function given in (\ref{loss1.1}). They suggested seven different estimators for the mean and investigated their respective biases and risk functions. Misra and van der Meulen (2003) continued the study of Parsian and Farsipour (1999) by deriving some admissibility and inadmissibility results for estimators of the mean of the univariate normal population selected by a natural selection rule. As a consequence, they obtained some estimators better than those suggested by Parsian and Farsipour (1999). Recently, Mohammadi and Towhidi (2017) extended the study of Dahiya (1974) by considering a bivariate normal population. The authors derived Bayes and minimax estimators, and an admissible subclass of natural estimators was also obtained. Further, they provided some improved estimators of the mean of the selected bivariate normal population.
This article continues the investigation
of Mohammadi and Towhidi (2017) by deriving various competing estimators and decision theoretic results under the LINEX loss function.
Note that, using the loss function given in (\ref{loss1.1}) for estimating $\theta_{\text{y}}^S$, the estimation problem under consideration is invariant under the group of permutations and the location group of transformations.
It is therefore appropriate to use permutation invariant and location equivariant estimators satisfying $\delta \left( \boldsymbol{Z}_1, \boldsymbol{Z}_2 \right)= \delta \left( \boldsymbol{Z}_2, \boldsymbol{Z}_1 \right) $ and $\delta\left( \boldsymbol{Z}_1 + \boldsymbol{c}, \boldsymbol{Z}_2+ \boldsymbol{c}\right) =\delta\left( \boldsymbol{Z}_1 ,\boldsymbol{Z}_2 \right)+ c_2, \ \forall \ \boldsymbol{c}= \left( c_1,c_2 \right)^{\intercal} \in \mathbb{R}^2$,
where $\mathbb{R}^2$ denotes the 2-dimensional Euclidean space. Therefore, any location equivariant estimator of $\theta_{\text{y}}^S$ will be of the form
\begin{equation}\label{equi-c}
\delta_\varphi \left( \boldsymbol{Z}_1, \boldsymbol{Z}_2 \right)= Y_{[2]} + \varphi \left( X_{(1)} - X_{(2)}, Y_{[1]}- Y_{[2]} \right),
\end{equation}
where $ \varphi(\cdot) $ is a function of $X_{(1)} - X_{(2)}$ and $Y_{[1]}- Y_{[2]}$. Let $\mathcal{Q}_c$ represent the class of all equivariant estimators of the form (\ref{equi-c}).
For notational simplicity, the following notations will be adapted throughout the paper; $\boldsymbol{Z}=(\boldsymbol{Z}_1,\boldsymbol{Z}_2)$,
$ \theta_x = \max \left( \theta_x^{(1)}, \theta_x^{(2)} \right) - \min \left( \theta_x^{(1)}, \theta_x^{(2)} \right) $, $\theta_y= \max \left( \theta_y^{(1)}, \theta_y^{(2)} \right) - \min \left( \theta_y^{(1)}, \theta_y^{(2)} \right)$, $\boldsymbol{\theta}^*=\left(\theta_{x}, \theta_{y} \right)^\intercal \in \mathbb{R}_+^2$, where $\mathbb{R}_+^2$ denotes the positive part of the two dimensional
Euclidean space $\mathbb{R}^2$, and $\phi(\cdot) $ and $ \Phi(\cdot)$ denote the usual pdf and cdf of $N(0,1)$.
In Section 2, we present some natural estimators and a Bayes estimator of $\theta_{\text{y}}^S$ under the loss function (\ref{loss1.1}). In Section 3, an admissible subclass of natural-type estimators is obtained.
Further, a result on improved estimators is derived in Section 4.
In Section 5, a data analysis using a real data set is provided to illustrate the computation of the various estimates of $\theta_{\text{y}}^S$. Finally, in Section 6, risk comparisons of the estimators of $\theta_{\text{y}}^S$ under the LINEX loss function are carried out via a simulation study.
\section{Estimators of $\theta_{\text{y}}^S$ }
In this section, we present various estimators of $\theta_{\text{y}}^{S}$ of the selected population. First, based on the maximum likelihood estimator (MLE), an estimator of $\theta_{\text{y}}^{S}$ is given by
\begin{align*}
\delta_{N,1} (\boldsymbol{Z})
= Y_{[2]}.
\end{align*}
Similarly, based on the minimum risk equivariant estimator (MREE), an estimator of $\theta_{\text{y}}^{S}$ is given by
\begin{align*}
\delta_{N,2} (\boldsymbol{Z})
= Y_{[2]}- \frac{1}{2} a \sigma_{yy}.
\end{align*}
The third estimator of $\theta_{\text{y}}^{S}$ that we propose is given by
\begin{align*}
\delta_{N,3} \left( \boldsymbol{Z}\right) = Y_{[2]} + \frac{1}{a} \ln \left[ 1 + \left( e^{ a \left( Y_{[1]}- Y_{[2]}\right) } -1\right) \Phi \left( \frac{X_{(1)}- X_{(2)}}{\sqrt{2\sigma_{xx}}} \right) \right].
\end{align*}
Note that the estimator $\delta_{N,3} $ is based on the MLE of $ \frac{1}{a} \ln \left[ E\left( e^{a\theta_{\text{y}}^{S}} \right) \right]$, where $ E\left( e^{a\theta_{\text{y}}^{S}} \right) = e^{a\theta_y^{(2)}} \left[ 1 + \left( e^{a\left( \theta_y^{(1)}- \theta_y^{(2)} \right) } - 1 \right) \Phi \left( \frac{\theta_x^{(1)} - \theta_x^{(2)}}{\sqrt{2\sigma_{xx}}} \right) \right]$.
\noindent Another natural estimator of $\theta_{\text{y}}^{S}$, which is similar to the estimator studied by Dahiya (1974), is given by
\begin{align*}
\delta_{N,4} \left( \boldsymbol{Z} \right) = \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]}}{2}, &\mbox{if} \ \ X_{(1)} - X_{(2)}> - c\sqrt{2\sigma_{xx}} \vspace{3mm} \\
Y_{[2]}, &\mbox{if} \ \ X_{(1)} - X_{(2)} \leq - c\sqrt{2\sigma_{xx}},
\end{array} \right.
\end{align*}
where $c>0$ is a constant. The estimator $\delta_{N,4}$ is called a hybrid estimator and coincides with the estimator $\delta_{N,1}$ for $c=0$.
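For concreteness, a small Python sketch (ours) computing $\delta_{N,1},\ldots,\delta_{N,4}$ from a single observed pair $(\boldsymbol{z}_1,\boldsymbol{z}_2)$ is given below; \texttt{norm.cdf} plays the role of $\Phi$, and the numbers in the final line are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def natural_estimators(x1, y1, x2, y2, s_xx, s_yy, a, c=1.0):
    if x1 >= x2:                          # population 1 selected
        y_sel, y_oth, t1 = y1, y2, x2 - x1
    else:                                 # population 2 selected
        y_sel, y_oth, t1 = y2, y1, x1 - x2
    t2 = y_oth - y_sel                    # T2 = Y_[1] - Y_[2]
    d1 = y_sel                            # delta_{N,1}
    d2 = y_sel - 0.5 * a * s_yy           # delta_{N,2}
    d3 = y_sel + np.log1p((np.exp(a * t2) - 1.0)
                          * norm.cdf(t1 / np.sqrt(2.0 * s_xx))) / a
    d4 = 0.5 * (y_sel + y_oth) if t1 > -c * np.sqrt(2.0 * s_xx) else y_sel
    return d1, d2, d3, d4

print(natural_estimators(1.2, 0.8, 0.5, 1.1, s_xx=1.0, s_yy=1.0, a=0.5))
\end{verbatim}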
\begin{remark}
It can be verified that, the estimator $\delta_{N,2}$ is also a generalized Bayes estimator of $\theta_{\textnormal{y}}^{S}$, using the loss function given in (\ref{loss1.1}) and the improper prior
$\Pi \left( \boldsymbol{\boldsymbol{\theta}^{(1)}, \boldsymbol{\theta}^{(2)}} \right)=1,\ \forall \ \boldsymbol{\theta}^{(i)}\in \mathbb{R}^2,\ i=1,2.$
\end{remark}
\begin{theorem}
Under the conjugate prior $ N_2 ( \boldsymbol{\mu}, \boldsymbol{\vartheta})$ and the loss function given in (\ref{loss1.1}), the Bayes estimator of $\theta_{\textnormal{y}}^{S}$ is given by
\begin{align*}\label{Bayes-est}
\delta_{B} \left( \boldsymbol{Z}\right) &= \frac{\mu_2 ( |\Sigma|+ m \sigma_{yy}) + m Y_{[2]} (m+\sigma_{xx}) + m \sigma_{xy} (\mu_1 -X_{(2)})} {m^2 + m \sigma_{xx} + m \sigma_{yy}+ |\Sigma| } \nonumber \\
& \hspace*{1.5cm} - \frac{a}{2} \frac{m^2 \sigma_{yy} + m |\Sigma|}{ \left( m^2 + m \sigma_{xx}+ m \sigma_{yy} + |\Sigma| \right)}.
\end{align*}
\end{theorem}
\begin{proof}
Suppose that $\boldsymbol{\theta}^{(i)}$ has a conjugate bivariate normal prior $ N_2( \boldsymbol{\mu}, \boldsymbol{\vartheta})$
where $ \boldsymbol{\mu}= \left(\mu_1, \mu_2 \right)^\intercal$, $\boldsymbol{\vartheta}=mI$, $I$ denotes the identity matrix of order 2, and $m$ is a positive real number. Then, the posterior distribution of $\boldsymbol{\theta}^{(i)}$, given $\boldsymbol{Z}_i=\boldsymbol{z}_i$, is
\begin{equation}\label{posterior}
\Pi^* \left( \boldsymbol{\theta}^{(i)} \big| \boldsymbol{z}_i \right) \sim N_2 \left( \boldsymbol{K} \left( \boldsymbol{\Sigma}^{-1} \boldsymbol{z}_i + \boldsymbol{\vartheta}^{-1} \boldsymbol{\mu} \right) , \boldsymbol{K}\right), \ \ \ i=1,2,
\end{equation}
where $\boldsymbol{K} = \left(\boldsymbol{\Sigma}^{-1} + \boldsymbol{\vartheta}^{-1} \right)^{-1}$.
\\
The posterior risk of an estimator $\delta_i$ of $\theta_y^{(i)}$ under the loss function (\ref{loss1.1}) is
\begin{equation}\label{post-exp}
E_{\Pi^*} L \left( \delta_i (\boldsymbol{Z}_i\right) , \theta_y^{(i)}) = e^{a \delta_i \left( \boldsymbol{Z}_i\right) } E_{\Pi^*} \left[ e^{-a \theta_y^{(i)} } \Big| \boldsymbol{Z}_i=\boldsymbol{z}_i \right] - a \left( \delta_i \left( \boldsymbol{Z}_i \right) - E_{\Pi^*} \left( \theta_y^{(i)} \big | \boldsymbol{Z}_i=\boldsymbol{z}_i \right) \right) - 1, \end{equation}
$i=1,2$. It is not difficult to check that the Bayes estimator $\delta_i^B (\boldsymbol{Z}_i)$ of $\theta_y^{(i)}$, which minimizes the posterior risk (\ref{post-exp}), is given by
\begin{equation}\label{post-minz}
\delta_i^B (\boldsymbol{Z}_i) = -\frac{1}{a} \ln \left[ E_{\Pi^*} \left[ e^{-a \theta_y^{(i)} } \Big| \boldsymbol{Z}_i=\boldsymbol{z}_i \right] \right] = - \frac{1}{a} \ln \left[ M_{\theta_y^{(i)} \big | \boldsymbol{z}_i} (-a ) \right], \ \ i=1,2,
\end{equation}
where $M_{\theta_y^{(i)} \big | \boldsymbol{z}_i} (\cdot)$ denotes the moment generating function (MGF) of $\theta_y^{(i)} \big | \boldsymbol{z}_i$. It follows from (\ref{posterior}) that $\theta_y^{(i)} \big | \boldsymbol{z}_i $ has univariate normal distribution $N (p_i^*, q_i^*)$, where
\begin{equation*}
p_i^*= \frac{\mu_2 ( |\Sigma|+ m \sigma_{yy}) + m Y_i (m+\sigma_{xx}) + m \sigma_{xy} (\mu_1 -X_i)} {m^2 + m \sigma_{xx} + m \sigma_{yy}+ |\Sigma| },
\end{equation*}
and
\begin{equation*}
q_i^*= \frac{m^2 \sigma_{yy} + m |\Sigma|}{ \left( m^2 + m \sigma_{xx}+ m \sigma_{yy} + |\Sigma| \right) }, \ \ \ i=1,2.
\end{equation*}
Therefore,
\begin{equation}\label{mgf-1}
M _{\boldsymbol{\theta}^{(i)} \big | \boldsymbol{z}_i} (-a ) = e^{-a p_i^* + \frac{1 }{2} a^2 q_i^*}, \ \ \ i=1,2.
\end{equation}
Combining (\ref{post-minz}) and (\ref{mgf-1}), we get
\begin{align*}
\delta_i^{B} (\boldsymbol{Z}_i) & = \frac{\mu_2 ( |\Sigma|+ m \sigma_{yy}) + m Y_i (m+\sigma_{xx}) + m \sigma_{xy} (\mu_1 -X_i)} {m^2 + m \sigma_{xx} + m \sigma_{yy}+ |\Sigma| } \\
& \hspace*{1.5cm} - \frac{a}{2} \frac{m^2 \sigma_{yy} + m |\Sigma|}{ \left( m^2 + m \sigma_{xx}+ m \sigma_{yy} + |\Sigma| \right) }, \ \ \ i=1,2.
\end{align*}
\noindent It can be verified that the posterior risk of the Bayes estimator $ \delta_i^{B}\left( \boldsymbol{Z}_i\right) $ of $\theta_{y}^{(i)}$ is given by
\begin{equation}\label{psot-risk}
r (\delta_i^{B}\left( \boldsymbol{Z}_i\right) ) = \frac{a^2}{2} \frac{ \left( m^2 \sigma_{yy}+ |\Sigma| m \right) } { \left( |\Sigma| + m^2 + m \sigma_{yy}+ m\sigma_{xx} \right) }.
\end{equation}
Since the posterior risk (\ref{psot-risk}) does not depend on $\boldsymbol{Z}_i,\ i=1,2$, it follows from Theorem 3.1 of Sackrowitz and Samuel-Cohen (1987) that the posterior risk $ r \left( \delta_i^{B} \left( \boldsymbol{Z}_i\right) \right) $, given in (\ref{psot-risk}), is also the Bayes risk of $ \delta_i^{B} \left( \boldsymbol{Z}_i\right)$.
Now an application of Lemma 3.2 of Sackrowitz and Samuel-Cohen (1987) leads to the result.
\end{proof}
\begin{remark}
It can be easily checked that the estimator $\delta_{N,2}$ is a limit of the Bayes estimators $\delta_{B}\left(\boldsymbol{Z} \right) $ as $ m \to \infty$.
\end{remark}
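As a numerical illustration of Remark 2, the following Python sketch (ours) evaluates $\delta_{B}$ for increasing $m$ and compares it with $\delta_{N,2}=Y_{[2]}-\frac{1}{2}a\sigma_{yy}$; the chosen numbers are arbitrary.
\begin{verbatim}
import numpy as np

def bayes_estimator(x_sel, y_sel, mu1, mu2, s_xx, s_yy, s_xy, a, m):
    det = s_xx * s_yy - s_xy ** 2                     # |Sigma|
    den = m ** 2 + m * s_xx + m * s_yy + det
    post_mean = (mu2 * (det + m * s_yy) + m * y_sel * (m + s_xx)
                 + m * s_xy * (mu1 - x_sel)) / den
    return post_mean - 0.5 * a * (m ** 2 * s_yy + m * det) / den

x_sel, y_sel, a, s_yy = 0.5, 1.1, 0.5, 1.0
for m in (1.0, 10.0, 1e3, 1e6):
    print(m, bayes_estimator(x_sel, y_sel, 0.0, 0.0, 1.0, s_yy, 0.3, a, m))
print("delta_{N,2} =", y_sel - 0.5 * a * s_yy)
\end{verbatim}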
\section{Some Admissibility Results}
In this section, an admissible subclass of equivariant estimators within the class $\mathcal{Q}_d$ is obtained, using the loss function given in (\ref{loss1.1}), where
\begin{equation*}\label{sub-class}
\mathcal{Q}_d= \left\{ \delta_d : \delta_d (\boldsymbol{Z}_1,\boldsymbol{Z}_2) = Y_{[2]}+d, \ \forall \ d \in \mathbb{R} \right\},
\end{equation*}
where $\mathbb{R}$ denotes the real line. To establish the admissibility of estimators within the above class, we require the following lemma.
\begin{lemma}\label{lemma-pdf}
Let $W=Y_{[2]}-\theta_{\textnormal{y}}^S$, and $\rho = \frac{\sigma_{xy}}{\sqrt{\sigma_{xx} \sigma_{yy}}}$. Then, $W$ has the pdf
\begin{align*}
f_{W}(w \big|\boldsymbol{\theta}^*) = \frac{1}{\sqrt{\sigma_{yy}} } \phi \left( \frac{w}{\sqrt{\sigma_{yy}}} \right) \left\{ \Phi \left( \frac{ \frac{\rho w }{\sqrt{\sigma_{yy}}} + \frac{{\theta}_x }{\sqrt{\sigma_{xx}}} }{\sqrt{2-\rho^2}} \right) + \Phi \left( \frac{\frac{\rho w }{\sqrt{\sigma_{yy}}} - \frac{{\theta}_x}{\sqrt{\sigma_{xx}}}}{\sqrt{2-\rho^2}} \right) \right\}, \ \ w \in \mathbb{R}.
\end{align*}
\end{lemma}
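Lemma \ref{lemma-pdf} can be cross-checked by simulation; the following Python sketch (ours) compares the stated density with a Monte Carlo histogram of $W$ for one arbitrary parameter configuration (the unit-width bins make the agreement approximate).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
s_xx = s_yy = 1.0; s_xy = 0.5
rho = s_xy / np.sqrt(s_xx * s_yy)
Sig = [[s_xx, s_xy], [s_xy, s_yy]]
th1, th2 = (0.0, 0.0), (0.8, 0.4)     # (theta_x^{(i)}, theta_y^{(i)})
Z1 = rng.multivariate_normal(th1, Sig, 200000)
Z2 = rng.multivariate_normal(th2, Sig, 200000)
sel1 = Z1[:, 0] > Z2[:, 0]            # natural rule selects population 1
w = np.where(sel1, Z1[:, 1] - th1[1], Z2[:, 1] - th2[1])

theta_x = abs(th1[0] - th2[0])
grid = np.linspace(-3.0, 3.0, 7)
pdf = norm.pdf(grid / np.sqrt(s_yy)) / np.sqrt(s_yy) * (
      norm.cdf((rho * grid / np.sqrt(s_yy) + theta_x / np.sqrt(s_xx))
               / np.sqrt(2.0 - rho ** 2))
    + norm.cdf((rho * grid / np.sqrt(s_yy) - theta_x / np.sqrt(s_xx))
               / np.sqrt(2.0 - rho ** 2)))
hist, _ = np.histogram(w, bins=np.linspace(-3.5, 3.5, 8), density=True)
print(np.round(pdf, 3))    # Lemma 1 density at -3, ..., 3
print(np.round(hist, 3))   # empirical density on matching bins
\end{verbatim}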
The following theorem establishes the admissibility of the estimators $\delta_d$ within the class $\mathcal{Q}_d$.
\begin{theorem}\label{thm-adm}
Let
$$d_0 = \left\{ \begin{array}{ll} -\frac{a\sigma_{yy}}{2} -\frac{1}{a} \left[ \ln 2 + \ln \left\{ \Phi \left( \frac{a \sigma_{xy} }{\sqrt{2\sigma_{xx}}}\right) \right\} \right], & \textup{if} \ \ \sigma_{xy} >0 \vspace{2mm} \\
-\frac{a\sigma_{yy}}{2}, & \textup{if} \ \ \sigma_{xy} \leq 0,
\end{array}\right.
$$
and
$$d_1 = \left\{ \begin{array}{ll} -\frac{a\sigma_{yy}}{2}, & \textup{if} \ \ \sigma_{xy} \geq 0 \vspace{2mm} \\
-\frac{a\sigma_{yy}}{2} -\frac{1}{a} \left[ \ln 2 + \ln \left\{ \Phi \left( \frac{a \sigma_{xy} }{\sqrt{2\sigma_{xx}}}\right) \right\} \right], & \textup{if} \ \ \sigma_{xy} <0.
\end{array}\right.
$$
Let $\delta_d \in \mathcal{Q}_d$ be given estimators of $\theta_{\textnormal{y}}^S$. Then, \\ (i) Within the class $\mathcal{Q}_d$, the equivariant estimators $\delta_d$ are admissible for $d_0 \leq d \leq d_1$, under the loss function (\ref{loss1.1}),
\\
(ii) The equivariant estimators $\delta_d$ for $ d \in \left(-\infty, d_0 \right) \cup \left( d_1, \infty \right)$ are inadmissible even within the class $\mathcal{Q}_d$.
\end{theorem}
\begin{proof}
For a fixed $\boldsymbol{\theta}^* \in \mathbb{R}_+^2$, define $\Psi (\boldsymbol{\theta}^*) = - \frac{1}{a} \ln \left[ E_{\theta^*} \left( e^{a W } \right) \right]$, where $W=Y_{[2]}-\theta_{\text{y}}^S$. Then, for fixed $\boldsymbol{\theta}^* \in \mathbb{R}_+^2$, the risk function of the estimators $\delta_d$ is given by
\begin{align*}\label{risk-dc}
R(\delta_d, \boldsymbol{\theta}^*)= E_{\boldsymbol{\theta}^*} \left[ e^{a \left( Y_{[2]} +d - \theta_{\text{y}}^S \right)} - a \left( Y_{[2]} +d - \theta_{\text{y}}^S \right) - 1 \right]
\end{align*}
It is easy to verify that $R (\delta_d, \boldsymbol{\theta}^*)$ is minimized at $d = \Psi (\boldsymbol{\theta}^*) = - \frac{1}{a} \ln \left[ E_{\boldsymbol{\theta}^*} \left( e^{a W } \right) \right]$. Using Lemma \ref{lemma-pdf}, we have
\begin{align*}
\Psi (\boldsymbol{\theta}^*) = - \frac{a \sigma_{yy}}{2} - \frac{1}{a} \ln \left[H_a(\theta_x)\right],
\end{align*}
where for $a \neq 0, \ H_a \left( \theta_x \right) = \Phi \left( \frac{a \sigma_{xy} + \theta_x }{\sqrt{2\sigma_{xx}}}\right) + \Phi \left( \frac{a \sigma_{xy} - \theta_x }{\sqrt{2\sigma_{xx}}}\right)$.
Clearly, the behaviour of $H_a (\theta_x)$ depends on $\theta_x \in (0, \infty)$. It can be verified that for $ a \sigma_{xy} >0
\ \left( a \sigma_{xy} <0 \right) ,$ $H_a(\theta_x)$ is a decreasing (an increasing) function of $\theta_x \in (0, \infty)$. Using the monotonicity of $ H_a \left( \theta_x \right)$, we conclude that for $\sigma_{xy}>0 \ \left( \sigma_{xy} <0 \right) $, $\Psi \left( \boldsymbol{\theta}^* \right) $ is an increasing (a decreasing) function of $\theta_x$. Therefore, for $\sigma_{xy}>0$
\begin{equation}\label{adm-inf-sup1}
\inf_{\boldsymbol{\theta}^* \in \mathbb{R}_+^2} \Psi (\boldsymbol{\theta}^*) =d_0 \ \ \text{and} \ \ \sup_{\boldsymbol{\theta}^* \in \mathbb{R}_+^2} \Psi (\boldsymbol{\theta}^*) = \lim\limits_{{\theta}_x \to \infty} \Psi(\boldsymbol{\theta}^*)= d_1,
\end{equation}
and for $\sigma_{xy}<0$
\begin{equation}\label{adm-inf-sup2}
\inf_{\boldsymbol{\theta}^* \in \mathbb{R}_+^2} \Psi (\boldsymbol{\theta}^*) = \lim\limits_{{\theta}_x \to \infty} \Psi(\boldsymbol{\theta}^*)= d_0 \ \ \text{and} \ \ \sup_{\boldsymbol{\theta}^* \in \mathbb{R}_+^2} \Psi (\boldsymbol{\theta}^*) = d_1.
\end{equation}
\noindent (i) Since $ \Psi (\boldsymbol{\theta}^*)$ is a continuous function of $\boldsymbol{\theta}^*$, it follows from (\ref{adm-inf-sup1}) and (\ref{adm-inf-sup2}) that any value of $d$ in the interval $ \left(d_0, d_1 \right)$ minimizes the risk function $R(\delta_d, \boldsymbol{\theta}^*)$ for some $\boldsymbol{\theta}^* \in \mathbb{R}_+^2$. Consequently, the estimators $\delta_d$, for any value of $d \in \left(d_0, d_1 \right)$, are admissible within the subclass $\mathcal{Q}_d$. The admissibility of the estimators $\delta_{d_0}$ and $\delta_{d_1}$, within the class $\mathcal{Q}_d$, follows from the continuity of $R(\delta_d, \boldsymbol{\theta}^*)$.
\noindent (ii) For a fixed $\boldsymbol{\theta}^* \in \mathbb{R}_+^2$, the risk function $R(\delta_d, \boldsymbol{\theta}^*)$ is a decreasing (an increasing) function of $d$ for $d< \Psi (\boldsymbol{\theta}^*) \, \left(d> \Psi (\boldsymbol{\theta}^*) \right)$. Since $d_0 \leq \Psi (\boldsymbol{\theta}^*) \leq d_1, \forall \; \boldsymbol{\theta}^* \in \mathbb{R}_{+}^{2} $, it follows that the equivariant estimators $\delta_d $ are dominated by $\delta_{d_0} \ \text{for} \ d< d_0$ and $\delta_{d_1} \ \text{for} \ d> d_1$.
\end{proof}
\begin{remark}
The estimator $\delta_{N,2}$
is a member of the class $\mathcal{Q}_d$ for $d=-\frac{1}{2}a \sigma_{yy}$.
Then, using Theorem \ref{thm-adm}, the estimator $\delta_{N,2}$ is admissible within the class $\mathcal{Q}_d$.
\end{remark}
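For illustration, a short Python sketch (ours) evaluates the admissibility interval $[d_0, d_1]$ of Theorem \ref{thm-adm}; since the two endpoint formulas coincide at $\sigma_{xy}=0$, the interval can be obtained by sorting the two expressions. The inputs in the final line are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def admissible_interval(a, s_xx, s_yy, s_xy):
    mid = -0.5 * a * s_yy
    edge = mid - (np.log(2.0)
                  + norm.logcdf(a * s_xy / np.sqrt(2.0 * s_xx))) / a
    return min(mid, edge), max(mid, edge)

print(admissible_interval(a=0.5, s_xx=1.0, s_yy=1.0, s_xy=0.4))
\end{verbatim}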
\section{Some Results of Improved Estimators}
In this section, using the loss function given in (\ref{loss1.1}), a sufficient condition for improving equivariant estimators of $\theta_{\text{y}}^S$ in the general class $\mathcal{Q}_c$ is derived. The following lemmas are needed for establishing the result.
\begin{lemma}\label{cond-pdf}
Let $T_1 = X_{(1)} - X_{(2)}, $ $T_2 = Y_{[1]} - Y_{[2]},$ $T_3= Y_{[2]} - \theta_{\textnormal{y}}^S,$ and $\rho = \frac{\sigma_{xy}}{\sqrt{\sigma_{xx} \sigma_{yy}}}$.
(i) For $t_1 \leq 0$ and $t_2 \in \mathbb{R}$, the conditional pdf of $T_3$ given $T_1=t_1, T_2=t_2$ is given by
\begin{align*}
f_{T_3|T_1,T_2} & \left( t_3|t_1,t_2 \right) \\
& = \sqrt{\frac{ 2}{\sigma_{yy}}} \left[ \frac{ \phi \left( \sqrt{\frac{ 2}{\sigma_{yy}}}
\left( t_3 + \frac{t_2 - \theta_y}{2} \right) \right) D_1 \left( t_1,t_2, \boldsymbol{\theta}^* \right)
+ \phi \left( \sqrt{\frac{ 2}{\sigma_{yy}}}
\left( t_3 + \frac{t_2 + \theta_y}{2} \right) \right) D_2 \left( t_1,t_2,\boldsymbol{\theta}^* \right)
}{D_1 \left( t_1,t_2, \boldsymbol{\theta}^* \right) + D_2 \left( t_1,t_2, \boldsymbol{\theta}^* \right) }
\right],
\end{align*}
where
\begin{align*}
D_1 \left( t_1,t_2, {\theta}^* \right) = \phi \left( \frac{ t_2 - {\theta}_{y}}{\sqrt{2\sigma_{yy}}} \right) \phi \left( \frac{\rho \left( \frac{ t_2 - {\theta}_{y}}{\sqrt{ \sigma_{yy}}} \right) - \left( \frac{ t_1 - {\theta}_{x}}{\sqrt{ \sigma_{xx}}} \right) } {\sqrt{2(1-\rho^2)}} \right),
\end{align*}
and
\begin{align*}
D_2 \left( t_1,t_2, {\theta}^* \right) = \phi \left( \frac{ t_2 + {\theta}_{y}}{\sqrt{2 \sigma_{yy}}} \right) \phi \left( \frac{\rho \left( \frac{ t_2 + {\theta}_{y}}{\sqrt{ \sigma_{yy}}} \right) - \left( \frac{ t_1 + {\theta}_{x}}{\sqrt{\sigma_{xx}}} \right) } {\sqrt{2(1-\rho^2)}} \right).
\end{align*}
\noindent (ii)
For $t_1 \leq 0$ and $t_2 \in \mathbb{R}$,
\begin{equation*}
E \left( e^{aT_3 } \big| T_1=t_1, T_2=t_2\right) = e^{ \frac{ a^2 \sigma_{yy}}{4} -\frac{a t_2}{2} } \left[ \Delta \left( t_1,t_2, \boldsymbol{\theta}^* \right) \right],
\end{equation*}
where for $t_1 \leq 0$ and $t_2 \in \mathbb{R}$,
\begin{equation}\label{Delta}
\Delta \left( t_1,t_2, \boldsymbol{\theta}^* \right) = \frac{D_1 \left( t_1,t_2, \boldsymbol{\theta}^* \right)e^{\frac{a\theta_{y}}{2}}+D_2 \left( t_1,t_2, \boldsymbol{\theta}^* \right) e^{-\frac{a\theta_{y}}{2}}}{D_1 \left( t_1,t_2, \boldsymbol{\theta}^* \right) +D_2 \left( t_1,t_2, \boldsymbol{\theta}^* \right)}, \ \ \ \forall \ \boldsymbol{\theta}^* \in \mathbb{R}_+^2.
\end{equation}
\end{lemma}
\begin{lemma}\label{inf-sup}
For $t_1 \leq 0$ and $t_2 \in \mathbb{R}$, define
\begin{align*}
\varphi \left( t_1,t_2, \boldsymbol{\theta}^* \right) & = -\frac{1}{a} \ln \left[ E \left( e^{aT_3} \big| T_1=t_1, T_2=t_2\right) \right] \\
& = \frac{t_2}{2} -\frac{a \sigma_{yy}}{4} - \frac{1}{a} \ln \left[ \Delta \left( t_1,t_2,\boldsymbol{\theta}^* \right) \right] \ \ \text{(Using Lemma \ref{cond-pdf} (ii))},
\end{align*}
where $\Delta (\cdot)$ is given by (\ref{Delta}). Then, for $t_1 \leq 0$ and $t_2 \in \mathbb{R}$,
\begin{equation*}
\varphi_I \left( t_1,t_2 \right) \leq \varphi \left( t_1,t_2, \boldsymbol{\theta}^* \right) \leq \varphi_S \left( t_1,t_2 \right), \ \ \forall \, \boldsymbol{\theta}^* \in \mathbb{R}_+^2,
\end{equation*}
where
\begin{align*}
\varphi_{I} \left( t_1,t_2 \right) &= \left\{ \begin{array}{ll}
\frac{t_2}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ t_1 \xi - \rho t_2<0 \ \text{and} \ t_2- \xi \rho t_1 <- a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
-\infty, & \textup{otherwise},
\end{array}
\right.
\end{align*}
and
\begin{align*}
\varphi_{S} \left( t_1,t_2 \right)
&= \left\{ \begin{array}{ll}
\frac{t_2}{2}- \frac{a \sigma_{yy}}{4}, &\textup{if} \ \ t_1 \xi - \rho t_2 > 0 \ \text{and} \ t_2 - \xi \rho t_1 > -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\infty, & \textup{otherwise},
\end{array}
\right.
\end{align*}
where $ \xi=\sqrt{\frac{\sigma_{yy}}{\sigma_{xx}}}$.
\end{lemma}
Now,
we exploit the approach of Brewster and Zidek (1974) to obtain a sufficient condition for improving the equivariant estimators of the form $ \delta_{\varphi} \left( \boldsymbol{Z} \right) = Y_{[2]}+ \varphi \left( T_1,T_2 \right) $, where $T_1 = X_{(1)} - X_{(2)} $ and $T_2 = Y_{[1]} - Y_{[2]}.$
\begin{theorem}\label{thm-s-c}
Consider an equivariant estimator $\delta_{\varphi} \left( \boldsymbol{Z} \right) = Y_{[2]} + \varphi \left( T_1,T_2 \right) $ of $\theta_{\textnormal{y}}^S$, where $\varphi (\cdot)$ denotes a function of $T_1$ and $T_2$.
Suppose that $P \left(
\left\{ \varphi(T_1, T_2) \leq \varphi_{I}(T_1, T_2) \right\} \right.$ $\left. \cup
\left\{ \varphi(T_1, T_2) \geq \varphi_{S}(T_1, T_2) \right\} \right) >0$, where $\varphi_{I}(\cdot)$ and $\varphi_{S}(\cdot)$ are as given in Lemma \ref{inf-sup}. Then, using the loss function given in (\ref{loss1.1}), the estimator $\delta_{\varphi} (\cdot)$ is improved by
$\delta_{\varphi}^* (\boldsymbol{Z})=Y_{[2]} + \varphi^*(T_1, T_2)$, where
\begin{equation*}
\varphi^*(T_1, T_2)= \left\{ \begin{array}{ll}
\varphi_{I} (T_1, T_2), & \mbox{if} \ \ \varphi(T_1, T_2) \leq \varphi_{I}(T_1, T_2) \vspace{3mm} \\
\varphi (T_1, T_2) , & \mbox{if} \ \ \varphi_{I}(T_1, T_2) < \varphi(T_1, T_2) < \varphi_{S}(T_1, T_2) \vspace{3mm} \\
\varphi_{S}(T_1, T_2), & \mbox{if} \ \ \varphi(T_1, T_2) \geq \varphi_{S}(T_1, T_2).
\end{array} \right.
\end{equation*}
\end{theorem}
\begin{proof}
Consider the risk difference of the estimators $\delta_\varphi$ and $\delta_\varphi^*$:
\begin{equation*}
R(\boldsymbol{\theta}^*, \delta_\varphi)-R(\boldsymbol{\theta}^*, \delta_\varphi^*)= E \left[ K_{\boldsymbol{\theta}^*} (T_1, T_2) \right],
\end{equation*}
where, for $t_1 \leq 0, \ t_2 \in \mathbb{R}$,
\begin{align*}
K_{\boldsymbol{\theta}^*} (t_1,t_2) &= E \left[ e^{a\left( \delta_{\varphi} (\boldsymbol{Z})-\theta_{\text{y}}^S\right) }-a(\delta_{\varphi}(\boldsymbol{Z})-\theta_{\text{y}}^S)-1 \Big|\, T_1=t_1, T_2=t_2 \right] \vspace{4mm} \\
& \hspace*{3cm} - E \left[ e^{a\left( \delta_{\varphi}^*(\boldsymbol{Z})-\theta_{\text{y}}^S\right) }-a \left( \delta_{\varphi}^* (\boldsymbol{Z})-\theta_{\text{y}}^S \right) -1 \Big| \, T_1=t_1, T_2=t_2 \right] \vspace{4mm} \\
& = E \left[ e^{a\left( \delta_{\varphi} (\boldsymbol{Z})-\theta_{\text{y}}^S \right) }-e^{a\left( \delta_{\varphi}^* (\boldsymbol{Z})-\theta_{\text{y}}^S \right) } \Big| T_1=t_1, T_2=t_2 \right] \vspace{4mm} \\
& \hspace*{3cm} -aE \left[ \delta_{\varphi} (\boldsymbol{Z})-\delta_{\varphi}^* (\boldsymbol{Z}) \Big| T_1=t_1, T_2=t_2 \right] \vspace{4mm} \\
& = E \big[ e^{a \left( Y_{[2]}+\varphi(t_1,t_2)-\theta_{\text{y}}^S \right) }-e^{a\left( Y_{[2]}+\varphi^* (t_1,t_2)-\theta_{\text{y}}^S \right) } \big| T_1=t_1, T_2=t_2 \big]-a\big[ \varphi(t_1,t_2)-\varphi^* (t_1,t_2)\big] \\
& = \left[ e^{a\varphi(t_1,t_2)}-e^{ a \varphi^*(t_1,t_2)} \right] E \left(e^{a\left( Y_{[2]} - \theta_{\text{y}}^S\right) } \big| T_1=t_1, T_2=t_2 \right) - a\left[\varphi(t_1,t_2)-\varphi^* (t_1,t_2)\right]\\
& = \left[ e^{a\varphi(t_1,t_2)}-e^{a\varphi^* \left( t_1,t_2\right) } \right] e^{-a\varphi(t_1,t_2,\boldsymbol{\theta}^*)} - a\left[\varphi\left( t_1,t_2\right) -\varphi^* \left( t_1,t_2\right) \right].
\end{align*}
The last line of the above expression follows from Lemma \ref{cond-pdf} and Lemma \ref{inf-sup}.
Now, for a fixed $t_1 \leq 0$ and $t_2 \in \mathbb{R},$ if $\varphi(t_1,t_2)\leq \varphi_{I}(t_1,t_2) \left( \text{so that} \, \varphi^* (t_1,t_2)=\varphi_{I}(t_1,t_2) \right) $, then,
\begin{align*}
K_{\boldsymbol{\theta}^*}(t_1,t_2) &= \left[ e^{a\varphi(t_1,t_2)}-e^{a\varphi_{I}(t_1,t_2)} \right] e^{-a\varphi(t_1,t_2,\boldsymbol{\theta}^*)} - a\left(\varphi(t_1,t_2)-\varphi_{I}(t_1,t_2)\right) \\
&\geq \left[ e^{a\varphi(t_1,t_2)}-e^{a\varphi_{I}(t_1,t_2)} \right] e^{-a\varphi_I (t_1,t_2)} - a\left(\varphi(t_1,t_2)-\varphi_{I}(t_1,t_2)\right) \\
&= \left[ e^{a\{\varphi(t_1,t_2)-\varphi_{I}(t_1,t_2)\}} -1\right]-a\left[\varphi(t_1,t_2)-\varphi_{I}(t_1,t_2)\right].
\end{align*}
Using the property $e^x > 1+x, \, \forall \, x \neq 0$, we have $K_{\boldsymbol{\theta}^*} \left( t_1,t_2 \right) \geq 0$. If $\varphi_I(t_1,t_2) < \varphi(t_1,t_2) < \varphi_{S}(t_1,t_2) \left( \text{so that} \, \varphi^* (t_1,t_2)=\varphi(t_1,t_2)\right) $, then $K_{\boldsymbol{\theta}^*}(t_1,t_2)=0$.
If $\varphi (t_1,t_2) \geq \varphi_{S}(t_1,t_2) \left( \text{so that} \, \varphi^* (t_1,t_2)=\varphi_{S}(t_1,t_2) \right) $, then,
\begin{align*}
K_{\boldsymbol{\theta}^*}(t_1,t_2) &= \left[ e^{a\varphi(t_1,t_2)}-e^{a\varphi_{S}(t_1,t_2)} \right] e^{-a\varphi(t_1,t_2,\boldsymbol{\theta}^*)} - a\left(\varphi(t_1,t_2)-\varphi_{S}(t_1,t_2)\right) \\
&\geq \left[ e^{a\varphi(t_1,t_2)}-e^{a\varphi_{S}(t_1,t_2)} \right] e^{-a\varphi_S (t_1,t_2)} - a\left(\varphi(t_1,t_2)-\varphi_{S}(t_1,t_2)\right) \\
&= \left[ e^{a\{\varphi(t_1,t_2)-\varphi_{S}(t_1,t_2)\}} -1\right]-a\left[\varphi(t_1,t_2)-\varphi_{S}(t_1,t_2)\right].
\end{align*}
Again using the property $e^x > 1+x, \, \forall \, x \neq 0$, we have $K_{\boldsymbol{\theta}^*} \left( t_1,t_2 \right) \geq 0$.
\noindent Now, since $P \left(
\left\{ \varphi(T_1, T_2) \leq \varphi_{I}(T_1, T_2) \right\} \right.$ $ \left. \cup
\left\{ \varphi(T_1, T_2) \geq \varphi_{S}(T_1, T_2) \right\} \right) >0$, we conclude that
\begin{equation*}
R(\boldsymbol{\theta}^*, \delta_{\varphi})-R(\boldsymbol{\theta}^*, \delta_{\varphi}^*) \geq 0, \ \ \forall \, \boldsymbol{\theta}^* \in \mathbb{R}_+^2,
\end{equation*}
and strict inequality holds for some $ \boldsymbol{\theta}^* \in \mathbb{R}_+^2.$
Hence the result follows.
\end{proof}
\subsection*{Improved Estimators}
Here, we provide some improved estimators of $\theta_{\text{y}}^S$ by using the result of Theorem \ref{thm-s-c}.
\noindent \textbf{Improved estimator 1:}
For $a>0$ and $ 0 < \rho \leq 1$, the estimator $\delta_{N,1}$ is improved by
\begin{align*}
\delta_{N,1}^{I1} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1 > \frac{\rho T_2}{\xi} \ \textup{and} \ \frac{a \sigma_{yy}}{2} \geq T_2 >\xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,1}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 2:}
For $a<0$ and $ -1 \leq \rho <0$, the estimator $\delta_{N,1}$ is improved by
\begin{align*}
\delta_{N,1}^{I2} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1 < \frac{\rho T_2}{\xi} \ \textup{and} \ \frac{a \sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,1}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 3:}
For $a>0$ $(a<0)$ and $-1 \leq \rho < 0$ $(0< \rho \leq1)$, the estimator $\delta_{N,1}$ is improved by
\begin{align*}
\delta_{N,1}^{I3} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1<\frac{\rho T_2}{\xi} \ \textup{and} \ \frac{a \sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2) \vspace{3mm} \\
& \textup{or} \ \ T_1>\frac{\rho T_2}{\xi} \ \textup{and} \ \frac{a \sigma_{yy}}{2} \geq T_2 > \xi \rho T_1 -a \frac{\sigma_{yy}}{2}(1-\rho^2)
\vspace{3mm} \\
\delta_{N,1}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 4:}
For $a<0$ and $ \rho = 0$, the estimator $\delta_{N,1}$ is improved by
\begin{align*}
\delta_{N,1}^{I4} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ \frac{a \sigma_{yy}}{2} \leq T_2 < -\frac{a \sigma_{yy}}{2}
\vspace{3mm} \\
\delta_{N,1}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
For $a>0$ and $\rho =0$, Theorem \ref{thm-s-c} fails to provide an improved estimator upon the estimator $\delta_{N,1}$.
\noindent \textbf{Improved estimator 5:}
For $a>0$ and $-1 \leq \rho < 0$, the estimator $\delta_{N,2}$ is improved by
\begin{align*}
\delta_{N,2}^{I1} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1<\frac{\rho T_2}{\xi} \ \textup{and} \ -\frac{a \sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,2}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 6:}
For $a > 0 $ $(a <0)$ and $0 < \rho \leq 1$ $(-1 \leq \rho < 0)$, the estimator $\delta_{N,2}$ is improved by
\begin{align*}
\delta_{N,2}^{I2} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1<\frac{\rho T_2}{\xi} \ \textup{and} \ -\frac{a \sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 -\frac{a\sigma_{yy}}{2} (1-\rho^2) \vspace{3mm} \\
& \textup{or} \ \ T_1>\frac{\rho T_2}{\xi} \ \textup{and} \ -\frac{a \sigma_{yy}}{2} \geq T_2> \xi \rho T_1 -a \frac{\sigma_{yy}}{2}(1-\rho^2)
\vspace{3mm} \\
\delta_{N,2}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
For $a<0 \ (a\neq 0)$ and $0 < \rho \leq 1 \ (\rho =0)$, Theorem \ref{thm-s-c} fails to provide an improved estimator upon the estimator $\delta_{N,2}$.
\noindent \textbf{Improved estimator 7:}
For $a>0$, $0 <\rho \leq 1$, and $\varphi_3\leq \varphi_{I}$ or $\varphi_3\geq \varphi_{S}$, where $\varphi_3=\frac{1}{a} \ln \left[ 1 + \left( e^{ a T_2 } -1\right) \Phi \left( \frac{T_1}{\sqrt{2\sigma_{xx}}} \right) \right]$, and $\varphi_{I}$ and $\varphi_{S}$ are as given in Lemma \ref{inf-sup}, the estimator $\delta_{N,3}$ is improved by
\begin{align*}
\delta_{N,3}^{I1} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1 < \frac{\rho T_2}{ \xi} \ \ \textup{and} \ \ T_2 < \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
& \textup{or} \ \ T_1 > \frac{\rho T_2}{ \xi} \ \ \textup{and} \ \ T_2> \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,3}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 8:}
For $a<0$ and $0 <\rho \leq 1$ and $\varphi_3\leq \varphi_{I}$, the estimator $\delta_{N,3}$ is improved by
\begin{align*}
\delta_{N,3}^{I2} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1 < \frac{\rho T_2}{ \xi} \ \ \textup{and} \ \ T_2 < \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,3}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 9:}
For $a \neq 0$, $ -1 \leq \rho < 0$ and $\varphi_3\leq \varphi_{I}$ or $\varphi_3\geq \varphi_{S}$, the estimator $\delta_{N,3}$ is improved by
\begin{align*}
\delta_{N,3}^{I3} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_2 < \min \left\{ \frac{\xi T_1}{\rho }, \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2) \right\}
\vspace{3mm} \\
& \textup{or} \ \ \max \left\{ \frac{\xi T_1}{\rho }, \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2) \right\} < T_2
\vspace{3mm} \\
\delta_{N,3}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 10:}
For $a \neq 0$, $ \rho = 0$ and $\varphi_3\leq \varphi_{I}$, the estimator $\delta_{N,3}$ is improved by
\begin{align*}
\delta_{N,3}^{I4} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]} }{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \ T_1< 0 \ \textup{and} \ T_2 < - \frac{a\sigma_{yy}}{2}
\vspace{3mm} \\
\delta_{N,3}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 11:}
For $a>0$ and $0 < \rho \leq 1$, the estimator $\delta_{N,4}$ is improved by
\begin{align*}
\delta_{N,4}^{I1} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1 > \max \left\{- c\sqrt{2\sigma_{xx}}, \frac{\rho T_2}{\xi}\right\} \ \text{and} \ T_2 > \xi \rho T_1 - \frac{a\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ \frac{\rho T_2}{\xi} < T_1 \leq - c\sqrt{2\sigma_{xx}} \ \text{and} \ \ \frac{a\sigma_{yy}}{2} \geq T_2 > \xi \rho T_1 -\frac{a\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,4}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 12:}
For $a>0$ and $-1\leq \rho < 0$, the estimator $\delta_{N,4}$ is improved by
\begin{align*}
\delta_{N,4}^{I2} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1 > \max \left\{- c\sqrt{2\sigma_{xx}}, \frac{\rho T_2}{\xi}\right\} \ \text{and} \ T_2 > \xi \rho T_1 - \frac{a \sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1< \min \left\{- c\sqrt{2\sigma_{xx}}, \frac{\rho T_2}{\xi}\right\} \ \text{and} \ \frac{a\sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 - \frac{a\sigma_{yy}}{2} (1-\rho^2) \vspace{3mm} \\
& \textup{or} \ \frac{\rho T_2}{\xi} < T_1 \leq - c\sqrt{2\sigma_{xx}} \ \text{and} \ \frac{a\sigma_{yy}}{2} \geq T_2 > \xi \rho T_1 - \frac{a\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,4}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent\textbf{Improved estimator 13:}
For $a<0$ and $0 < \rho \leq 1$, the estimator $\delta_{N,4}$ is improved by
\begin{align*}
\delta_{N,4}^{I3} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ - c\sqrt{2\sigma_{xx}} <T_1 <\frac{\rho T_2}{\xi} \ \text{and} \ T_2 < \xi \rho T_1 - \frac{ a \sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1< \min \left\{- c\sqrt{2\sigma_{xx}}, \frac{\rho T_2}{\xi}\right\} \ \text{and} \ \frac{a\sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 -\frac{a \sigma_{yy}}{2} (1-\rho^2) \vspace{3mm} \\
& \textup{or} \ \frac{\rho T_2}{\xi} < T_1 \leq - c\sqrt{2\sigma_{xx}} \ \text{and} \ \frac{a\sigma_{yy}}{2} \geq T_2 > \xi \rho T_1 - \frac{a\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,4}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 14:}
For $a<0$ and $ -1 \leq \rho < 0$, the estimator $\delta_{N,4}$ is improved by
\begin{align*}
\delta_{N,4}^{I4} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ - c\sqrt{2\sigma_{xx}} <T_1 <\frac{\rho T_2}{\xi} \ \text{and} \ T_2 < \xi \rho T_1 - \frac{ a \sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1< \min \left\{- c\sqrt{2\sigma_{xx}}, \frac{\rho T_2}{\xi}\right\} \ \text{and} \ \frac{a\sigma_{yy}}{2} \leq T_2 < \xi \rho T_1 -a \frac{\sigma_{yy}}{2} (1-\rho^2)
\vspace{3mm} \\
\delta_{N,4}, & \textup{otherwise}.
\end{array}
\right.
\end{align*}
\noindent \textbf{Improved estimator 15:}
For $a<0$ and $ \rho = 0$, the estimator $\delta_{N,4}$ is improved by
\begin{align*}
\delta_{N,4}^{I5} \left( \boldsymbol{Z}_1,\boldsymbol{Z}_2 \right)
&= \left\{ \begin{array}{ll}
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1> - c\sqrt{2\sigma_{xx}} \ \text{and} \ T_2 < - \frac{a\sigma_{yy}}{2}
\vspace{3mm} \\
\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}, & \textup{if} \ T_1\leq- c\sqrt{2\sigma_{xx}} \ \text{and} \ \frac{a\sigma_{yy}}{2} \leq T_2 < - \frac{a\sigma_{yy}}{2}
\vspace{3mm} \\
\delta_{N,4}, & \textup{otherwise}.
\end{array}
\right.\end{align*}
For $a>0$ and $\rho =0$, Theorem \ref{thm-s-c} fails to provide an improved estimator upon the estimator $\delta_{N,4}$.
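\noindent \textbf{Remark (illustrative implementation).} All of the improved estimators above share the same piecewise form: the natural estimator is replaced by $\frac{Y_{[1]}+Y_{[2]}}{2}- \frac{a \sigma_{yy}}{4}$ on an explicit region of $(T_1, T_2)$. As a minimal Python sketch, the rule of Improved estimator 1 can be encoded as below; the statistics $T_1$, $T_2$, $\xi$ and the value of $\delta_{N,1}$ are assumed to be pre-computed as in the paper and are passed in directly.
\begin{verbatim}
def improved_estimator_1(delta_n1, y1_bar, y2_bar, t1, t2,
                         a, rho, sigma_yy, xi):
    """Improved estimator 1 (valid for a > 0 and 0 < rho <= 1).

    delta_n1, t1, t2 and xi are assumed pre-computed as in the
    paper; only the piecewise selection region is encoded here.
    """
    lower = xi * rho * t1 - a * sigma_yy / 2 * (1 - rho ** 2)
    if t1 > rho * t2 / xi and lower < t2 <= a * sigma_yy / 2:
        return (y1_bar + y2_bar) / 2 - a * sigma_yy / 4
    return delta_n1
\end{verbatim}
The other fourteen rules differ only in the inequalities defining the region, so they can be obtained by editing the condition accordingly.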
\section{An Application to Poultry Feeds Data}
In this section, a data analysis is presented using a real data set (reported in Olosunde (2013)) to demonstrate the computation of the various estimates of $\theta_{\text{y}}^S$.
Olosunde (2013) conducted a study to compare the effect of two different copper-salt combinations on eggs produced by chickens in poultry feeds. A sample of 96 chickens was randomly selected from the poultry and divided into two groups of 48 chickens each. One group was given an organic copper-salt combination, and the other group was given an inorganic copper-salt combination. After a period of time, the weight and the cholesterol level of the eggs produced by the two groups were measured. The observed data for the organic and the inorganic copper-salt combinations are reported in Olosunde (2013) and presented in Table \ref{Data}. Eggs with higher weight and lower cholesterol are preferable.
Let $\pi_1$ and $\pi_2$ represent the populations given an organic copper-salt combination and an inorganic copper-salt combination, respectively.
Let $(X_i, Y_i)$ be a pair of observations from the population $\pi_i, \, i=1,2,$ where the $X$-variate denotes the average weight of eggs and the $Y$-variate denotes the corresponding average cholesterol level. Forty-eight observations for each measurement are available from the data obtained by Olosunde (2013). Since the sample sizes of the two populations are the same, the pooled variance-covariance matrix is used. The data are assumed to follow a bivariate normal distribution with different means and a common known variance-covariance matrix. To check the validity of the bivariate normality assumption for the available data set, we apply Royston's normality test, available in the R package ``MVN'' provided by Korkmaz et al. (2014). Royston's test combines the Shapiro-Wilk (S-W) test statistics for univariate normality into one test statistic for bivariate/multivariate normality. The Royston and Shapiro-Wilk test statistics with corresponding p-values are presented in Table \ref{royston-test}.
\begin{table}[H]
\begin{center}
\caption{Normality test, p-values, kurtosis and skewness.}
\vspace{.1cm}
\label{royston-test} \setlength{\tabcolsep}{6pt}
\def\arraystretch{1.2}
\begin{tabular}{|ccccccc|}
\hline
\cline{1-7}
Test & Measure & Statistic & p-value & Kurtosis & Skewness & Normality \\
\hline
Royston & {$\pi_1$} &5.878109 &0.0529& && Yes
\\
S-W & {$\pi_1$}-weight &0.9569 & 0.0758 &-1.256476& 0.01487668 & Yes \\
S-W &{$\pi_1$}-cholesterol &0.9598 & 0.0988 &-1.213823& -0.09288089& Yes \\
Royston &{$\pi_2$} & 2.867&0.1051& && Yes
\\
S-W & {$\pi_2$}-weight &0.9679 &0.2089 &-1.110509 & 0.1015675& Yes\\
S-W &{$\pi_2$}-cholesterol &0.9543 &0.0592 &-1.263555 & -0.0816612 & Yes \\
\hline
\end{tabular}
\end{center}
\end{table}
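\noindent The per-variable Shapiro-Wilk entries of Table \ref{royston-test} can be reproduced with a short script. The following sketch assumes the 48 observed averages of each measurement are loaded into one-dimensional arrays and uses SciPy; Royston's combined statistic itself is computed with the R package ``MVN'' and has no direct SciPy equivalent.
\begin{verbatim}
import numpy as np
from scipy.stats import shapiro, skew, kurtosis

def sw_row(sample, label):
    # One Shapiro-Wilk row of the normality table: statistic,
    # p-value, excess kurtosis and skewness of a 1-D array.
    stat, p = shapiro(sample)
    print(f"{label}: W={stat:.4f}, p={p:.4f}, "
          f"kurt={kurtosis(sample):.4f}, skew={skew(sample):.4f}")

# Hypothetical arrays; the raw data are reported in Olosunde (2013).
# sw_row(pi1_weight, "pi_1 weight")
# sw_row(pi1_cholesterol, "pi_1 cholesterol")
\end{verbatim}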
From Table \ref{royston-test}, we may conclude that the data set satisfies the bivariate normality assumption at the 0.05 level of significance. The estimated parameters of the bivariate normal model (based on maximum likelihood) are presented in Table \ref{table-data}.
\begin{table}[H]
\begin{center}
\caption{Estimated parameters of the bivariate normal distribution.}
\vspace{.1cm}
\label{table-data} \setlength{\tabcolsep}{7pt}
\def\arraystretch{1.2}
\begin{tabular}{|ccccc|}
\hline
\cline{1-5}
Population& Measure& Mean& Variance &Covariance\\
\hline
{$\pi_1$} &weight &59.0997& 8.1645& 40.0655 \\
& cholesterol &131.4569&952.9425&\\
{$\pi_2$} &weight &58.3516&8.1645& 40.0655\\
& cholesterol &195.7275&952.9425&\\
\hline
\end{tabular}
\end{center}
\end{table}
Recall that the quality of a population is determined with regard to its X-variate, while the corresponding Y-variate is of main interest.
We say that the population $\pi_1 \equiv N \left( \boldsymbol{\theta}^{(1)}, \boldsymbol{\Sigma} \right)$ is better than the population $\pi_2 \equiv N \left( \boldsymbol{\theta}^{(2)}, \boldsymbol{\Sigma} \right)$ if
$\theta_x^{(1)} > \theta_x^{(2)}$ and the population $\pi_2$ is considered better than the population $\pi_1$ if $\theta_x^{(1)} \leq \theta_x^{(2)}$, where
$ \boldsymbol{\theta}^{(1)}= \left(\theta_x^{(1)}, \theta_y^{(1)} \right)^\intercal$ and $ \boldsymbol{\theta}^{(2)}= \left(\theta_x^{(2)}, \theta_y^{(2)} \right)^\intercal$ are the mean vectors of the populations $\pi_1$ and $\pi_2$ respectively. From the data we have $ \boldsymbol{\hat{\theta}}^{(1)} = \left( 59.0998, 131.4569 \right)^\intercal$, $ \boldsymbol{\hat{\theta}}^{(2)} = \left( 58.3517, 195.7275 \right)^\intercal$, and $ \boldsymbol{\Sigma} = \begin{bmatrix}
8.1645 & 40.0655 \\
40.0655 & 952.9425
\end{bmatrix}$.
It can be observed that the average weight of eggs from chickens fed with an organic copper-salt combination is larger than that from chickens fed with an inorganic copper-salt combination.
Therefore, using the natural selection rule $\boldsymbol{\psi}$ given in (\ref{sel-rule}), we may conclude that the population $\pi_1$ is preferable over the population $\pi_2$. Also, the average cholesterol level for the population $\pi_1$ is less than that for the population $\pi_2$. Hence, based on the above observations, the organic copper-salt combination is recommended. This result was also obtained by Olosunde (2013). The various estimates of $\theta_{\textnormal{y}}^S$ of the selected bivariate normal population are presented in Tables \ref{table-est1} and \ref{table-est2}.
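\noindent As a quick numerical check, the selection step can be evaluated directly from the fitted means; the following minimal sketch hard-codes the estimates reported above.
\begin{verbatim}
# ML estimates of the mean vectors (weight, cholesterol)
theta1 = (59.0998, 131.4569)   # pi_1: organic copper-salt
theta2 = (58.3517, 195.7275)   # pi_2: inorganic copper-salt

# Natural selection rule psi: pick the population with the
# larger X-mean (average egg weight).
selected = "pi_1" if theta1[0] > theta2[0] else "pi_2"
print(selected)               # -> pi_1 (organic), as in the text
print(theta1[1] < theta2[1])  # True: pi_1 also has lower cholesterol
\end{verbatim}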
\begin{table}[H]
\begin{center}
\caption{The various estimates of $\theta_{\textnormal{y}}^S$ for $a=1$.}
\vspace{.1cm}
\label{table-est1} \setlength{\tabcolsep}{6pt}
\def\arraystretch{1.4}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$\delta_{N,1}$& $\delta_{N,1}^{I1}$ & $\delta_{N,2}$& $\delta_{N,2}^{I2}$& $\delta_{N,3}$ & $\delta_{N,3}^{I1}$ & $\delta_{N,4}$ & $\delta_{N,4}^{I1}$\\
\cline{1-8}
131.4569 &131.4569&-345.0144 &-345.0144&194.9654&194.9654&163.5922& 163.5922 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{The various estimates of $\theta_{\textnormal{y}}^S$ for $a=-1$.}
\vspace{.1cm}
\label{table-est2} \setlength{\tabcolsep}{8pt}
\def\arraystretch{1.4}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\delta_{N,1}$& $\delta_{N,1}^{I3}$ & $\delta_{N,2}$& $\delta_{N,3}$ & $\delta_{N,3}^{I2}$ & $\delta_{N,4}$ & $\delta_{N,4}^{I3}$\\
\cline{1-7}
131.4569 &401.8278 &607.9281&132.0856&401.8278&163.5922& 163.5922\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Risk Comparisons of Estimators}
In this section, we compare the risk performance of the proposed estimators of $\theta_{\textnormal{y}}^S$, using the loss function given in (\ref{loss1.1}). For this purpose, a simulation study is performed using MATLAB software to compute the values of risk of the various estimators. 20,000 simulation runs with different configurations of parameters are used to obtain the risk values.
Note that the estimator with the smallest average risk value is preferable. Further, the natural selection rule $\boldsymbol{\psi}$ presented in Equation (\ref{sel-rule}) is used for selecting the best bivariate normal population. It is easy to see that the risk of the proposed estimators of $\theta_{\textnormal{y}}^S$ depends on the parameters $\sigma_{xx}$, $\sigma_{yy}$, $\rho$, $a$ and $\theta^{(1)}=\left( \theta_{x}^{(1)}, \theta_{y}^{(1)}\right)$, $\theta^{(2)}=\left( \theta_{x}^{(2)}, \theta_{y}^{(2)}\right)$ (only through $\theta_x$ and $\theta_y$). Hence, the risk functions vary across different combinations of these parameters. The computed risk values of the various estimators of $\theta_{\textnormal{y}}^S$ are presented in Tables \ref{table:risk1}-\ref{table:risk6}, for different combinations of $\theta^{(1)}$, $\theta^{(2)}$,
and for $\sigma_{xx}= \sigma_{yy}= 2$, $\rho \in \left\{ -1,0,1 \right\}$, and $a \in \left\{ -1,1 \right\}.$ Note that the computation of risk values was also carried out for other values of $a$ and $\rho$, but these values were omitted from the tables because the same results were obtained. The risk values of the hybrid estimator $\delta_{N,4}$ were calculated for $c=1$. In view of the risk values in Tables \ref{table:risk1}-\ref{table:risk6}, we present the following assessment of the estimators of $\theta_{\textnormal{y}}^S$.
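\noindent The simulation itself is conceptually simple. The following Python sketch outlines one way to estimate the risks; the loss function in (\ref{loss1.1}) is not restated here, so the sketch assumes a LINEX-type loss $L(\theta, \delta)=e^{a(\delta-\theta)}-a(\delta-\theta)-1$, consistent in form with the risk-difference term used in the proof above, and takes any of the estimation rules as the argument \texttt{estimator}.
\begin{verbatim}
import numpy as np

def linex_loss(delta, theta, a):
    d = delta - theta
    return np.exp(a * d) - a * d - 1.0

def simulated_risk(estimator, theta1, theta2, cov, a,
                   runs=20000, seed=1):
    """Average loss of `estimator` over Monte Carlo draws; one
    observation pair per population per run (a simplification)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(runs):
        x1, y1 = rng.multivariate_normal(theta1, cov)
        x2, y2 = rng.multivariate_normal(theta2, cov)
        # selection rule psi: the larger X-variate wins
        theta_y_s = theta1[1] if x1 > x2 else theta2[1]
        total += linex_loss(estimator(x1, y1, x2, y2), theta_y_s, a)
    return total / runs
\end{verbatim}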
\begin{itemize}
\item[(1)] For $a>0$ and $0<\rho \leq 1$, the improved estimators $\delta_{N,1}^{I1}$ and $\delta_{N,2}^{I2}$ provide a considerable improvement upon the estimators $\delta_{N,1}$ and $\delta_{N,2}$, respectively. The improved estimators $\delta_{N,3}^{I1}$ and $\delta_{N,4}^{I1}$ have the same performance as the estimators $\delta_{N,3}$ and $\delta_{N,4}$, respectively; hence their risk values were omitted from Table \ref{table:risk1}. The improved estimator $\delta_{N,2}^{I2}$ dominates all other estimators and has the smallest risk values.
\item[(2)] For $a>0$ and $-1 \leq \rho < 0$, the improved estimators $\delta_{N,1}^{I3}$, $\delta_{N,2}^{I1}$, $\delta_{N,3}^{I3}$ and $\delta_{N,4}^{I2}$ perform better than their respective natural estimators. However, among all these estimators the improved estimator $\delta_{N,1}^{I3}$ has the best performance.
\item[(3)] For $a>0$ and $\rho=0$, the improved estimator $\delta_{N,3}^{I4}$ provides a significant improvement upon the estimator $\delta_{N,3}$. Also, the estimator $\delta_{N,3}^{I4}$ has better performance than the estimators $\delta_{N,2}$ and $\delta_{N,4}$ only when $\theta_x \geq -0.2$ and $\theta_y \leq 0.2$. But, when $\theta_x < -0.2$ and $\theta_y > 0.2$ the estimator $\delta_{N,2}$ performs better than $\delta_{N,3}^{I4}$. Further, the estimator $\delta_{N,2}$ dominates the three estimators $\delta_{N,1}$, $\delta_{N,3}$ and $\delta_{N,4}$.
\item[(4)] For $a<0$ and $0 < \rho \leq 1$, the estimator $\delta_{N,4}$ dominates the estimators $\delta_{N,2}$ and $\delta_{N,3}$; however, when $\theta_x$ and $\theta_y$ are very close to zero, $\delta_{N,3}$ dominates $\delta_{N,4}$. The estimator $\delta_{N,1}$ dominates all the estimators of $\theta_{\textnormal{y}}^S$. The improved estimators $\delta_{N,1}^{I3}$, $\delta_{N,3}^{I2}$ and $\delta_{N,4}^{I3}$ have the same risk values as the estimators $\delta_{N,1}$, $\delta_{N,3}$ and $\delta_{N,4}$, respectively; hence their risk values were omitted from Table \ref{table:risk4}.
\item[(5)] For $a<0$ and $-1 \leq \rho < 0$, the improved estimators $\delta_{N,1}^{I2}$, $\delta_{N,2}^{I2}$, $\delta_{N,3}^{I3}$ and $\delta_{N,4}^{I4}$ provide considerable improvement upon their respective natural estimators. However, the improved estimator $\delta_{N,2}^{I2}$ has the least risk values among all these estimators.
\item[(6)] For $a<0$ and $ \rho = 0$, the improved estimators $\delta_{N,1}^{I4}$, $\delta_{N,3}^{I4}$ and $\delta_{N,4}^{I5}$ provide only marginal improvement upon the estimators $\delta_{N,1}$, $\delta_{N,3}$ and $\delta_{N,4}$, respectively. The estimator $\delta_{N,4}^{I5}$ dominates the other estimators when $\theta_x$ and $\theta_y$ are very close to zero, but when $\theta_x$ and $\theta_y$ are not close to zero the estimator $\delta_{N,2}$ dominates $\delta_{N,4}^{I5}$.
\end{itemize}
Based on the above observations, we conclude that, for $a>0$ and $ 0< \rho \leq 1$, the performance of the estimator $\delta_{N,2}^{I2}$ is satisfactory; hence it is recommended for practical purposes. For $a>0$ and $ -1\leq \rho <0$, the estimator $\delta_{N,1}^{I3}$ is recommended. For $a>0$ and $\rho=0$, the estimator $\delta_{N,3}^{I4}$ is recommended when $\theta_x \geq -0.2$ and $\theta_y \leq 0.2$, and the estimator $\delta_{N,2}$ is recommended for other values of $\theta_{x}$ and $\theta_{y}$.
For $a<0$, the use of the natural estimator $\delta_{N,1}$ is recommended for $0< \rho \leq 1$ and the estimator $\delta_{N,2}^{I2}$ is recommended for $-1\leq \rho <0$. Also, for $a<0$ and $\rho =0$, the estimator $\delta_{N,4}^{I5}$ is recommended when $\theta_x$ and $\theta_y$ are very close to zero, and the estimator $\delta_{N,2}$ is recommended when $\theta_x$ and $\theta_y$ are not close to zero.
\section*{Acknowledgement}
The authors are thankful to Dr. A. A. Olosunde for providing the complete data set that was used in the application in this paper.
\section{Introduction}
The interest in decentralized cryptocurrencies has grown rapidly in recent years. Bitcoin \cite{nakamoto2008bitcoin}, as the first and most famous system, has attracted massive attention. Subsequently, a handful of cryptocurrencies, such as Ethereum \cite{wood2014ethereum}, Namecoin \cite{ali2016blockstack} and Litecoin \cite{reed2017litecoin}, were proposed. Blockchain-based cryptocurrencies significantly facilitate the convenience of payment by providing a decentralized online solution for customers. However, purely online processing of transactions suffers from low performance and high congestion. Offline delegation provides an alternative way to mitigate the issue by enabling users to exchange coins without having to connect to an online blockchain platform~\cite{gudgeon2020sok}. Unfortunately, decentralized offline delegation still confronts risks caused by unreliable participants. Misbehaviours may easily happen due to the absence of effective supervision. To be specific, let us start from a real scenario: imagine that Bob, Alex's son and a wild teenager, wants some digital currency (\textit{e.g.}, BTC) to buy a film ticket. According to current decentralized cryptocurrency payment technologies \cite{nakamoto2008bitcoin}\cite{wood2014ethereum}, Alex has two delegation approaches: (1) \textit{Coin-transfer.} Alex asks for Bob's BTC address, and then transfers a specific amount of coins to Bob's address. In this scenario, Bob can only spend the coins received from Alex. (2) \textit{Ownership-transfer.} Alex directly gives his own private key to Bob. Then, Bob can freely spend the coins using such a private key. In this situation, Bob obtains all coins that are stored in Alex's address.
We observe that both approaches suffer drawbacks. For the first approach, coin-transfer requires a global consensus of the blockchain, which makes it time-consuming \cite{kiayias2015speed}. For example, confirming a transaction in Bitcoin \cite{nakamoto2008bitcoin} takes around one hour (6 blocks), which deprives coin-transfer of the essential property of real-time payment. For the other approach, ownership-transfer relies heavily on the honesty of the delegatee. The promise between the delegator and delegatee depends on their trust or relationship, which is weak and unreliable. The delegatee may spend all coins in the address for other purposes. Back to the example, Alex's original intention is to give Bob 200 $\mu BTC$ to buy a film ticket, but Bob may spend all coins to purchase his favorite toys. That means Alex loses control of the remaining coins. These two types of approaches represent most of the mainstream schemes aiming to achieve secure delegation, but neither of them provides a satisfactory solution. This leads to the following research problem:
\begin{center}
\begin{tcolorbox}[colback=gray!10
colframe=black
width=12cm
arc=1mm, auto outer arc,
boxrule=0.5pt,
]
Is it possible to build a secure offline peer-to-peer delegatable system for decentralized cryptocurrencies?
\end{tcolorbox}
\end{center}
\noindent The answer would intuitively be ``NO''. Without interacting with the online blockchain network, coins that have already been delegated risk being spent twice after another successful delegation. This is because a delegation is only witnessed by the owner and the delegatee, and no authoritative third party performs the final confirmation. The pending status leaves a window for attacks in which a malicious coin owner could spend the delegated transaction before the delegatee uses it. Even if a third party were introduced as a judge between the delegator (owner) and delegatee to secure transactions, she would face the threat of being compromised or of providing misleading assurance. Furthermore, an approach equipped with a third party contradicts the very intention of decentralized cryptocurrency systems.
In this paper, we propose \textit{DelegaCoin}, an offline delegatable electronic cash system. Trusted execution environments (TEEs) are utilized to play the role of a \textit{virtual agent}. TEEs prevent malicious delegation of the coins (\textit{e.g.,} double-delegation of the same coins). As shown in Figure~\ref{size}, the proposed scheme allows the owner to delegate her coins without interacting with the blockchain or any trusted third parties. The owner is able to directly delegate specific amounts of coins to others by sending them through a secure channel. This delegation can only be executed once under the supervision of the delegation policy inside TEEs. In a nutshell, this paper makes the following contributions.
\begin{itemize}
\item[-] We propose an offline delegatable payment solution, called \textit{DelegaCoin}. It employs the trusted execution environments (TEEs) as the decentralized \textit{virtual agents} to prevent the malicious owner from delegating the same coins multiple times.
\item[-] We formally define our protocols and provide a security analysis. Designing a provably secure system from TEEs is a non-trivial task that lays the foundation for many upper-layer applications. The formal analysis indicates that our system is secure.
\item[-] We implement the system with Intel’s Software Guard Extensions (SGX) and conduct a series of experiments including the time cost for each function and the used disk space under different configurations. The evaluations demonstrate that our system is feasible and practical.
\end{itemize}
\smallskip
\noindent\textbf{Paper Structure.} Section~\ref{sec-rw} gives the background and related studies. Section~\ref{sec-prelimi} provides the preliminaries and building blocks. Section~\ref{sec-design} outlines the general construction of our scheme. Section~\ref{sec-formal} presents a formal model for our protocols. Section~\ref{sec-seurity} provides the corresponding security analysis. Section~\ref{sec-implementation} and Section~\ref{sec-evaluation} show our implementation and evaluation, respectively. Section~\ref{sec-conclusion} concludes our work. Appendix A provides an overview of the protocol workflow, Appendix B shows the resource availability, and Appendix C presents featured notations in this paper.
\section{Related Work}
\label{sec-rw}
\noindent\textbf{Decentralized Cryptocurrency System.}
Blockchain-based cryptocurrencies facilitate the convenience of payment by providing a decentralized online solution for customers. Bitcoin \cite{nakamoto2008bitcoin} was the first and most popular decentralized cryptocurrency. Litecoin \cite{reed2017litecoin} modified the PoW by using the scrypt algorithm and shortened the block confirmation time. Namecoin \cite{ali2016blockstack} was the first hard fork of Bitcoin to record and transfer arbitrary names (keys) securely. Ethereum \cite{wood2014ethereum} extended Bitcoin by enabling state-transition transactions. Zcash \cite{hopwood2016zcash} provides a privacy-preserving payment solution by utilizing zero-knowledge proofs. CryptoNote-style schemes \cite{van2013cryptonote}, instead, enhance privacy by adopting ring signatures. However, slow confirmation of transactions retards their wide adoption among developers and users. Current cryptocurrencies, with ten to hundreds of TPS~\cite{nakamoto2008bitcoin,zheng2018detailed}, cannot rival established payment systems such as Visa or PayPal, which process thousands. Thus, various methods have been proposed for better throughput. The scaling techniques can be categorized in two ways: (i) on-chain solutions that aim to create highly efficient blockchain protocols, either by reconstructing structures \cite{wang2020sok}, connecting chains \cite{zamyatin2019sok} or via sharding the blockchain \cite{wang2019sok}; however, on-chain solutions are typically not applicable to existing blockchain systems (they require a hard fork); (ii) off-chain (layer 2) solutions that regard the blockchain merely as an underlying mechanism and process transactions offline \cite{gudgeon2020sok}. Off-chain solutions generally operate independently on top of the consensus layer of blockchain systems, not changing their original designs. In this paper, we explore the second avenue.
\smallskip
\noindent\textbf{TEEs and Intel SGX.}
Trusted Execution Environments (TEEs) provide a secure environment for executing code to ensure the confidentiality and integrity of code and logic \cite{ekberg2013trusted}. State-of-the-art implementations include Intel Software Guard Extensions (SGX)~\cite{costan2016intel}, ARM TrustZone~\cite{pinto2019demystifying}, AMD memory encryption~\cite{kaplan2016amd}, Keystone~\cite{lee2020keystone}, \textit{etc}. Besides, many other applications like BITE \cite{Matetic2018BITEBL}, Tesseract \cite{bentov2019tesseract}, Ekiden \cite{cheng2019ekiden} and Fialka \cite{li2020accountable} propose TEE-empowered schemes, but none of them focuses on offline delegation. In this paper, we utilize SGX \cite{costan2016intel} to construct the system. SGX is a representative TEE that offers a set of instructions embedded in central processing units (CPUs). These instructions are used for building and maintaining CPUs' security areas. To be specific, SGX allows the creation of private regions (\textit{a.k.a.} enclaves) of memory to protect the contents inside. The following features are highlighted in this technique: (1) \textit{Attestation.} Attestation mechanisms are used to prove to a validator that the enclave has been correctly instantiated, and to establish a secure, authenticated connection to transfer sensitive data. The attestation guarantees that the secret (private key) is provisioned to the enclave only after a successful substantiation. (2) \textit{Runtime Isolation.} Processes inside the enclave are protectively isolated from the software running outside. Specifically, the enclave prevents higher-privilege processes and outside operating system code from falsifying the execution of the loaded code. (3) \textit{Sealing identity technique.} SGX offers a sealing identity technique, where enclave data is allowed to be stored in untrusted disk space. The private sealing key is derived from the same platform key, which enables data sharing across different enclaves on the same platform.
\smallskip
\noindent\textbf{Payment Delegation.} Payment delegation plays a crucial role in e-commerce activities, and it has been comprehensively studied for decades. Widely adopted approaches include credit cards (Visa, Mastercard, \textit{etc}.), reimbursement, and third-party platforms (like PayPal~\cite{williams2007introduction} and AliPay~\cite{guo2016ecosystem}). These schemes allow users to delegate their cash-spending capability to their own devices or other users. However, these delegation mechanisms heavily rely on a centralized party that requires a fairly great amount of trust. Decentralized cryptocurrencies, like Bitcoin \cite{nakamoto2008bitcoin} and Ethereum \cite{wood2014ethereum}, remove the role of trusted third parties, making the payment reliable and guaranteed by distributed blockchain nodes. However, such payment is time-consuming since online transactions need to be confirmed by the majority of participating nodes. Delegation provides decentralized cryptocurrencies with an efficient approach to delegating the coin owner's spending capability. Cryptocurrency delegation using SGX was first explored in \cite{matetic2018delegatee}, which focused only on credential delegation in fair exchange. Teechan \cite{lind2019teechain} provided a full-duplex payment channel framework that employs TEEs, in which the parties can pay each other without interacting with the blockchain within a bounded time. However, Teechan requires a complex setup: the parties must commit a \textit{multisig} transaction before the channel starts. In contrast, our scheme is simple and more practical.
\section{Preliminaries and Definitions}
\label{sec-prelimi}
We make use of the following notions, definitions and assumptions to construct our scheme. Details are shown as follows.
\subsection{Notions}
Let $\mathsf{\lambda}$ denote a security parameter, $\mathsf{negl(\lambda)}$ represent a negligible function and $\mathcal{A}$ refer to an adversary. $\mathsf{b_{\star}}$ and $\mathsf{c_{\star}}$ are wildcard characters, representing a balance and an encrypted balance, respectively. A full notation list is provided in Appendix~\ref{appendix:b}.
\subsection{Crypto Primitive Definitions}
\noindent \textbf{Semantically Secure Encryption.} A semantically secure encryption $\mathsf{SE}$ consists of a triple of algorithms $\mathsf{(KGen, Enc, Dec)}$ defined as follows.
\begin{itemize}
\item[-] $\mathsf{SE.KGen}(1^\lambda)$ The algorithm takes as input a security parameter $1^{\lambda}$ and generates a private key $\sk$ from the key space $\mathcal{K}$.
\item[-] $\mathsf{SE.Enc(\sk, msg)}$ The algorithm takes as input a private key $\sk$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs a ciphertext $\mathsf{ct}$.
\item[-] $\mathsf{SE.Dec(\sk,ct)}$ The algorithm takes as input a private key $\sk$ and a ciphertext $\mathsf{ct}$, and outputs $\mathsf{msg}$.
\end{itemize}
\smallskip
\noindent\textit{Correctness}. A semantically secure encryption scheme $\mathsf{SE}$ is correct if for all $\mathsf{msg} \in \mathcal{M}$,
\begin{align*}
\Pr\big[\mathsf{SE.Dec(\sk,(SE.Enc(\sk,msg))) \neq msg} \big| \mathsf{\sk \gets SE.KGen(1^\lambda)}\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $ \mathsf{negl(\lambda)}$ is a negligible function and the probability is taken over the random coins of the algorithms $\mathsf{SE.Enc}$ and $\mathsf{SE.Dec}$.
\begin{defi}[IND-CPA security of $\mathsf{SE}$]\label{secpa}
A semantically secure encryption scheme $\mathsf{SE}$ achieves Indistinguishability under Chosen-Plaintext Attack (IND-CPA) if for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\big| \Pr\big[ \mathsf{G_{\adv, SE}^{IND-CPA}(\lambda)} = 1\big] - \frac{1}{2} \big| \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv, SE}^{IND-CPA}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, SE}^{IND-CPA}(\lambda)}$}{%
\pcln \mathsf{\sk} \stackrel{\$}{\leftarrow} \mathsf{SE.KGen}(1^\lambda); \\
\pcln \mathsf{b} \stackrel{\$}{\leftarrow} \{0,1\} \\
\pcln \mathsf{m_{0},m_{1}} \gets \mathcal{A}^{\mathsf{SE}(\cdot)} \\
\pcln \mathsf{c^\star} \gets \mathsf{SE.Enc(\sk,m_b}) \\
\pcln \mathsf{b^{'}} \gets \adv^{\mathsf{SE}(\cdot)}\mathsf{(c^\star)}\\
\pcln \pcreturn \mathsf{b = b^{'}}
}
\end{pcvstack}
\end{defi}
\noindent \textbf{Signature Scheme.} A signature scheme $\mathsf{S}$ consists of the following algorithms.
\begin{itemize}
\item[-] $\mathsf{S.KeyGen}(1^\lambda)$ The algorithm takes as input security parameter $1^{\lambda}$ and generates a private signing key $\sk$ and a public verification key $\vk$.
\item[-] $\mathsf{S.Sign(\sk, msg)}$ The algorithm takes as input a signing key $\sk$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs a signature $\mathsf{\sigma}$.
\item[-] $\mathsf{S.Verify(\vk,\sigma,msg)}$ The algorithm takes as input a verification key $\vk$,
a signature $\mathsf{\sigma}$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs $1$ or $0$.
\end{itemize}
\smallskip
\noindent\textit{Correctness}. A signature scheme $\mathsf{S}$ is correct if for all $\mathsf{msg} \in \mathcal{M}$,
\begin{align*}
\Pr\big[\mathsf{S.Verify(\vk,(S.Sign(\sk, msg)),msg)} \neq 1 \big| \mathsf{(\vk,\sk) \gets S.KeyGen(1^\lambda)}\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $ \mathsf{negl(\lambda)}$ is a negligible function and the probability is taken over the random coins of the algorithms $\mathsf{S.Sign}$ and $\mathsf{S.Verify}$.
\begin{defi}[EUF-CMA security of $\mathsf{S}$]\label{seufcma}
A signature scheme $\mathsf{S}$ is called Existentially Unforgeable under Chosen Message Attack (EUF-CMA) if for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\Pr\big[ \mathsf{G_{\adv, S}^{EUF-CMA}(\lambda)} = 1\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv, S}^{EUF-CMA}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, S}^{EUF-CMA}(\lambda)}$}{%
\pcln \mathsf{(\sk, pk)} \stackrel{\$}{\leftarrow} \mathsf{S.KeyGen}(1^\lambda); \\
\pcln \mathcal{L} \gets \mathsf{S.Sign(\sk, m_{\{0,\dots,n\}})}; \\
\pcln \mathsf{(m^{\star},\sigma^{\star})} \gets \mathcal{A}^{\mathcal{O}(sk, \cdot)} \mathsf{(pk)}\\
\pcln \pcreturn \mathsf{( S.Verify(vk,\sigma^{\star}, m^{\star})} = 1) \wedge \mathsf{m^{\star} \notin \mathcal{L}}
}
\end{pcvstack}
\end{defi}
\noindent \textbf{Public Key Encryption.} A public key encryption scheme $\mathsf{PKE}$ consists of the following algorithms.
\begin{itemize}
\item[-] $\mathsf{PKE.KeyGen}(1^\lambda)$ The algorithm takes as input a security parameter $1^{\lambda}$ and generates a secret key $\sk$ and a public key $\mathsf{pk}$.
\item[-] $\mathsf{PKE.Enc(pk, msg)}$ The algorithm takes as input in a public key $\mathsf{pk}$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs a ciphertext $\mathsf{ct}$.
\item[-] $\mathsf{PKE.Dec(\sk,ct)}$ The algorithm takes as input a secret key $\sk$, a ciphertext $\mathsf{ct}$, and outputs $\mathsf{msg}$ or $\bot$.
\end{itemize}
\smallskip
\noindent\textit{Correctness}. A public key encryption scheme $\mathsf{PKE}$ is correct if for all $\mathsf{msg} \in \mathcal{M}$,
\begin{align*}
\Pr\big[\mathsf{PKE.Dec(\sk,(PKE.Enc(pk,msg))) \neq msg} \big| \mathsf{(\sk,pk) \gets PKE.KeyGen(1^\lambda)}\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $ \mathsf{negl(\lambda)}$ is a negligible function and the probability is taken over the random coins of the algorithms $\mathsf{PKE.KeyGen}$ and $\mathsf{PKE.Enc}$.
\begin{defi}[IND-CCA2 security of $\mathsf{PKE}$]\label{ccapke}
A PKE scheme $\mathsf{PKE}$ is said to have Indistinguishability under Adaptively Chosen Ciphertext Attack (IND-CCA2) if for all PPT adversaries that never query the decryption oracle on the challenge ciphertext $\mathsf{c^\star}$, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\Pr\big[ \mathsf{G_{\adv,PKE}^{IND-CCA2}(\lambda)} = 1\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv,PKE}^{IND-CCA2}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, PKE}^{IND-CCA2}(\lambda)}$}{%
\pcln \mathsf{(\sk, pk)} \stackrel{\$}{\leftarrow} \mathsf{PKE.KeyGen}(1^\lambda); \\
\pcln \mathsf{b} \stackrel{\$}{\leftarrow} \{0,1\} \\
\pcln \mathsf{m_{0},m_{1}} \gets \mathcal{A}^{\mathsf{PKE.Dec}(\sk,\cdot)} \\
\pcln \mathsf{c^\star} \gets \mathsf{PKE.Enc(pk,m_b}) \\
\pcln \mathsf{b^{'}} \gets \adv^{\mathsf{PKE.Dec}(\sk,\cdot)}\mathsf{(c^\star)}\\
\pcln \pcreturn \mathsf{b = b^{'}}
}
\end{pcvstack}
\end{defi}
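\noindent For concreteness, the three building blocks above can be instantiated with off-the-shelf constructions. The sketch below is an illustration only (not the instantiation mandated by this paper) and uses the pyca/cryptography package: Fernet as the semantically secure symmetric scheme, Ed25519 as the EUF-CMA signature, and RSA-OAEP as the public-key encryption.
\begin{verbatim}
# pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 \
    import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# SE = (KGen, Enc, Dec): symmetric authenticated encryption
se_key = Fernet.generate_key()
assert Fernet(se_key).decrypt(Fernet(se_key).encrypt(b"m")) == b"m"

# S = (KeyGen, Sign, Verify): Ed25519 signatures
sk = Ed25519PrivateKey.generate()
sk.public_key().verify(sk.sign(b"m"), b"m")  # raises if forged

# PKE = (KeyGen, Enc, Dec): RSA with OAEP padding
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
rsa_sk = rsa.generate_private_key(public_exponent=65537,
                                  key_size=2048)
ct = rsa_sk.public_key().encrypt(b"m", oaep)
assert rsa_sk.decrypt(ct, oaep) == b"m"
\end{verbatim}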
\subsection{Secure Hardware}
In our scheme, parties will have access to TEEs, which serve as isolated environments to guarantee the confidentiality and integrity of the code and data inside. To capture the secure functionality of TEEs, inspired by~\cite{fisch2017iron,barbosa2016foundations}, we define TEEs as a black-box program that provides some interfaces exposed to users. The abstraction is given as follows. Note that, due to our usage scope, we only capture the remote attestation of TEEs and refer to~\cite{fisch2017iron} for a full definition.
\begin{defi}
\label{TEEmode}
A secure hardware functionality $\mathsf{HW}$ for a class of probabilistic polynomial time (PPT) programs $\mathcal{P}$ includes the algorithms: $\mathsf{Setup}$, $\mathsf{Load}$, $\mathsf{Run}$, $\mathsf{RunQuote}$, $\mathsf{QuoteVerify}$.
\begin{itemize}
\item[-] $\mathsf{HW.Setup(1^\lambda)}:$ The algorithm takes as input a security parameter $\lambda$, and outputs the secret key $\mathsf{sk_{quote}}$ and public parameters $\mathsf{pms}$.
\item[-] $\mathsf{HW.Load(pms}, P):$ The algorithm loads a stateful program $P$ into an enclave. It takes as input a program $P \in \mathcal{P}$ and $\mathsf{pms}$, and outputs a new enclave handle $\mathsf{hdl}$.
\item[-] $\mathsf{HW.Run(hdl,in)}:$ The algorithm runs enclave. It inputs a handle $\mathsf{hdl}$ that relates to an enclave (running program $P$) and an input $\mathsf{in}$, and outputs execution results $\mathsf{out}$.
\item[-] $\mathsf{HW.RunQuote(hdl, in)}:$ The algorithm executes programs in an enclave and generates an attestation quote. It takes as input $\mathsf{hdl}$ and $\mathsf{in}$, and executes $P$ on $\mathsf{in}$. Then, it outputs $\mathsf{quote = (hdl,tag_P, in, out, \sigma)}$, where $\mathsf{tag_P}$ is a measurement to identify the program running inside an enclave and $\sigma$ is a corresponding signature.
\item[-] $\mathsf{HW.QuoteVerify(pms,quote)}:$ The algorithm verifies the quote. It first executes $P$ on $\mathsf{in}$ to get $\mathsf{out}$. Then, it takes as input $\mathsf{pms}$, $\mathsf{quote = (hdl,tag_P,in,out,\sigma)}$, and outputs $\mathsf{1}$ if the signature $\sigma$ is correct. Otherwise, it outputs $\mathsf{0}$.
\end{itemize}
\end{defi}
\smallskip
\noindent\textit{Correctness}. The $\mathsf{HW}$ scheme is correct if the following properties hold for all programs $P \in \mathcal{P}$ and all inputs $\mathsf{in}$:
\begin{itemize}
\item Correctness of $\mathsf{HW.Run}$: for any specific program $P \in \mathcal{P}$, the output of $\mathsf{HW.Run(hdl,in)}$ is deterministic.
\item Correctness of $\mathsf{RunQuote}$ and $\mathsf{QuoteVerify}$:
\begin{align*}
\Pr[\mathsf{QuoteVerify(pms, RunQuote}
(\mathsf{hdl}, \mathsf{in})) \neq 1] \leq \mathsf{negl(\lambda).}\\
\end{align*}
\end{itemize}
Remote attestation in TEEs provides functionality for verifying the execution and corresponding output of a certain code run inside the enclave by using a signature-based quote. Thus, the remote attestation unforgeability security~\cite{fisch2017iron} is defined similarly to the unforgeability of a signature scheme.
\begin{defi}[Remote Attestation Unforgeability (RemAttUnf)]
\label{remoteAttestation} A $\mathsf{HW}$ scheme is RemAttUnf secure if for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\Pr\big[ \mathsf{G_{\adv, S}^{RemAttUnf}(\lambda)} = 1\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv, HW}^{RemAttUnf}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, HW}^{RemAttUnf}(\lambda)}$}{%
\pcln \mathsf{pms} \gets \mathsf{HW.Setup}(1^\lambda); \\
\pcln \mathsf{hdl} \gets \mathsf{HW.Load} (\mathsf{pms},P); \\
\pcln \mathcal{Q} \gets \mathsf{HW.RunQuote (hdl, in_{\{0,\dots,n\}})}; \\
\pcln \mathsf{(in^{\star}, quote^{\star})} \gets \mathcal{A}^{\mathcal{O}(hdl,\cdot)} \mathsf{(pms)}\\
\pcln \pcreturn \mathsf{( HW.QuoteVerify(pms, quote^{\star})} = 1) \wedge \mathsf{quote^{\star} \notin \mathcal{Q}}
}
\end{pcvstack}
\end{defi}
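\noindent To make the abstraction concrete, the following Python mock simulates the five interfaces purely in software. It is an illustration only: it offers no real isolation, an Ed25519 key stands in for $\mathsf{sk_{quote}}$, and the measurement $\mathsf{tag_P}$ is simplified to the program's name.
\begin{verbatim}
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 \
    import Ed25519PrivateKey

class HWMock:
    """Software simulation of HW; not a real TEE."""
    def __init__(self):
        self._programs = {}

    def setup(self):
        self._sk_quote = Ed25519PrivateKey.generate()
        return self._sk_quote.public_key()      # pms

    def load(self, pms, program):
        hdl = secrets.token_hex(8)
        self._programs[hdl] = program
        return hdl

    def run(self, hdl, inp):
        return self._programs[hdl](inp)

    def run_quote(self, hdl, inp):
        out = self.run(hdl, inp)
        tag_p = self._programs[hdl].__name__    # simplified tag_P
        body = repr((hdl, tag_p, inp, out)).encode()
        return (hdl, tag_p, inp, out, self._sk_quote.sign(body))

    @staticmethod
    def quote_verify(pms, quote):
        hdl, tag_p, inp, out, sig = quote
        try:
            pms.verify(sig, repr((hdl, tag_p, inp, out)).encode())
            return 1
        except InvalidSignature:
            return 0
\end{verbatim}
After \texttt{hw = HWMock(); pms = hw.setup(); hdl = hw.load(pms, str.upper)}, the call \texttt{HWMock.quote\_verify(pms, hw.run\_quote(hdl, "in"))} returns \texttt{1}, mirroring the correctness condition of $\mathsf{RunQuote}$ and $\mathsf{QuoteVerify}$ above.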
\section{DelegaCoin}
\label{sec-design}
In DelegaCoin, three types of entities are involved: the coin owner (or delegator) $\mathcal{O}$, the coin delegatee $\mathcal{D}$, and the blockchain $\mathcal{B}$ (see Figure~\ref{size}). The main idea behind DelegaCoin is to exploit TEEs as trusted agents between the coin owner and the coin delegatee. TEEs are used to maintain delegation policies and ensure faithful executions of the delegation protocol. In particular, TEEs guarantee that the coin owner (either honest or malicious) cannot arbitrarily spend the delegated coins. The workflow is described as follows. Firstly, both $\mathcal{O}$ and $\mathcal{D}$ initialize and run their enclaves, and the owner $\mathcal{O}$'s enclave generates an address $\mathsf{addr}$ for further transactions with a private key maintained internally. Next, $\mathcal{O}$ deploys delegation policies into the owner $\mathcal{O}$'s enclave and deposits the coins to the address $\mathsf{addr}$. Then, $\mathcal{O}$ delegates the coins to $\mathcal{D}$ by triggering the execution of the delegation inside the enclave. Finally, $\mathcal{D}$ spends the delegated transaction by forwarding it to
the blockchain network $\mathcal{B}$. Note that the enclaves in our scheme are decentralized, meaning that each of $\mathcal{O}$ and $\mathcal{D}$ has its own enclave without depending on a centralized agent, which satisfies the requirements of current cryptocurrency systems.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.65\textwidth]{image/flow.png}
\caption{DelegaCoin Workflow}
\label{size}
\end{figure}
\subsection{System Framework}
\smallskip
\noindent\textbf{System Setup.} In this phase, the coin owner $\mathcal{O}$ and the delegatee $\mathcal{D}$ initialize their TEEs to provide environments for the operations with respect to the further delegation.
\begin{itemize}
\item \textit{Negotiation.} $\mathsf{pms} \gets \mathsf{ParamGen(1^\lambda)}$: $\mathcal{O}$ agrees with $\mathcal{D}$ on the pre-shared information. Here, $\mathsf{\lambda}$ is a security parameter.
\item \textit{Enclave Initiation.} $\mathsf{hdl}_\mathcal{O},\mathsf{hdl}_\mathcal{D} \gets \mathsf{EncvInit(1^\lambda,pms)}$: $\mathcal{O}$ and $\mathcal{D}$ initialize the enclave \textit{E}$_{\mathcal{O}}$ and \textit{E}$_{\mathcal{D}}$ with outputting the enclave handles $\mathsf{hdl}_\mathcal{O}$ and $\mathsf{hdl}_\mathcal{D}$.
\item \textit{Key Generation.} $\mathsf{(pk_{Tx},sk_{Tx}),(pk_{\mathcal{O}},sk_{\mathcal{O}}), key_{seal}} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}},1^\lambda)$ and
$\mathsf{(pk_{\mathcal{D}}},\\ \mathsf{sk_{\mathcal{D}}}),(\mathsf{vk_{sign}}, \mathsf{sk_{sign}}), \mathsf{r} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{D}}},1^\lambda)$: $\mathcal{O}$ and $\mathcal{D}$ run the enclaves \textit{E}$_{\mathcal{O}}$ and \textit{E}$_{\mathcal{D}}$ to create their internal keys. Key pair $\mathsf{(pk_{Tx},sk_{Tx})}$ is used for transaction generation. Key pairs $(\mathsf{pk_{\mathcal{O}},sk_{\mathcal{O}}})$ and $(\mathsf{pk_{\mathcal{D}},sk_{\mathcal{D}}})$ are used for remote attestation, while $\mathsf{key_{seal}}$ is a sealing key used to export the state to trusted storage. Key pair $(\mathsf{vk_{sign}}, \mathsf{sk_{sign}})$ is used to identify a specific delegatee, while $\mathsf{r}$ is a private key for transaction encryption.
\item \textit{Quote Generation.} $\mathsf{ quote \gets QuoGen^{TEE}(\mathsf{sk_{\mathcal{O}}}, \mathsf{vk_{sign}}, pms)}$: $\mathcal{O}$ generates a $\mathsf{quote}$ for requesting an encrypted symmetric encryption key from $\mathcal{D}$.
\item \textit{Key Provision.} $\mathsf{ ct_{r} \gets Provision^{TEE}(quote,\mathsf{sk_{sign}}, \mathsf{pk_{\mathcal{O}}}, pms)}$: $\mathcal{O}$ proves to $\mathcal{D}$ that $\textit{E}_{\mathcal{O}}$ has been instantiated with a $\mathsf{quote}$ to request an encrypted symmetric encryption key $\mathsf{ct_{r}}$. The symmetric encryption is used to encrypt the messages inside TEEs.
\item \textit{Key Extraction.} $\mathsf{ r \gets Extract^{TEE}(\mathsf{sk_{\mathcal{O}}}, ct_{r})}$: $\mathcal{O}$ extracts a symmetric encryption key $\mathsf{r}$ from $\mathsf{ct_{r}}$ using $\mathsf{sk_{\mathcal{O}}}$.
\item \textit{State Retrieval.} $\mathsf{b_{init} = Dec^{TEE}(key_{seal}, c_{init})}$: Encrypted states are read back by the enclave \textit{E}$_{\mathcal{O}}$ under $\mathsf{key_{seal}}$, where $\mathsf{b_{init}}$ is the initial balance and $\mathsf{c_{init}}$ is the initial encrypted balance. This step guards against unexpected failures that may destroy the state in the TEE memory.
\end{itemize}
\smallskip
\noindent\textbf{Coin Deposit.} The enclave \textit{E}$_{\mathcal{O}}$ generates a deposit address from the public key $\mathsf{pk_{Tx}}$, while the corresponding private key $\mathsf{sk_{Tx}}$ never leaves the enclave. Afterwards, $\mathcal{O}$ sends coins to this address in the form of fund deposits.
\begin{itemize}
\item \textit{Address Creation.} $\mathsf{addr} \gets \mathsf{AddrGen^{TEE}}(1^\lambda,\mathsf{pk_{Tx}})$: $\mathcal{O}$ calls \textit{E}$_{\mathcal{O}}$ to generate a transaction address $\mathsf{addr}$. The private key $\mathsf{sk_{Tx}}$ of $\mathsf{addr}$ is secretly stored inside TEEs and is generated by an internal pseudo-random number.
\item \textit{Coin Deposit.} $\mathsf{ b_{deposit} = Update^{B}(addr,b_{init})}$: $\mathcal{O}$ generates an arbitrary transaction and transfers some coins to $\mathsf{addr}$ as the fund deposits.
\end{itemize}
\smallskip
\noindent\textbf{Coin Delegation.} In this phase, neither $\mathcal{O}$ nor $\mathcal{D}$ interacts with the blockchain. $\mathcal{O}$ can instantly complete the coin delegation through offline transactions.
\begin{itemize}
\item \textit{Balance Update.} $\mathsf{b_{update} \gets Update^{TEE}(b_{deposit},b_{Tx})}$: \textit{E}$_{\mathcal{O}}$ checks the current balance to ensure that it is sufficient for the deduction. Then, \textit{E}$_{\mathcal{O}}$ updates the balance.
\item \textit{Signature Generation.} $\mathsf{\sigma_{Tx}} \gets \mathsf{TranSign^{TEE}(\mathsf{sk_{Tx}},\mathsf{addr},b_{Tx})}$: \textit{E}$_{\mathcal{O}}$ generates a valid signature $\mathsf{\sigma_{Tx}}$.
\item \textit{Transaction Generation.} $\mathsf{Tx} \gets \mathsf{TranGen^{TEE}(\mathsf{addr},b_{Tx},\mathsf{\sigma_{Tx}})}$: \textit{E}$_{\mathcal{O}}$ generates a transaction $\mathsf{Tx}$ using $\mathsf{\sigma_{Tx}}$.
\item \textit{Coin Delegation.} $\mathsf{ct_{tx}} \gets \mathsf{TranEnc^{TEE}}(\mathsf{r},\mathsf{Tx})$: $\mathcal{O}$ sends encrypted transaction $\mathsf{ct_{tx}}$ to $\mathcal{D}$.
\item \textit{State Seal.} $\mathsf{c_{update} \gets Enc^{TEE}(key_{seal},b_{update})}$: Once completing the delegation, the records $\mathsf{c_{update}}$ are permanently stored outside the enclave. If any abort or halt happens, a re-initiated enclave starts to reload the missing information.
\end{itemize}
All the algorithms in the step of \textbf{Coin Delegation} must be run as an atomic operation, meaning that either all algorithms finish or none of them finish.
A hardware Root of Trust can guarantee this, and we refer to~\cite{costan2016intel} for more detail.
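\noindent Under the illustrative primitives sketched earlier (Fernet standing in for the symmetric schemes), the four delegation algorithms can be summarized as one routine; \texttt{sign\_tx} abstracts $\mathsf{TranSign^{TEE}}$, and the atomicity itself, enforced by the hardware Root of Trust, is not modeled here.
\begin{verbatim}
from cryptography.fernet import Fernet

def delegate_coins(balance, amount, addr, sign_tx,
                   r_key, seal_key):
    """One atomic Coin Delegation step inside the owner's
    enclave; r_key and seal_key are Fernet keys standing in
    for r and key_seal."""
    if amount > balance:            # Balance Update: check first
        raise ValueError("insufficient balance for delegation")
    b_update = balance - amount
    sigma_tx = sign_tx(addr, amount)        # Signature Generation
    tx = repr({"addr": addr, "amount": amount,
               "sig": sigma_tx}).encode()   # Transaction Generation
    ct_tx = Fernet(r_key).encrypt(tx)       # Coin Delegation
    c_update = Fernet(seal_key).encrypt(    # State Seal
        str(b_update).encode())
    return ct_tx, c_update
\end{verbatim}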
\smallskip
\noindent\textbf{Coin Spend.} $\mathsf{Tx} \gets \mathsf{TranDec^{TEE}(r,\mathsf{ct_{tx}})}$: $\mathcal{D}$ decrypts $\mathsf{ct_{tx}}$ with $\mathsf{r}$, and then spends $\mathsf{Tx}$ by forwarding it to the blockchain network.
\smallskip
\noindent\textit{Correctness}. The DelegaCoin scheme is correct if the following properties hold:
For all $\mathsf{Tx}$, $\mathsf{b_{deposit}}$, $\mathsf{b_{update}}$ and $\mathsf{b_{Tx}}$.
\begin{itemize}
\item Correctness of $\mathsf{Update}$:
$$\Pr \left[\mathsf{b_{Tx} \neq (b_{deposit} - b_{update})}\right] \leq \mathsf{negl(\lambda)}.$$
\item Correctness of $\mathsf{Seal}$:
$$\Pr[\mathsf{Dec^{TEE}(key_{seal},Enc^{TEE}(key_{seal}, b_{init})) \neq b_{init}}] \leq \mathsf{negl(\lambda)}.$$
\item Correctness of $\mathsf{Delegation}$:
\begin{align*}
\Pr[\mathsf{TranDec^{TEE}(r, TranEnc^{TEE}}
(\mathsf{r}, \mathsf{Tx})) \neq \mathsf{Tx}] \leq \mathsf{negl(\lambda).}\\
\end{align*}
\end{itemize}
\subsection{Oracles for Security Definitions}
\label{oracles}
We now define oracles to simulate an honest owner and delegatee for further security definitions and proofs. Each oracle maintains a series of (initially empty) sets $\mathcal{R}_1$, $\mathcal{R}_2$ and $\mathcal{C}$ which will be used later. Here, we use $\mathsf{(instruction; parameter)}$ to denote both the instructions and inputs of oracles.
\smallskip
\noindent\textbf{Honest Owner Oracle $\mathsf{O}^{\mathsf{owner}}:$} This oracle gives the adversary access to honest owners. An adversary $\mathcal{A}$ can obtain newly delegated transactions or sealed storage with his customized inputs. The oracle provides the following interfaces.
\begin{itemize}
\item[-] On input $( \mathsf{signature\; creation}; \mathsf{addr})$, the oracle checks whether a tuple $(\mathsf{addr},\mathsf{\sigma_{Tx}}) \in \mathcal{R}_1$ exists, where $\mathsf{addr}$ is an input of transactions. If successful, the oracle returns $\mathsf{\sigma_{Tx}}$ to $\mathcal{A}$; otherwise, it computes $\mathsf{\sigma_{Tx}} \gets \mathsf{TranSign^{TEE}(\mathsf{sk_{Tx}}, \mathsf{addr},b_{Tx})}$ and adds $(\mathsf{addr},\mathsf{\sigma_{Tx}})$ to $\mathcal{R}_1$, and then returns $\mathsf{\sigma_{Tx}}$ to $\mathcal{A}$.
\item[-] On input $(\mathsf{quote\; generation} ;\mathsf{vk_{sign}})$, the oracle checks if a tuple $(\mathsf{vk_{sign}},\mathsf{quote}) \in \mathcal{R}_2$ exists. If successful, the oracle returns $\mathsf{quote}$ to $\mathcal{A}$. Otherwise, it computes $\mathsf{quote} \gets \mathsf{QuoGen^{TEE}(sk_{\mathcal{O}}, vk_{sign}, pms})$ and adds $(\mathsf{vk_{sign}},\mathsf{quote})$ to $\mathcal{R}_2$, and then returns $\mathsf{quote}$ to $\mathcal{A}$.
\end{itemize}
\noindent\textbf{Honest Delegatee Oracle $\mathsf{O}^{\mathsf{delegatee}}:$} This oracle gives the adversary access to honest delegatees. The oracle provides the following interfaces.
\begin{itemize}
\item[-] On input $(\mathsf{key \;provision} ;\mathsf{quote})$, the oracle checks whether a tuple $(\mathsf{quote},\mathsf{ct_{r}}) \in \mathcal{C}$ exists. If successful, the oracle returns $\mathsf{ct_{r}}$ to $\mathcal{A}$; otherwise, it computes \\ $\mathsf{ ct_{r} \gets Provision^{TEE}(quote,sk_{sign}, \mathsf{pk_{\mathcal{O}}}, pms)}$, adds $(\mathsf{quote},\mathsf{ct_{r}})$ to $\mathcal{C}$, and then returns $(\mathsf{quote},\mathsf{ct_{r}})$ to $\mathcal{A}$.
\end{itemize}
\begin{figure}[htb!]
\centering
\caption{Oracles Interaction Diagram}
\begin{bbrenv}{A}
\begin{bbrbox}[name=Real experiment]
\pseudocode{
\text{DelegaCoin protocol}
}
\begin{bbrenv}{B}
\begin{bbrbox}[name=Adversary $\mathcal{A}$,minheight=3cm,xshift=3cm]
\end{bbrbox}
\end{bbrenv}
\end{bbrbox}
\bbrinput{input}
\bbroutput{output}
\begin{bbroracle}{OraA}
\begin{bbrbox}[name=Oracle $\mathsf{O}^{\mathsf{owner}}$,minheight=1.2cm,minwidth=2.3cm]
\end{bbrbox}
\end{bbroracle}
\bbroracleqryto{bottom=$query$}
\bbroracleqryfrom{bottom=$reply$}
\begin{bbroracle}{OraB}
\begin{bbrbox}[name=Oracle $\mathsf{O}^{\mathsf{delegatee}}$,minheight=1.7cm,minwidth=2.3cm]
\end{bbrbox}
\end{bbroracle}
\bbroracleqryto{bottom=$query$}
\bbroracleqryfrom{bottom=$reply$}
\begin{bbrbox}[name= $\mathsf{HW}$ Oracle ,minheight=4cm,xshift=11.3cm,minwidth=1.7cm]
\end{bbrbox}
\bbrmsgto{}
\bbrmsgfrom{}
\end{bbrenv}
\end{figure}
\noindent\textbf{HW Oracle:} This oracle gives the adversary access to honest hardware.
The oracle provides the interfaces defined in Definition~\ref{TEEmode}. Note that, to ensure that anything $\mathcal{A}$ sees in the real world can be simulated in the ideal experiment, we require that the adversary accesses the \textbf{$\mathsf{HW}$ Oracle} through $\mathsf{O}^{\mathsf{delegatee}}$ and $\mathsf{O}^{\mathsf{owner}}$ rather than interacting with the $\mathsf{HW}$ Oracle directly.
\subsection{Threat Model and Assumptions}
As for the involved entities, we assume that $\mathcal{O}$ attempts to delegate some coins to the delegatee. Each party may potentially be malicious. $\mathcal{O}$ may maliciously delegate an exceptional transaction, for example by sending the same transaction to multiple delegatees or by spending the delegated transactions before $\mathcal{D}$ spends them. $\mathcal{D}$ may also attempt to assemble an invalid transaction or double-spend the delegated coins. We also assume the blockchain $\mathcal{B}$ is robust and publicly accessible.
With regard to devices, we assume that TEEs are secure, which means that an adversary cannot access the enclave runtime memory and their hardware-related keys (\textit{e.g.,} sealing key or attestation key). In contrast, we do not assume the components outside TEEs are trusted. For example, the adversary may control the operating system or high-level privileged software.
\subsection{Security Goals.}
DelegaCoin aims to employ TEEs to provide a secure delegatable cryptocurrency system. In brief, TEEs prevent malicious delegation in three aspects: (1) The private key of a delegated transaction and the delegated transaction itself are protected against the public. If an adversary learns any knowledge about the private key or the delegated transaction, she may spend the coins before the delegatee uses them. (2) The delegations are correctly executed; in particular, the spendable amount of delegated coins must be less than (or equal to) the original coins. (3) The delegation records are securely stored to guarantee consistency in the face of accidental TEE failures or malicious TEE compromises. DelegaCoin is secure if adversaries cannot learn any knowledge about the private key, the delegated transaction, or the sealed storage.
To capture these security properties, we formalize our system through a game inspired by \cite{bernhard2015sok}. In our game, a PPT adversary attempts to distinguish between a real world and a simulated (ideal) world. In the real world, the DelegaCoin algorithms work as defined in the construction. The adversary is allowed to access the transaction-related secret messages created by honest users through the oracles of Section~\ref{oracles}. Obviously, the ideal world does not leak any useful information to the adversary. Since we model the additional information explicitly to respond to the adversary, we construct a polynomial-time simulator $\mathcal{S}$ that can \textit{fake} the additional information corresponding to the real result, but with respect to fake TEEs. Thus, a universal oracle $\mathcal{U}(\cdot)$ in the ideal world is introduced to simulate the answers to the oracle queries that $\adv$ makes in the real world. We give a formal model as follows, in which the two experiments begin with the same setup assumptions.
\begin{defi}[Security]
DelegaCoin is simulation-secure if for all PPT adversaries $\mathcal{A}$, there exist a stateful PPT simulator $\mathcal{S}$ and a negligible function $ \mathsf{negl(\lambda)}$ such that the probability that $\mathcal{A}$ distinguishes between $\mathsf{Exp_{\adv,{DelegaCoin}}^{real}(\lambda)}$ and $\mathsf{Exp_{\adv,{DelegaCoin}}^{ideal}(\lambda)}$ is negligible, i.e.,
\begin{eqnarray}\nonumber
\left|\mathsf{Pr[Exp_{\adv,{DelegaCoin}}^{real}(\lambda)} = 1 ] - \mathsf{Pr[Exp_{\adv,{DelegaCoin}}^{ideal}(\lambda)} = 1 ] \right| \leq \mathsf{negl(\lambda)}.
\end{eqnarray}
\end{defi}
\begin{figure}[htb]
\begin{pchstack}[center]
\resizebox{1.1\linewidth}{!}{
\fbox{
\begin{pcvstack}
\procedure{$\mathsf{Exp_{\adv,{DelegaCoin}}^{real}(\lambda)}$}{%
\pcln \mathsf{pms} \gets \mathsf{ParamGen(1^\lambda)} \\
\pcln \mathsf{hdl_{\mathcal{O}}}, \mathsf{hdl_{\mathcal{D}}} \gets \mathsf{EncvInit(1^\lambda,pms)} \\
\pcln \mathsf{(pk_{Tx}, sk_{Tx}), (\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}), key_{seal}} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}},1^\lambda) \\
\pcln \mathsf{(pk_{\mathcal{D}},sk_{\mathcal{D}})}, (\mathsf{vk_{sign}}, \mathsf{sk_{sign}}), \mathsf{r} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{D}}},1^\lambda) \\
\pcln \mathsf{ quote \gets \mathcal{A}(\mathsf{hdl_{\mathcal{O}}}, vk_{sign}, pms)} \\
\pcln \mathsf{ ct_{r} \gets \adv^{Provision^{TEE}(sk_{sign})}(\mathsf{hdl_{\mathcal{D}}}, quote, \mathsf{pk_{\mathcal{O}}}, pms)} \\
\pcln \mathsf{ r \gets \adv^{Extract^{TEE}(sk_{\mathcal{O}})}(\mathsf{hdl_{\mathcal{O}}}, ct_{r})}
\pclb
\pcintertext[dotted]{Setup Completed}
\pcln \mathsf{b_{init} = Dec^{TEE}(\mathsf{hdl_{\mathcal{O}}},key_{seal}, c_{init})}\\
\pcln \mathsf{addr} \gets \mathsf{AddrGen^{TEE}}(1^\lambda,\mathsf{pk_{Tx}})\\
\pcln \mathsf{ b_{deposit} = Update^{B}(\mathsf{addr},b_{init})}\\
\pcln \mathsf{b_{update} \gets Update^{TEE}(\mathsf{hdl_{\mathcal{O}}},b_{deposit},b_{Tx})} \\
\pcln \mathsf{\sigma_{Tx}} \gets \adv^{\mathsf{TranSign^{TEE}}(\mathsf{sk_{Tx}})}(\mathsf{hdl_{\mathcal{O}}}, \mathsf{addr,b_{Tx}}) \\
\pcln \mathsf{Tx} \gets \mathsf{TranGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}},\mathsf{addr,b_{Tx},\mathsf{\sigma_{Tx}}} )\\
\pcln \mathsf{ct_{tx}} \gets \adv^{\mathsf{TranEnc^{TEE}(r)}}(\mathsf{hdl_{\mathcal{O}}}, \mathsf{Tx}) \\
\pcln \mathsf{c_{update} = \adv^{Enc^{TEE}(key_{seal})}(\mathsf{hdl_{\mathcal{O}}},b_{update})}
\pclb
\pcintertext[dotted]{Delegation Completed}
\pcln \mathsf{Tx} \gets \mathsf{TranDec^{TEE}}(\mathsf{hdl_{\mathcal{D}}},\mathsf{r},\mathsf{ct_{tx}}) \\
\pcln \pcreturn (\mathsf{Tx},\mathsf{c_{update}})}
\end{pcvstack}
\pchspace
\procedure{$\mathsf{Exp_{\adv,{DelegaCoin}}^{ideal}(\lambda)}$}{%
\pcln \mathsf{pms} \gets \mathsf{ParamGen(1^\lambda)} \\
\pcln \mathsf{hdl_{\mathcal{O}}^\star}, \mathsf{hdl_{\mathcal{D}}^\star} \gets \mathsf{\mathcal{S}(1^\lambda,pms)} \\
\pcln \mathsf{(pk_{Tx},sk_{Tx}), (\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}), key_{seal}} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}^{\star}},1^\lambda) \\
\pcln \mathsf{(pk_{\mathcal{D}},sk_{\mathcal{D}})},(\mathsf{vk_{sign}}, \mathsf{sk_{sign}}), \mathsf{r} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{D}}^{\star}},1^\lambda) \\
\pcln \mathsf{quote \gets \mathcal{A}(\mathsf{hdl_{\mathcal{O}}^{\star}}, vk_{sign}, pms)} \\
\pcln \mathsf{ ct_{r} \gets \adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{D}}^{\star}}, quote, \mathsf{pk_{\mathcal{O}}}, pms)} \\
\pcln \mathsf{ r \gets \adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^{\star}}, ct_{r})}
\pclb
\pcintertext[dotted]{Setup Completed}
\pcln \mathsf{b_{init}} \gets \mathsf{\mathcal{S}(\mathsf{hdl_{\mathcal{O}}},key_{seal}, c_{init})} \\
\pcln \mathsf{addr} \gets \mathcal{S}(1^\lambda,\mathsf{pk_{Tx}})\\
\pcln \mathsf{b_{deposit} = \mathcal{S}(\mathsf{addr},b_{init})}\\
\pcln \mathsf{b_{update} \gets \mathcal{S}(\mathsf{hdl_{\mathcal{O}}^\star},b_{deposit},1^{|b_{Tx}|})} \\
\pcln \mathsf{\sigma_{Tx}} \gets \mathsf{\adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^\star}, \mathsf{addr},b_{Tx})} \\
\pcln \mathsf{Tx} \gets \mathcal{S}(\mathsf{hdl_{\mathcal{O}}^\star},\mathsf{addr}, \mathsf{1^{|b_{Tx}}|, \mathsf{\sigma_{Tx}}}) \\
\pcln \mathsf{ct_{tx}} \gets \mathsf{\adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^\star}, 1^{|Tx|})} \\
\pcln \mathsf{c_{update} = \adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^\star}, 1^{|b_{update}|})}
\pclb
\pcintertext[dotted]{Delegation Completed}
\pcln \mathsf{Tx} \gets \mathcal{S}(\mathsf{hdl_{\mathcal{D}}^\star},\mathsf{r},\mathsf{ct_{tx}}) \\
\pcln \pcreturn (\mathsf{Tx},\mathsf{c_{update}})
}}
}
\end{pchstack}
\end{figure}
\section{Formal Protocols}
\label{sec-formal}
In this section, we present a formal model of our electronic cash system by utilizing the syntax of the $\mathsf{HW}$ model. In particular, we model the interactions with Intel SGX enclaves as calls to the $\mathsf{HW}$ functionality defined in Definition~\ref{TEEmode}. The formal protocols are provided as follows.
\smallskip
The owner enclave program $\mathsf{P_{\mathcal{O}}}$ is defined as follows. The value $\mathsf{tag_{P}}$ is a measurement of the program $\mathsf{P_{\mathcal{O}}}$, and it is hardcoded in the static data of $\mathsf{P_{\mathcal{O}}}$. Let $\mathsf{state}_{\mathcal{O}}$ denote an internal state variable.
$\mathsf{P_{\mathcal{O}}}$:
\begin{itemize}
\item On input (``init setup'', $\mathsf{sid, vk_{sign}}$\footnote{We assume that the combination ($\mathsf{sid}$,$\mathsf{vk_{sign}}$), represented as the identity of a delegatee, has already been distributed before the system setup. }):
\begin{itemize}
\item[-] Run $(\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}) \gets \mathsf{\mathsf{PKE}.KeyGen}(1^\lambda)$ and $\mathsf{key_{seal}}\footnote{Multiple enclaves from the same signing authority can derive the same key, since the seal key is based on the enclave's certificate-based identity.} \gets \mathsf{\mathsf{SE}.KeyGen}(1^\lambda)$.
\item[-] Update $\mathsf{state}_{\mathcal{O}}$ to $(\mathsf{sk_{\mathcal{O}}, vk_{sign}})$ and output $(\mathsf{pk_{\mathcal{O}}, sid, vk_{sign}})$.
\end{itemize}
\item On input (``complete setup'', $\mathsf{sid, ct_{r}, \sigma_{r}})$:
\begin{itemize}
\item[-] Look up the $\mathsf{state}_{\mathcal{O}}$ to obtain the entry $\mathsf{(sk_{\mathcal{O}}, sid, vk_{sign})}$. If no entry
exists for $\mathsf{sid}$, output $\bot$.
\item[-] Receive the $(\mathsf{sid, vk_{sign}})$ from $\mathcal{O}$ and check if $\mathsf{vk_{sign}}$ matches with the one in $\mathsf{state}_{\mathcal{O}}$. If not, output $\bot$.
\item[-] Verify signature $\mathsf{b \gets \mathsf{\mathsf{S}.Verify}(vk_{sign}, \sigma_{r}, (sid, ct_{r} ))}$.
If $\mathsf{b}$ = 0, output $\bot$.
\item[-] Run $\mathsf{r \gets \mathsf{\mathsf{PKE}.Dec}(sk_{\mathcal{O}}, ct_{r})}$.
\item[-] Add the tuple $(\mathsf{r, sid, vk_{sign}})$ to $\mathsf{state}_{\mathcal{O}}$.
\end{itemize}
\item On input (``state retrieval'', $\mathsf{sid}$):
\begin{itemize}
\item[-] Retrieve identity-balance pair ($\mathsf{sid, c_{init}}$) from the sealed storage.
\item[-] Run $\mathsf{b_{init} = \mathsf{SE}.Dec(key_{seal},c_{init})}$ and update $\mathsf{state}_{\mathcal{O}}$ to $(\mathsf{sid, b_{init}})$.
\end{itemize}
\item On input (``address generation'', $1^\lambda$):
\begin{itemize}
\item[-] Run $(\mathsf{sk_{Tx}}, \mathsf{pk_{Tx}}) \gets \mathsf{\mathsf{S}.KeyGen}(1^\lambda)$ and $\mathsf{addr} \gets \mathsf{AddrGen^{TEE}}(1^\lambda,\mathsf{pk_{Tx}})$.
\item[-] Update $(\mathsf{sk_{Tx}, addr})$ to $\mathsf{state}_{\mathcal{O}}$ and output $(\mathsf{pk_{Tx}}, \mathsf{addr})$.
\end{itemize}
\item On input (``transaction generation'', $\mathsf{addr}$ ):
\begin{itemize}
\item[-] Retrieve the private key $\mathsf{sk_{Tx}}$.
\item[-] Run $\mathsf{\sigma_{Tx}} \gets \mathsf{\mathsf{S}.Sign(sk_{Tx},\mathsf{addr}, b_{Tx})}$ and output a signature $\mathsf{\sigma_{Tx}}$.
\item[-] Run $\mathsf{Tx \gets TranGen(addr, b_{Tx}, \sigma_{Tx})}$ and update $(\mathsf{sid, Tx})$ to $\mathsf{state}_{\mathcal{O}}$.
\end{itemize}
\item On input (``state update'', $\mathsf{addr}$):
\begin{itemize}
\item[-] Check $\mathsf{b_{deposit}}$ and $\mathsf{b_{Tx}}$. If $\mathsf{b_{deposit} < b_{Tx}}$, output $\bot$.
\item[-] Run $\mathsf{b_{update} \gets Update(b_{deposit},b_{Tx})}$.
\end{itemize}
\item On input (``start delegation'', $\mathsf{addr}$):
\begin{itemize}
\item[-] Retrieve the provision private key $\mathsf{r}$ and $\mathsf{Tx}$ from $\mathsf{state}_{\mathcal{O}}$.
\item[-] Run $\mathsf{\mathsf{ct_{tx}} \gets \mathsf{SE}.\mathsf{Enc(r,Tx)}}$.
\end{itemize}
\item On input (``state seal'', $\mathsf{addr}$):
\begin{itemize}
\item[-] Run $\mathsf{\mathsf{c_{update}} = \mathsf{SE}.Enc(key_{seal},\mathsf{b_{update}})}$ and update $\mathsf{state}_{\mathcal{O}}$ to $(\mathsf{addr, b_{update}})$.
\item[-] Store $\mathsf{addr}$ and $\mathsf{c_{update}}$ to sealed storage.
\end{itemize}
\end{itemize}
\smallskip
The delegatee enclave program $\mathsf{P_{\mathcal{D}}}$ is defined as follows. The value $\mathsf{tag_{\mathcal{D}}}$ is the measurement of the program $\mathsf{P_{\mathcal{D}}}$, and it is hardcoded in the static data of $\mathsf{P_{\mathcal{D}}}$. Let $\mathsf{state}_\mathcal{D}$ denote an internal state variable. Also, the security parameter $\lambda$ is hardcoded into the program.
$\mathsf{P_{\mathcal{D}}}$:
\begin{itemize}
\item On input (``init setup'', $1^\lambda$):
\begin{itemize}
\item[-] Generate a session ID, $\mathsf{sid \gets \{0,1\}^{\lambda}}$.
\item[-] Run $(\mathsf{pk_{\mathcal{D}}}, \mathsf{sk_{\mathcal{D}}}) \gets \mathsf{\mathsf{PKE}.KeyGen}(1^\lambda)$, and $(\mathsf{vk_{sign}}, \mathsf{sk_{sign}}) \gets \mathsf{\mathsf{S}.KeyGen}(1^\lambda)$.
\item[-] Update $\mathsf{state}_\mathcal{D}$ to $(\mathsf{sk_{\mathcal{D}},sk_{sign}})$ and output $(\mathsf{sid, pk_{\mathcal{D}},vk_{sign}})$.
\end{itemize}
\item On input (``provision'', $\mathsf{quote}, \mathsf{pk_{\mathcal{O}}}, \mathsf{pms}$):
\begin{itemize}
\item[-] Parse $\mathsf{quote =(hdl_{\mathcal{O}}, tag_P, in, out, \sigma)}$ and check that the measurement in the quote equals $\mathsf{tag_{P}}$, the expected measurement of $\mathsf{P_{\mathcal{O}}}$. If not, output $\bot$.
\item[-] Parse $\mathsf{out = (sid, pk_{\mathcal{O}})}$ and run $\mathsf{b \gets HW.QuoteVerify(pms,quote)}$. If $\mathsf{b} = 0$, output $\bot$.
\item[-] Select a random number $\mathsf{r}$ and compute the algorithm
$\mathsf{ct_{r} = \mathsf{PKE}.Enc(pk_{\mathcal{O}},r)}$ and $\mathsf{\sigma_{r} = \mathsf{S}.Sign(sk_{sign}, (sid, ct_{r}))}$ and output $\mathsf{(sid, ct_{r}, \sigma_{r})}$.
\end{itemize}
\item On input (``complete delegation'', $\mathsf{ct_{tx}}$):
\begin{itemize}
\item[-] Retrieve $\mathsf{r}$ from $\mathsf{state}_{\mathcal{D}}$.
\item[-] Run $\mathsf{\mathsf{Tx} \gets \mathsf{SE}.\mathsf{Dec(r,ct_{tx})}}$.
\end{itemize}
\end{itemize}
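To make these interfaces concrete, we include a small sketch of the provision/complete-setup handshake between $\mathsf{P_{\mathcal{D}}}$ and $\mathsf{P_{\mathcal{O}}}$. It is purely illustrative and is not the prototype's SGX code: all cryptographic primitives are replaced by toy, reversible stubs (an assumption made only for readability), so that the control flow of the two programs can be traced end to end.
\begin{verbatim}
#include <iostream>
#include <string>
#include <utility>

// Toy stand-ins for PKE and S of the formal model. NOT cryptography:
// reversible placeholders that only preserve the control flow.
static std::string pke_enc(const std::string& pk, const std::string& m) {
    return pk + "|" + m;
}
static std::string pke_dec(const std::string&, const std::string& ct) {
    return ct.substr(ct.find('|') + 1);   // undo pke_enc
}
static std::string sig_sign(const std::string& sk, const std::string& m) {
    return sk + "#" + m;
}
static bool sig_verify(const std::string& vk, const std::string& sig,
                       const std::string& m) {
    return sig == vk + "#" + m;           // toy check: vk doubles as sk
}

struct DelegateeEnclave {                 // P_D: "provision"
    std::string sk_sign = "vkD";
    std::string r = "provision-key-r";
    std::pair<std::string, std::string>
    provision(const std::string& pk_O, const std::string& sid) {
        std::string ct_r = pke_enc(pk_O, r);   // ct_r = PKE.Enc(pk_O, r)
        return {ct_r, sig_sign(sk_sign, sid + ct_r)};
    }
};

struct OwnerEnclave {                     // P_O: "complete setup"
    std::string sk_O = "pkO";             // toy label matching pk_O
    std::string vk_sign = "vkD";
    std::string r;
    bool complete_setup(const std::string& sid, const std::string& ct_r,
                        const std::string& sigma_r) {
        if (!sig_verify(vk_sign, sigma_r, sid + ct_r)) return false; // abort
        r = pke_dec(sk_O, ct_r);          // recover the provision key r
        return true;
    }
};

int main() {
    DelegateeEnclave d;
    OwnerEnclave o;
    auto ct = d.provision("pkO", "sid-1");
    bool ok = o.complete_setup("sid-1", ct.first, ct.second);
    std::cout << (ok && o.r == d.r ? "r provisioned" : "abort") << "\n";
}
\end{verbatim}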
\smallskip
\noindent\hangindent 2em $\mathbf{Setup}.$ The following steps are based on the completed initialization of the programs of the delegator $\mathsf{P_{\mathcal{O}}}$ and delegatee $\mathsf{P_{\mathcal{D}}}$. The delegatee $\mathcal{D}$ runs $\mathsf{hdl_{\mathcal{D}}} \gets \mathsf{HW.Load(pms,P_{\mathcal{D}})}$ and $\mathsf{(\mathsf{vk_{sign}}, pk_{\mathcal{D}}) \gets HW.Run(hdl_{\mathcal{D}}, (\text{``init setup''}, 1^\lambda))}$. Then, $\mathcal{D}$ sends $\mathsf{vk_{sign}}$ to the delegator $\mathcal{O}$. Next, $\mathcal{O}$ runs $\mathsf{hdl}_{\mathcal{O}} \gets \mathsf{HW.Load(pms,P_{\mathcal{O}})}$ to load the handle. Meanwhile, $\mathcal{O}$
calls $\mathsf{quote \gets HW.Run\&Quote(hdl_{\mathcal{O}}, (\text{``init setup''}, \mathsf{sid, vk_{sign}}))}$, and sends a $\mathsf{quote}$ to $\mathcal{D}$. After that, $\mathcal{D}$ calls $\mathsf{(sid, ct_{r}, \sigma_{r})} \gets \mathsf{HW.Run(hdl_{\mathcal{D}}, (\text{``provision''}, \mathsf{quote,pk_{\mathcal{O}}, pms}))}$, and sends $\mathsf{(sid, ct_{r}, \sigma_{r})}$ to $\mathcal{O}$. Last, $\mathcal{O}$
calls $\mathsf{HW.Run(hdl_{\mathcal{O}}, (\text{``complete setup''}, \mathsf{sid, ct_{r}, \sigma_{r}}))}$. At the end of the setup, $\mathcal{O}$'s enclave \textit{E}$_{\mathcal{O}}$ holds the private key $\mathsf{r}$ used for transaction delegation.
\smallskip
\noindent\hangindent 2em $\mathbf{Deposit}$. $\mathcal{O}$
calls $\mathsf{c_{init} \gets HW.Run(hdl_{\mathcal{O}}, (\text{``state retrieval''}, sid))}$. If $\mathsf{c_{init}}$ does not exist or equals $0$, $\mathcal{O}$ calls
$\mathsf{addr \gets HW.Run(hdl_{\mathcal{O}}, (\text{``address generation''},1^\lambda))}$ to create a new address $\mathsf{addr}$. Then, $\mathcal{O}$ transfers some coins to $\mathsf{addr}$ through a normal blockchain transaction.
\smallskip
\noindent\hangindent 2em $\mathbf{Delegation}$. $\mathcal{O}$ firstly parses $\mathsf{hdl_{\mathcal{O}}}$ and
calls \textit{E}$_{\mathcal{O}}$. Then, \textit{E}$_{\mathcal{O}}$ retrieves the $\mathsf{addr}$. Afterwards, it calls $\mathsf{b_{update} \gets HW.Run(hdl_{\mathcal{O}}, (\text{``state update''},addr))}$. If the update algorithm returns false or fails, \textit{E}$_{\mathcal{O}}$ aborts the subsequent operations. Otherwise, it looks up the state to obtain $\mathsf{sk_{Tx}}$, runs $\mathsf{Tx \gets HW.Run(hdl_{\mathcal{O}}, (\text{``transaction generation''}, addr ))}$ and outputs a transaction $\mathsf{Tx}$. After that, the delegator's enclave \textit{E}$_{\mathcal{O}}$ retrieves $\mathsf{r}$ and runs $\mathsf{ct_{tx} \gets HW.Run(hdl_{\mathcal{O}}, (\text{``start delegation''},addr))}$. Finally, ${\mathcal{O}}$ sends $\mathsf{ct_{tx}}$ to $\mathcal{D}$.
\smallskip
\noindent\hangindent 2em $\mathbf{Spend}$. $\mathcal{D}$ parses $\mathsf{hdl_{\mathcal{D}}}$ and runs $\mathsf{Tx \gets HW.Run(hdl_{\mathcal{D}}, (\text{``complete delegation''},\mathsf{ct_{tx}}))}$. After that, $\mathcal{D}$ spends the received transaction $\mathsf{Tx}$ by forwarding it to the blockchain network. Then, a blockchain node first parses
$\mathsf{Tx = (addr,pk_{Tx},metadata,\sigma_{Tx})}$ and runs $\mathsf{b} \gets \mathsf{\mathsf{S}.Verify^{B}(pk_{Tx},\sigma_{Tx})}$. If $\mathsf{b} = 0$, it outputs $\bot$. Otherwise, the node broadcasts $\mathsf{Tx}$ to the other blockchain nodes.
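The protocol ordering across the four phases can be summarized by the following trace-style sketch (again illustrative C++: \texttt{hw\_run} is a stand-in for $\mathsf{HW.Run}$ and simply records each call; the command strings mirror the interfaces above).
\begin{verbatim}
#include <iostream>
#include <string>

// Stand-in for HW.Run: records the call and returns a placeholder output.
static std::string hw_run(const std::string& hdl, const std::string& cmd) {
    std::cout << hdl << " <- \"" << cmd << "\"\n";
    return cmd + ":out";
}

int main() {
    // Setup: provision r from E_D into E_O.
    hw_run("hdl_O", "init setup");
    hw_run("hdl_D", "provision");
    hw_run("hdl_O", "complete setup");
    // Deposit: derive addr inside E_O, then fund it on-chain (omitted).
    std::string addr = hw_run("hdl_O", "address generation");
    // Delegation: check balances, sign Tx, encrypt it under r.
    hw_run("hdl_O", "state update");
    hw_run("hdl_O", "transaction generation");
    std::string ct_tx = hw_run("hdl_O", "start delegation");
    // Spend: E_D decrypts ct_tx; D broadcasts the recovered Tx.
    std::cout << "D receives " << ct_tx << " for " << addr << "\n";
    hw_run("hdl_D", "complete delegation");
}
\end{verbatim}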
\section{Security Analysis}
\label{sec-seurity}
\begin{thm}[Security]\label{prf-consistency-tee}
Assume that $\mathsf{SE}$ is IND-CPA secure,
$\mathsf{PKE}$ is IND-CCA2 secure, $\mathsf{S}$ is EUF-CMA secure, and the TEEs are secure as in Definition~\ref{TEEmode}; then the DelegaCoin scheme is simulation-secure.
\end{thm}
Inspired by~\cite{lindell2017simulate, fisch2017iron}, we use a simulation-based paradigm for the security analysis and explain the crux of our security proof as follows. We first construct a simulator $\mathcal{S}$ that can simulate the challenge responses in the real world. It provides the adversary $\adv$ with a simulated delegated transaction, a simulated quote, and simulated sealed storage. The information that $\adv$ can obtain is merely the instruction code and the responses to the oracle queries that $\adv$ makes in the real experiment. At a high level, the proof idea is simple: $\mathcal{S}$ encrypts zeros as the challenge message. In the ideal experiment, $\mathcal{S}$ intercepts $\adv$'s queries to the user oracles and provides simulated responses. It uses its $\mathcal{U(\cdot)}$ oracle to simulate the oracles in the real world and sends the responses back to $\adv$ as the simulated oracle output. $\mathcal{U(\cdot)}$ and $\mathcal{S}$'s algorithms are described as follows.
\smallskip
\noindent\textbf{Pre-processing phase.} $\mathcal{S}$ simulates the pre-processing phase as in the real world. It firstly runs $\mathsf{ParamGen(1^\lambda)}$ and records the system parameters $\mathsf{pms}$ that are generated during the process. Then, it calls $\mathsf{EncvInit(1^\lambda,pms)}$ to create the simulated enclave instances.
$\mathcal{S}$ also creates empty lists $\mathcal{R}_1^\star$, $\mathcal{R}_2^\star$, $\mathcal{C}^\star$, $\mathcal{K}^\star$ and $\mathcal{L}^\star$ to be used later.
\smallskip
\noindent\hangindent 2em \smallskip
\noindent$\mathbf{KeyGen^{\star}(1^\lambda)}$ When $\mathcal{A}$ makes a query to the $\mathbf{KeyGen(1^\lambda)}$ oracle, $\mathcal{S}$ responds in the same way as in the
real world, except that it now stores all the queried public keys in a list $\mathcal{K}^{\star}$. That is, $\mathcal{S}$ runs the following algorithms.
\begin{itemize}
\item[-] Compute and output $(\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}),(\mathsf{pk_{Tx}}, \mathsf{sk_{Tx}}) \gets \mathsf{\mathsf{PKE}.KeyGen}(1^\lambda)$.
\item[-] Store the keys $(\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}),(\mathsf{pk_{Tx}}, \mathsf{sk_{Tx}})$ in the list $\mathcal{K}^{\star}$.
\end{itemize}
\smallskip
\noindent\hangindent 2em \smallskip
\noindent$\mathbf{Enc^{\star}(key^\star, 1^{|{msg}^\star|})}$\footnote{Here, $\mathbf{msg^{\star}}$ is a wildcard character, representing any message.} When $\mathcal{A}$ provides the challenge message $\mathsf{{msg}^\star}$ for symmetric encryption, $\mathcal{S}$ uses the following algorithm to simulate the challenge ciphertext.
\begin{itemize}
\item[-] Compute and output $\mathsf{ct^\star \gets \mathsf{SE}.Enc(key^\star, 1^{|{msg}^\star|})}$.
\item[-] Store $\mathsf{ct}^\star$ in the list $\mathcal{L}^{\star}$.
\end{itemize}
\smallskip
\noindent\hangindent 2em $\mathbf{\mathsf{O}^{owner\star}(\mathsf{signature\; creation;addr)}}.$ When $\mathcal{A}$ makes a query to the $\mathbf{\mathsf{O}^{owner}}$ oracle, $\mathcal{S}$ responds in the same way as in the real world, except that $\mathcal{S}$ now stores all the $\mathsf{addr}$ corresponding to the user's queries in a list $\mathcal{R}_1^\star$. That is, $\mathcal{S}$ runs the following algorithms.
\begin{itemize}
\item[-] Call $\mathbf{\mathsf{O}^{owner}}$ oracle with an input $( \mathsf{signature\; creation}; \mathsf{addr})$ and output $\mathsf{\sigma_{Tx}}$.
\item[-] Store $(\mathsf{addr, \sigma_{Tx}})$ in the list $\mathcal{R}_1^\star$.
\end{itemize}
\smallskip
\noindent\hangindent 2em $\mathbf{\mathsf{O}^{owner\star}(\mathsf{quote\; generation;vk_{sign})}}.$ When $\mathcal{A}$ makes a query to the $\mathbf{\mathsf{O}^{owner}}$ oracle, $\mathcal{S}$ responds in the same way as in the real world, except that $\mathcal{S}$ now stores all the $\mathsf{quote}$ corresponding to the user's queries in a list $\mathcal{R}_2^\star$. That is, $\mathcal{S}$ runs the following algorithms.
\begin{itemize}
\item[-] Call the $\mathbf{\mathsf{O}^{owner}}$ oracle with an input $(\mathsf{quote\; generation; vk_{sign}})$ and output $\mathsf{quote}$.
\item[-] Store $(\mathsf{vk_{sign}, quote})$ in the list $\mathcal{R}_2^\star$.
\end{itemize}
\smallskip
\noindent\hangindent 2em $\mathbf{\mathsf{O}^{delegatee\star}(\mathsf{key \;provision} ;\mathsf{quote})}.$ When $\mathcal{A}$ makes a query to the $\mathbf{\mathsf{O}^{delegatee}}$ oracle, $\mathcal{S}$ responds in the same way as in the real world, except that $\mathcal{S}$ now stores all the $\mathsf{quote}$ corresponding to the user's queries in a list $\mathcal{C}^\star$. That is, $\mathcal{S}$ runs the following algorithm.
\begin{itemize}
\item[-] Call $\mathbf{\mathsf{O}^{delegatee}}$ oracle with an input $(\mathsf{key \;provision} ;\mathsf{quote})$ and output $\mathsf{ct_{r}}$.
\item[-] Store $\mathsf{(quote,ct_{r})}$ in the list $\mathcal{C}^\star$.
\end{itemize}
\smallskip
For the PPT simulator $\mathcal{S}$, we prove security by showing that the view of an adversary $\mathcal{A}$ in the real world is computationally indistinguishable from its view in the ideal world. Specifically, we establish a series of \textbf{Hybrids} that $\mathcal{A}$ cannot distinguish with a non-negligible advantage, as follows.
\medskip
\noindent\textbf{Hybrid 0.} $\mathsf{Exp^{real}_{DelegaCoin}(1^\lambda)}$ runs.
\smallskip
\noindent\textbf{Hybrid 1.} As in \textit{Hybrid 0}, except that $\mathbf{KeyGen^{\star}(1^\lambda)}$ run by $\mathcal{S}$ is used to generate secret keys instead of
$\mathbf{KeyGen(1^\lambda)}$.
\begin{prf}
The proof is straightforward: storing the corresponding answers in lists does not affect the view of $\mathcal{A}$. Thus,
$\textit{Hybrid 1}$ is indistinguishable from $\textit{Hybrid 0}$.
\qed \end{prf}
\smallskip
\noindent\textbf{Hybrid 2.} As in \textit{Hybrid 1}, except that $\mathcal{S}$ maintains a list $\mathsf{\mathcal{C}^{\star}}$ of all $\mathsf{quote =(hdl,tag_P,in,out,\sigma)}$ output by $\mathsf{HW.Run\&Quote(hdl_{\mathcal{O}},in)}$. Moreover, when $\mathsf{HW.QuoteVerify(hdl_{\mathcal{D}}, pms,quote)}$ is called, $\mathcal{S}$ outputs $\bot$ if $\mathsf{quote \notin \mathcal{R}_2}$ ($\mathcal{R}_2$ is the list of quotes returned by the real-world oracles that $\adv$ has queried, as defined in Definition~\ref{oracles}).
\begin{prf} If a fake quote is produced, then the step $\mathsf{HW.QuoteVerify(hdl_{\mathcal{O}}, pms,quote)}$ in the real world would make it output $\bot$. Thus, $\textit{Hybrid 2}$ differs from $\textit{Hybrid 1}$ only when $\mathcal{A}$ can produce a valid $\mathsf{quote}$ without knowing $\mathsf{sk_{\mathcal{O}}}$. Assume that there is an adversary $\mathcal{A}$ that can distinguish between $\textit{Hybrid 2}$ and $\textit{Hybrid 1}$. Obviously, this ability can be transformed into an attack against Remote Attestation as in Definition~\ref{remoteAttestation}. However, our assumption relies on the fact that the security of Remote Attestation holds. Therefore, \textit{Hybrid 2} is indistinguishable from \textit{Hybrid 1}. \qed
\end{prf}
\smallskip
\noindent\textbf{Hybrid 3.} As in \textit{Hybrid 2}, except that when the $\mathbf{\mathsf{O}^{delegatee}}$ oracle calls $ \mathsf{HW.Run(hdl_{\mathcal{D}}, }$ $ \mathsf{ (\text{``provision''}, \mathsf{quote,\mathsf{pk_{\mathcal{O}}}, pms}))}$, $\mathcal{S}$ replaces $\mathsf{ct_r}$ with an encryption of zeros, $\mathsf{\mathsf{PKE}.Enc(pk_{\mathcal{O}},1^{|r|})}$.
\begin{prf} The IND-CCA2 challenger provides the challenge public key $pk_{\mathcal{O}}$; the adversary $\adv$ provides two messages $\mathsf{r}$ and $1^{|\mathsf{r}|}$, and the challenger returns an encryption of one of them, denoted by $\mathsf{ct_{\star}}$.
$\mathcal{S}$ sets $\mathsf{ct_{\star}}$ as the real output $\mathsf{ct_{r}}$. For $\mathsf{ct_r} \in \mathcal{C}$,
$\mathcal{S}$ can use $\mathsf{O}^{\mathsf{delegatee}}$ as it is used in the real world. For $\mathsf{ct_r} \notin \mathcal{C}$, $\mathcal{S}$ has neither the oracle nor $\sk_{\mathcal{O}}$; however, the decryption oracle offered by the IND-CCA2 challenger can be used for any $\mathsf{ct_r} \notin \mathcal{C}$. Under this condition, if $\mathcal{A}$ can still distinguish \textit{Hybrid 3} from \textit{Hybrid 2}, we can forward the answer corresponding to $\mathcal{A}$'s answer to the IND-CCA2 challenger. Hence, if $\mathcal{A}$ can
distinguish between these two hybrids with a non-negligible probability, the IND-CCA2 security of $\mathsf{PKE}$ (see Definition~\ref{ccapke}) can
be broken with a non-negligible probability. \qed
\end{prf}
\smallskip
\noindent\textbf{Hybrid 4.} As in \textit{Hybrid 3}, except that $\mathcal{S}$ maintains a list $\mathcal{R}_1^\star$ of all transaction signatures $\mathsf{\sigma_{Tx}}$ output by $\mathbf{\mathsf{O}^{owner}(\mathsf{signature\; creation; addr})}$ for $\mathsf{addr} \in \mathcal{R}_1$. When $\mathsf{b} \gets \mathsf{\mathsf{S}.Verify^{B}(pk_{Tx},\sigma_{Tx})}$ is called, $\mathcal{S}$ outputs $\bot$ if $(\mathsf{addr}, \mathsf{\sigma_{Tx}})$, as components of a $\mathsf{Tx}$, do not belong to $\mathcal{R}_1$, namely $\mathsf{(\mathsf{addr}, \mathsf{\sigma_{Tx}}) \notin \mathcal{R}_1}$.
\begin{prf} If a transaction is given with an invalid signature, then the step $\mathsf{\mathsf{S}.Verify^{B}( pk_{Tx},\sigma_{Tx})}$ in the real world would make it output $\bot$. Thus, $\textit{Hybrid 4}$ differs from $\textit{Hybrid 3}$ only when $\mathcal{A}$ can produce a valid signature on an $\mathsf{addr}$ that has never appeared before in the communication between $\mathcal{A}$ and the oracles. Let $\mathcal{A}$ be an adversary who can distinguish $\textit{Hybrid 4}$ and $\textit{Hybrid 3}$. We use it to break the EUF-CMA~\cite{goldwasser1988digital} security of the signature scheme $\mathsf{S}$. We get a verification key $\mathsf{pk_{Tx}}$ and access to a $\mathsf{\mathsf{S}.Sign(sk_{Tx},\cdot)}$ oracle from the EUF-CMA challenger. Whenever $\mathcal{S}$ signs a message using $\mathsf{sk_{Tx}}$, it uses the $\mathsf{\mathsf{S}.Sign(sk_{Tx},\cdot)}$ oracle. Moreover, our construction does not need direct access to $\mathsf{sk_{Tx}}$; it is used only to sign messages via the oracle provided by the challenger. Now, if $\mathcal{A}$ can distinguish the two hybrids, the only possible reason is that $\mathcal{A}$ generates a valid signature $\mathsf{\sigma_{Tx}}$. Then, we can send such a signature as a forgery to the EUF-CMA~\cite{goldwasser1988digital} challenger. \qed
\end{prf}
\noindent\textbf{Hybrid 5.} As in \textit{Hybrid 4}, except that when the $\mathbf{\mathsf{O}^{owner}}$ oracle calls the function $\mathsf{HW.Run(hdl_{\mathcal{O}}, (\text{``start delegation''},addr))}$, $\mathcal{S}$ replaces $\mathsf{Enc}$ with $\mathsf{Enc^{\star}}$.
\begin{lemma}\label{lemma1}
If symmetric encryption scheme $\mathsf{SE}$ is IND-CPA secure, \textit{Hybrid 5} is indistinguishable from \textit{Hybrid 4}.
\end{lemma}
\begin{prf}
Whenever $\mathcal{A}$ provides a transaction $\mathsf{Tx}$ of its choice, $\mathcal{S}$ replies with an encryption of zeros, i.e., $\mathsf{\mathsf{SE}.Enc(r,1^{|Tx|})}$, as shown below.
\vspace{10ex}
\begin{center}
\begin{gameproof}[nr=3,name=\mathsf{Hybrid },arg=(1^n)]
\gameprocedure{%
\pcln \text{\dots} \\
\pcln \mathsf{\mathsf{ct_{tx}} \gets \mathsf{SE}.\mathsf{Enc(r,Tx)}} \\
\pcln \text{\dots}
}
\gameprocedure{%
\text{\dots} \\
\gamechange{$\mathsf{\mathsf{ct_{tx}} \gets \mathsf{SE}.\mathsf{Enc(r,1^{|Tx|})}}$} \\
\text{\dots}
}
\addgamehop{4}{5}{hint=\footnotesize replace the encryption with zeros, nodestyle=red}
\end{gameproof}
\end{center}
Assume that there is an adversary $\mathcal{A}$ that is able
to distinguish the environments of \textit{Hybrid 5} and \textit{Hybrid 4}. Then, we build an adversary $\mathcal{A}^\star$ against the IND-CPA security of $\mathsf{SE}$. Given a transaction $\mathsf{Tx}$, if $\mathcal{A}$ distinguishes the encryption
of $\mathsf{Tx}$ from the encryption of $1^{\mathsf{|Tx|}}$, we forward the corresponding answer to the IND-CPA challenger. \qed
\end{prf}
\noindent\textbf{Hybrid 6.} As in \textit{Hybrid 5}, except that when the $\mathcal{A}$ calls $\mathsf{HW.Run(hdl_{\mathcal{O}}, (\text{``state seal''},addr))}$, $\mathcal{S}$ replaces $\mathsf{Enc}$ with $\mathsf{Enc^{\star}}$.
\begin{prf}
The indistinguishability between $\textit{Hybrid 6}$ and $\textit{Hybrid 5}$ can be directly
reduced to the IND-CPA property of $\mathsf{SE}$, similar to Lemma~\ref{lemma1}. \qed
\end{prf}
\section{Implementation}
\label{sec-implementation}
We implement a prototype with three types of entities: the owner node, the delegatee node, and the blockchain system. The owner node and the delegatee node run separately on two computers. The code for both nodes is developed in C++ using the $\text{Intel}^\circledR$ SGX SDK 1.6 under the operating system Ubuntu 20.04.1 LTS. For the blockchain network, we adopt the Bitcoin testnet~\cite{bitcointest} as our prototype platform. Specifically, we employ SHA-256 as the hash algorithm, and ECDSA~\cite{johnson2001elliptic} with \textit{secp256k1}~\cite{sec20002} as the initial setting to sign transactions, which is the same as the Bitcoin testnet's configuration.
\smallskip
\noindent\textbf{Functionalities.} We emphasize two main functionalities in our protocol: \textit{isolated transaction generation} and \textit{remote attestation}. The delegation logic inside TEEs has full responsibility for governing the behaviours of participants. In particular, the TEE first calls the functions $sgx\_create\_enclave$ and $enclave\_init\_ra$ to create and initialize an enclave \textit{E}$_{\mathcal{O}}$. Then, it derives the transaction key $sk_{Tx}$ upon the user's invocation.
\begin{algorithm}
\label{algorithm1}
\caption{Remote Attestation}
\BlankLine
\KwIn{$\mathsf{request(quote, pms)}$}
\KwOut{$\mathsf{b=0/1}$ }
\BlankLine
\textbf{parse} the received $\mathsf{quote}$ into $\mathsf{hdl,tag_P,in,out,\sigma}$ \\
\textbf{verify} the validity of $\mathsf{vk_{sign}}$ \\
\textbf{run} the algorithm $\mathsf{HW.QuoteVerify}$ with an input $\mathsf{(pms,quote)}$\\
\textbf{verify} the validity of $\mathsf{quote}$ \\
\textbf{return} the results $\mathsf{b}$ if it passes ($\mathsf{1}$), or not ($\mathsf{0}$) \\
\end{algorithm}
Next, the system generates a Bitcoin address and a transaction by calling the functions
$create\_address\_from\_string$ and $generate\_transaction$, respectively. \textit{E}$_{\mathcal{O}}$ keeps $sk_{Tx}$ in its global variable storage and signs the transaction with it when $generate\_transaction$ is called. The transaction can thus only be generated inside the enclave, without the key being exposed to the public. Afterwards, \textit{E}$_{\mathcal{O}}$ creates a quote by calling the function $ra\_network\_send\_receive$, and proves to the delegatee that
its enclave has been successfully initialized and is ready for the delegation.
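The host-side call sequence described above can be sketched as follows. Every function is replaced by a stub of the same name with a \texttt{\_stub} suffix so that the sketch compiles standalone; the real prototype instead links against the Intel SGX SDK, and the exact signatures of the prototype's own functions are not given in the paper, so those stubs are assumptions.
\begin{verbatim}
#include <iostream>
#include <string>

// Stubs named after the functions mentioned in the text; bodies are fake.
static int sgx_create_enclave_stub(const char* image) {
    std::cout << "load " << image << "\n";
    return 1;                                // enclave id
}
static void enclave_init_ra_stub(int eid) {
    std::cout << "init RA context in enclave " << eid << "\n";
}
static std::string create_address_from_string_stub(int) {
    return "testnet-addr";
}
static std::string generate_transaction_stub(int, const std::string& a) {
    return "Tx to " + a + " (signed with sk_Tx inside the enclave)";
}
static std::string ra_network_send_receive_stub(int) {
    return "quote for the delegatee";
}

int main() {
    int eid = sgx_create_enclave_stub("enclave.signed.so"); // create E_O
    enclave_init_ra_stub(eid);               // prepare remote attestation
    std::string addr = create_address_from_string_stub(eid);
    std::string tx = generate_transaction_stub(eid, addr);  // key stays inside
    std::string quote = ra_network_send_receive_stub(eid);  // prove E_O to D
    std::cout << addr << "\n" << tx << "\n" << quote << "\n";
}
\end{verbatim}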
\section{Evaluation}
\label{sec-evaluation}
In this section, we evaluate the system with respect to \textit{performance} and \textit{disk space}. To obtain accurate and fair results, we repeat the measurement of each operation 500 times and report the average.
\subsection{Performance}
The operations of public key generation and address creation take approximately the same time, because they are based on the same type of basic cryptographic primitives. The operations of transaction generation, state seal, and transaction decryption take longer than the aforementioned operations because they combine more complex cryptographic functions. We also observe that the enclave initiation takes much more time than (transaction) key pair generation. Fortunately, the time spent on enclave initiation can be neglected, since the enclave is launched only once (a one-time operation). The state update takes the least time, since most of the recorded messages overlap without changes and only a small portion of the data requires an update. The operations of coin deposit and transaction confirmation depend on the configuration of the Bitcoin testnet, varying from 10+ seconds to several minutes. Furthermore, we attach the time costs of the \textit{state seal} operation under an increasing number of transactions in Figure~\ref{fig-test} (right column). The time consumption grows slowly because a large portion of transactions are processed in batch. Remarkably, it takes less than 25 milliseconds to finish all operations of coin delegation, which is significantly faster than an online transaction on the Bitcoin testnet. This indicates that our solution is efficient in transaction processing and practical for coin delegation.
\begin{table}[!hbtp]
\caption{The average performance of various operations}
\label{tab-test}
\centering
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{llr}
\toprule
\textbf{Phase} & \textbf{Operation} & \textbf{Average Time / ms} \\
\midrule
\multirow{3}{*}{\textit{System setup}} & Enclave initiation & $ 13.18940 $\\
& Public key generation (Tx) & $ 0.34223 $ \\
& Private key generation (Tx) & $0.01119 $ \\
\cmidrule{1-2}
\multirow{2}{*}{\textit{Coin deposit}} & Address creation & $0.00690 $ \\
& Coin deposit & $ -$ \\
\cmidrule{1-2}
\multirow{4}{*}{\textit{Coin delegation}} & Transaction generation & $ 0.78565 $ \\
& Remote attestation & $19.50990 $ \\
& State update & $ 0.00366 $ \\
& State seal & $ 5.43957 $ \\
\cmidrule{1-2}
\multirow{2}{*}{\textit{Coin spend}} & Transaction decryption & $ - $ \\
& Transaction confirmation & $ - $ \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Disk Space}
In this part, we evaluate the disk space of the sealed state. We simulate the situation in DelegaCoin where more delegation transactions join the network. The transaction creation rate is set to 560 transactions/second. We monitor the space usage and the corresponding growth rate. Each transaction occupies approximately 700 KB of storage space. We run eight sets of experiments with an increasing number of transactions in the sequence $1, 10, 100, 200, 400, 600, 800, 1000$. The results, as shown in Figure~\ref{fig-test} (left column), indicate that the disk usage grows linearly with the number of delegation transactions. The reason is straightforward: the disk usage closely relates to the involved transactions that are stored in the list. In our configuration, the transaction generation rate stays fixed; therefore, the used space is proportional to the number of transactions.
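As a quick sanity check of this linearity, at roughly 700 KB per transaction the largest setting amounts to about $1000 \times 700\,\mathrm{KB} \approx 0.7\,\mathrm{GB}$ of sealed storage, and the $100$-transaction setting to about a tenth of that.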
\begin{figure}[!hbt]
\centering
\caption{Used disk space and time consumption of state seal}
\includegraphics[width=0.65\textwidth]{image/space_textm.jpg}
\label{fig-test}
\end{figure}
\section{Conclusion}
\label{sec-conclusion}
Decentralized cryptocurrencies such as Bitcoin~\cite{nakamoto2008bitcoin} provide an alternative approach for peer-to-peer payments. However, such payments are time-consuming. In this paper, we provide a secure and practical TEEs-based offline delegatable cryptocurrency system. TEEs are used as the primitives to establish a secure delegation channel and to offer better storage protection of metadata (keys, policy). An owner can delegate her coins through an offline transaction, asynchronously with the blockchain network. A formal analysis, a prototype implementation and further evaluations demonstrate that our scheme is provably secure and practically feasible.
\textit{Future Work.} A gap inevitably remains between the theoretical model and real applications. Although our scheme is proven theoretically secure, risks still exist in practical scenarios; the countermeasures to reduce these risks will be explored in future work.
\smallskip
\noindent\textbf{Acknowledgments.} Rujia Li and Qi Wang were supported by Guangdong Provincial Key Laboratory (Grant No. 2020B121201001).
\normalem
\bibliographystyle{unsrt}
\section{Introduction}
The interest in decentralized cryptocurrencies has grown rapidly in recent years. Bitcoin \cite{nakamoto2008bitcoin}, as the first and most famous system, has attracted massive attention. Subsequently, a handful of cryptocurrencies, such as Ethereum \cite{wood2014ethereum}, Namecoin \cite{ali2016blockstack} and Litecoin \cite{reed2017litecoin}, were proposed. Blockchain-based cryptocurrencies significantly facilitate the convenience of payment by providing a decentralized online solution for customers. However, purely online processing of transactions suffers from low performance and high congestion. Offline delegation provides an alternative way to mitigate the issue by enabling users to exchange coins without having to connect to an online blockchain platform~\cite{gudgeon2020sok}. Unfortunately, decentralized offline delegation still confronts risks caused by unreliable participants: misbehaviours may easily happen due to the absence of effective supervision. To be specific, let us start from a real scenario: imagine that Bob, the son of Alex and a wild teenager, wants some digital currency (\textit{e.g.}, BTC) to buy a film ticket. According to current decentralized cryptocurrency payment technologies \cite{nakamoto2008bitcoin}\cite{wood2014ethereum}, Alex has two delegation approaches: (1) \textit{Coin-transfer.} Alex asks for Bob's BTC address, and then transfers a specific amount of coins to Bob's address. In such a scenario, Bob can only spend the coins received from Alex. (2) \textit{Ownership-transfer.} Alex directly gives his own private key to Bob. Then, Bob can freely spend the coins using such a private key. In this situation, Bob obtains all coins that are kept under Alex's address.
We observe that both approaches suffer drawbacks. For the first approach, coin-transfer requires a global consensus of the blockchain, which makes it time-consuming \cite{kiayias2015speed}. For example, confirming a transaction in Bitcoin \cite{nakamoto2008bitcoin} takes around one hour (6 blocks), depriving coin-transfer of the essential property of real-time payment. For the other approach, ownership-transfer relies heavily on the honesty of the delegatee. The promise between the delegator and the delegatee depends on their trust or relationship, which is weak and unreliable. The delegatee may spend all coins in the address for other purposes. Back to the example, Alex's original intention is to give Bob 200 $\mu BTC$ to buy a film ticket, but Bob may spend all the coins to purchase his favorite toys, which means that Alex loses control of the rest of the coins. These two types of approaches represent most of the mainstream schemes aiming to achieve a secure delegation, but neither of them provides a satisfactory solution. This leads to the following research problem:
\begin{center}
\begin{tcolorbox}[colback=gray!10,
colframe=black,
width=12cm,
arc=1mm, auto outer arc,
boxrule=0.5pt,
]
Is it possible to build a secure offline peer-to-peer delegatable system for decentralized cryptocurrencies?
\end{tcolorbox}
\end{center}
\noindent The answer would intuitively be ``NO''. Without interacting with the online blockchain network, the coins that have been used confront the risk of being spent twice after another successful delegation. This is because a delegation is witnessed only by the owner and the delegatee, and no authoritative third party performs the final confirmation. The pending status leaves a window for attacks in which a malicious coin owner could spend the delegated transaction before the delegatee uses it. Even if a third party were introduced as a judge between the delegator (owner) and the delegatee to secure transactions, she would face the threat of being compromised or of providing misleading assurances. Furthermore, an approach equipped with a third party contradicts the original intention of decentralized cryptocurrency systems.
In this paper, we propose \textit{DelegaCoin}, an offline delegatable electronic cash system. Trusted execution environments (TEEs) are utilized to play the role of a \textit{virtual agent}. TEEs prevent malicious delegation of the coins (\textit{e.g.}, double-delegation of the same coins). As shown in Figure~\ref{size}, the proposed scheme allows the owner to delegate her coins without interacting with the blockchain or any trusted third parties. The owner is able to directly delegate specific amounts of coins to others by sending them through a secure channel. This delegation can only be executed once, under the supervision of the delegation policy inside TEEs. In a nutshell, this paper makes the following contributions.
\begin{itemize}
\item[-] We propose an offline delegatable payment solution, called \textit{DelegaCoin}. It employs the trusted execution environments (TEEs) as the decentralized \textit{virtual agents} to prevent the malicious owner from delegating the same coins multiple times.
\item[-] We formally define our protocols and provide a security analysis. Designing a provably secure system from TEEs is a non-trivial task that lays the foundation for many upper-layer applications. The formal analysis indicates that our system is secure.
\item[-] We implement the system with Intel’s Software Guard Extensions (SGX) and conduct a series of experiments including the time cost for each function and the used disk space under different configurations. The evaluations demonstrate that our system is feasible and practical.
\end{itemize}
\smallskip
\noindent\textbf{Paper Structure.} Section~\ref{sec-rw} gives the background and related studies. Section~\ref{sec-prelimi} provides the preliminaries and building blocks. Section~\ref{sec-design} outlines the general construction of our scheme. Section~\ref{sec-formal} presents a formal model for our protocols. Section~\ref{sec-seurity} provides the corresponding security analysis. Section~\ref{sec-implementation} and Section~\ref{sec-evaluation} show our implementation and evaluation, respectively. Section~\ref{sec-conclusion} concludes our work. Appendix A provides an overview of the protocol workflow, Appendix B shows the resource availability, and Appendix C presents the featured notations in this paper.
\section{Related Work}
\label{sec-rw}
\noindent\textbf{Decentralized Cryptocurrency System.}
Blockchain-based cryptocurrencies facilitate the convenience of payment by providing a decentralized online solution for customers. Bitcoin \cite{nakamoto2008bitcoin} was the first and most popular decentralized cryptocurrency. Litecoin \cite{reed2017litecoin} modified the PoW by using the scrypt algorithm and shortened the block confirmation time. Namecoin \cite{ali2016blockstack} was the first hard fork of Bitcoin to record and transfer arbitrary names (keys) securely. Ethereum \cite{wood2014ethereum} extended Bitcoin by enabling state-transited transactions. Zcash \cite{hopwood2016zcash} provides a privacy-preserving payment solution by utilizing zero-knowledge proofs, while CryptoNote-style schemes \cite{van2013cryptonote} enhance privacy by adopting ring signatures. However, slow confirmation of transactions retards their wide adoption by developers and users. Current cryptocurrencies, with ten to hundreds of TPS~\cite{nakamoto2008bitcoin,zheng2018detailed}, cannot rival established payment systems such as Visa or PayPal, which process thousands. Thus, various methods have been proposed for better throughput. The scaling techniques can be categorized in two ways: (i) on-chain solutions that aim to create highly efficient blockchain protocols, either by reconstructing structures \cite{wang2020sok}, connecting chains \cite{zamyatin2019sok} or by sharding the blockchain \cite{wang2019sok}; however, on-chain solutions are typically not applicable to existing blockchain systems (they require a hard fork); (ii) off-chain (layer-2) solutions that regard the blockchain merely as an underlying mechanism and process transactions offline \cite{gudgeon2020sok}. Off-chain solutions generally operate independently on top of the consensus layer of blockchain systems, without changing their original designs. In this paper, we explore the second avenue.
\smallskip
\noindent\textbf{TEEs and Intel SGX.}
The Trusted Execution Environments (TEEs) provide a secure environment for executing code to ensure the confidentiality and integrity of code and logic \cite{ekberg2013trusted}. State-of-the-art implementations include Intel Software Guard Extensions (SGX)~\cite{costan2016intel}, ARM TrustZone~\cite{pinto2019demystifying}, AMD memory encryption~\cite{kaplan2016amd}, Keystone~\cite{lee2020keystone}, \textit{etc}. Besides, many other applications like BITE \cite{Matetic2018BITEBL}, Tesseract \cite{bentov2019tesseract}, Ekiden \cite{cheng2019ekiden} and Fialka \cite{li2020accountable} propose TEEs-empowered schemes, but they still miss the focus of offline delegation. In this paper, we utilize SGX \cite{costan2016intel} to construct the system. SGX is one representative of TEEs; it offers a set of instructions embedded in central processing units (CPUs). These instructions are used for building and maintaining the CPU's security areas. To be specific, SGX allows the creation of private regions (\textit{a.k.a.} enclaves) of memory to protect the inside contents. The following features are highlighted in this technique: (1) \textit{Attestation.} Attestation mechanisms are used to prove to a validator that the enclave has been correctly instantiated, and to establish a secure, authenticated connection to transfer sensitive data. The attestation guarantees that the secret (private key) is provisioned to the enclave only after a successful substantiation. (2) \textit{Runtime Isolation.} Processes inside the enclave are protectively isolated from the software running outside. Specifically, the enclave prevents higher-privileged processes and outside operating system code from falsifying the execution of loaded code. (3) \textit{Sealing identity technique.} SGX offers a sealing identity technique, where the enclave data is allowed to be stored in untrusted disk space. The private sealing key comes from the same platform key, which enables data sharing across different enclaves.
\smallskip
\noindent\textbf{Payment Delegation.} The payment delegation plays a crucial role in e-commercial activities, and it has been comprehensively studied for decades. Several widely adopted approaches include using credit cards (Visa, Mastercard, \textit{etc.}), reimbursement, or third-party platforms (like PayPal~\cite{williams2007introduction} and AliPay~\cite{guo2016ecosystem}). These schemes allow users to delegate their cash-spending capability to their own devices or to other users. However, these delegation mechanisms heavily rely on a centralized party that requires a fairly great amount of trust. Decentralized cryptocurrencies, like Bitcoin \cite{nakamoto2008bitcoin} and Ethereum \cite{wood2014ethereum}, remove the role of trusted third parties, making the payment reliable and guaranteed by distributed blockchain nodes. However, such payment is time-consuming, since online transactions need to be confirmed by the majority of the participating nodes. Delegation provides the decentralized cryptocurrency with an efficient payment approach that delegates the coin owner's spending capability. Cryptocurrency delegation using SGX was first explored in \cite{matetic2018delegatee}, which focused only on credential delegation in fair exchange. Teechan \cite{lind2019teechain} provided a full-duplex payment channel framework that employs TEEs, in which the parties can pay each other without interacting with the blockchain in a bounded time. However, Teechan requires a complex setup: the parties must commit a \textit{multisig} transaction before the channel starts. In contrast, our scheme is simple and more practical.
\section{Preliminaries and Definitions}
\label{sec-prelimi}
We make use of the following notions, definitions and assumptions to construct our scheme. Details are shown as follows.
\subsection{Notions}
Let $\mathsf{\lambda}$ denote a security parameter, $\mathsf{negl(\lambda)}$ a negligible function, and $\mathcal{A}$ an adversary. $\mathsf{b_{\star}}$ and $\mathsf{c_{\star}}$ are wildcard characters, representing a balance and an encrypted balance, respectively. A full list of notations is provided in Appendix~\ref{appendix:b}.
\subsection{Crypto Primitive Definitions}
\noindent \textbf{Semantically Secure Encryption.} A semantically secure encryption $\mathsf{SE}$ consists of a triple of algorithms $\mathsf{(KGen, Enc, Dec)}$ defined as follows.
\begin{itemize}
\item[-] $\mathsf{SE.KGen}(1^\lambda)$ The algorithm takes as input a security parameter $1^{\lambda}$ and generates a private key $\sk$ from the
key space $\mathcal{K}$.
\item[-] $\mathsf{SE.Enc(\sk, msg)}$ The algorithm takes as input a private key $\sk$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs a ciphertext $\mathsf{ct}$.
\item[-] $\mathsf{SE.Dec(\sk,ct)}$ The algorithm takes as input a private key $\sk$ and a ciphertext $\mathsf{ct}$, and outputs $\mathsf{msg}$.
\end{itemize}
\smallskip
\noindent\textit{Correctness}. A semantically secure encryption scheme $\mathsf{SE}$ is correct if for all $\mathsf{msg} \in \mathcal{M}$,
\begin{align*}
\Pr\big[\mathsf{SE.Dec(\sk,(SE.Enc(\sk,msg))) \neq msg} \big| \mathsf{\sk \gets SE.KGen(1^\lambda)}\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $ \mathsf{negl(\lambda)}$ is a negligible function and the probability is taken over the random coins of the algorithms $\mathsf{SE.Enc}$ and $\mathsf{SE.Dec}$.
\begin{defi}[IND-CPA security of $\mathsf{SE}$]\label{secpa}
A semantically secure encryption scheme $\mathsf{SE}$ achieves Indistinguishability under Chosen-Plaintext Attack (IND-CPA) if
for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\big| \Pr\big[ \mathsf{G_{\adv, SE}^{IND-CPA}(\lambda)} = 1\big] - \frac{1}{2} \big| \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv, SE}^{IND-CPA}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, SE}^{IND-CPA}(\lambda)}$}{%
\pcln \mathsf{\sk} \stackrel{\$}{\leftarrow} \mathsf{SE.KGen}(1^\lambda); \\
\pcln \mathsf{b} \stackrel{\$}{\leftarrow} \{0,1\} \\
\pcln \mathsf{m_{0},m_{1}} \gets \mathcal{A}^{\mathsf{SE}(\cdot)} \\
\pcln \mathsf{c^\star} \gets \mathsf{SE.Enc(\sk,m_b}) \\
\pcln \mathsf{b^{'}} \gets \adv^{\mathsf{SE}(\cdot)}\mathsf{(c^\star)}\\
\pcln \pcreturn \mathsf{b = b^{'}}
}
\end{pcvstack}
\end{defi}
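As a purely illustrative instantiation of this syntax, consider the following one-time XOR pad in C++. This is our own toy example rather than a scheme used by DelegaCoin, and it is \textit{not} IND-CPA secure once a key is reused; it only serves to make the $(\mathsf{KGen}, \mathsf{Enc}, \mathsf{Dec})$ interface and the correctness condition concrete.
\begin{verbatim}
#include <cstdlib>
#include <iostream>
#include <string>

using Key = std::string;

// SE.KGen: sample n pseudo-random key bytes (toy randomness).
Key kgen(std::size_t n) {
    Key k(n, '\0');
    for (std::size_t i = 0; i < n; ++i)
        k[i] = static_cast<char>(std::rand() & 0xff);
    return k;
}

// SE.Enc: XOR the message with the (repeated) key. Insecure if reused.
std::string enc(const Key& k, const std::string& msg) {
    std::string ct = msg;
    for (std::size_t i = 0; i < ct.size(); ++i)
        ct[i] = static_cast<char>(ct[i] ^ k[i % k.size()]);
    return ct;
}

// SE.Dec: XOR is an involution, so decryption equals encryption.
std::string dec(const Key& k, const std::string& ct) { return enc(k, ct); }

int main() {
    Key k = kgen(16);
    std::string ct = enc(k, "delegated Tx");
    // Correctness: Dec(k, Enc(k, msg)) == msg.
    std::cout << (dec(k, ct) == "delegated Tx" ? "correct" : "broken") << "\n";
}
\end{verbatim}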
\noindent \textbf{Signature Scheme.} A signature scheme $\mathsf{S}$ consists of the following algorithms.
\begin{itemize}
\item[-] $\mathsf{S.KeyGen}(1^\lambda)$ The algorithm takes as input security parameter $1^{\lambda}$ and generates a private signing key $\sk$ and a public verification key $\vk$.
\item[-] $\mathsf{S.Sign(\sk, msg)}$ The algorithm takes as input a signing key $\sk$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs a signature $\mathsf{\sigma}$.
\item[-] $\mathsf{S.Verify(\vk,\sigma,msg)}$ The algorithm takes as input a verification key $\vk$,
a signature $\mathsf{\sigma}$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs $1$ or $0$.
\end{itemize}
\smallskip
\noindent\textit{Correctness}. A signature scheme $\mathsf{S}$ is correct if for all $\mathsf{msg} \in \mathcal{M}$,
\begin{align*}
\Pr\big[\mathsf{S.Verify(\vk,(S.Sign(\sk, msg)),msg)} \neq 1 \big| \mathsf{(\vk,\sk) \gets S.KeyGen(1^\lambda)}\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $ \mathsf{negl(\lambda)}$ is a negligible function and the probability is taken over the random coins of the algorithms $\mathsf{S.Sign}$ and $\mathsf{S.Verify}$.
\begin{defi}[EUF-CMA security of $\mathsf{S}$]\label{eufcma}
A signature scheme $\mathsf{S}$ is called Existentially Unforgeable under Chosen Message Attack (EUF-CMA) if for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\Pr\big[ \mathsf{G_{\adv, S}^{EUF-CMA}(\lambda)} = 1\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv, S}^{EUF-CMA}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, S}^{EUF-CMA}(\lambda)}$}{%
\pcln \mathsf{(\sk, pk)} \stackrel{\$}{\leftarrow} \mathsf{S.KeyGen}(1^\lambda); \\
\pcln \mathcal{L} \gets \mathsf{S.Sign(\sk, m_{\{0,\dots,n\}})}; \\
\pcln \mathsf{(m^{\star},\sigma^{\star})} \gets \mathcal{A}^{\mathcal{O}(sk, \cdot)} \mathsf{(pk)}\\
\pcln \pcreturn \mathsf{( S.Verify(vk,\sigma^{\star}, m^{\star})} = 1) \wedge \mathsf{m^{\star} \notin \mathcal{L}}
}
\end{pcvstack}
\end{defi}
\noindent \textbf{Public Key Encryption.} A public key encryption scheme $\mathsf{PKE}$ consists of the following algorithms.
\begin{itemize}
\item[-] $\mathsf{PKE.KeyGen}(1^\lambda)$ The algorithm takes as input a security parameter $1^{\lambda}$ and generates a secret key $\sk$ and a public key $\mathsf{pk}$.
\item[-] $\mathsf{PKE.Enc(pk, msg)}$ The algorithm takes as input a public key $\mathsf{pk}$ and a message $\mathsf{msg} \in \mathcal{M}$, and outputs a ciphertext $\mathsf{ct}$.
\item[-] $\mathsf{PKE.Dec(\sk,ct)}$ The algorithm takes as input a secret key $\sk$, a ciphertext $\mathsf{ct}$, and outputs $\mathsf{msg}$ or $\bot$.
\end{itemize}
\smallskip
\noindent\textit{Correctness}. A public key encryption scheme $\mathsf{PKE}$ is correct if for all $\mathsf{msg} \in \mathcal{M}$,
\begin{align*}
\Pr\big[\mathsf{PKE.Dec(\sk,(PKE.Enc(pk,msg))) \neq msg} \big| \mathsf{(\sk,pk) \gets PKE.KeyGen(1^\lambda)}\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $ \mathsf{negl(\lambda)}$ is a negligible function and the probability is taken over the random coins of the algorithms $\mathsf{PKE.KeyGen}$ and $\mathsf{PKE.Enc}$.
\begin{defi}[IND-CCA2 security of $\mathsf{PKE}$]\label{ccapke}
A PKE scheme $\mathsf{PKE}$ achieves Indistinguishability under Adaptive Chosen Ciphertext Attack (IND-CCA2) if
for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\Pr\big[ \mathsf{G_{\adv,PKE}^{IND-CCA2}(\lambda)} = 1\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv,PKE}^{IND-CCA2}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, PKE}^{IND-CCA2}(\lambda)}$}{%
\pcln \mathsf{(\sk, pk)} \stackrel{\$}{\leftarrow} \mathsf{PKE.KeyGen}(1^\lambda); \\
\pcln \mathsf{b} \stackrel{\$}{\leftarrow} \{0,1\} \\
\pcln \mathsf{m_{0},m_{1}} \gets \mathcal{A}^{\mathsf{PKE.Dec}(\sk,\cdot)} \\
\pcln \mathsf{c^\star} \gets \mathsf{PKE.Enc(pk,m_b)} \\
\pcln \mathsf{b^{'}} \gets \adv^{\mathsf{PKE.Dec}(\sk,\cdot)}\mathsf{(c^\star)}\\
\pcln \pcreturn \mathsf{b = b^{'}}
}
\end{pcvstack}
\end{defi}
\subsection{Secure Hardware}
In our scheme, parties have access to TEEs, which serve as isolated environments that guarantee the confidentiality and integrity of the code and data inside. To capture the secure functionality of TEEs, inspired by~\cite{fisch2017iron,barbosa2016foundations}, we define TEEs as a black-box program that provides a set of interfaces exposed to users. The abstraction is given as follows. Note that, due to the scope of usage, we only capture the remote attestation of TEEs and refer to~\cite{fisch2017iron} for a full definition.
\begin{defi}
\label{TEEmode}
A secure hardware functionality $\mathsf{HW}$ for a class of probabilistic polynomial time (PPT) programs $\mathcal{P}$ includes the algorithms $\mathsf{Setup}$, $\mathsf{Load}$, $\mathsf{Run}$, $\mathsf{RunQuote}$, and $\mathsf{QuoteVerify}$.
\begin{itemize}
\item[-] $\mathsf{HW.Setup(1^\lambda)}:$ The algorithm takes as input a security parameter $\lambda$, and outputs the secret key $\mathsf{sk_{quote}}$ and public parameters $\mathsf{pms}$.
\item[-] $\mathsf{HW.Load(pms}, P):$ The algorithm loads a stateful program $P$ into an enclave. It takes as input a program $P \in \mathcal{P}$ and $\mathsf{pms}$, and outputs a new enclave handle $\mathsf{hdl}$.
\item[-] $\mathsf{HW.Run(hdl,in)}:$ The algorithm runs enclave. It inputs a handle $\mathsf{hdl}$ that relates to an enclave (running program $P$) and an input $\mathsf{in}$, and outputs execution results $\mathsf{out}$.
\item[-] $\mathsf{HW.RunQuote(hdl, in)}:$ The algorithm executes programs in an enclave and generates an attestation quote. It takes as input $\mathsf{hdl}$ and $\mathsf{in}$, and executes $P$ on $\mathsf{in}$. Then, it outputs $\mathsf{quote = (hdl,tag_P, in, out, \sigma)}$, where $\mathsf{tag_P}$ is a measurement to identify the program running inside an enclave and $\sigma$ is a corresponding signature.
\item[-] $\mathsf{HW.QuoteVerify(pms,quote)}:$ The algorithm verifies the quote. It firstly executes $P$ on $\mathsf{in}$ to get $\mathsf{out}$. Then, it takes as input $\mathsf{pms}$, $\mathsf{quote = (hdl,tag_P,in,out,\sigma)}$, and outputs $\mathsf{1}$ if the signature $\sigma$ is correct. Otherwise, it outputs $\mathsf{0}$.
\end{itemize}
\end{defi}
\smallskip
\noindent\textit{Correctness}. The $\mathsf{HW}$ scheme is correct if the following properties hold for all programs $P \in \mathcal{P}$ and all inputs $\mathsf{in}$:
\begin{itemize}
\item Correctness of $\mathsf{HW.Run}$: for any specific program $P \in \mathcal{P}$, the output of $\mathsf{HW.Run(hdl,in)}$ is deterministic.
\item Correctness of $\mathsf{RunQuote}$ and $\mathsf{QuoteVerify}$:
\begin{align*}
\Pr[\mathsf{QuoteVerify(pms, RunQuote}
(\mathsf{hdl}, \mathsf{in})) \neq 1] \leq \mathsf{negl(\lambda).}\\
\end{align*}
\end{itemize}
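Definition~\ref{TEEmode} can also be mirrored by a small in-memory model, sketched below. Here a keyed tag computed under $\mathsf{sk_{quote}}$ stands in for the quote signature, and $\mathsf{tag_P}$ is folded into the handle for brevity; this is an assumption-laden model of the interfaces only, not of SGX itself.
\begin{verbatim}
#include <functional>
#include <iostream>
#include <map>
#include <string>

using Program = std::function<std::string(const std::string&)>;

struct Quote { int hdl; std::string in, out, sig; };

class HW {
    std::map<int, Program> enclaves_;
    std::string sk_quote_ = "sk-quote";   // from HW.Setup
    int next_ = 0;
    // Stub "signature" over (hdl, in, out) under sk_quote.
    std::string tag(const Quote& q) const {
        return sk_quote_ + "/" + std::to_string(q.hdl) + "/" + q.in +
               "/" + q.out;
    }
public:
    // HW.Load: install a program, hand back a fresh enclave handle.
    int load(Program p) { enclaves_[next_] = std::move(p); return next_++; }
    // HW.Run: execute the loaded program on `in`.
    std::string run(int hdl, const std::string& in) {
        return enclaves_.at(hdl)(in);
    }
    // HW.RunQuote: run, then attach the attestation tag.
    Quote run_quote(int hdl, const std::string& in) {
        Quote q{hdl, in, run(hdl, in), ""};
        q.sig = tag(q);
        return q;
    }
    // HW.QuoteVerify: recompute the tag and compare.
    bool quote_verify(const Quote& q) const { return q.sig == tag(q); }
};

int main() {
    HW hw;
    int hdl = hw.load([](const std::string& in) { return "out(" + in + ")"; });
    Quote q = hw.run_quote(hdl, "init setup");
    std::cout << (hw.quote_verify(q) ? "quote accepted" : "rejected") << "\n";
}
\end{verbatim}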
Remote attestation in TEEs provides functionality for verifying the execution and corresponding output of a certain code run inside the enclave by using a signature-based quote. Thus, the remote attestation unforgeability security~\cite{fisch2017iron} is defined similarly to the unforgeability of a signature scheme.
\begin{defi}[Remote Attestation Unforgeability (RemAttUnf)]
\label{remoteAttestation} A $\mathsf{HW}$ scheme is RemAttUnf secure if for all PPT adversaries, there exists a negligible function $\mathsf{negl(\lambda)}$ such that
\begin{align*}
\Pr\big[ \mathsf{G_{\adv, HW}^{RemAttUnf}(\lambda)} = 1\big] \leq \mathsf{negl(\lambda)},
\end{align*}
where $\mathsf{G_{\adv, HW}^{RemAttUnf}(\lambda)}$ is defined as follows:
\begin{pcvstack}[center]%
\procedure{$\mathsf{G_{\adv, HW}^{RemAttUnf}(\lambda)}$}{%
\pcln \mathsf{pms} \gets \mathsf{HW.Setup}(1^\lambda); \\
\pcln \mathsf{hdl} \gets \mathsf{HW.Load} (\mathsf{pms},P); \\
\pcln \mathcal{Q} \gets \mathsf{HW.RunQuote (hdl, in_{\{0,\dots,n\}})}; \\
\pcln \mathsf{(in^{\star}, quote^{\star})} \gets \mathcal{A}^{\mathcal{O}(hdl,\cdot)} \mathsf{(pms)}\\
\pcln \pcreturn \mathsf{( HW.QuoteVerify(pms, quote^{\star})} = 1) \wedge \mathsf{quote^{\star} \notin \mathcal{Q}}
}
\end{pcvstack}
\end{defi}
\section{DelegaCoin}
\label{sec-design}
In DelegaCoin, three types of entities are involved: the coin owner (or delegator) $\mathcal{O}$, the coin delegatee $\mathcal{D}$, and the blockchain $\mathcal{B}$ (see Figure~\ref{size}). The main idea behind DelegaCoin is to exploit TEEs as trusted agents between the coin owner and the coin delegatee. TEEs are used to maintain delegation policies and ensure faithful executions of the delegation protocol. In particular, TEEs guarantee that the coin owner (either honest or malicious) cannot arbitrarily spend the delegated coins. The workflow is described as follows. Firstly, both $\mathcal{O}$ and $\mathcal{D}$ initialize and run their enclaves, and $\mathcal{O}$'s enclave generates an address $\mathsf{addr}$ for further transactions, with the private key maintained internally. Next, $\mathcal{O}$ deploys delegation policies into $\mathcal{O}$'s enclave and deposits coins to the address $\mathsf{addr}$. Then, $\mathcal{O}$ delegates the coins to $\mathcal{D}$ by triggering the execution of the delegation inside the enclave. Finally, $\mathcal{D}$ spends the delegated transaction by forwarding it to
the blockchain network $\mathcal{B}$. Note that the enclaves in our scheme are decentralized: each of $\mathcal{O}$ and $\mathcal{D}$ runs its own enclave without depending on a centralized agent, which satisfies the requirements of current cryptocurrency systems.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.65\textwidth]{image/flow.png}
\caption{DelegaCoin Workflow}
\label{size}
\end{figure}
\subsection{System Framework}
\smallskip
\noindent\textbf{System Setup.} In this phase, the coin owner $\mathcal{O}$ and the delegatee $\mathcal{D}$ initialize their TEEs to provide environments for the operations with respect to the further delegation.
\begin{itemize}
\item \textit{Negotiation.} $\mathsf{pms} \gets \mathsf{ParamGen(1^\lambda)}$: $\mathcal{O}$ agrees with $\mathcal{D}$ for the pre-shared information. Here, $\mathsf{\lambda}$ is a security parameter.
\item \textit{Enclave Initiation.} $\mathsf{hdl}_\mathcal{O},\mathsf{hdl}_\mathcal{D} \gets \mathsf{EncvInit(1^\lambda,pms)}$: $\mathcal{O}$ and $\mathcal{D}$ initialize the enclave \textit{E}$_{\mathcal{O}}$ and \textit{E}$_{\mathcal{D}}$ with outputting the enclave handles $\mathsf{hdl}_\mathcal{O}$ and $\mathsf{hdl}_\mathcal{D}$.
\item \textit{Key Generation.} $\mathsf{(pk_{Tx},sk_{Tx}),(pk_{\mathcal{O}},sk_{\mathcal{O}}), key_{seal}} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}},1^\lambda)$ and
$\mathsf{(pk_{\mathcal{D}}},\\ \mathsf{sk_{\mathcal{D}}}),(\mathsf{vk_{sign}}, \mathsf{sk_{sign}}), \mathsf{r} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{D}}},1^\lambda)$: $\mathcal{O}$ and $\mathcal{D}$ run the enclaves \textit{E}$_{\mathcal{O}}$ and \textit{E}$_{\mathcal{D}}$ to create their internal keys. Key pair $\mathsf{(pk_{Tx},sk_{Tx})}$ is used for transaction generation. Key pair $(\mathsf{pk_{\mathcal{O}},sk_{\mathcal{O}}})$ and $(\mathsf{pk_{\mathcal{D}},sk_{\mathcal{D}}})$ are used for remote assertion, while $\mathsf{key_{seal}}$ is a sealing key used to export the state to the trusted storage. Key pair $(\mathsf{vk_{sign}}, \mathsf{sk_{sign}})$ is used to identify a specific delegatee, while $\mathsf{r}$ is a private key for transaction encryption.
\item \textit{Quote Generation.} $\mathsf{ quote \gets QuoGen^{TEE}(\mathsf{sk_{\mathcal{O}}}, \mathsf{vk_{sign}}, pms)}$: $\mathcal{O}$ generates a $\mathsf{quote}$ for requesting an encrypted symmetric encryption key from $\mathcal{D}$.
\item \textit{Key Provision.} $\mathsf{ ct_{r} \gets Provision^{TEE}(quote,\mathsf{sk_{sign}}, \mathsf{pk_{\mathcal{O}}}, pms)}$: $\mathcal{O}$ proves to $\mathcal{D}$ that $\textit{E}_{\mathcal{O}}$ has been instantiated with a $\mathsf{quote}$ to request an encrypted symmetric encryption key $\mathsf{ct_{r}}$. The symmetric encryption is used to encrypt the messages inside TEEs.
\item \textit{Key Extraction.} $\mathsf{ r \gets Extract^{TEE}(\mathsf{sk_{\mathcal{O}}}, ct_{r})}$: $\mathcal{O}$ extracts a symmetric encryption key $\mathsf{r}$ from $\mathsf{ct_{r}}$ using $\mathsf{sk_{\mathcal{O}}}$.
\item \textit{State Retrieval.} $\mathsf{b_{init} = Dec^{TEE}(key_{seal}, c_{init})}$: Encrypted states are read back by the enclave \textit{E}$_{\mathcal{O}}$ under $\mathsf{key_{seal}}$, where $\mathsf{b_{init}}$ is the initial balance and $\mathsf{c_{init}}$ is the initial encrypted balance. This step prevents unexpected occasions that may destroy the state in TEEs memory.
\end{itemize}
\smallskip
\noindent\textbf{Coin Deposit.} The enclave \textit{E}$_{\mathcal{O}}$ generates an address $\mathsf{addr}$ for the deposit from the transaction key pair; the corresponding private key $\mathsf{sk_{Tx}}$ never leaves the enclave. Afterwards, $\mathcal{O}$ sends coins to this address in the form of fund deposits.
\begin{itemize}
\item \textit{Address Creation.} $\mathsf{addr} \gets \mathsf{AddrGen^{TEE}}(1^\lambda,\mathsf{pk_{Tx}})$: $\mathcal{O}$ calls \textit{E}$_{\mathcal{O}}$ to generate a transaction address $\mathsf{addr}$. The private key $\mathsf{sk_{Tx}}$ of $\mathsf{addr}$ is secretly stored inside TEEs and is generated by an internal pseudo-random number.
\item \textit{Coin Deposit.} $\mathsf{ b_{deposit} = Update^{B}(addr,b_{init})}$: $\mathcal{O}$ generates an arbitrary transaction and transfers some coins to $\mathsf{addr}$ as the fund deposits.
\end{itemize}
\smallskip
\noindent\textbf{Coin Delegation.} In this phase, neither $\mathcal{O}$ nor $\mathcal{D}$ interacts with blockchain. $\mathcal{O}$ can instantly complete the coin delegation through offline transactions.
\begin{itemize}
\item \textit{Balance Update.} $\mathsf{b_{update} \gets Update^{TEE}(b_{deposit},b_{Tx})}$: \textit{E}$_{\mathcal{O}}$ checks current balance to ensure that it is enough for deduction. Then, \textit{E}$_{\mathcal{O}}$ updates the balance.
\item \textit{Signature Generation.} $\mathsf{\sigma_{Tx}} \gets \mathsf{TranSign^{TEE}(\mathsf{sk_{Tx}},\mathsf{addr},b_{Tx})}$: \textit{E}$_{\mathcal{O}}$ generates a valid signature $\mathsf{\sigma_{Tx}}$.
\item \textit{Transaction Generation.} $\mathsf{Tx} \gets \mathsf{TranGen^{TEE}(\mathsf{addr},b_{Tx},\mathsf{\sigma_{Tx}})}$: \textit{E}$_{\mathcal{O}}$ generates a transaction $\mathsf{Tx}$ using $\mathsf{\sigma_{Tx}}$.
\item \textit{Coin Delegation.} $\mathsf{ct_{tx}} \gets \mathsf{TranEnc^{TEE}}(\mathsf{r},\mathsf{Tx})$: $\mathcal{O}$ sends encrypted transaction $\mathsf{ct_{tx}}$ to $\mathcal{D}$.
\item \textit{State Seal.} $\mathsf{c_{update} \gets Enc^{TEE}(key_{seal},b_{update})}$: Once completing the delegation, the records $\mathsf{c_{update}}$ are permanently stored outside the enclave. If any abort or halt happens, a re-initiated enclave starts to reload the missing information.
\end{itemize}
All the algorithms in the step of \textbf{Coin Delegation} must be run as an atomic operation, meaning that either all algorithms finish or none of them finish.
A hardware Root of Trust can guarantee this, and we refer to~\cite{costan2016intel} for more detail.
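To illustrate the intended all-or-nothing behaviour, the following Python sketch mirrors the four delegation steps, with Fernet as a stand-in for the symmetric scheme $\mathsf{SE}$ and Ed25519 for the signature scheme $\mathsf{S}$ (both are illustrative choices, not the deployed primitives); nothing is released unless every step succeeds.
\begin{verbatim}
import json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import ed25519

def delegate(b_deposit, b_tx, sk_tx, addr, r_key, seal_key):
    # Balance Update: check before deducting.
    if b_deposit < b_tx:
        raise ValueError("insufficient balance")
    b_update = b_deposit - b_tx
    # Signature and Transaction Generation.
    body = json.dumps({"addr": addr, "amount": b_tx}).encode()
    tx = body + b"|" + sk_tx.sign(body).hex().encode()
    # Coin Delegation: encrypt Tx under the provisioned key r.
    ct_tx = Fernet(r_key).encrypt(tx)
    # State Seal: persist the new balance under key_seal.
    c_update = Fernet(seal_key).encrypt(str(b_update).encode())
    # Nothing is released unless every step above succeeded.
    return ct_tx, c_update

sk_tx = ed25519.Ed25519PrivateKey.generate()
r_key, seal_key = Fernet.generate_key(), Fernet.generate_key()
ct_tx, c_update = delegate(100, 40, sk_tx, "addr", r_key, seal_key)
\end{verbatim}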
\smallskip
\noindent\textbf{Coin Spend.} $\mathsf{Tx} \gets \mathsf{TranDec^{TEE}(r,\mathsf{ct_{tx}})}$: $\mathcal{D}$ decrypts $\mathsf{ct_{tx}}$ with $\mathsf{r}$, and then spends $\mathsf{Tx}$ by forwarding it to blockchain network.
\smallskip
\noindent\textit{Correctness}. The DelegaCoin scheme is correct if the following properties hold for all $\mathsf{Tx}$, $\mathsf{b_{deposit}}$, $\mathsf{b_{update}}$, and $\mathsf{b_{Tx}}$:
\begin{itemize}
\item Correctness of $\mathsf{Update}$:
$$\Pr \left[\mathsf{b_{Tx} \neq (b_{deposit} - b_{update})}\right] \leq \mathsf{negl(\lambda)}.$$
\item Correctness of $\mathsf{Seal}$:
$$\Pr[\mathsf{Dec^{TEE}(key_{seal},Enc^{TEE}(key_{seal}, b_{init})) \neq b_{init}}] \leq \mathsf{negl(\lambda)}.$$
\item Correctness of $\mathsf{Delegation}$:
\begin{align*}
\Pr[\mathsf{TranDec^{TEE}(r, TranEnc^{TEE}}
(\mathsf{r}, \mathsf{Tx})) \neq \mathsf{Tx}] \leq \mathsf{negl(\lambda).}\\
\end{align*}
\end{itemize}
\subsection{Oracles for Security Definitions}
\label{oracles}
We now define oracles to simulate an honest owner and delegatee for further security definitions and proofs. Each oracle maintains a series of (initially empty) sets $\mathcal{R}_1$, $\mathcal{R}_2$ and $\mathcal{C}$ which will be used later. Here, we use $\mathsf{(instruction; parameter)}$ to denote both the instructions and inputs of oracles.
\smallskip
\noindent\textbf{Honest Owner Oracle $\mathsf{O}^{\mathsf{owner}}:$} This oracle gives the adversary access to honest owners. An adversary $\mathcal{A}$ can obtain newly delegated transactions or sealed storage with customized inputs. The oracle provides the following interfaces.
\begin{itemize}
\item[-] On input $( \mathsf{signature\; creation}; \mathsf{addr})$, the oracle checks whether a tuple $(\mathsf{addr},\mathsf{\sigma_{Tx}}) \in \mathcal{R}_1$ exists, where $\mathsf{addr}$ is an input of transactions. If successful, the oracle returns $\mathsf{\sigma_{Tx}}$ to $\mathcal{A}$; otherwise, it computes $\mathsf{\sigma_{Tx}} \gets \mathsf{TranSign^{TEE}(\mathsf{sk_{Tx}}, \mathsf{addr},b_{Tx})}$ and adds $(\mathsf{addr},\mathsf{\sigma_{Tx}})$ to $\mathcal{R}_1$, and then returns $\mathsf{\sigma_{Tx}}$ to $\mathcal{A}$.
\item[-] On input $(\mathsf{quote\; generation} ;\mathsf{vk_{sign}})$, the oracle checks if a tuple $(\mathsf{vk_{sign}},\mathsf{quote}) \in \mathcal{R}_2$ exists. If successful, the oracle returns $\mathsf{quote}$ to $\mathcal{A}$. Otherwise, it computes $\mathsf{quote} \gets \mathsf{QuoGen^{TEE}(sk_{\mathcal{O}}, vk_{sign}, pms})$ and adds $(\mathsf{vk_{sign}},\mathsf{quote})$ to $\mathcal{R}_2$, and then returns $\mathsf{quote}$ to $\mathcal{A}$.
\end{itemize}
\noindent\textbf{Honest Delegatee Oracle $\mathsf{O}^{\mathsf{delegatee}}:$} This oracle gives the adversary access to honest delegatees. The oracle provides the following interfaces.
\begin{itemize}
\item[-] On input $(\mathsf{key \;provision} ;\mathsf{quote})$, the oracle checks whether a tuple $(\mathsf{quote},\mathsf{ct_{r}}) \in \mathcal{C}$ exists. If successful, the oracle returns $\mathsf{ct_{r}}$ to $\mathcal{A}$; otherwise, it computes \\ $\mathsf{ ct_{r} \gets Provision^{TEE}(quote,sk_{sign}, \mathsf{pk_{\mathcal{O}}}, pms)}$, adds $(\mathsf{quote},\mathsf{ct_{r}})$ to $\mathcal{C}$, and then returns $(\mathsf{quote},\mathsf{ct_{r}})$ to $\mathcal{A}$.
\end{itemize}
\begin{figure}[htb!]
\centering
\caption{Oracles Interaction Diagram}
\begin{bbrenv}{A}
\begin{bbrbox}[name=Real experiment]
\pseudocode{
\text{DelegaCoin protocol}
}
\begin{bbrenv}{B}
\begin{bbrbox}[name=Adversary $\mathcal{A}$,minheight=3cm,xshift=3cm]
\end{bbrbox}
\end{bbrenv}
\end{bbrbox}
\bbrinput{input}
\bbroutput{output}
\begin{bbroracle}{OraA}
\begin{bbrbox}[name=Oracle $\mathsf{O}^{\mathsf{owner}}$,minheight=1.2cm,minwidth=2.3cm]
\end{bbrbox}
\end{bbroracle}
\bbroracleqryto{bottom=$query$}
\bbroracleqryfrom{bottom=$reply$}
\begin{bbroracle}{OraB}
\begin{bbrbox}[name=Oracle $\mathsf{O}^{\mathsf{delegatee}}$,minheight=1.7cm,minwidth=2.3cm]
\end{bbrbox}
\end{bbroracle}
\bbroracleqryto{bottom=$query$}
\bbroracleqryfrom{bottom=$reply$}
\begin{bbrbox}[name= $\mathsf{HW}$ Oracle ,minheight=4cm,xshift=11.3cm,minwidth=1.7cm]
\end{bbrbox}
\bbrmsgto{}
\bbrmsgfrom{}
\end{bbrenv}
\end{figure}
\noindent\textbf{HW Oracle:} This oracle gives the adversary the access to honest hardware.
The oracle provides the interfaces defined in Definition~\ref{TEEmode}. Note that, to ensure that anything $\mathcal{A}$ sees in the real world can be simulated in the ideal experiment, we require the adversary to access the \textbf{$\mathsf{HW}$ Oracle} through $\mathsf{O}^{\mathsf{delegatee}}$ and $\mathsf{O}^{\mathsf{owner}}$ rather than interacting with it directly.
\subsection{Threat Model and Assumptions}
As for the involved entities, we assume that $\mathcal{O}$ attempts to delegate some coins to the delegatee. Each party may potentially be malicious. $\mathcal{O}$ may maliciously delegate an exceptional transaction, such as sending the same transaction to multiple delegatees or spending the delegated coins before $\mathcal{D}$ spends them. $\mathcal{D}$ may also attempt to assemble an invalid transaction or double-spend the delegated coins. We also assume that the blockchain $\mathcal{B}$ is robust and publicly accessible.
With regard to devices, we assume that TEEs are secure, which means that an adversary cannot access the enclave runtime memory and their hardware-related keys (\textit{e.g.,} sealing key or attestation key). In contrast, we do not assume the components outside TEEs are trusted. For example, the adversary may control the operating system or high-level privileged software.
\subsection{Security Goals}
DelegaCoin aims to employ TEEs to provide a secure delegatable cryptocurrency system. In brief, TEEs prevent malicious delegation in three aspects: (1) The private key of a delegated transaction and the delegated transaction itself are protected against the public. If an adversary learns any knowledge about the private key or the delegated transaction, she may spend the coin before the delegatee uses it; (2) The delegation executions are correctly executed. In particular, the spendable amount of delegated coins must be less than (or equal to) original coins; (3) The delegation records are securely stored to guarantee consistency considering accidental TEEs failures or malicious TEEs compromises. DelegaCoin is secure if adversaries cannot learn any knowledge about the private key, the delegated transaction, and the sealed storage.
To capture these security properties, we formalize our system through a game inspired by \cite{bernhard2015sok}. In our game, a PPT adversary attempts to distinguish between a real world and a simulated (ideal) world. In the real world, the DelegaCoin algorithms work as defined in the construction. The adversary is allowed to access the transaction-related secret messages created by honest users through the oracles defined in Section~\ref{oracles}. The ideal world, by construction, does not leak any useful information to the adversary. Since we model the additional information explicitly to respond to the adversary, we construct a polynomial-time simulator $\mathcal{S}$ that can \textit{fake} the additional information corresponding to the real result, but with respect to fake TEEs. Thus, a universal oracle $\mathcal{U}(\cdot)$ is introduced in the ideal world to simulate the answers of the oracles that $\adv$ calls in the real world. We give a formal model as follows, in which the two experiments begin with the same setup assumptions.
\begin{defi}[Security]
DelegaCoin is simulation-secure if for all PPT adversaries $\mathcal{A}$, there exists a stateful PPT simulator $\mathcal{S}$ and a negligible function $ \mathsf{negl(\lambda)}$ such that the probability that $\mathcal{A}$ distinguishes between $\mathsf{Exp_{\adv,{DelegaCoin}}^{real}(\lambda)}$ and $\mathsf{Exp_{\adv,{DelegaCoin}}^{ideal}(\lambda)}$ is negligible, i.e.,
\begin{eqnarray}\nonumber
\left|\mathsf{Pr[Exp_{\adv,{DelegaCoin}}^{real}(\lambda)} = 1 ] - \mathsf{Pr[Exp_{\adv,{DelegaCoin}}^{ideal}(\lambda)} = 1 ] \right| \leq \mathsf{negl(\lambda)}.
\end{eqnarray}
\end{defi}
\begin{figure}[htb]
\begin{pchstack}[center]
\resizebox{1.1\linewidth}{!}{
\fbox{
\begin{pcvstack}
\procedure{$\mathsf{Exp_{\adv,{DelegaCoin}}^{real}(\lambda)}$}{%
\pcln \mathsf{pms} \gets \mathsf{ParamGen(1^\lambda)} \\
\pcln \mathsf{hdl_{\mathcal{O}}}, \mathsf{hdl_{\mathcal{D}}} \gets \mathsf{EncvInit(1^\lambda,pms)} \\
\pcln \mathsf{(pk_{Tx}, sk_{Tx}), (\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}), key_{seal}} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}},1^\lambda) \\
\pcln \mathsf{(pk_{\mathcal{D}},sk_{\mathcal{D}})}, (\mathsf{vk_{sign}}, \mathsf{sk_{sign}}), \mathsf{r} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{D}}},1^\lambda) \\
\pcln \mathsf{ quote \gets \mathcal{A}(\mathsf{hdl_{\mathcal{O}}}, vk_{sign}, pms)} \\
\pcln \mathsf{ ct_{r} \gets \adv^{Provision^{TEE}(sk_{sign})}(\mathsf{hdl_{\mathcal{D}}}, quote, \mathsf{pk_{\mathcal{O}}}, pms)} \\
\pcln \mathsf{ r \gets \adv^{Extract^{TEE}(sk_{\mathcal{O}})}(\mathsf{hdl_{\mathcal{O}}}, ct_{r})}
\pclb
\pcintertext[dotted]{Setup Completed}
\pcln \mathsf{b_{init} = Dec^{TEE}(\mathsf{hdl_{\mathcal{O}}},key_{seal}, c_{init})}\\
\pcln \mathsf{addr} \gets \mathsf{AddrGen^{TEE}}(1^\lambda,\mathsf{pk_{Tx}})\\
\pcln \mathsf{ b_{deposit} = Update^{B}(\mathsf{addr},b_{init})}\\
\pcln \mathsf{b_{update} \gets Update^{TEE}(\mathsf{hdl_{\mathcal{O}}},b_{deposit},b_{Tx})} \\
\pcln \mathsf{\sigma_{Tx}} \gets \adv^{\mathsf{TranSign^{TEE}}(\mathsf{sk_{Tx}})}(\mathsf{hdl_{\mathcal{O}}}, \mathsf{addr,b_{Tx}}) \\
\pcln \mathsf{Tx} \gets \mathsf{TranGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}},\mathsf{addr,b_{Tx},\mathsf{\sigma_{Tx}}} )\\
\pcln \mathsf{ct_{tx}} \gets \adv^{\mathsf{TranEnc^{TEE}(r)}}(\mathsf{hdl_{\mathcal{O}}}, \mathsf{Tx}) \\
\pcln \mathsf{c_{update} = \adv^{Enc^{TEE}(key_{seal})}(\mathsf{hdl_{\mathcal{O}}},b_{update})}
\pclb
\pcintertext[dotted]{Delegation Completed}
\pcln \mathsf{Tx} \gets \mathsf{TranDec^{TEE}}(\mathsf{hdl_{\mathcal{D}}},\mathsf{r},\mathsf{ct_{tx}}) \\
\pcln \pcreturn (\mathsf{Tx},\mathsf{c_{update}})}
\end{pcvstack}
\pchspace
\procedure{$\mathsf{Exp_{\adv,{DelegaCoin}}^{ideal}(\lambda)}$}{%
\pcln \mathsf{pms} \gets \mathsf{ParamGen(1^\lambda)} \\
\pcln \mathsf{hdl_{\mathcal{O}}^\star}, \mathsf{hdl_{\mathcal{D}}^\star} \gets \mathsf{\mathcal{S}(1^\lambda,pms)} \\
\pcln \mathsf{(pk_{Tx},sk_{Tx}), (\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}), key_{seal}} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{O}}^{\star}},1^\lambda) \\
\pcln \mathsf{(pk_{\mathcal{D}},sk_{\mathcal{D}})},(\mathsf{vk_{sign}}, \mathsf{sk_{sign}}), \mathsf{r} \gets \mathsf{KeyGen^{TEE}}(\mathsf{hdl_{\mathcal{D}}^{\star}},1^\lambda) \\
\pcln \mathsf{quote \gets \mathcal{A}(\mathsf{hdl_{\mathcal{O}}^{\star}}, vk_{sign}, pms)} \\
\pcln \mathsf{ ct_{r} \gets \adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{D}}^{\star}}, quote, \mathsf{pk_{\mathcal{O}}}, pms)} \\
\pcln \mathsf{ r \gets \adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^{\star}}, ct_{r})}
\pclb
\pcintertext[dotted]{Setup Completed}
\pcln \mathsf{b_{init}} \gets \mathsf{\mathcal{S}(\mathsf{hdl_{\mathcal{O}}^\star},key_{seal}, c_{init})} \\
\pcln \mathsf{addr} \gets \mathcal{S}(1^\lambda,\mathsf{pk_{Tx}})\\
\pcln \mathsf{b_{deposit} = \mathcal{S}(\mathsf{addr},b_{init})}\\
\pcln \mathsf{b_{update} \gets \mathcal{S}(\mathsf{hdl_{\mathcal{O}}^\star},b_{deposit},1^{|b_{Tx}|})} \\
\pcln \mathsf{\sigma_{Tx}} \gets \mathsf{\adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^\star}, \mathsf{addr},b_{Tx})} \\
\pcln \mathsf{Tx} \gets \mathcal{S}(\mathsf{hdl_{\mathcal{O}}^\star},\mathsf{addr}, \mathsf{1^{|b_{Tx}|}}, \mathsf{\sigma_{Tx}}) \\
\pcln \mathsf{ct_{tx}} \gets \mathsf{\adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^\star}, 1^{|Tx|})} \\
\pcln \mathsf{c_{update} = \adv^{\mathcal{S}^{\mathcal{U}(\cdot)}}(\mathsf{hdl_{\mathcal{O}}^\star}, 1^{|b_{update}|})}
\pclb
\pcintertext[dotted]{Delegation Completed}
\pcln \mathsf{Tx} \gets \mathcal{S}(\mathsf{hdl_{\mathcal{D}}^\star},\mathsf{r},\mathsf{ct_{tx}}) \\
\pcln \pcreturn (\mathsf{Tx},\mathsf{c_{update}})
}}
}
\end{pchstack}
\end{figure}
\section{Formal Protocols}
\label{sec-formal}
In this section, we present a formal model of our electronic cash system by utilizing the syntax of the $\mathsf{HW}$ model. In particular, we model the interactions with Intel SGX enclaves as calls to the $\mathsf{HW}$ functionality defined in Definition~\ref{TEEmode}. The formal protocols are provided as follows.
\smallskip
The owner enclave program $\mathsf{P_{\mathcal{O}}}$ is defined as follows. The value $\mathsf{tag_{P}}$ is a measurement of the program $\mathsf{P_{\mathcal{O}}}$, and it is hardcoded in the static data of $\mathsf{P_{\mathcal{O}}}$. Let $\mathsf{state}_{\mathcal{O}}$ denote an internal state variable.
$\mathsf{P_{\mathcal{O}}}$:
\begin{itemize}
\item On input (``init setup'', $\mathsf{sid, vk_{sign}}$\footnote{We assume that the combination ($\mathsf{sid}$,$\mathsf{vk_{sign}}$), represented as the identity of a delegatee, has already been distributed before the system setup. }):
\begin{itemize}
\item[-] Run $(\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}) \gets \mathsf{\mathsf{PKE}.KeyGen}(1^\lambda)$ and $\mathsf{key_{seal}}\footnote{Multiple enclaves from the same signing authority can derive the same key, since seal key is based on the enclave’s certificate-based identity.} \gets \mathsf{\mathsf{SE}.KeyGen}(1^\lambda)$.
\item[-] Update $\mathsf{state}$ to $(\mathsf{sk_{\mathcal{O}}, vk_{sign}})$ and output $(\mathsf{pk_{\mathcal{O}}, sid, vk_{sign}})$.
\end{itemize}
\item On input (``complete setup'', $\mathsf{sid, ct_{r}, \sigma_{r}})$:
\begin{itemize}
\item[-] Look up the $\mathsf{state}_{\mathcal{O}}$ to obtain the entry $\mathsf{(sk_{\mathcal{O}}, sid, vk_{sign})}$. If no entry
exists for $\mathsf{sid}$, output $\bot$.
\item[-] Receive the $(\mathsf{sid, vk_{sign}})$ from $\mathcal{O}$ and check if $\mathsf{vk_{sign}}$ matches with the one in $\mathsf{state}_{\mathcal{O}}$. If not, output $\bot$.
\item[-] Verify signature $\mathsf{b \gets \mathsf{\mathsf{S}.Verify}(vk_{sign}, \sigma_{r}, (sid, ct_{r} ))}$.
If $\mathsf{b}$ = 0, output $\bot$.
\item[-] Run $\mathsf{r \gets \mathsf{\mathsf{PKE}.dec}(sk_{\mathcal{O}}, ct_{r})}$.
\item[-] Add the tuple $(\mathsf{r, sid, vk_{sign}})$ to $\mathsf{state}_{\mathcal{O}}$.
\end{itemize}
\item On input (``state retrieval'', $\mathsf{sid}$):
\begin{itemize}
\item[-] Retrieve identity-balance pair ($\mathsf{sid, c_{init}}$) from the sealed storage.
\item[-] Run $\mathsf{b_{init} = \mathsf{SE}.Dec(key_{seal},c_{init})}$ and update $\mathsf{state}_{\mathcal{O}}$ to $(\mathsf{sid, b_{init}})$
\end{itemize}
\item On input (``address generation'', $1^\lambda$):
\begin{itemize}
\item[-] Run $(\mathsf{sk_{Tx}}, \mathsf{pk_{Tx}}) \gets \mathsf{\mathsf{S}.KeyGen}(1^\lambda)$ and $\mathsf{addr} \gets \mathsf{AddrGen^{TEE}}(1^\lambda,\mathsf{pk_{Tx}})$.
\item[-] Update $(\mathsf{sk_{Tx}, addr})$ to $\mathsf{state}_{\mathcal{O}}$ and output $(\mathsf{pk_{Tx}}, \mathsf{addr})$.
\end{itemize}
\item On input (``transaction generation'', $\mathsf{addr}$ ):
\begin{itemize}
\item[-] Retrieve the private key $\mathsf{sk_{Tx}}$.
\item[-] Run $\mathsf{\sigma_{Tx}} \gets \mathsf{\mathsf{S}.Sign(sk_{Tx},\mathsf{addr}, b_{Tx})}$ and output a signature $\mathsf{\sigma_{Tx}}$.
\item[-] Run $\mathsf{Tx} \gets \mathsf{TranGen(\mathsf{addr}, b_{Tx},\mathsf{\sigma_{Tx}})}$ and update $(\mathsf{sid, Tx})$ to $\mathsf{state}_{\mathcal{O}}$.
\end{itemize}
\item On input (``state update'', $\mathsf{addr}$):
\begin{itemize}
\item[-] Check $\mathsf{b_{deposit}}$ and $\mathsf{b_{Tx}}$. If $\mathsf{b_{deposit} < b_{Tx}}$, output $\bot$.
\item[-] Run $\mathsf{b_{update} \gets Update(b_{deposit},b_{Tx})}$.
\end{itemize}
\item On input (``start delegation'', $\mathsf{addr}$):
\begin{itemize}
\item[-] Retrieve the provision private key $\mathsf{r}$ and $\mathsf{Tx}$ from $\mathsf{state}_{\mathcal{O}}$.
\item[-] Run $\mathsf{\mathsf{ct_{tx}} \gets \mathsf{SE}.\mathsf{Enc(r,Tx)}}$.
\end{itemize}
\item On input (``state seal'', $\mathsf{addr}$):
\begin{itemize}
\item[-] Run $\mathsf{\mathsf{c_{update}} = \mathsf{SE}.Enc(key_{seal},\mathsf{b_{update}})}$ and update $\mathsf{state}_{\mathcal{O}}$ to $(\mathsf{addr, b_{update}})$.
\item[-] Store $\mathsf{addr}$ and $\mathsf{c_{update}}$ to sealed storage.
\end{itemize}
\end{itemize}
\smallskip
The delegatee enclave program $\mathsf{P_{\mathcal{D}}}$ is defined as follows. The value $\mathsf{tag_{\mathcal{D}}}$ is the measurement of the program $\mathsf{P_{\mathcal{D}}}$, and it is hardcoded in the static data of $\mathsf{P_{\mathcal{D}}}$. Let $\mathsf{state}_\mathcal{D}$ denote an internal state variable. Also, the security parameter $\lambda$ is hardcoded into the program.
$\mathsf{P_{\mathcal{D}}}$:
\begin{itemize}
\item On input (``init setup'', $1^\lambda$):
\begin{itemize}
\item[-] Generate a session ID, $\mathsf{sid \gets \{0,1\}^{\lambda}}$.
\item[-] Run $(\mathsf{pk_{\mathcal{D}}}, \mathsf{sk_{\mathcal{D}}}) \gets \mathsf{\mathsf{PKE}.KeyGen}(1^\lambda)$, and $(\mathsf{vk_{sign}}, \mathsf{sk_{sign}}) \gets \mathsf{\mathsf{S}.KeyGen}(1^\lambda)$.
\item[-] Update $\mathsf{state}_\mathcal{D}$ to $(\mathsf{sk_{\mathcal{D}},sk_{sign}})$ and output $(\mathsf{sid, pk_{\mathcal{D}},vk_{sign}})$.
\end{itemize}
\item On input (``provision'', $\mathsf{quote}, \mathsf{pk_{\mathcal{O}}}, \mathsf{pms}$):
\begin{itemize}
\item[-] Parse $\mathsf{quote =(hdl_{\mathcal{O}}, tag_P, in, out, \sigma)}$, check that $\mathsf{tag_{P}== tag_{\mathcal{O}}}$. If not, output $\bot$.
\item[-] Parse $\mathsf{out = (sid, pk_{\mathcal{O}})}$, and run $\mathsf{b \gets HW.QuoteVerify(pms,quote)}$ on $\mathsf{quote}$. If $\mathsf{b} = 0$, output $\bot$.
\item[-] Select a random number $\mathsf{r}$ and compute the algorithm
$\mathsf{ct_{r} = \mathsf{PKE}.Enc(pk_{\mathcal{O}},r)}$ and $\mathsf{\sigma_{r} = \mathsf{S}.Sign(sk_{sign}, (sid, ct_{r}))}$ and output $\mathsf{(sid, ct_{r}, \sigma_{r})}$.
\end{itemize}
\item On input (``complete delegation'', $\mathsf{ct_{tx}}$):
\begin{itemize}
\item[-] Retrieve $\mathsf{r}$ from $\mathsf{state}_{\mathcal{D}}$.
\item[-] Run $\mathsf{\mathsf{Tx} \gets \mathsf{SE}.\mathsf{Dec(r,ct_{tx})}}$.
\end{itemize}
\end{itemize}
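The provision step of $\mathsf{P_{\mathcal{D}}}$ can be pictured with the following hedged Python sketch; RSA-OAEP stands in for $\mathsf{PKE}$ and Ed25519 for $\mathsf{S}$ (both are assumptions for illustration), and the quote check is abstracted into a boolean since the earlier sketch of $\mathsf{HW}$ covers it.
\begin{verbatim}
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

def provision(quote_ok, sid, pk_owner, sk_sign):
    if not quote_ok:        # tag check or HW.QuoteVerify failed
        return None
    r = os.urandom(32)      # fresh symmetric provision key
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ct_r = pk_owner.encrypt(r, oaep)        # PKE.Enc(pk_O, r)
    sigma_r = sk_sign.sign(sid + ct_r)      # S.Sign(sk_sign, (sid, ct_r))
    return sid, ct_r, sigma_r

sk_owner = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sk_sign = ed25519.Ed25519PrivateKey.generate()
sid = os.urandom(16)
out = provision(True, sid, sk_owner.public_key(), sk_sign)
\end{verbatim}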
\smallskip
\noindent\hangindent 2em $\mathbf{Setup}.$ The following steps are based on the completed initialization of the programs of the delegator $\mathsf{P_{\mathcal{O}}}$ and delegatee $\mathsf{P_{\mathcal{D}}}$. The delegatee $\mathcal{D}$ runs $\mathsf{hdl_{\mathcal{D}}} \gets \mathsf{HW.Load(pms,P_{\mathcal{D}})}$ and $\mathsf{(\mathsf{vk_{sign}}, pk_{\mathcal{D}}) \gets HW.Run(hdl_{\mathcal{D}}, (\text{``init setup''}, 1^\lambda))}$. Then, $\mathcal{D}$ sends $\mathsf{vk_{sign}}$ to the delegator $\mathcal{O}$. Next, $\mathcal{O}$ runs $\mathsf{hdl}_{\mathcal{O}} \gets \mathsf{HW.Load(pms,P_{\mathcal{O}})}$ to load the handle. Meanwhile, $\mathcal{O}$
calls $\mathsf{quote \gets HW.RunQuote(hdl_{\mathcal{O}}, (\text{``init setup''}, \mathsf{sid, vk_{sign}}))}$, and sends the $\mathsf{quote}$ to $\mathcal{D}$. After that, $\mathcal{D}$ calls $\mathsf{(sid, ct_{r}, \sigma_{r})} \gets \mathsf{HW.Run(hdl_{\mathcal{D}}, (\text{``provision''}, \mathsf{quote,pk_{\mathcal{O}}, pms}))}$, and sends $\mathsf{(sid, ct_{r}, \sigma_{r})}$ to $\mathcal{O}$. Last, $\mathcal{O}$
calls $\mathsf{HW.Run(hdl_{\mathcal{O}}, (\text{``complete setup''}, \mathsf{sid, ct_{r}, \sigma_{r}}))}$. At the end of the setup, $\mathcal{O}$'s enclave \textit{E}$_{\mathcal{O}}$ holds the private key $\mathsf{r}$ used for transaction delegation.
\smallskip
\noindent\hangindent 2em $\mathbf{Deposit}$. $\mathcal{O}$
calls $\mathsf{c_{init} \gets HW.Run(hdl_{\mathcal{O}}, (\text{``state retrieval''}, sid))}$. If $\mathsf{c_{init}}$ does not exist or equals $0$, $\mathcal{O}$ calls
$\mathsf{addr \gets HW.Run(hdl_{\mathcal{O}}, (\text{``address generation''},1^\lambda))}$ to create a new address $\mathsf{addr}$. Then, $\mathcal{O}$ transfers some coins to $\mathsf{addr}$ through a normal blockchain transaction.
\smallskip
\noindent\hangindent 2em $\mathbf{Delegation}$. $\mathcal{O}$ firstly parses $\mathsf{hdl_{\mathcal{O}}}$ and
calls \textit{E}$_{\mathcal{O}}$. Then, \textit{E}$_{\mathcal{O}}$ retrieves the $\mathsf{addr}$. Afterwards, it calls $\mathsf{b_{update} \gets HW.Run(hdl_{\mathcal{O}}, (\text{``state update''},addr))}$. If the update algorithm returns false or failure, \textit{E}$_{\mathcal{O}}$ aborts the following operations. Otherwise, it looks up the state to obtain $\mathsf{sk_{Tx}}$, runs $\mathsf{Tx \gets HW.Run(hdl_{\mathcal{O}}, (\text{``transaction generation''}, addr ))}$ and outputs a transaction $\mathsf{Tx}$. After that, the delegator's enclave \textit{E}$_{\mathcal{O}}$ retrieves $\mathsf{r}$ and runs $\mathsf{ct_{tx} \gets HW.Run(hdl_{\mathcal{O}}, (\text{``start delegation''},addr))}$. Finally, ${\mathcal{O}}$ sends $\mathsf{ct_{tx}}$ to $\mathcal{D}$.
\smallskip
\noindent\hangindent 2em $\mathbf{Spend}$. $\mathcal{D}$ parses $\mathsf{hdl_{\mathcal{D}}}$ and runs $\mathsf{Tx \gets HW.Run(hdl_{\mathcal{D}}, (\text{``complete delegation''},\mathsf{ct_{tx}}))}$. After that, $\mathcal{D}$ spends the received transaction $\mathsf{Tx}$ by forwarding it to the blockchain network. Then, a blockchain node firstly parses
$\mathsf{Tx = (addr,pk_{Tx},metadata,\sigma_{Tx})}$ and runs $\mathsf{b} \gets \mathsf{\mathsf{S}.Verify^{B}(pk_{Tx},\sigma_{Tx})}$. If $\mathsf{b} = 0$, output $\bot$. Otherwise, the node broadcasts $\mathsf{Tx}$ to other blockchain nodes.
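The node-side check is a plain signature verification; a minimal Python sketch follows, with Ed25519 standing in for the secp256k1 ECDSA used on the actual chain.
\begin{verbatim}
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def node_verify(pk_tx, tx_body, sigma_tx):
    try:
        pk_tx.verify(sigma_tx, tx_body)   # b = 1: broadcast Tx
        return 1
    except InvalidSignature:
        return 0                          # b = 0: reject

sk_tx = ed25519.Ed25519PrivateKey.generate()
body = b"addr|pk_Tx|metadata"
assert node_verify(sk_tx.public_key(), body, sk_tx.sign(body)) == 1
\end{verbatim}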
\section{Security Analysis}
\label{sec-seurity}
\begin{thm}[Security]\label{prf-consistency-tee}
Assume that $\mathsf{\mathsf{SE}}$ is IND-CPA secure,
$\mathsf{PKE}$ is IND-CCA2 secure, $\mathsf{S}$ holds the EUF-CMA security, and the TEEs are secure as in Definition~\ref{TEEmode}, DelegaCoin scheme is simulation-secure.
\end{thm}
Inspired by~\cite{lindell2017simulate, fisch2017iron}, we use a simulation-based paradigm to conduct security analysis, and explain the crux of our security proof as follows. We firstly construct a simulator $\mathcal{S}$ which can simulate the challenge responses in the real world. It provides the adversary $\adv$ with a simulated delegated transaction, a simulated quote and sealed storage. The information that $\adv$ can obtain is merely the instruction code and oracle responses queried by $\adv$ in the real experiment. At a high level, the proof idea is simple: $\mathcal{S}$ encrypts zeros as the challenge message. In the ideal experiment, $\mathcal{S}$ intercepts $\adv$'s queries to user oracle and provides simulated responses. It uses its $\mathcal{U(\cdot)}$ oracle to simulate oracles in the real world and sends the response back to $\adv$ as the simulated oracle output. $\mathcal{U(\cdot)}$ and $\mathcal{S}$'s algorithms are described as follows.
\smallskip
\noindent\textbf{Pre-processing phase.} $\mathcal{S}$ simulates the pre-processing phase similar to in the real world. It firstly runs $\mathsf{ParamGen(1^\lambda)}$ and records system parameters $\mathsf{pms}$ that are generated during the process. Then, it calls $\mathsf{EncvInit(1^\lambda,pms)}$ to create the simulated enclave instances.
$\mathcal{S}$ also creates empty lists $\mathcal{R}_1^\star$, $\mathcal{R}_2^\star$, $\mathcal{C}^\star$, $\mathcal{K}^\star$ and $\mathcal{L}^\star$ to be used later.
\smallskip
\noindent\hangindent 2em \smallskip
\noindent$\mathbf{KeyGen^{\star}(1^\lambda)}$ When $\mathcal{A}$ makes a query to the $\mathbf{KeyGen(1^\lambda)}$ oracle, $\mathcal{S}$ responds the same way as in the real world, except that it now stores all the queried public keys in a list $\mathcal{K}^{\star}$. That is, $\mathcal{S}$ does the following algorithms.
\begin{itemize}
\item[-] Compute and output $(\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}),(\mathsf{pk_{Tx}}, \mathsf{sk_{Tx}}) \gets \mathsf{\mathsf{PKE}.KeyGen}(1^\lambda)$.
\item[-] Store the keys $(\mathsf{pk_{\mathcal{O}}}, \mathsf{sk_{\mathcal{O}}}),(\mathsf{pk_{Tx}}, \mathsf{sk_{Tx}})$ in the list $\mathcal{K}^{\star}$.
\end{itemize}
\smallskip
\noindent\hangindent 2em \smallskip
\noindent$\mathbf{Enc^{\star}(key^\star, 1^{|{msg}^\star|})}$\footnote{Here, $\mathbf{msg^{\star}}$ is a wildcard character, representing any messages.} When $\mathcal{A}$ provides the challenge message $\mathsf{{msg}^\star}$ for symmetric encryption, the following algorithm is used by $\mathcal{S}$ to simulate the challenge ciphertext.
\begin{itemize}
\item[-] Compute and output $\mathsf{ct^\star \gets \mathsf{SE}.Enc(key^\star, 1^{|{msg}^\star|})}$.
\item[-] Store $\mathsf{ct}^\star$ in the list $\mathcal{L}^{\star}$.
\end{itemize}
\smallskip
\noindent\hangindent 2em $\mathbf{\mathsf{O}^{owner\star}(\mathsf{signature\; creation;addr)}}.$ When $\mathcal{A}$ makes a query to the $\mathbf{\mathsf{O}^{owner}}$ oracle, $\mathcal{S}$ responds the same way as in the real world, except that $\mathcal{S}$ now stores all the $\mathsf{addr}$ corresponding to the user's queries in a list $\mathcal{R}_1^\star$. That is, $\mathcal{S}$ does the following algorithms.
\begin{itemize}
\item[-] Call $\mathbf{\mathsf{O}^{owner}}$ oracle with an input $( \mathsf{signature\; creation}; \mathsf{addr})$ and output $\mathsf{\sigma_{Tx}}$.
\item[-] Store $(\mathsf{addr, \sigma_{Tx}})$ in the list $\mathcal{R}_1^\star$.
\end{itemize}
\smallskip
\noindent\hangindent 2em $\mathbf{\mathsf{O}^{owner\star}(\mathsf{quote\; generation;vk_{sign})}}.$ When $\mathcal{A}$ makes a query to the $\mathbf{\mathsf{O}^{owner}}$ oracle, $\mathcal{S}$ responds the same way as in the real world, except that $\mathcal{S}$ now stores all the $\mathsf{quote}$ corresponding to the user's queries in a list $\mathcal{R}_2^\star$. That is, $\mathcal{S}$ does the following algorithms.
\begin{itemize}
\item[-] Call the $\mathbf{\mathsf{O}^{owner}}$ oracle with an input $(\mathsf{quote\; generation; vk_{sign}})$ and output $\mathsf{quote}$.
\item[-] Store $(\mathsf{vk_{sign}, quote})$ in the list $\mathcal{R}_2^\star$.
\end{itemize}
\smallskip
\noindent\hangindent 2em $\mathbf{\mathsf{O}^{delegatee\star}(\mathsf{key \;provision} ;\mathsf{quote})}.$ When $\mathcal{A}$ makes a query to the $\mathbf{\mathsf{O}^{delegatee}}$ oracle, $\mathcal{S}$ responds the same way as in the real world, except that $\mathcal{S}$ now stores all the $\mathsf{quote}$ corresponding to the user's queries in a list $\mathcal{C}^\star$. That is, $\mathcal{S}$ does the following algorithm.
\begin{itemize}
\item[-] Call $\mathbf{\mathsf{O}^{delegatee}}$ oracle with an input $(\mathsf{key \;provision} ;\mathsf{quote})$ and output $\mathsf{ct_{r}}$.
\item[-] Store $\mathsf{(quote,ct_{r})}$ in the list $\mathcal{C}^\star$.
\end{itemize}
\smallskip
For the PPT simulator $\mathcal{S}$, we prove the security by showing that the view of an adversary $\mathcal{A}$ in the real world is computationally indistinguishable from its view in the ideal world. Specifically, we establish a series of \textbf{Hybrids} that $\mathcal{A}$ cannot be distinguished with a non-negligible advantage as follows.
\medskip
\noindent\textbf{Hybrid 0.} $\mathsf{Exp^{real}_{DelegaCoin}(1^\lambda)}$ runs.
\smallskip
\noindent\textbf{Hybrid 1.} As in \textit{Hybrid 0}, except that $\mathbf{KeyGen^{\star}(1^\lambda)}$ run by $\mathcal{S}$ is used to generate secret keys instead of
$\mathbf{KeyGen(1^\lambda)}$.
\begin{prf}
The proof is straightforward: storing the corresponding answers in lists does not affect the view of $\mathcal{A}$. Thus,
$\textit{Hybrid 1}$ is indistinguishable from $\textit{Hybrid 0}$.
\qed \end{prf}
\smallskip
\noindent\textbf{Hybrid 2.} As in \textit{Hybrid 1}, except that $\mathcal{S}$ maintains a list $\mathcal{R}_2^{\star}$ of all $\mathsf{quote =(hdl,tag_P,in,out,\sigma)}$ output by $\mathsf{HW.RunQuote(hdl_{\mathcal{O}},in)}$. And, when $\mathsf{HW.QuoteVerify(pms,quote)}$ is called, $\mathcal{S}$ outputs $\bot$ if $\mathsf{quote \notin \mathcal{R}_2}$ ($\mathcal{R}_2$ is the set of quotes returned by the real-world oracles that $\adv$ has queried, as defined in Section~\ref{oracles}).
\begin{prf} If a fake quote is produced, then the step $\mathsf{HW.QuoteVerify(pms,quote)}$ in the real world would output $\bot$. Thus, $\textit{Hybrid 2}$ differs from $\textit{Hybrid 1}$ only when $\mathcal{A}$ can produce a valid $\mathsf{quote}$ without knowing $\mathsf{sk_{\mathcal{O}}}$. Assume that there is an adversary $\mathcal{A}$ that can distinguish between $\textit{Hybrid 2}$ and $\textit{Hybrid 1}$. This ability can be transformed into an attack against the remote attestation unforgeability of Definition~\ref{remoteAttestation}, which contradicts our assumption that the security of remote attestation holds. Therefore, \textit{Hybrid 2} is indistinguishable from \textit{Hybrid 1}. \qed
\end{prf}
\smallskip
\noindent\textbf{Hybrid 3.} As in \textit{Hybrid 2}, except that when the $\mathbf{\mathsf{O}^{delegatee}}$ oracle calls $ \mathsf{HW.Run(hdl_{\mathcal{D}}, }$ $ \mathsf{ (\text{``provision''}, \mathsf{quote,\mathsf{pk_{\mathcal{O}}}, pms}))}$, $\mathcal{S}$ replaces $\mathsf{ct_r}$ as an encryption of zeros $\mathsf{\mathsf{PKE}.Enc(pk_{\mathcal{O}},1^{|r|})}$.
\begin{prf} The IND-CCA2 challenger provides the challenge public key $pk_{\mathcal{O}}$, and the adversary $\adv$ provides two messages $\mathsf{r}$ and $1^{|\mathsf{r}|}$; the challenger then returns an encryption of either $\mathsf{r}$ or $1^{|\mathsf{r}|}$, denoted $\mathsf{ct_{\star}}$.
$\mathcal{S}$ sets $\mathsf{ct_{\star}}$ as the real output $\mathsf{ct_{r}}$. For $\mathsf{ct_r} \in \mathcal{C}$,
$\mathcal{S}$ can use $\mathsf{O}^{\mathsf{delegatee}}$ as in the real world. For $\mathsf{ct_r} \notin \mathcal{C}$, $\mathcal{S}$ has neither the oracle nor $\mathsf{sk}_{\mathcal{O}}$, but the decryption oracle offered by the IND-CCA2 challenger can be used for any $\mathsf{ct_r} \notin \mathcal{C}$. Under this condition, if $\mathcal{A}$ can still distinguish \textit{Hybrid 3} and \textit{Hybrid 2}, we can forward $\mathcal{A}$'s answer to the IND-CCA2 challenger. If $\mathcal{A}$ can
distinguish between these two hybrids with a non-negligible probability, the IND-CCA2 security of $\mathsf{PKE}$ (see Definition~\ref{ccapke}) can
be broken with a non-negligible probability. \qed
\end{prf}
\smallskip
\noindent\textbf{Hybrid 4.} As in \textit{Hybrid 3}, except that $\mathcal{S}$ maintains a list $\mathcal{R}_1^\star$ of all transaction signatures $\mathsf{\sigma_{Tx}}$ output by $\mathbf{\mathsf{O}^{owner}(\mathsf{signature\; creation; addr})}$ for $\mathsf{addr} \in \mathcal{R}_1$. When $\mathsf{b} \gets \mathsf{\mathsf{S}.Verify^{B}(pk_{Tx},\sigma_{Tx})}$ is called, $\mathcal{S}$ outputs $\bot$ if $(\mathsf{addr}, \mathsf{\sigma_{Tx}})$, as components of a $\mathsf{Tx}$, do not belong to $\mathcal{R}_1$, namely $\mathsf{(\mathsf{addr}, \mathsf{\sigma_{Tx}}) \notin \mathcal{R}_1}$.
\begin{prf} If a transaction is given with an invalid signature, then the step $\mathsf{\mathsf{S}.Verify^{B}( pk_{Tx},\sigma_{Tx})}$ in the real world would output $\bot$. Thus, $\textit{Hybrid 4}$ differs from $\textit{Hybrid 3}$ only when $\mathcal{A}$ can produce a valid signature on an $\mathsf{addr}$ that has never appeared before in the communication between $\mathcal{A}$ and the oracles. Let $\mathcal{A}$ be an adversary who can distinguish $\textit{Hybrid 4}$ and $\textit{Hybrid 3}$. We use it to break the EUF-CMA~\cite{goldwasser1988digital} security of the signature scheme $\mathsf{S}$. We get a verification key $\mathsf{pk_{Tx}}$ and access to a $\mathsf{\mathsf{S}.Sign(sk_{Tx},\cdot)}$ oracle from the EUF-CMA challenger. Whenever $\mathcal{S}$ signs a message using $\mathsf{sk_{Tx}}$, it uses the $\mathsf{\mathsf{S}.Sign(sk_{Tx},\cdot)}$ oracle. Also, our construction does not need direct access to $\mathsf{sk_{Tx}}$; it is used only to sign messages via the oracle provided by the challenger. Now, if $\mathcal{A}$ can distinguish the two hybrids, the only reason is that $\mathcal{A}$ generates a valid signature $\mathsf{\sigma_{Tx}}$. Then, we can send such a signature as a forgery to the EUF-CMA challenger. \qed
\end{prf}
\noindent\textbf{Hybrid 5.} As shown in \textit{Hybrid 4}, except that when the $\mathbf{\mathsf{O}^{owner}}$ oracle calls the function $\mathsf{HW.Run(hdl_{\mathcal{O}}, (\text{``start delegation''},addr))}$, $\mathcal{S}$ replaces $\mathsf{Enc}$ with $\mathsf{Enc^{\star}}$.
\begin{lemma}\label{lemma1}
If symmetric encryption scheme $\mathsf{SE}$ is IND-CPA secure, \textit{Hybrid 5} is indistinguishable from \textit{Hybrid 4}.
\end{lemma}
\begin{prf}
Whenever $\mathcal{A}$ provides a transaction $\mathsf{Tx}$ of its choice, $\mathcal{S}$ replies with an encryption of zeros, i.e., $\mathsf{\mathsf{SE}.Enc(r, 1^{|Tx|})}$, which is shown as follows.
\vspace{10ex}
\begin{center}
\begin{gameproof}[nr=3,name=\mathsf{Hybrid },arg=(1^n)]
\gameprocedure{%
\pcln \text{\dots} \\
\pcln \mathsf{\mathsf{ct_{tx}} \gets \mathsf{SE}.\mathsf{Enc(r,Tx)}} \\
\pcln \text{\dots}
}
\gameprocedure{%
\text{\dots} \\
\gamechange{$\mathsf{\mathsf{ct_{tx}} \gets \mathsf{SE}.\mathsf{Enc(r,1^{|Tx|})}}$} \\
\text{\dots}
}
\addgamehop{4}{5}{hint=\footnotesize replace the encryption with zeros, nodestyle=red}
\end{gameproof}
\end{center}
Assume that there is an adversary $\mathcal{A}$ that is able
to distinguish the environments of \textit{Hybrid 5} and \textit{Hybrid 4}. Then, we build an adversary $\mathcal{A}^\star$ against the IND-CPA security of $\mathsf{SE}$. Given a transaction $\mathsf{Tx}$, if $\mathcal{A}$ distinguishes the encryption
of $\mathsf{Tx}$ from the encryption of $1^{\mathsf{|Tx|}}$, we forward the corresponding answer to the IND-CPA challenger. \qed
\end{prf}
\noindent\textbf{Hybrid 6.} As in \textit{Hybrid 5}, except that when $\mathcal{A}$ calls $\mathsf{HW.Run(hdl_{\mathcal{O}}, (\text{``state seal''},addr))}$, $\mathcal{S}$ replaces $\mathsf{Enc}$ with $\mathsf{Enc^{\star}}$.
\begin{prf}
The indistinguishability between $\textit{Hybrid 6}$ and $\textit{Hybrid 5}$ reduces directly to the IND-CPA property of $\mathsf{SE}$, similarly to Lemma~\ref{lemma1}. \qed
\end{prf}
\section{Implementation}
\label{sec-implementation}
We implement a prototype with three types of entities: the owner node, the delegatee node, and the blockchain system. The owner node and the delegatee node run separately on two computers. Both nodes are developed in C++ using the $\text{Intel}^\circledR$ SGX SDK 1.6 under the operating system Ubuntu 20.04.1 LTS. For the blockchain network, we adopt the Bitcoin testnet~\cite{bitcointest} as our prototype platform. Specifically, we employ SHA-256 as the hash algorithm and ECDSA~\cite{johnson2001elliptic} over \textit{secp256k1}~\cite{sec20002} to sign transactions, matching the Bitcoin testnet's configuration.
\smallskip
\noindent\textbf{Functionalities.} We emphasize two main functionalities of our protocol: \textit{isolated transaction generation} and \textit{remote attestation}. The delegation logic inside TEEs governs the behaviours of the participants. In particular, the host first calls the functions $sgx\_create\_enclave$ and $enclave\_init\_ra$ to create and initialize an enclave \textit{E}$_{\mathcal{O}}$. Then, the enclave derives the transaction key $sk_{Tx}$ upon the user's invocation.
\begin{algorithm}
\label{algorithm1}
\caption{Remote Attestation}
\BlankLine
\KwIn{$\mathsf{request(quote, pms)}$}
\KwOut{$\mathsf{b=0/1}$ }
\BlankLine
\textbf{parse} the received $\mathsf{quote}$ into $\mathsf{hdl,tag_P,in,out,\sigma}$ \\
\textbf{verify} the validity of $\mathsf{vk_{sign}}$ \\
\textbf{run} the algorithm $\mathsf{HW.quoteVerify}$ with an input $\mathsf{(pms,quote)}$\\
\textbf{verify} the validity of $\mathsf{quote}$ \\
\textbf{return} the results $\mathsf{b}$ if it passes ($\mathsf{1}$), or not ($\mathsf{0}$) \\
\end{algorithm}
Next, the system generates a bitcoin address and a transaction by calling the functions
$create\_address\_from\_string$ and $generate\_transaction$, respectively. \textit{E}$_{\mathcal{O}}$ keeps $sk_{Tx}$ in its global variable storage and signs the transaction with it when $generate\_transaction$ is called. The transaction can thus only be generated inside the enclave, without the key being exposed to the public. Afterwards, \textit{E}$_{\mathcal{O}}$ creates a quote by calling the function $ra\_network\_send\_receive$, proving to the delegatee that
its enclave has been successfully initialized and is ready for the further delegation.
\section{Evaluation}
\label{sec-evaluation}
In this section, we evaluate the system with respect to \textit{performance} and \textit{disk space}. To obtain accurate and fair results, we repeat the measurement of each operation 500 times and report the average.
\subsection{Performance}
The operations of public key generation and address creation take approximately the same time, since both rely on the same basic cryptographic primitives. The operations of transaction generation, state seal, and transaction decryption take longer because they combine more complex cryptographic functions. We also observe that the enclave initiation takes much more time than (transaction) key pair generation. Fortunately, this cost can be neglected since the enclave is launched only once (a one-time operation). The state update takes the least time since most of the recorded messages overlap and only a small portion of the data requires an update. The operations of coin deposit and transaction confirmation depend on the configuration of the Bitcoin testnet, varying from tens of seconds to several minutes. Furthermore, we attach the time costs of the \textit{state seal} operation under an increasing number of transactions in Figure~\ref{fig-test} (right column). The time consumption grows slowly because a large portion of the transactions is processed in batch. Remarkably, it takes less than 25 milliseconds to finish all operations of coin delegation, which is significantly less than an online transaction on the Bitcoin testnet. This indicates that our solution is efficient in transaction processing and practical for coin delegation.
\begin{table}[!hbtp]
\caption{The average performance of various operations}
\label{tab-test}
\centering
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{llr}
\toprule
\textbf{Phase} & \textbf{Operation} & \textbf{Average Time / ms} \\
\midrule
\multirow{3}{*}{\textit{System setup}} & Enclave initiation & $ 13.18940 $\\
& Public key generation (Tx) & $ 0.34223 $ \\
& Private key generation (Tx) & $0.01119 $ \\
\cmidrule{1-2}
\multirow{2}{*}{\textit{Coin deposit}} & Address creation & $0.00690 $ \\
& Coin deposit & $ -$ \\
\cmidrule{1-2}
\multirow{4}{*}{\textit{Coin delegation}} & Transaction generation & $ 0.78565 $ \\
& Remote attestation & $19.50990 $ \\
& State update & $ 0.00366 $ \\
& State seal & $ 5.43957 $ \\
\cmidrule{1-2}
\multirow{2}{*}{\textit{Coin spend}} & Transaction decryption & $ - $ \\
& Transaction confirmation & $ - $ \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Disk Space}
In this part, we provide an evaluation of the disk space of the sealed state. We simulate the situation in DelegaCoin when more delegation transactions join the network. The transaction creation rate is set to 560 transactions/second. We monitor the space usage and the corresponding growth rate. Each transaction occupies approximately 700 KB of storage space. We run eight sets of experiments with an increasing number of transactions in the sequence $1, 10, 100, 200, 400, 600, 800, 1000$. The results, as shown in Figure~\ref{fig-test} (left column), indicate that the disk usage grows linearly with the number of delegation transactions. The reason is straightforward: the disk usage closely relates to the involved transactions that are stored in the list. In our configuration, the transaction generation rate stays fixed. Therefore, the used space is proportional to the number of transactions.
\begin{figure}[!hbt]
\centering
\caption{Used disk space and time consumption of state seal}
\includegraphics[width=0.65\textwidth]{image/space_textm.jpg}
\label{fig-test}
\end{figure}
\section{Conclusion}
\label{sec-conclusion}
Decentralized cryptocurrencies such as Bitcoin~\cite{nakamoto2008bitcoin} provide an alternative approach for peer-to-peer payments. However, such payments are time-consuming. In this paper, we provide a secure and practical TEEs-based offline delegatable cryptocurrency system. TEEs are used as the primitives to establish a secure delegation channel and offer better storage protection of metadata (keys, policy). An owner can delegate the coin through an offline-transaction asynchronously with the blockchain network. A formal analysis, prototype implementation and further evaluation demonstrate that our scheme is provably secure and practically feasible.
\textit{Future Work.} A gap remains between the theoretical model and real applications. Although our scheme is provably secure, practical deployments still face risks, and we will explore countermeasures to reduce them.
\smallskip
\noindent\textbf{Acknowledgments.} Rujia Li and Qi Wang were supported by Guangdong Provincial Key Laboratory (Grant No. 2020B121201001).
\normalem
\bibliographystyle{unsrt}
The binomial transform is useful in several contexts, including analytic continuation and series
acceleration \cite{doi:10.1080/10652469.2016.1231674,Hirofumi}. With an eye toward these applications, we will first show how to derive a generalization of the binomial transform; second, we will show that the set of p-recursive sequences (sequences that satisfy a linear recursion with polynomial coefficients) is closed under the binomial transform; and finally, we will apply these methods to the Maclaurin series for the dilogarithm function. The result is a series that gives a scheme for numerical evaluation of the dilogarithm function that is accurate, efficient, and stable.
Our notation is fairly standard. The sets of integers, real numbers, and complex numbers are \(\mathbf{Z}\), \(\mathbf{R}\), and \(\mathbf{C}\), respectively. We will use subscript modifiers on each of these sets to indicate various subsets; for example, \(\mathbf{Z}_{\geq 0}\) is the set of nonnegative integers. The real part of a complex number \(x\) is denoted by \(\mathrm{Re}(x)\), the imaginary unit is \(\mathrm{i}\mkern1mu\) (not \(i\)), and we use an overline for the complex conjugate. Finally, the identity operator is \(\ident\) and a superscript \(\star\) denotes the operator adjoint.
\section{Generalized binomial transform}
For analytic continuation and series acceleration, the utility of the binomial transform stems from that fact that it can be derived from a sequence of extrapolated sequences. To show this, we start by defining the backward shift operator \(\Sop\) as
\begin{equation}
\Sop F = n \in \mathbf{Z}_{\geq 0} \mapsto \begin{cases} 0 & n = 0 \\ F_{n-1} & n > 0 \end{cases}.
\end{equation}
For a sequence \(F^{(0)}\), we define extrapolated sequences \(F^{(1)}, F^{(2)}, \dotsc\) by \(F^{(k)} = \left (\alpha \ident + \beta \Sop \right)^k F^{(0)}\), where \(\alpha, \beta \in \mathbf{C}\). We call these extrapolated sequences because assuming \(F^{(0)}\) converges linearly to \(L\), there is a choice of \(\alpha\) and \(\beta\) that makes the convergence of \(F^{(\ell)} \) faster with larger \(\ell\). Specifically, suppose \(F^{(0)} - L \in \bigoh{k \mapsto g^k / k^\mu}\), where \(g, \mu \in \mathbf{C}\) and \(|g| < 1\). Choosing \(\alpha = 1/(1-g)\) and \(\beta = g/(g-1)\), we have
\(F^{(\ell)} - L \in \bigoh{k \mapsto g^k / k^{\ell + \mu}}\). Although each of these sequences shares the same linear convergence rate \(g\), the factor of \(1/k^{\mu+\ell}\) in the leading term of the asymptotic form for \(F^{(\ell)} \) makes convergence of \(F^{(\ell)} \) faster with larger \(\ell\).
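The following Python sketch is a numerical illustration of this extrapolation (with \(g = 1/2\) and \(\mu = 1\), so the limit and rates are known): it applies \(\alpha \ident + \beta \Sop\) repeatedly to the partial sums of \(\sum_{j \geq 0} g^j/(j+1)\) and prints the error at a fixed index after each level.
\begin{verbatim}
import math

g = 0.5
L = -math.log(1 - g) / g          # limit of sum_{j>=0} g^j/(j+1)
N = 30
F, s = [], 0.0
for j in range(N):
    s += g ** j / (j + 1)
    F.append(s)                   # F^(0): the partial sums (mu = 1)

def step(F, alpha, beta):
    # one application of alpha*I + beta*S, with F_{-1} := 0
    return [alpha * F[k] + beta * (F[k - 1] if k else 0.0)
            for k in range(len(F))]

alpha, beta = 1 / (1 - g), g / (g - 1)
for level in range(4):
    print(level, abs(F[-1] - L))  # error gains a factor ~1/k per level
    F = step(F, alpha, beta)
\end{verbatim}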
Extracting the \nth term of the \nth extrapolated sequence yields a sequence \(n \mapsto F^{(n)}_n\). We call this sequence the \emph{generalized binomial transform} of \(F^{(0)} \). Defined this way, a calculation shows that the binomial transform operator \(\BinOp{\alpha}{\beta} \) is
\begin{equation}
\BinOp{\alpha}{\beta} F=
n \in \mathbf{Z}_{\geq 0} \mapsto \sum_{k=0}^n \binom{n}{k} \alpha^{n-k} \beta^{k} F_k.
\end{equation}
In a different context, Prodinger \cite{Prodinger} introduced this form of the binomial transform. We note that depending on the author \cite{oeis,Knuth:1997:ACP:270146}, the standard binomial transformation is either \(\BinOp{1}{1}\) or \(\BinOp{-1}{1}\).
The composition rule for the generalized binomial transform is
\(
\BinOp{\alpha}{\beta} \BinOp{\alpha^\prime}{\beta^\prime} =
\BinOp{\alpha + \alpha^\prime \beta}{\beta \beta^\prime}.
\)
Since \(\BinOp{0}{1}\) is the identity operator, it follows from the composition rule that for \(\beta \neq 0\), the operator \(\BinOp{\alpha}{\beta}\) is invertible and its inverse is
\(
\BinOp{-\alpha/\beta}{1/\beta}
\).
Specializing the composition rule to \(\beta = 1\) and \(\beta^\prime =1\), we have \(\BinOp{\alpha}{1} \BinOp{\alpha^\prime}{1} =
\BinOp{\alpha + \alpha^\prime}{1}\). Thus \( \BinOp{\alpha}{1}\) is the \(\alpha\)-fold composition of \(\BinOp{1}{1}\) with itself.
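A direct Python implementation of \(\BinOp{\alpha}{\beta}\) confirms the composition rule and the inverse numerically (the parameter values are arbitrary):
\begin{verbatim}
from math import comb

def binop(alpha, beta, F):
    # (B_{alpha,beta} F)_n = sum_k C(n,k) alpha^(n-k) beta^k F_k
    return [sum(comb(n, k) * alpha ** (n - k) * beta ** k * F[k]
                for k in range(n + 1)) for n in range(len(F))]

F = [1.0 / (k + 1) ** 2 for k in range(12)]
a, b, a2, b2 = 0.5, 2.0, -1.5, 3.0
lhs = binop(a, b, binop(a2, b2, F))          # B_{a,b} B_{a2,b2} F
rhs = binop(a + a2 * b, b * b2, F)           # B_{a + a2 b, b b2} F
assert all(abs(u - v) < 1e-6 * max(1.0, abs(v)) for u, v in zip(lhs, rhs))

# B_{-a/b, 1/b} inverts B_{a,b} for b != 0.
back = binop(-a / b, 1 / b, binop(a, b, F))
assert all(abs(u - v) < 1e-9 for u, v in zip(back, F))
\end{verbatim}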
The adjoint of the binomial transform is
\begin{equation}
{\BinOp{\alpha}{\beta}}^\star F = n \mapsto \sum_{k = n}^\infty \binom{k}{n} \alpha^{k-n} \beta^n F_k.
\end{equation}
Our interest in the adjoint is the formal identity
\begin{equation}
\sum_{k=0}^\infty F_k G_k = \sum_{k=0}^\infty \big( {\BinOp{-\alpha / \beta}{1/\beta}}^\star G \big)_k \big(\BinOp{\alpha}{\beta} F \big)_k.
\end{equation}
We say this is a formal identity because it is valid provided both series converge. Specializing to \(G_k =1\) and assuming \(\alpha+\beta \neq 0\) and \(\beta \neq 0\) gives
\begin{equation}
\sum_{k=0}^\infty F_k = \sum_{k=0}^\infty \frac{\beta}{(\alpha+\beta)^{k+1}} \big(\BinOp{\alpha}{\beta} F \big)_k.
\end{equation}
Simplifying the summand shows that it is a function of the quotient \(\beta/\alpha\) and it does not depend individually on \( \alpha \) and \(\beta\). Thus we can assume that \(\beta = 1\). Our identity is an extension of the Euler transform. For a description of the Euler transform, see \S3.9 of the \emph{NIST Handbook of Mathematical Functions} \cite{NIST:DLMF}.
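A short numerical check of this identity in Python, with \(F_k = x^{k+1}/(k+1)^2\), \(x = 0.6\), and \(\alpha = \beta = 1\) (both series converge, the right-hand one more slowly here since its terms decay like \(((1+x)/2)^k\)):
\begin{verbatim}
from math import comb

def binop(alpha, beta, F):
    return [sum(comb(n, k) * alpha ** (n - k) * beta ** k * F[k]
                for k in range(n + 1)) for n in range(len(F))]

x, alpha, beta, N = 0.6, 1.0, 1.0, 60
F = [x ** (k + 1) / (k + 1) ** 2 for k in range(N)]
lhs = sum(F)
Fhat = binop(alpha, beta, F)
rhs = sum(beta / (alpha + beta) ** (k + 1) * Fhat[k] for k in range(N))
print(lhs, rhs)   # both approach Li_2(0.6) = 0.72758...
\end{verbatim}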
We will use this summation identity to derive a new series representation for the dilogarithm function \(\Li2\). The key to deriving this result is a new binomial coefficient identity.
\section{Binomial coefficient identities}
A sequence that satisfies a linear homogeneous recursion relation with polynomial coefficients is said to be \emph{p-recursive} \cite{Schneider:2013:CAQ:2541763}. The set of p-recursive sequences is known to be closed under addition and multiplication \cite{Kauers:2011:CT:1993886.1993892, ZEILBERGER1990321}. We will show that the set of p-recursive sequences is closed under the binomial transform. The key to this result is the binomial coefficient identity
\begin{equation}
k \binom{n}{k} = n \binom{n}{k} - n \binom{n-1}{k}.
\end{equation}
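Writing the binomial coefficients in factorial form gives, for \(0 \leq k < n\),
\begin{equation}
n \binom{n}{k} - n \binom{n-1}{k}
= \frac{n!}{k!} \left( \frac{n}{(n-k)!} - \frac{1}{(n-k-1)!} \right)
= \frac{n!}{k!} \cdot \frac{k}{(n-k)!}
= k \binom{n}{k},
\end{equation}
and the case \(k = n\) is immediate.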
The proof thus uses only simplification and the factorial representation for the binomial coefficients. Extending this identity by multiplying it by \(k\) and iterating allows us to express \(k^p \binom{n}{k} \), where \(k \in \mathbf{Z}_{\geq 0}\), as a linear combination of \(\left \{\binom{n}{k},\binom{n-1}{k}, \binom{n-2}{k}, \dotsc, \binom{n-p}{k} \right \}\) with coefficients that involve only \(n\). Table \ref{BI} displays these results for \(p \) up to three.
\begin{comment}
For an example of how to use these identities, suppose \(G = n \mapsto n F_n\) and \(\widehat{F} = \BinOp{\alpha}{\beta} F\). We have
\begin{equation}
\left(\BinOp{\alpha}{\beta} G \right)_n =
\sum_{k=0}^n \binom{n}{k} \alpha^{n-k} \beta^k k F_k =
\sum_{k=0}^n \left(n \binom{n}{k} - n \binom{n-1}{k} \right ) \alpha^{n-k} \beta^k F_k= n \widehat{F}_n - \alpha n \widehat{F}_{n-1}.
\end{equation}
\end{comment}
Introducing a multiplication operator \(\mathrm{M}\) on the set of sequences defined by \(\mathrm{M} F = n \mapsto n F_n\) and using the identity \( k \binom{n}{k} = n \binom{n}{k} - n \binom{n-1}{k}\), we can show that
\(
\BinOp{\alpha}{\beta} \mathrm{M} = \mathrm{M} \big( \ident - \alpha \Sop\big) \BinOp{\alpha}{\beta}
\), where \(\Sop\) is the backward shift operator.
Consequently, for all \(p \in \mathbf{Z}_{\geq 0}\), we have
\begin{equation}
\BinOp{\alpha}{\beta} \mathrm{M}^p = \big ( \mathrm{M} (\ident - \alpha S)\big)^p \BinOp{\alpha}{\beta}.
\end{equation}
Further, using the Pascal identity
\(
\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}
\), we can show that
\(
\beta \BinOp{\alpha}{\beta} \Sop^\star = \left( \Sop^\star - \alpha \ident \right) \BinOp{\alpha}{\beta}
\). Extending this result to any positive integer power \(p\) of \(\Sop^\star\) yields
\begin{equation}
\beta^p \BinOp{\alpha}{\beta} \Sop^{\star \, p} = \left(\Sop^{\star} - \alpha \ident \right)^p \BinOp{\alpha}{\beta}.
\end{equation}
Using these two results, we can express the binomial transform of \(n \mapsto n^p F_{n+q} \) in terms of \(\BinOp{\alpha}{\beta} F\) for all positive integers \(p\) and \(q\). Consequently, we have shown that the set of p-recursive sequences is closed under the generalized binomial transform.
{\renewcommand{\arraystretch}{1.5}%
\begin{table}[ht]
\centering
\begin{tabular}[p]{| l | l | l | l | l |} \hline
&
\(\binom{n-3}{k} \)
& \(\binom{n-2}{k} \)
& \(\binom{n-1}{k} \)
& \(\binom{n}{k} \) \\ \hline \hline
\(\binom{n}{k} \)
& \(0\)
& \(0\)
& \(0\)
& \(1\) \\
\(k \, \binom{n}{k} \)
& \(0\)
& \(0\)
& \(-n\)
& \(n\) \\
\(k^2 \binom{n}{k} \)
& \(0\)
& \(\left( n-1\right) n \)
& \(-n\,\left( 2n-1\right) \)
& \( n^2 \) \\
\(k^3 \binom{n}{k} \)
& \(-\left( n-2\right) \,\left( n-1\right) n\)
& \(3{{\left( n-1\right) }^{2}}n \)
& \(-n\,\left( 3{{n}^{2}}-3n+1\right) \) & \({{n}^{3}} \) \\ \hline
\end{tabular}
\caption{Each row of this table expresses
\(k^p \binom{n}{k} \) as a linear combination of
\(\{\binom{n}{k},\binom{n-1}{k}, \binom{n-2}{k}, \dotsc \binom{n-p}{k}\}\) where the coefficients are functions of \(n\) only. The third row, for example, corresponds to the identity
\(k^2 \binom{n}{k} =n^2 \binom{n}{k}- n \left(2
n-1\right) \binom{n-1}{k} + \left(n-1\right)\,n \binom{n-2}{k} . \)
}\label{BI}
\end{table}
}
\section{The dilogarithm function}
The dilogarithm function \(\Li2\) can be defined by its Maclaurin series \cite{NIST:DLMF}
\begin{equation}
\Li2(x) = \sum_{k=0}^\infty \frac{x^{k+1}}{(k+1)^2}.
\end{equation}
Inside the unit circle, the series converges linearly; on the unit circle, it converges sublinearly, and outside the unit circle, it diverges.
The summand of the Maclaurin series, call it \(Q\), is p-recursive. Thus we consider the convergence set for the formal identity
\(
\Li2(x) = \sum_{k=0}^\infty \widehat Q_k / (\alpha+1)^{k+1},
\)
where \(\widehat{Q} = \BinOp{\alpha}{1} Q\).
\begin{comment}
Although we will not use this fact, the sequence \(\widehat{Q}\) has a representation in terms of a \({}_{3} \! \operatorname{F}_{2} \!\) hypergeometric function; it is
\begin{equation}
\widehat{Q}_k = \pFq{3}{2}{-k,1,1}{2,2}{-x/\alpha} \, x \alpha^n.
\end{equation}
\end{comment}
The sequence \(Q\) satisfies the recursion
\(
\left( k+2\right)^2 Q_{k+1} = \left( k+1\right)^2 Q_{k}
\).
Using Table \ref{BI}, the recursion for \(\widehat{Q}\) has the form \(0=P_0(n) \widehat{Q}_n + P_1(n) \widehat{Q}_{n+1}
+ P_2(n) \widehat{Q}_{n+2} + P_3(n) \widehat{Q}_{n+3}\), where the polynomials \(P_0\) through \(P_3\) are
\begin{align}
P_0(n) &= -\alpha^2 (\alpha+x) (n+1)(n+2), \\
P_1(n) &= \alpha(n+2)(3n\alpha+2n x+8\alpha + 5 x), \\
P_2(n) &= - \left(3\,\alpha+x\right) n^2\ -\left(19\,\alpha+6\,x\right) n\,-26\,\alpha-9 \, x, \\
P_3(n) &= (n+2)(n+6).
\end{align}
Assuming \( \alpha \neq -x\), a fundamental solution set for this recursion is
\begin{equation}
\left \{ n \mapsto \frac{\alpha^n}{n+1}, \quad n \mapsto \frac{\alpha^n}{n+1} \sum_{k=0}^n \frac{1}{k+1}, \quad
n \mapsto \frac{(\alpha+x)^n}{n^2} \left(1 + \bigoh{1/n} \right)
\right \}.
\end{equation}
The first two members of this set are exact solutions, but the third is an asymptotic solution valid as \(n \to \infty\).
Both \(\alpha = -x\) and \(\alpha = 0\) are special cases. For \(\alpha = -x\), the order of the recursion is reduced from three to two. For this case, one solution to the recursion is \(n \mapsto (-x)^n /(1+n) \). Since this series diverges everywhere outside the unit circle, we will discard it. Similarly, the case \(\alpha = 0\) is not pertinent.
The fundamental solution set shows that the formal series converges linearly, provided that
\begin{equation}
\max \left( \left| \frac{\alpha}{\alpha+1} \right|, \left| \frac{\alpha+ x}{\alpha + 1} \right| \right) < 1,
\mbox{ and } \alpha \in \mathbf{C}_{\neq -x, \neq -1}.
\end{equation}
The convergence set is maximized when \(\left| \frac{\alpha}{\alpha+1} \right| = \left| \frac{\alpha+ x}{\alpha + 1} \right|\). Assuming \(x \in \mathbf{R}\), the convergence set is maximized when \(\alpha=-x/2\). For this choice, the linear convergence rate is \(|x/(x-2)| \) and the series converges in the half plane \(\mathrm{Re}(x) < 1\).
For \(x \in \mathbf{C} \setminus [1, \infty) \), the convergence set is maximized when
\begin{equation}
\alpha = \frac{\mathrm{e}^{\mathrm{i}\mkern1mu \theta}}{\mathrm{e}^{\mathrm{i}\mkern1mu \theta} - 1} x \mbox{ , where }
\mathrm{e}^{\mathrm{i}\mkern1mu \theta} = \pm \sqrt{\frac{\overline{x}-1}{x-1}}.
\end{equation}
Setting \(x = 1 + R \exp( \mathrm{i}\mkern1mu \omega) \), where \(R \in \mathbf{R}_{\geq 0} \) and \(\omega \in [0, 2 \pi) \), the minimum of the linear convergence rate \(|\alpha/(\alpha+1)| \) is
\begin{equation}
\min\left( \frac{2 R \cos{(\omega)}+{{R}^{2}}+1}{{{\left( R-1\right) }^{2}}},\frac{2 R \cos{(\omega)}+{{R}^{2}}+1}{{{\left( R+1\right) }^{2}}}\right).
\end{equation}
For \(\omega \in (0, 2 \pi)\), or equivalently for \(x \in \mathbf{C} \setminus [1,\infty)\), the linear convergence rate is less than one. Consequently, there is a value of \(\alpha\) that makes the series \(\sum_{k=0}^\infty \widehat Q_k / (\alpha+1)^{k+1} \) converge on \(\mathbf{C} \setminus [1,\infty)\). Although this is a satisfying result, its additional complexity over the choice \(\alpha = -x/2\) is erased by the fact that \(\Li2\) satisfies several functional identities that make the convergence set \(\mathrm{Re}(x) < 1 \) adequate, at least for numerical evaluation.
Returning to the choice \(\alpha = -x/2\), the recursion for the entire summand \(W_k = \widehat{Q}_k / (1+\alpha)^{k+1} \) is
\begin{equation}
(x-2)^3 (n+4)^2 W_{n+3} = -x^3 (n+1)(n+2)W_n
+x^2 (x-2) (n+2)^2 W_{n+1} + x (x-2)^2 (n+3)(n+4) W_{n+2}.
\end{equation}
In terms of the forward shift operator \(S^\star\), the recursion relation factors as
\begin{equation}
\big ( (x-2)^2 (n+4) S^{\star 2} - x^2 (n+2) \big)
\big ((x-2) (n+2) S^\star - x (n + 1) \big) W_k = 0.
\end{equation}
The three initial values of the sequence \(W\) are
\begin{align}
W_0 &= x/(1-x/2), \\
W_1 &= -x^2/\left(4(1-x/2)^2 \right), \\
W_2 &= x^3/\left(9(1-x/2)^3 \right).
\end{align}
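For concreteness, here is a minimal Python sketch of the resulting algorithm: it seeds the recursion with \(W_0\), \(W_1\), \(W_2\) and sums until the terms are negligible. This is an illustrative translation only; the tested implementation discussed below is written in Julia and adds compensated summation.
\begin{verbatim}
def li2(x, tol=1e-16, max_terms=200):
    # series for Li2 with alpha = -x/2; valid for Re(x) < 1,
    # linear convergence rate |x / (x - 2)|
    w0 = x / (1 - x/2)
    w1 = -x**2 / (4 * (1 - x/2)**2)
    w2 = x**3 / (9 * (1 - x/2)**3)
    total = w0 + w1 + w2
    n = 0
    while n < max_terms and abs(w2) > tol * abs(total):
        w3 = (-x**3 * (n + 1) * (n + 2) * w0
              + x**2 * (x - 2) * (n + 2)**2 * w1
              + x * (x - 2)**2 * (n + 3) * (n + 4) * w2) \
             / ((x - 2)**3 * (n + 4)**2)
        total += w3
        w0, w1, w2 = w1, w2, w3
        n += 1
    return total

print(li2(-1.0))   # Li2(-1) = -pi^2/12 = -0.8224670334241132...
\end{verbatim}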
For a series that converges in the half-plane \(\mathrm{Re}(x) < 1/2\), see \cite{MaxieSchmidt}.
In the next section, we will investigate the practical considerations of using this series to numerically evaluate \(\Li2\).
\section{Accuracy, efficiency, and stability}
For our series representation to be useful for numerical evaluation, the sum must be well conditioned (accuracy), the
convergence must be fast (efficiency), and every solution to the fundamental solution set to the recursion for the summand must converge to zero (stability).
Of these three conditions, we have already shown that each member of the fundamental solution set to the recursion relation converges to zero when \(\mathrm{Re}(x) < 1\). Thus the recursion for \(W\) is stable.
We can achieve greater efficiency by leveraging two functional identities. The algorithm can automatically choose between them to minimize the linear convergence rate of \(|x/(2-x) |\). These functional identities are (see \cite{NIST:DLMF})
\begin{align}
\Li2(x) + \Li2(1-x) &= \pi^{2}/6 - \ln(x) \, \ln (1-x), \quad x \in \mathbf{C}_{\neq 0, \neq 1}, \\
\mathrm{Li}_{2}\left(x\right)+\mathrm{Li}_{2}\left(\frac{1}{x}\right) & =-\pi^2 /6 - (\ln\left(-x\right))^{2}/2, \quad x \in \mathbf{C} \setminus [0,1].
\end{align}
The second identity is the \(\Li2\) reciprocal formula. Choosing between these identities and using our series to compute \(\Li2\) on or inside the unit circle, fewer than 70 terms must be summed to achieve full accuracy using IEEE binary64 numbers; see Figure \ref{nbr_summed}. Outside the unit circle, the \(\Li2\) reciprocal formula reduces the evaluation to a point inside the unit circle.
\begin{figure}[ht!]
\includegraphics[width=0.5\textwidth]{nbr_termsX}
\centering
\caption{This graph shows the number of terms that need to be summed to achieve full accuracy using IEEE binary64 numbers to compute the value of \(\Li2(\exp( \mathrm{i}\mkern1mu \theta / 2 \pi)) \) for \(\theta \in [0, 2 \pi]\). The maximum number of terms is less than 70.}\label{nbr_summed}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=0.5\textwidth]{cndX}
\centering
\caption{The condition number for summing \(\sum_k W_k(\exp(\mathrm{i}\mkern1mu \theta)) \). On the unit circle, the condition number is apparently bounded
above by \(3/2\).}\label{cndX}
\end{figure}
Finally, we study the condition number of the sum. Recall that the condition number \cite{Higham:2002:ASN} of a sum \(\sum_k W_k\) is the quotient \(\sum_k |W_k| / | \sum_k W_k | \). Using Kahan summation \cite{Kahan:1965:PRR:363707.363723}, the floating point rounding error is bounded by the machine epsilon (\(2^{-53}\) for a IEEE binary64 number) times twice the condition number. Again, automatically choosing between the functional identities, the condition number for the sum \(\sum_k W_k\) is shown in Figure \ref{cndX} for inputs on the unit circle. The condition number is apparently bounded above by \(3/2\); thus the sum is well conditioned on the unit circle.
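The following illustrative Python sketch shows compensated summation that also accumulates \(\sum_k |W_k|\), from which the condition number of the sum is obtained.
\begin{verbatim}
def kahan_sum_with_condition(terms):
    s = 0.0        # running (compensated) sum
    c = 0.0        # compensation for lost low-order bits
    abs_sum = 0.0  # accumulates |W_k| for the condition number
    for w in terms:
        abs_sum += abs(w)
        y = w - c
        t = s + y
        c = (t - s) - y
        s = t
    return s, abs_sum / abs(s)   # (sum, condition number)
\end{verbatim}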
For testing, we implemented the algorithm in the Julia language \cite{bezanson2017julia}. Our implementation uses Kahan summation and it accumulates the condition number for the sum. The condition number indicates the total rounding error; for details, see Higham \cite{Higham:2002:ASN}. Finally, the method is generic for both real and complex IEEE floats, as well as real and complex extended precision floating point numbers.
\section{Acknowledgments}
The work by Stephanie Harshbarger was supported by the University of Nebraska at Kearney Undergraduate Research Fellows Program. We used the Maxima computer algebra system \cite{maxima} to do the calculations in this paper. We thank the volunteers who make Maxima freely available.
\section{Discussion and Future Work}
In this paper, we address the problem of risk-sensitive RL under safety constraints and coherent risk measures. We propose that maximizing the value function under risk or safety constraints is equivalent to playing a risk-sensitive non-zero sum (RNS) game. In the RNS game, an adversary tries to maximize the risk of a decision trajectory while the agent tries to maximize a weighted sum of its value function given the adversary's feedback. Specifically, under the MaxEnt RL framework, this RNS game reduces to deploying two soft-actor critics for the agent and the adversary while accounting for a repulsion term between their policies. This allows us to formulate a duelling SAC-based algorithm, called $\mathtt{SAAC}$\xspace. We instantiate our method for subspace, mean-standard deviation, and CVaR constraints, and also experimentally test it on various continuous control tasks. Our algorithm leads to better risk-sensitive performance than SAC and the risk-sensitive distributional RL baselines in all these environments.
In future work, further study on leveraging the flexibility of $\mathtt{SAAC}$\xspace to incorporate more safety constraints is anticipated.
\section{Problem Formulation: Safe RL as a Non-Zero Sum Game}\label{sec:problem}
\textbf{Safe RL as Constrained MDP (CMDP).} All of the aforementioned approaches to safe RL can be expressed as a CMDP problem that aims to maximize the value function $V_{\pol}$ of a policy $\pol$ while constraining the total risk $\rho_{\pol}$ to stay below a certain threshold $\delta$:
\begin{align}\label{eq:cmdp}
&\argmax_{\pi} V_{\pol}(s)
\text{ s.t. } \rho_{\pol}(s) \leq \delta \text{ for } \delta >0.
\end{align}
\begin{itemize}[leftmargin=*]
\item If Mean-Standard Deviation (MSD)~\cite{prashanth2016variance} is the risk measure, $\rho_{\pol}(s) \triangleq \expect\left[\returns|\pol, s_0=s\right] + \lambda \sqrt{\var\left[\returns|\pol, s_0=s\right]}$ ($\lambda < 0$).
\item If CVaR is the risk measure, $\rho_{\pol}(s) \triangleq \cvar_{\lambda}\left[\returns|\pol, s_0=s\right]$ for $\lambda \in [0,1)$.
\item For the constraint of staying in the `non-error' states $\states\setminus\mathcal{E}$, $\rho_{\pol}(s) \triangleq \expect\left[\sum_{t=0}^T \mathds{1}(s_{t+1} \in \mathcal{E}) |\pol, s_0=s\in \states\setminus\mathcal{E}\right] = \sum_{t=0}^T\mathbb{P}_{\pol}[s_{t+1} \in \mathcal{E}]$ such that $s_0=s$ is a non-error state. We refer to this as \textit{subspace risk} $\mathrm{Risk}(A, \states)$ for $A \subseteq \states$.
\end{itemize}
\textbf{CMDP as a Non-Zero Sum (NZS) Game.} The most common technique to address the constraint optimization in Eq.~\eqref{eq:cmdp} is formulating its Lagrangian:
\begin{equation}\label{eq:lag}
\lag(\pol, \beta) \triangleq V_{\pol}(s) - \beta_0 \rho_{\pol}(s), \text{ for } \beta_0 \geq 0.
\end{equation}
For $\beta_0=0$, this reduces to its risk-neutral counterpart. Instead, as $\beta_0\rightarrow\infty$, this reduces to the unconstrained risk-sensitive approach. Thus, the choice of $\beta_0$ is important. We automatically tune it as described in Sec.~\ref{sec:temp}.
Now, the important question is how to estimate the risk function $\rho_{\pol}(s)$. Researchers have either solved an explicit optimization problem to estimate the parameter or subspace corresponding to the risk measure, or used a stochastic estimator of the risk gradients. These approaches scale poorly and lead to high-variance estimates, as there is no provably convergent CVaR estimator in RL settings. In order to circumvent these issues, we deploy \textit{an adversary} that aims to maximize the cumulative risk $\rho_{\pol}(s)$ given the same initial state $s$ and trajectory $\tau$ as \textit{the agent} maximizing Eq.~\eqref{eq:lag}, and we use it as a proxy for the risk constraint in Eq.~\eqref{eq:lag}:
\begin{align}
&\theta^* \triangleq \argmax_{\theta} \lag(\theta, \beta) = V_{\pol_{\theta}}(s) - \beta_0 V_{\pol_{\omega}}(s),\notag\\
&\omega^* \triangleq \argmax_{\omega} V_{\pol_{\omega}}(s).\label{eq:nzs}
\end{align}
Here, we consider that the policies of the agent and the adversary are parameterized by $\theta$ and $\omega$, respectively. The value function of the adversary, $V_{\pol_{\omega}}(s)$, is designed to estimate the corresponding risk $\rho_{\pol}(s)$.
This is a non-zero sum game (NZS) as the objectives of the adversary and the agent are not the same and do not sum up to $0$.
Following this formulation, any safe RL problem expressed as a CMDP (Eq.~\eqref{eq:cmdp}) can be reduced to a corresponding agent-adversary non-zero sum game (Eq.~\eqref{eq:nzs}). The adversary tries to maximize the risk, and thus to shrink the feasibility region of the agent's value function. The agent tries to maximize the regularized Lagrangian objective in the shrunken feasibility region. We refer to this duelling game as the \textit{Risk-sensitive Non-zero Sum (RNS)} game.
Given this RNS formulation of Safe RL problems, we derive a MaxEnt RL equivalent of it in the next section. This formulation naturally leads to a dueling soft actor-critic algorithm ($\mathtt{SAAC}$\xspace) for performing safe RL tasks.
\section{SAAC: Safe Adversarial Soft Actor-Critics}\label{sec:method}
In this section, we first derive a MaxEnt RL formulation of the Risk-sensitive Non-zero Sum (RNS) game. We show that this naturally leads to a duel between the adversary and the agent in the policy space. Following that, we elaborate the generic architecture of $\mathtt{SAAC}$\xspace, and the details of designing the risk-seeking adversary for different risk constraints. We conclude the section with a note on automatic adjustment of regularization parameters.
\subsection{Risk-sensitive Non-zero Sum (RNS) Game with MaxEnt RL}
In order to perform the RNS game with MaxEnt RL, we substitute the Q-values in Eq.~\eqref{eq:nzs} with corresponding soft Q-values.
Thus, the adversary's objective is maximizing:
\begin{equation*}
\expect_{\pol_{\omega}}[Q_{\omega}(s,\cdot)] + \alpha_0 \ent_{\pol_{\omega}}(\pi_\omega(.|s))
\end{equation*}
for $\pi_{\omega} \in \Pi_{\omega}$, and the agent's objective is maximizing:
\begin{align}
\begin{split}
&\expect_{\pol_{\theta}}[Q_{\theta}(s,\cdot)] + \alpha_0 \ent_{\pol_{\theta}}(\pi_\theta(.|s))-\beta_0 (\expect_{\pi_{\theta}}[Q_{\omega}(s,\cdot)] +\alpha_0 \ent_{\pol_{\omega}}(\pi_\omega(.|s)))
\end{split}\label{eq:agent1}
\end{align}
for $\pi_{\theta} \in \Pi_{\theta}$.\\
Following the equivalent KL-divergence formulation in policy space, the adversary aims to compute
\begin{equation}\label{eq:adversary}
\omega^* = \argmin_{\omega} \KL{\pol_\omega(.|s)}{\exp\left(\alpha_0^{-1}Q_{\omega}(s,\cdot)\right)/Z_{\omega}(s)}
\end{equation}
Similarly, the agent's objective is to compute:
\begin{align}\label{eq:agent}
{\theta}^* &=\argmax_{\theta}~~\expect_{\pol_{\theta}}[Q_{\theta}(s,\cdot)] + \alpha_0(1+\beta_0) \ent_{\pol_{\theta}}(\pi_\theta(.|s))\notag\\
&+ \alpha_0 \beta_0 \expect_{\pi_{\theta}}\big[\ln(\pi_\omega(.|s)) - \ln \exp\big(\alpha_0^{-1}Q_{\omega}(s,\cdot)\big)\big]
+\alpha_0\beta_0 \KL{\pi_{\theta}(\cdot|s)}{\pi_{\omega}(\cdot|s)}\notag\\
&= \argmin_{\theta} \KL{\pol_\theta(.|s)}{\exp\left((\alpha_0(1+\beta_0))^{-1}Q_{\theta}(s,\cdot)\right)/Z_{\theta}(s)}\notag\\
&-\alpha_0\beta_0 \expect_{\pi_{\theta}}\big[\ln(\pi_\omega(.|s)) - \ln \exp\big(\alpha_0^{-1}Q_{\omega}(s,\cdot)\big)\big]
- \alpha_0\beta_0 \KL{\pi_{\theta}(\cdot|s)}{\pi_{\omega}(\cdot|s)}\notag\\
&= \argmin_{\theta} \KL{\pol_\theta(.|s)}{\exp\left(\alpha^{-1}Q_{\theta}(s,\cdot)\right)/Z_{\theta}(s)}
- \beta \KL{\pi_{\theta}(\cdot|s)}{\pi_{\omega^*}(\cdot|s)}.
\end{align}
Here, $\alpha = \alpha_0(1+\beta_0)$ and $\beta =\alpha_0\beta_0$.
The last equality holds true because $\pi_{\omega^*}(.|s)= \exp\left(\alpha_0^{-1}Q_{\omega^*}(s,\cdot)\right)/Z_{\omega^*}(s)$ for the adversary's optimal policy $\pol_{\omega^*}$, and because the optimization is over $\theta$, so adding $\ln Z_{\omega}(s)$ does not change the minimizer.
Additionally, for $\omega \neq \omega^*$, the relaxed objective $-(\KL{\pol_\theta(.|s)}{\exp\left(\alpha^{-1}Q_{\theta}(s,\cdot)\right)/Z_{\theta}(s)} - \beta \KL{\pi_{\theta}(\cdot|s)}{\pi_{\omega}(\cdot|s)})$ is a strict lower bound on the agent's objective in Eq.~\eqref{eq:agent1}. Thus, maximizing the reduced objective amounts to maximizing a lower bound on the actual objective, a trick similar to the one adopted in EM algorithms~\cite{em} for maximizing likelihoods. Hence, not only asymptotically but at every step, optimizing the reduced objective pushes up a lower bound on the agent's risk-sensitive soft Q-value.
Following this reduction, we observe that playing the RNS game with MaxEnt RL is equivalent to performing traditional MaxEnt RL for the adversary with a risk-seeking Q-function $Q_{\adversary}$, and a modified MaxEnt RL for the agent that includes the usual soft Q-function plus a KL-divergence term repulsing the agent's policy $\pol_{\agent}$ from the adversary's policy $\pol_{\adversary}$. This behaviour of the RNS game in policy space allows us to propose a duelling soft actor-critic algorithm, namely $\mathtt{SAAC}$\xspace, to solve risk-sensitive RL problems.
\subsection{The $\mathtt{SAAC}$\xspace Algorithm}\label{sec:saac-algo}
We propose an algorithm $\mathtt{SAAC}$\xspace to solve the objectives of the agent (Eq.~\eqref{eq:agent}) and of the adversary (Eq.~\eqref{eq:adversary}). In $\mathtt{SAAC}$\xspace, we deploy two soft actor-critics (SACs) to enact the agent and the adversary respectively. We illustrate the schematic of $\mathtt{SAAC}$\xspace in Fig.~\ref{fig:saac}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth,height=8cm]{saac_framework.pdf}
\caption{The schematic of the Safe Adversarially guided Actor-Critics ($\mathtt{SAAC}$\xspace) algorithm.}\label{fig:saac}\vspace*{-1em}
\end{figure*}
As a building block for $\mathtt{SAAC}$\xspace, we deploy the recent version of SAC~\cite{haarnoja2018soft} that uses two soft Q-functions to mitigate positive bias in the policy improvement step in Eq.~\eqref{eq:policysac}, which was encountered in~\cite{hasselt2010double,fujimoto2018addressing}.
In the design of $\mathtt{SAAC}$\xspace, we combine two ideas: an off-policy deep actor-critic algorithm within the MaxEnt RL framework and a Risk-sensitive Non-zero Sum (RNS) game. $\mathtt{SAAC}$\xspace engages the agent in safer strategies while finding the optimal actions to \textit{maximize} the expected returns. The role of the adversary is to find a policy that maximizes the probability of breaking the constraints given by the environment. The adversary is trained online with off-policy data generated by the agent. We denote the parameters of the adversary policy by $\omega$\footnote{resp. $\omega_\text{old}$ the parameters at the previous iteration.}. For each batch of transitions sampled from the replay buffer, the adversary finds actions that minimize the following loss:
\begin{equation*}
J(\pi_{\omega})=\mathbb{E}_{s_{t} \sim \mathcal{D}}\left[\mathbb{E}_{a_{t} \sim \pi_{\omega}}\left[\alpha \log \left(\pi_{\omega}\left(a_{t} | s_{t}\right)\right)-Q_{\psi}\left(s_{t}, a_{t}\right)\right]\right].
\end{equation*}
Finally, leveraging the RNS based reduced objective, $\mathtt{SAAC}$\xspace makes the agent's actor minimize $J(\pi_{\theta})$:
\begin{align*}
J(\pi_{\theta})=\mathbb{E}_{s_{t} \sim \mathcal{D}}\Big[\mathbb{E}_{a_{t} \sim \pi_{\theta}}\Big[\alpha \log \left(\pi_{\theta}\left(a_{t} | s_{t}\right)\right)-Q_{\phi}\left(s_{t}, a_{t}\right) \textcolor{blue}{- \beta \Big(\log \pi_{\theta_\text{old}}(a_t | s_t) - \log \pi_{\omega_\text{old}}(a_t | s_{t})\Big)}\Big]\Big].
\end{align*}
In \textcolor{blue}{blue} is the repulsion term introduced by $\mathtt{SAAC}$\xspace. The method alternates between collecting samples from the environment with the current agent's policy and updating the function approximators, namely the adversary's critic $Q_\psi$, the adversary's policy $\pi_\omega$, the agent's critic $Q_\phi$ and the agent's policy $\pi_\theta$. It performs stochastic gradient descent on corresponding loss functions with batches sampled from the replay buffer. We provide a generic description of $\mathtt{SAAC}$\xspace in Algorithm~\ref{alg:saac}. Now, we provide a few examples of designing the adversary's critic $Q_{\psi}$ for different safety constraints.
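To make the repulsion term concrete, the following PyTorch-style sketch computes the agent's actor loss. Everything here is an illustrative simplification: diagonal Gaussian policies, a single Q-network per player, and the current networks standing in for the previous-iteration parameters $\theta_\text{old}$ and $\omega_\text{old}$.
\begin{verbatim}
import torch

def agent_actor_loss(states, agent_pi, adv_pi, agent_q, alpha, beta):
    # agent_pi / adv_pi map states to torch.distributions.Normal objects
    dist_theta = agent_pi(states)
    actions = dist_theta.rsample()                  # reparameterized sample
    logp_theta = dist_theta.log_prob(actions).sum(-1)
    with torch.no_grad():                           # adversary is feedback only
        logp_omega = adv_pi(states).log_prob(actions).sum(-1)
    q = agent_q(states, actions)
    # J(pi_theta) = E[alpha log pi_theta - Q_phi
    #                 - beta (log pi_theta - log pi_omega)]
    return (alpha * logp_theta - q
            - beta * (logp_theta - logp_omega)).mean()
\end{verbatim}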
\begin{figure}[ht]
\centering
\vspace{-1em}
\begin{minipage}{\textwidth}
\begin{algorithm}[H]
\caption{$\mathtt{SAAC}$\xspace}\label{alg:saac}
\begin{algorithmic}
\STATE \textbf{Input parameters:} $\tau, \lambda_Q, \lambda_\pi, \lambda_\alpha, \lambda_\beta$
\STATE \textbf{Initialize} adversary's and agent's policies and Q-functions parameters $\omega$, $\psi$, $\theta$ and $\phi$
\STATE \textbf{Initialize} temperature parameters $\alpha$ and $\beta$
\STATE $\mathcal{D} \gets \emptyset$
\FOR {each iteration}
\FOR {each step}
\STATE $a_{t} \sim \pi_{\theta}(a_t|s_t)$
\STATE $s_{t+1} \sim {\cal P}\left(s_{t}, a_{t}\right)$
\STATE $\mathcal{D} \gets \mathcal{D} \cup\left\{\left(s_{t}, a_{t}, r_t, s_{t+1}\right)\right\}$
\ENDFOR
\FOR {each gradient step}
\STATE sample batch $\mathcal{B}$ from $\mathcal{D}$
\textcolor{blue}{\STATE$\psi \gets \psi-\lambda_{Q} \hat{\nabla}_{\psi} J_{Q}\left(\psi\right)$\;\;\tikzmark{top}\tikzmark{right}
\STATE$\omega \gets \omega-\lambda_{\pi} \hat{\nabla}_{\omega} J(\pi_{\omega})$}
\textcolor{blue}{\STATE$\beta \gets \beta-\lambda_{\beta} \hat{\nabla}_{\beta} J(\beta)$}
\textcolor{blue}{\STATE$\bar{\psi} \gets \tau \psi+(1-\tau) \bar{\psi}$}\tikzmark{bottom}
\STATE$\phi \gets \phi-\lambda_{Q} \hat{\nabla}_{\phi} J_{Q}\left(\phi\right)$\;\;\,\,\tikzmark{top1}\tikzmark{right1}
\STATE$\theta \gets \theta-\lambda_{\pi} \hat{\nabla}_{\theta} J(\pi_{\theta})$
\STATE$\alpha \gets \alpha-\lambda_{\alpha} \hat{\nabla}_{\alpha} J(\alpha)$
\STATE$\bar{\phi} \gets \tau \phi+(1-\tau) \bar{\phi}$\tikzmark{bottom1}
\ENDFOR
\ENDFOR
\end{algorithmic}
\AddNote{top}{bottom}{right}{Update Adversary}
\AddNotee{top1}{bottom1}{right1}{\;Update Agent}
\end{algorithm}
\end{minipage}
\vspace{-1em}
\end{figure}
\noindent\textbf{$\mathtt{SAAC}$-$\mathtt{Cons}$\xspace: Subspace Risk.} At every step, the environment signals whether the constraints have been satisfied or not. We construct a reward signal based on this information. This constraint reward, denoted as $r_c$, is $1$ if all the constraints have been broken, and $0$ otherwise. $J(Q_\psi)$ is the soft Bellman residual for the critic responsible for constraint satisfaction:
\begin{align}
\label{eq:softqcritic}
J(Q_\psi)=\mathbb{E}_{\left(s_{t}, a_{t}\right) \sim \mathcal{D}}\Big[\frac{1}{2}\Big(Q_{\psi}\left(s_{t}, a_{t}\right)-\big(r_c\left(s_{t}, a_{t}\right) +\gamma \mathbb{E}_{s_{t+1} \sim \rho}\mathbb{E}_{a_{t+1} \sim \pi_{\omega}}\left[Q_{\bar{\psi}}\left(s_{t+1}, a_{t+1}\right)-\alpha \log \pi_{\omega}\left(a_{t+1} | s_{t+1}\right)\right]\big)\Big)^{2}\Big].
\end{align}
\noindent\textbf{$\mathtt{SAAC}$-$\mathtt{MSD}$\xspace: Mean-Standard Deviation (MSD)}. In this case, we consider optimizing a Mean-Standard Deviation risk~\cite{prashanth2016variance}, which we estimate using:
$Q_\psi(s,a) = Q_\phi(s,a) + \lambda\sqrt{\var[Q_\phi(s,a)]}.$
Here, $\lambda<0$ is a hyperparameter that dictates how strongly the $\lambda$-standard-deviation band below the mean, i.e., the lower tail, is weighted. In the experiments, we use $\lambda=-1$.
In practice, we approximate the variance $\var[Q_\phi(s,a)]$ using the state-action pairs in the current batch of samples. We refer to the associated method as $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace.
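A one-line sketch of this estimate follows (illustrative only; \texttt{agent\_q\_values} is an assumed name for the agent's Q-values over the current batch).
\begin{verbatim}
import torch

def msd_q(agent_q_values, lam=-1.0):
    # Q_psi(s, a) = Q_phi(s, a) + lam * sqrt(Var[Q_phi]), with the variance
    # approximated over the current batch of state-action pairs
    return agent_q_values + lam * agent_q_values.std()
\end{verbatim}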
\noindent\textbf{$\mathtt{SAAC}$-$\mathtt{CVaR}$\xspace: CVaR.} Given a state-action pair $(s,a)$, the Q-value distribution is approximated by a set of quantile values at quantile fractions~\cite{eriksson2021sentinel}. Let $\left\{\tau_{i}\right\}_{i=0, \ldots, N}$ denote a set of quantile fractions, which satisfy $\tau_{0}=0$, $\tau_{N}=1$, $\tau_{i}<\tau_{j}\, \forall i<j$, $\tau_{i} \in[0,1]\, \forall i=0, \ldots, N$, and $\hat{\tau}_{i}=\left(\tau_{i}+\tau_{i+1}\right) / 2$. If $Z^{\pi}: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{Z}$ denotes the soft action-value of policy $\pi$, $Q_\psi(s,a) = - \sum_{i=0}^{N-1}\left(\tau_{i+1}-\tau_{i}\right) g^{\prime}\left(\hat{\tau}_{i}\right) Z^{\pi_\theta}_{\hat{\tau}_{i}}(s,a;\phi)$
with $g(\tau)=\min \{\tau / \lambda, 1\}$, where $\lambda \in(0,1)$. In the experiments, we set $\lambda=0.25$, i.e. we truncate the right tail of the return distribution by dropping 75\% of the topmost atoms.
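For illustration, here is a minimal sketch of this truncated quantile estimate (an assumed interface: \texttt{z\_hat} holds the quantile values $Z^{\pi_\theta}_{\hat{\tau}_i}(s,a;\phi)$ and \texttt{taus} the fractions $\tau_0,\dotsc,\tau_N$).
\begin{verbatim}
import torch

def cvar_q(z_hat, taus, lam=0.25):
    # tau_hat_i = (tau_i + tau_{i+1}) / 2
    tau_hat = (taus[:-1] + taus[1:]) / 2
    # g(tau) = min(tau / lam, 1)  =>  g'(tau) = 1/lam below lam, 0 above
    g_prime = (tau_hat < lam).float() / lam
    # Q_psi(s, a) = - sum_i (tau_{i+1} - tau_i) g'(tau_hat_i) Z_{tau_hat_i}
    return -((taus[1:] - taus[:-1]) * g_prime * z_hat).sum()
\end{verbatim}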
\subsection{Automating Adversarial Adjustment}\label{sec:temp}
Similar to the solution introduced in~\cite{haarnoja2018soft}, the adversary temperature $\beta$ and the entropy temperature $\alpha$ are adjusted automatically.
Since the adversary bonus can differ across tasks and during training, a fixed coefficient would be a poor solution.
We use $\bar{\mathcal{A}}$ to denote the adversary's bonus target, which is a hyperparameter in $\mathtt{SAAC}$\xspace. By formulating a constrained optimization problem where the KL-divergence between the agent and the adversary is constrained, $\beta$ is learned by gradient descent with respect to:
\begin{align*}
J(\beta)=\mathbb{E}_{s_{t} \sim \mathcal{D}}\left[\log \beta \cdot\left(\KL{\pi_{\theta}(\cdot|s_t)}{\pi_{\omega}(\cdot|s_t)} -\bar{\mathcal{A}}\right)\right].
\end{align*}
In addition, the entropy temperature $\alpha$ is also learned by taking a gradient step with respect to the loss:
\[J(\alpha)=\mathbb{E}_{s_{t} \sim \mathcal{D}}\left[\log \alpha \cdot\left(-\log \pi_{\theta}\left(a_{t}|s_{t}\right)-\bar{\mathcal{H}}\right)\right].\]
$\bar{\mathcal{H}}$ is the target entropy: a hyperparameter needed in SAC. We illustrate this in the pseudo-code of SAAC as in Algorithm~\ref{alg:saac}.
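The following sketch shows one way to implement the $\beta$ update (illustrative: we parameterize $\log\beta$, as in the SAC temperature trick, and \texttt{kl} is an assumed Monte Carlo estimate of the KL term on the batch).
\begin{verbatim}
import torch

log_beta = torch.zeros(1, requires_grad=True)
beta_opt = torch.optim.Adam([log_beta], lr=3e-4)

def update_beta(kl, target_A):
    # J(beta) = E[ log(beta) * (KL(pi_theta || pi_omega) - A_bar) ]
    loss = (log_beta * (kl.detach() - target_A)).mean()
    beta_opt.zero_grad()
    loss.backward()
    beta_opt.step()
    return log_beta.exp().item()   # current value of beta
\end{verbatim}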
\section{Background}
In this section, we elaborate the details of the three main components of our work: Markov Decision Process (MDP), Maximum-Entropy RL, and risk-sensitive RL.
\subsection{Markov Decision Process (MDP)}
We consider the RL problems that can be modelled as a \textit{Markov Decision Process (MDP)}~\cite{sutton2018reinforcement}. An MDP is defined as a tuple $\mdp \triangleq \left( \states, \actions, \rewards, \transitions, \gamma \right)$. $\states \subseteq \real^d$ is the \textit{state space}. $\actions$ is the admissible \textit{action space}. $\rewards: \states \times \actions \rightarrow \real$ is the \textit{reward function} that quantifies the goodness or badness of a state-action pair $(s,a)$. $\transitions: \states \times \actions \rightarrow \Delta_{\states}$ is the \textit{transition kernel} that dictates the probability to go to a next state given the present state and action.
Here, $\gamma \in (0,1]$ is the \textit{discount factor} that affects how much weight is given to future rewards.
The goal of the agent is to compute a \textit{policy} $\pi: \states \rightarrow \Delta_{\actions}$ that maximizes the expected value of cumulative rewards obtained by a time horizon $T \in \mathbb{N}$. For a given policy $\pol$, the \textit{value function} or the expected value of discounted cumulative rewards is
\begin{align*}
V_{\pol}(s) &\triangleq \underset{\underset{s_t \sim \transitions(s_{t-1},a_{t-1})}{a_t \sim \pol(s_t)}}{\expect}\left[\sum_{t=0}^T \gamma^t \rewards(s_t, a_t)|s_0 = s\right]\triangleq \expect_{\pol\mdp}[\returns].
\end{align*}
We refer to $\returns$ as the \textit{return} of policy $\pi$ up to time $T$ and $Q_{\pol}(s,a)$ as the action-value function which is the expected return starting from state $s$, taking action $a$ and following policy $\pol$.
\subsection{Maximum-Entropy RL}
In this paper, we adopt the Maximum-Entropy RL (MaxEnt RL) framework~\cite{eysenbach2019if,eysenbach2021maximum}, also known as entropy-regularized RL~\cite{neu2017unified}.
In MaxEnt RL, we aim to maximize the sum of value function and the conditional action entropy, $\ent_{\pol}(a|s)$, for a policy $\pol$:
\begin{align*}
&\argmax_{\pol} \quad V_{\pol}(s) + \ent_{\pol}(a|s)= \underset{\underset{s_t \sim \transitions(s_{t-1},a_{t-1})}{a_t \sim \pol(s_t)}}{\expect}\left[\returns - \log \pol(a_t|s_t) \mid s_0 = s\right].
\end{align*}
Unlike the classical value function maximizing RL that always has a deterministic policy as a solution~\cite{puterman2014markov}, MaxEnt RL tries to learn stochastic policies such that states with multiple near-optimal actions have higher entropy and states with single optimal action have lower entropy.
Interestingly, solving MaxEnt RL is equivalent to computing a policy $\pol$ that has minimum KL-divergence from a target trajectory distribution $\transitions\circ\rewards$:
\begin{equation}
\argmax_{\pol} V_{\pol}(s) + \ent_{\pol}(a|s)
= \argmin_{\pol} \KL{\pol(\tau)}{\transitions\circ\rewards(\tau)}.\label{eq:kl_equiv}
\end{equation}
Here, $\tau$ is a trajectory $\lbrace (s_0, a_0), \ldots, (s_T, a_T)\rbrace$. Target distribution $\transitions\circ\rewards$ is a Boltzmann distribution (or softmax) on the cumulative rewards given the trajectory: $\transitions\circ\rewards(\tau) \propto p_0(s) \prod_{t=0}^T \transitions(s_{t+1}|s_t, a_t) \exp[\returns]$. Policy distribution is the distribution of generating trajectory $\tau$ given the policy $\pol$ and MDP $\mdp$: $\pol(\tau) \propto p_0(s) \prod_{t=0}^T \transitions(s_{t+1}|s_t, a_t) \pi(a_t|s_t)$.
Thus in MaxEnt RL, the optimal policy is a Boltzmann distribution over the expected future return of state-action pairs.
This perspective of MaxEnt RL allows us to design $\mathtt{SAAC}$\xspace which transforms the robust RL into an adversarial game in the softmax policy space.
MaxEnt RL is widely used in solving complex RL problems as: it enhances exploration~\cite{haarnoja2018soft}, it transforms the optimal control problem in RL into a probabilistic inference problem~\cite{todorov2007linearly,toussaint2009robot}, and it modifies the optimization problem by smoothing the value function landscape~\cite{williams1991function,ahmed2019understanding}.
\noindent\textbf{Soft Actor-Critic (SAC)~\cite{haarnoja2018soft}.} Specifically, we use the SAC framework to solve the MaxEnt RL problem.
Following the actor-critic methodology, SAC uses two components, an actor and a critic, to iteratively maximize $V_{\pol}(s) + \ent_{\pol}(a|s)$.
The critic minimizes the soft Bellman residual with a functional approximation $Q_{\phi}$:
\begin{align}
J(Q_\phi)=&\mathbb{E}_{\left(s_{t}, a_{t}\right) \sim \mathcal{D}}\Big[\frac{1}{2}\Big(Q_{\phi}\left(s_{t}, a_{t}\right)
-\left(\rewards\left(s_{t}, a_{t}\right)+\gamma \mathbb{E}_{s_{t+1} \sim \rho}\left[V_{\bar{\phi}}\left(s_{t+1}\right)\right]\right)\Big)^2 \Big],\label{eq:softq}
\end{align}
where $\rho$ is the state marginal of the policy distribution, and $V_{\bar{\phi}}\left(s_{t}\right)\triangleq \mathbb{E}_{a_{t} \sim \pi_{\theta}}\left[Q_{\bar{\phi}}\left(s_{t}, a_{t}\right)-\alpha \log \pi\left(a_{t} | s_{t}\right)\right]$.
Eq.~\eqref{eq:softq} makes use of a target soft Q-function with parameters $\bar{\phi}$, obtained as an exponentially moving average of the soft Q-function parameters $\phi$; this technique has been shown to stabilize training~\cite{mnih2015human}.
Given the $Q_{\phi}$, the actor learns the policy parameters $\theta$ by minimizing $J(\pi_{\theta})$:
\begin{equation}
J(\pi_{\theta})=\mathbb{E}_{s_{t} \sim \mathcal{D}}\left[\mathbb{E}_{a_{t} \sim \pi_{\theta}}\left[\alpha \log \left(\pi_{\theta}\left(a_{t} | s_{t}\right)\right)-Q_{\phi}\left(s_{t}, a_{t}\right)\right]\right].
\label{eq:policysac}
\end{equation}
Here, $\alpha$ is called the entropy temperature; it regulates the relative importance of the entropy term versus the reward. We use the version of SAC with an automatic temperature-tuning scheme for $\alpha$.
\subsection{Safe RL}
\textbf{Risk Measure for Safety.} Safe or risk-sensitive RL with MDPs was first considered in~\cite{howard1972risk}, where the aim is to maximize an exponential utility of the cumulative reward: $V_{\pol}(s|\lambda) = \lambda^{-1} \log \expect[\exp(\lambda \returns)]$. For small $\lambda$, this is approximately equivalent to maximizing $V_{\pol}(s)+\frac{\lambda}{2} \var[\returns]$, so that high variance in the return is penalized for $\lambda<0$ and encouraged for $\lambda>0$.
Though this approach of using exponential utility in risk-sensitive discrete MDPs dominated the initial phase of safe RL research~\cite{marcus1997risk,coraluppi1999risk,garcia2015comprehensive}, with the advent of coherent risk measures~\cite{artzner1999coherent}\footnote{Variance is not a coherent risk but standard deviation is.}, researchers have looked into other risk measures, such as Conditional Value-at-Risk (CVaR)\footnote{$\text{CVaR}_{\lambda}$ quantifies the expectation of the lowest $\lambda\%$ of a probability distribution~\cite{rockafellar2000optimization}.}~\cite{chow2015risk}. Subsequent applications of RL to large-scale problems~\cite{chow2014algorithms,chow2015riskconstrained} have tried to make these algorithms scalable and to extend them to continuous MDPs~\cite{ray2019benchmarking}. Our approach is flexible enough to accommodate all these risk measures in both discrete and continuous MDP settings.
\noindent\textbf{Safe Exploration.} Another approach is to consider a part of the state-space to be `safe' and constrain the RL algorithm to explore inside it with high probability. \cite{geibel2005risk} considered a subset of terminal states as `error' states $\mathcal{E} \subseteq \states$ and developed a constrained MDP problem to avoid reaching it:
\begin{align}
\argmax_{\pi} V_{\pol}(s) \text{ s.t. } \forall s \in \states\setminus\mathcal{E}, \rho_{\pol}(s) \leq \delta.\label{eq:safe_exp}
\end{align}
Here, $\rho_{\pol}(s)$ is the expected number of times the agent visits the terminal error states $\mathcal{E}$.
Due to existence of these error states, even a policy with low variance can produce large risks (e.g. falls or accidents)~\cite{ray2019benchmarking}.
The other approach is to use the Lyapunov theory of stability on the value function. This approach computes a Lyapunov function ensuring safety, and from it a corresponding region of attraction, i.e., a safe region. The goal then becomes to compute a safe policy that stays in this safe region with high probability while maximizing the corresponding value function. Given a Lyapunov function, and thus a region of attraction, this approach can also be formulated as Eq.~\eqref{eq:safe_exp} but with a different $\rho$.
In the following section, we express the aforementioned two approaches to safe RL as a constrained MDP.
\paragraph{Robustness with Chance Constraints.} Another family of approaches is developed from the minimax analysis of robustness. In the minimax approach, an agent tries to maximize the value function for the MDP that yields the minimum return. Since this approach considers the worst case, it is often too conservative in practice and hard to optimize over a plausible family of MDPs containing the MDP of interest.
Thus, for a given unknown MDP, a stochastic version~\cite{heger1994consideration} of this problem is developed using chance constraints. In the chance constraint formulation, the agent maximizes the value given that the return is lower than a threshold $\lambda \in \real$ with probability less than or equal to $\delta \in (0,1]$:
\begin{align*}
\argmax_{\pi} V_{\pol}(s) &\text{ s.t. } \prob{\returns \leq \lambda} \leq \delta.
\end{align*}
As mentioned in~\cite{prashanth2018risk} and~\cite{chow2014algorithms}, safety constraints can be adopted to develop constrained MDP~\cite{altman1999constrained} formulation of risk-sensitive RL. This motivates the constrained MDP formulation.
\section{Introduction}
Reinforcement Learning (RL) is a paradigm of Machine Learning (ML) that addresses the problem of sequential decision making and learning under incomplete information~\cite{puterman2014markov,sutton2018reinforcement}.
Designing an RL algorithm requires both efficient quantification of uncertainty regarding the incomplete information and the probabilistic decision making policy, and effective design of a policy that can leverage these quantifications to achieve optimal performance.
Recent success of RL in structured games, like Chess and Go~\cite{mnih2015human,gibney2016google}, and simulated environments, like continuous control using simulators~\cite{lillicrap2015continuous,degrave2019differentiable}, have drawn significant amount of interest.
Still, real-world deployment of RL in industrial processes, unmanned vehicles, robotics etc., does not only require effectiveness in terms of performance but also being sensitive to risks involved in decisions~\cite{pan2017virtual,dulacarnold2020realworldrlempirical,thananjeyan2021recovery}.
This has motivated a surge in works quantifying risks in RL and designing risk-sensitive (or robust, or safe) RL algorithms~\cite{garcia2015comprehensive,pinto2017robust,ray2019benchmarking,wachi2020safe,eriksson2021sentinel,eysenbach2021maximum}.
\noindent\textbf{Risk-sensitive RL.} In risk-sensitive RL, the perception of risk-sensitivity or safety is embedded mainly using two approaches. The first approach is constraining the RL algorithm to converge in a restricted, `safe' region of the state space~\cite{geibel2005risk,thananjeyan2021recovery,koller2018learning,ray2019benchmarking}. Here, the `safe' region is the part of the state space that obeys some external risk-based constraints, such as the non-slippery part of the floor for a walker. RL algorithms developed using this approach either try to construct policies that generate trajectories which stay in this safe region with high probability~\cite{geibel2005risk}, or to start with a conservative `safe' policy and then to incrementally estimate the maximal safe region~\cite{7798979}.
The other approach is to define a risk measure on the long-term cumulative return of a policy for a fixed environment, and then to minimize the corresponding total risk~\cite{howard1972risk,garcia2015comprehensive,prashanth2018risk}. A risk measure is a statistic computed on the cumulative return which quantifies either the spread of the return distribution around its mean value or the heaviness of the distribution's tails~\cite{szego2004risk}. Examples of such risk measures are variance~\cite{prashanth2016variance}, conditional value-at-risk (CVaR)~\cite{rockafellar2000optimization}, and exponential utility~\cite{howard1972risk}. These risk measures are also extensively used in dynamic pricing~\cite{lim2007relative}, financial decision making~\cite{artzner1999coherent}, robust control~\cite{chen2005risk}, and other decision-making problems where risk has consequential effects.
\noindent\textbf{Our Contributions.} In this paper, we unify both of these approaches as a constrained RL problem, and further derive an equivalent non-zero-sum (NZS) stochastic game formulation~\cite{sorin1986asymptotic} of it.
In our NZS game formulation, \textit{risk-sensitive RL reduces to a game between an agent and an adversary} (Sec.~\ref{sec:problem}). The adversary tries to break the \textit{safety constraints}, i.e., either to move out of the `safe' region or to increase the risk measures corresponding to a given policy. In contrast, the agent tries to construct a policy that maximizes its expected long-term return given the adversarial feedback, which is a statistics computed on the adversary's constraint breaking.
Given this formulation, we propose a generic actor-critic framework where any two compatible actor-critic RL algorithms are employed to enact as the agent and the adversary to ensure risk-sensitive performance (Sec.~\ref{sec:method}). In order to instantiate our approach, we propose a specific algorithm, \textit{Safe Adversarially guided Actor-Critic} ($\mathtt{SAAC}$\xspace), that deploys two Soft Actor-Critics (SAC)~\cite{haarnoja2018soft} as the agent and the adversary. We further derive the policy gradients for the two SACs, showing that the risk-sensitivity of the agent is ensured by a term repulsing it from the adversary in the policy space. Interestingly, this term can also be used to seek risk and explore more.
In Sec.~\ref{sec:experiments}, we experimentally verify the risk-sensitive performance of $\mathtt{SAAC}$\xspace under safe region, CVaR, and variance constraints for continuous control tasks from real-world RL suite~\cite{dulacarnold2020realworldrlempirical}. We show that $\mathtt{SAAC}$\xspace is not only risk-sensitive but also outperforms the state-of-the-art risk-sensitive RL and distributional RL algorithms.
\section{Experimental Analysis}\label{sec:experiments}
\input{sections/saac_variants}
\textbf{Experimental Setup.}
First, we compare some possible variants of our method. Indeed, as presented in Sec.~\ref{sec:saac-algo}, the adversary has different quantifications of risk to fulfill the objective of finding actions with high probability of breaking the constraints: $\mathtt{SAAC}$-$\mathtt{Cons}$\xspace, $\mathtt{SAAC}$-$\mathtt{CVaR}$\xspace, and $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace.
Following that, we compare our method with the best-performing competitors in continuous control problems: SAC~\cite{haarnoja2018soft} and TQC~\cite{kuznetsov2020controlling}. TQC builds on top of C51~\cite{bellemare2017distributional} and QR-DQN~\cite{dabney2018distributional} and adapts distributional RL methods to continuous control. Further, it truncates the approximated distributions to control their overestimation and uses ensembling of the approximators for additional performance improvement. Finally, we qualitatively compare the behavior of our risk-averse method with that of SAC, using state vectors collected during validation in test environments. Note that for all the experiments (repeated over 9 random seeds), the agents are trained for 1M timesteps and their performance is evaluated at every 1000-th step.
Similar to TQC, we implement $\mathtt{SAAC}$\xspace on top of SAC and choose to automatically tune the adversary temperature $\beta$ (Sec.~\ref{sec:temp}) and the entropy temperature $\alpha$. Last but not least, using $\mathtt{SAAC}$\xspace on top of SAC introduces only one hyperparameter: the learning rate for the automatic tuning of $\beta$. All the other hyperparameters are the same as for SAC and are available for consultation in ~\cite[Appendix D]{haarnoja2018soft}. For TQC, we employ the same hyperparameters as reported in~\cite{kuznetsov2020controlling}.
\noindent\textbf{Description of Environments.}
To validate the framework of an RNS game with MaxEnt RL, we conduct a set of experiments in the DM control suite~\cite{tassa2018deepmind}. More specifically, we use the real-world RL challenge\footnote{\href{https://github.com/google-research/realworldrl_suite}{https://github.com/google-research/realworldrl\_suite}}~\cite{dulacarnold2020realworldrlempirical}, which introduces a set of real-world inspired challenges. In this paper, we are particularly interested in the tasks where a set of constraints is imposed on existing control domains. In the following, we give a short description of the tasks and safety constraints used in the experiments, with their respective observation ($\mathcal{S}$) and action ($\mathcal{A}$) dimensions. First, \textit{realworldrl-walker-walk} ($\mathcal{S}\times\mathcal{A} = 18 \times 6$) corresponds to the dm-control suite \textit{walker} task with (a) joint-specific constraints keeping the joint angles within a range and (b) a constraint keeping the joint velocities within a range. Next, \textit{realworldrl-quadruped-joint-walk} ($\mathcal{S}\times\mathcal{A} = 78 \times 12$) corresponds to the dm-control suite \textit{quadruped} task with the same set of constraints as just described. \textit{realworldrl-quadruped-upright-walk} adds a constraint that the quadruped torso's z-axis remain oriented upwards, and \textit{realworldrl-quadruped-force-walk} limits the foot contact forces when touching the ground.
\input{sections/tables}\vspace*{-.8em}
\subsection{Comparison between Risk Quantifiers of $\mathtt{SAAC}$\xspace}
First, we compare the different variants of $\mathtt{SAAC}$\xspace allowed by the method's framework in the \textit{realworldrl-walker-walk-returns} task. From Table~\ref{tab:comparison} and Fig.~\ref{fig:constraint1} (lines are average performances and shaded areas represent one standard deviation) we evaluate how our method affects the performance and risk aversion of agents.
In addition to the rate at which the maximum average return is reached by each of the methods compared to SAC, we compare the cumulative number of failures of the agents (the lower the better). As expected, risk-sensitive agents such as $\mathtt{SAAC}$\xspace decrease the probability of breaking safety constraints. Concurrently, they achieve the maximum average return with much higher sample efficiency, with $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace ahead. Henceforth, we use the $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace version of our method to compare with the baselines.
\subsection{Comparison of $\mathtt{SAAC}$\xspace to Baselines}
Now, we compare the best-performing $\mathtt{SAAC}$\xspace variant $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace with SAC~\cite{haarnoja2018soft}, TQC~\cite{kuznetsov2020controlling}, and TQC-CVaR, i.e., an extension of TQC with 16\% of the topmost Q-function atoms dropped (cf. Table 6 in~\cite[Appendix B]{kuznetsov2020controlling}). In Table~\ref{tab:quadruped-upright} and Fig.~\ref{fig:constraint2}, we evaluate $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace in \textit{realworldrl-quadruped-upright-walk}. In Table~\ref{tab:quadruped-joint} and Fig.~\ref{fig:constraint3}, we report the results for \textit{realworldrl-quadruped-joint-walk}.
Table~\ref{tab:quadruped-joint} shows that $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace performs better than all the baselines, both in terms of final performance and in terms of finding risk-averse policies. Moreover, although TQC-CVaR exhibits a smaller number of failures over the course of learning, it performs slightly worse than its non-truncated counterpart TQC. Table~\ref{tab:quadruped-upright} confirms the advantage of using $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace as a risk-averse MaxEnt RL method over the baselines: overall, $\mathtt{SAAC}$\xspace allows the agents to achieve faster convergence using safer policies during training. Interestingly, TQC achieves the maximum score of the task slightly later than the SAC agent. Nevertheless, its CVaR variant TQC-CVaR achieves a better sample-efficiency score with much safer policies.
\subsection{Visualization of Safer State Space Visitation}
\begin{figure*}[t!]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\linewidth]{exp/state-space/pca_0.pdf}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\linewidth]{exp/state-space/pca_20.pdf}
\end{minipage}\\
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\linewidth]{exp/state-space/pca_40.pdf}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\linewidth]{exp/state-space/pca_60.pdf}
\end{minipage}
\caption{Visualization of visited state space projection at different stages of learning in the \textit{realworldrl-walker-walk} task.}\label{fig:state-space}\vspace*{-1em}
\end{figure*}
In this experiment, we choose SAC, $\mathtt{SAAC}$-$\mathtt{Cons}$\xspace and $\mathtt{SAAC}$-$\mathtt{MSD}$\xspace to train a relatively wide spectrum of agents using the same experimental protocol as in Sec. 5.2 on the \textit{realworldrl-walker-walk} task. We collect samples of states visited during the evaluation phase in a test environment at different stages of training. The state vectors are projected from the 18D observation space to a 2D space using PCA. We present the results in Fig.~\ref{fig:state-space}. At the beginning of training, there is no clear distinction in terms of explored state regions, as learning has not begun yet. On the contrary, during the 200k-600k timesteps, there is a significant difference in state-space visitation. In resonance with the cumulative number of failures shown in Fig.~\ref{fig:constraint1}, the results suggest that SAC engages in actions leading to more unsafe states. Conversely, $\mathtt{SAAC}$\xspace successfully constrains the agents to safe regions.
\section{The Proposed Framework}
\label{sec_network}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth,height=0.65\linewidth]{images/detection/58_image.jpg}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth,height=0.65\linewidth]{images/detection/58_bb_box_mask.jpg}
\end{subfigure}
\caption{A synthetically generated sample}
\label{fig:random sample data}
\vspace{-5mm}
\end{figure}
\subsection{Dataset}
We collect $30$ images for each class of brick (e.g., blue and green) using Kinect. The images are collected such that each image contains an isolated brick in a different pose. A mask is manually generated for each image. Following \cite{kumar2019semi}, the raw images and the corresponding masks are used to synthesize cluttered scenes. A dataset of $10$k training images and $5$k test images is generated. We generate ground truths for the synthetic images such that a unique instance ID, as opposed to a semantic label, is assigned to each brick instance. A random sample from the dataset is shown in Fig. \ref{fig:random sample data}. Furthermore, for each mask instance, we generate a rotated box using the OpenCV API.
Each image is divided into several grids, where each grid has a size of $16\times16$ pixels. Thus, if the raw image has a size of $480\times640$, then the total number of grids is $\frac{480}{16} \times \frac{640}{16}$, i.e., $30\times40$. Further, each grid is represented by an $8D$ vector comprising the three class probabilities (blue brick, green brick, or background) and five bounding box parameters $x, y, w, h, \theta$. A grid is assigned the class probabilities of a brick (blue or green) if the centroid of that brick's rotated bounding box falls within the grid. If no centroid falls within the grid, we assign a probability of $1.0$ to the background label. If a centroid exists within a grid, the corresponding bounding box parameters are $x, y, w, h, \theta$, where $x, y$ is the offset between the rotated bounding box center and the topmost corner of the grid, and $w, h, \theta$ are the width, height, and orientation of the bounding box, respectively. We scale the bounding box parameters to the range $(0, 1)$, where the maximum value of the offset is $16$ pixels, the maximum box dimensions are $480 \times 640$ pixels, and the maximum orientation value is $3.14$ radians. Thus, for each image, we have an output tensor of size $30\times40\times8$. Further, if multiple centroid points exist in a single grid, we select the centroid whose mask has the largest fraction of its area inside that grid.
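The following sketch illustrates this ground-truth encoding for a single image. The channel ordering (classes first, then box parameters) and the pairing of $w, h$ with the image axes are assumptions made for illustration, and the tie-breaking rule for multiple centroids in one grid is omitted.
\begin{verbatim}
import numpy as np

GRID, H, W = 16, 480, 640

def encode_targets(boxes):
    # boxes: list of ((cx, cy, w, h, theta), cls) with theta in radians
    # and cls in {0: blue, 1: green}; channel 2 is the background class
    target = np.zeros((H // GRID, W // GRID, 8), dtype=np.float32)
    target[..., 2] = 1.0                        # default: background
    for (cx, cy, w, h, theta), cls in boxes:
        gy, gx = int(cy // GRID), int(cx // GRID)
        target[gy, gx, :3] = 0.0
        target[gy, gx, cls] = 1.0               # one-hot class probability
        target[gy, gx, 3] = (cx % GRID) / GRID  # offset, max 16 pixels
        target[gy, gx, 4] = (cy % GRID) / GRID
        target[gy, gx, 5] = w / W               # dimensions scaled to (0, 1)
        target[gy, gx, 6] = h / H
        target[gy, gx, 7] = theta / 3.14        # orientation scaled to (0, 1)
    return target
\end{verbatim}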
\begin{figure}
\includegraphics[width=\linewidth]{images/Slide.jpg}
\caption{Proposed Network}
\label{fig:network}
\vspace{-5mm}
\end{figure}
\subsection{Rotating Box Network}
Fig. \ref{fig:network} shows the network architecture. For each block, the sizes of the input and output feature maps are given, and we use the ReLU activation function after each layer. The proposed rotating-box network architecture is inspired by SSD, where both shallow-layer and deep-layer features are used for the final predictions. Similarly, in the proposed network, features from individual shallow layers are processed, concatenated, and passed through a series of fully convolutional layers for the final prediction. Unlike SSD, the proposed network does not use anchor boxes. Instead, it predicts an additional degree of freedom (the angle of the box), so the predicted bounding boxes can align with objects more accurately than constrained upright bounding boxes.
To train the network to predict rotated boxes, the input to the network is the raw image, and the output of the network is a tensor of size $30 \times 40 \times 8$. Further, we use a cross-entropy loss for the class probabilities and a regression loss for the bounding box parameters; the overall loss for the network is the average of the two losses. To avoid biasing the training due to the much larger number of non-object grids compared to object grids, we select a positive-to-negative ratio of $1:2$, following \cite{SSD}. The output of the model for different arrangements of bricks against a variety of backgrounds is shown in Fig. \ref{fig:all_all_results}.
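A hedged sketch of this combined loss with $1{:}2$ hard-negative mining over the grid cells follows; the tensor layout matches the encoding sketch above, and it assumes each image contains at least one brick. This is illustrative, not the exact training code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def detection_loss(pred, target):
    # pred, target: [30, 40, 8]; channels 0-2 classes, 3-7 box parameters
    cls_p, box_p = pred[..., :3], pred[..., 3:]
    cls_t, box_t = target[..., :3], target[..., 3:]
    ce = -(cls_t * F.log_softmax(cls_p, dim=-1)).sum(-1)  # per-grid CE
    pos = cls_t[..., 2] < 0.5          # grids holding a brick centroid
    n_pos = int(pos.sum())
    neg_ce, _ = ce[~pos].sort(descending=True)  # hard-negative mining
    neg_ce = neg_ce[: 2 * n_pos]                # 1:2 positive:negative
    cls_loss = (ce[pos].sum() + neg_ce.sum()) / max(3 * n_pos, 1)
    box_loss = F.mse_loss(box_p[pos], box_t[pos])  # regression, positives
    return (cls_loss + box_loss) / 2               # average of both losses
\end{verbatim}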
\begin{figure}[h]
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/detection/5.jpeg}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/detection/6.jpeg}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/detection/7.jpeg}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/detection/9.jpeg}
\end{subfigure}
\caption{Network Predictions}
\label{fig:all_all_results}
\vspace{-4mm}
\end{figure}
\subsection{Pose Estimation}
\label{sec_pose}
To estimate the brick pose, as mentioned earlier, we calculate the pose of one of the brick surfaces and use the relative transformation to recover the complete brick pose. For this task, we feed the current image (Fig. \ref{fig:pose1}) to the rotating box network. The region corresponding to the rotated box (Fig. \ref{fig:brick_seg}) is called the brick region, and the point cloud corresponding to the brick region is called the brick cloud. On the brick cloud, we apply the following steps (a minimal sketch of the first two steps is given after this list):
\begin{itemize}
\item Apply \textit{RANSAC} method for estimating a set of points (inliers) that fits a planar surface in the brick cloud data.
\item Compute the centroid, major axis, and minor axis of the inliers; together, these three quantities represent the pose of the planar surface. To estimate the surface ID, we proceed as follows.
\item Using \cite{vohra2019real}, extract all boundary points of the inliers, marked in white in Fig. \ref{fig:edges}.
\item Apply the RANSAC method to fit lines to the boundary points, shown in pink in Fig. \ref{fig:lines}.
\item Compute all corner points, which are the intersecting point of two or more lines.
\item Pair the corner points representing the same line \cite{icinco21}, and the distance between two corner points gives the length of the edge.
\item Since the brick dimensions are known in advance, the edge lengths can be used to identify the surface, and the relative transformation then yields the 6D pose of the brick, as shown in Fig. \ref{fig:pose}.
\end{itemize}
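The first two steps admit a compact numpy sketch, shown below; this is a simplified illustration under our own tolerance and iteration settings, and the boundary extraction, line fitting, and corner pairing steps are omitted.
\begin{verbatim}
# Simplified sketch of RANSAC plane fitting and the PCA-style surface
# pose (centroid, major axis, minor axis) on the brick cloud.
import numpy as np

def ransac_plane(points, iters=200, tol=0.005):
    """points: (N, 3) in metres. Returns the inlier mask of the best plane."""
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p = points[np.random.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p[0]) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

def surface_pose(points):
    inl = points[ransac_plane(points)]
    centroid = inl.mean(axis=0)
    # eigenvectors of the covariance give the surface axes
    _, vecs = np.linalg.eigh(np.cov((inl - centroid).T))
    major, minor = vecs[:, 2], vecs[:, 1]  # eigenvalues sorted ascending
    return centroid, major, minor
\end{verbatim}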
\begin{figure}[h]
\centering
\begin{subfigure}{0.125\textwidth}
\includegraphics[width=1.18\linewidth]{images/pose_estimation/sample.jpg}
\caption{Image}
\label{fig:pose1}
\end{subfigure}\hfil
\begin{subfigure}{0.125\textwidth}
\includegraphics[width=1.18\linewidth]{images/pose_estimation/bb_box_pose.jpg}
\caption{Rotating box}
\label{fig:brick_seg}
\end{subfigure}\hfil
\begin{subfigure}{0.125\textwidth}
\includegraphics[height=0.89\linewidth, width=1.15\linewidth]{images/pose_estimation/real_cloud.jpg}
\caption{Point Cloud}
\end{subfigure}
\medskip
\begin{subfigure}{0.125\textwidth}
\includegraphics[height=0.89\linewidth, width=1.16\linewidth]{images/pose_estimation/edges.jpg}
\caption{Edges}
\label{fig:edges}
\end{subfigure}\hfil
\begin{subfigure}{0.125\textwidth}
\includegraphics[height=0.89\linewidth, width=1.16\linewidth]{images/pose_estimation/lines.jpg}
\caption{Lines}
\label{fig:lines}
\end{subfigure}\hfil
\begin{subfigure}{0.125\textwidth}
\includegraphics[height=0.89\linewidth, width=1.15\linewidth]{images/pose_estimation/pose.jpg}
\caption{6D Pose}
\label{fig:pose}
\end{subfigure}
\caption{Pose estimation pipeline}
\label{fig:complete_pose}
\vspace{-4mm}
\end{figure}
\section{Conclusion}
\label{sec_con}
An end-to-end visual perception framework is proposed. The framework consists of a CNN for predicting a rotated bounding box. The performance of the CNN detector has been demonstrated in various scenarios, mainly involving isolated bricks and dense clutters of bricks. The proposed CNN module localizes the bricks in a clutter while simultaneously handling multiple instances of the bricks. The detection is free of the anchor-box technique, which improves the timing performance of the detection module. To compare our method quantitatively with state-of-the-art models, we reported Precision ($P$), Recall ($R$), and mAP scores for various test cases. We compared the effectiveness of rotating bounding box predictions against upright bounding box detection (YOLO-v3, SSD-Lite), and the proposed scheme outperforms the upright bounding box detection. This implies that rotating bounding boxes can align more accurately with an object's convex hull and thereby reduce the overlap with neighboring bounding boxes (if any). The framework has also been successfully deployed on a robotic system to construct a wall from bricks in a fully autonomous operation.
\section{Experiments and Result}
\label{sec_exp}
\subsection{Experimental Setup}
For the experimental evaluation, we use our robotic platform, shown in Fig. \ref{fig:hardware_setup}. It consists of a UR5 robot manipulator with its controller box (internal computer), mounted on a ROBOTNIK Guardian mobile base, and a host PC (external computer). The UR5 robot manipulator is a 6-DOF robotic arm designed to work safely alongside humans. We use an eye-in-hand configuration, i.e., the image acquisition hardware, an RGB-D Microsoft Kinect sensor, is mounted on the manipulator. A suction-based gripper is used for grasping. The Robot Operating System (ROS) is used to establish a communication link among the sensor, the manipulator, and the gripper.
\begin{figure*}
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/1.jpg}
\caption{}
\label{fig:full_system1}
\end{subfigure}%
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/2.jpg}
\caption{}
\label{fig:full_system2}
\end{subfigure}%
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/3.jpg}
\caption{}
\label{fig:full_system3}
\end{subfigure}%
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/4.jpg}
\caption{}
\label{fig:full_system4}
\end{subfigure}%
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/5.jpg}
\caption{}
\label{fig:full_system5}
\end{subfigure}%
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/6.jpg}
\caption{}
\label{fig:full_system6}
\end{subfigure}%
\begin{subfigure}[b]{0.14\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/7.jpg}
\caption{}
\label{fig:full_system7}
\end{subfigure}
\caption{Sequence of actions executed in order to carry out a single step of wall construction }
\label{fig:full_system}
\vspace{-4mm}
\end{figure*}
\subsection{Overall Algorithmic Flow}
For the experiment, we follow a simple (wall) pattern: a blue brick is placed on the previously placed blue brick and a green brick on the previously placed green brick, up to a height of $6$ layers. Correct brick placement requires a highly accurate brick pose. Since, in a dense clutter, the network prediction for a brick can include portions of other bricks, directly processing the box region can yield a noisy or unreliable brick pose. To be on the safe side, we perform the grasp operation with the noisy brick pose, place the brick in a separate area, and then re-estimate the pose of the single isolated brick.
Fig. \ref{fig:full_system} shows the sequence of steps executed to complete a single phase of the wall construction task. In the first stage, the Kinect sensor is positioned to have a clear view of the brick clutter (Fig. \ref{fig:full_system1}), and the current image is fed to the rotating bounding box network. We compute the planar surface and its pose (centroid, major axis, and minor axis) corresponding to each rotated box. We then select the topmost pose among all the calculated poses, i.e., the pose at the top of the clutter (a one-line helper is sketched below). The required motion commands are generated to reach the selected pose for brick grasping in clutter (Fig. \ref{fig:full_system2}).
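The selection itself is a one-liner; the sketch below assumes the world $z$-axis points upward and that each pose is stored as a (centroid, major axis, minor axis) tuple.
\begin{verbatim}
# Illustrative helper: pick the pose at the top of the clutter,
# i.e. the surface whose centroid is highest along the world z-axis.
def select_topmost(poses):
    """poses: list of (centroid, major_axis, minor_axis) tuples."""
    return max(poses, key=lambda pose: pose[0][2])
\end{verbatim}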
The grasped brick is placed in a separate area (Fig. \ref{fig:full_system3}), and the vision sensor is repositioned to obtain a clear view of the single isolated brick (Fig. \ref{fig:full_system4}). The current frame is fed to the network, whose output is a class label and the bounding box parameters. The brick's pose is estimated, and the required motion commands are generated to grasp the brick (Fig. \ref{fig:full_system5}).
In the final stage, the sensor is positioned to view the wall region (Fig. \ref{fig:full_system6}), and the current frame is fed to the network. Using the estimated bounding box, the point cloud data, and the grasped brick's label (from the previous step), the brick's final pose, i.e., where it is to be placed to form the wall, is estimated (Fig. \ref{fig:full_system7}).
\subsection{Error Metrics}
The system's overall wall-building performance depends entirely upon the performance of the visual perception system, i.e., the accuracy of brick detection. Therefore, we report the performance of the detection system in terms of precision ($P$) and recall ($R$), defined as follows (a sketch of these metrics in code follows the definitions):
\begin{equation}
P = \frac{NO}{TP},\ \ \ R = \frac{DB}{TB}
\end{equation}
\begin{description}
\item [where,]
\item [$NO$] Number of object pixels in the predicted box (rotated / upright)
\item [$TP$] Total number of pixels in the predicted box (rotated / upright)
\item [$DB$] Total number of detected bricks
\item [$TB$] Total number of the bricks in the ground truth.
\end{description}
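For clarity, the sketch below computes these metrics from a ground-truth object mask and rasterized box masks; all names are our assumptions.
\begin{verbatim}
# Sketch of the metrics above, assuming each predicted (rotated or
# upright) box has been rasterized into a boolean HxW mask.
import numpy as np

def precision(pred_box_masks, object_mask):
    no = sum(np.logical_and(m, object_mask).sum() for m in pred_box_masks)
    tp = sum(m.sum() for m in pred_box_masks)   # all pixels inside boxes
    return no / tp if tp else 0.0               # NO / TP

def recall(num_detected_bricks, num_gt_bricks):
    return num_detected_bricks / num_gt_bricks  # DB / TB
\end{verbatim}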
\subsection{Quantitative Analysis}
We compare the rotating box network with YOLO-v3 and SSD-lite. For a fair comparison, all models are trained with the same hyper-parameters (epochs $= 60$, mini-batch size $= 4$, learning rate $= 0.0001$), except SSD-lite, which uses a learning rate of $0.001$. We use the Adam optimizer to tune the parameters of the CNNs. We divide the quantitative analysis into the following two cases:
\subsubsection{Upright Bounding Box Prediction}
In this case, we report the Mean Average Precision (mAP) score, $P$, and $R$ of the proposed CNN model against SSD-lite and YOLO-v3 (Table-\ref{table_urb}). All three models produce upright bounding boxes.
\vspace{-7mm}
\begin{center}
\begin{table}[h]
\caption{Upright bounding box prediction}
\label{table_urb}
\centering
\begin{tabular}{| c | c | c | c | }
\hline
& SSD-lite & YOLO-v3 & Proposed \\
\hline
$P$ & $0.608$ & $0.580$ & $0.638$ \\
\hline
$R$ & $0.98$ & $0.84$ & $0.84$ \\
\hline
mAP & $0.834$ & $0.827$ & $0.811$ \\
\hline
\end{tabular}
\end{table}
\end{center}
\vspace{-7mm}
\subsubsection{Rotated Bounding Box Prediction}
In this case, SSD-lite and YOLO-v3 produce regular (upright) bounding boxes, while the proposed CNN model produces rotated boxes. Since the mAP score of rotated boxes cannot be compared directly with that of upright bounding boxes, only $P$ and $R$ are reported (Table-\ref{table_rb}). The precision $P$ of the rotating box network is significantly higher than that of the other networks: owing to the additional degree of freedom (the box angle), the network predicts bounding boxes that align more accurately than constrained (upright) boxes. There is therefore less overlap between different boxes, and most of the region inside each bounding box belongs to the same object, which results in high precision.
\vspace{-4mm}
\begin{center}
\begin{table}[h]
\caption{Rotated bounding box prediction}
\label{table_rb}
\centering
\begin{tabular}{| c | c | c | c | }
\hline
& SSD-lite & YOLO-v3 & Proposed \\
\hline
$P$ & $0.608$ & $0.580$ & $0.778$ \\
\hline
$R$ & $0.98$ & $0.84$ & $0.999$ \\
\hline
\end{tabular}
\end{table}
\end{center}
\vspace{-4mm}
\subsection{Qualitative Analysis}
Predictions of all four networks under consideration are shown in Fig. \ref{fig:all_results}. As can be seen in Fig. \ref{fig:all_output_4_1} and \ref{fig:all_output_4_3}, the two green bricks at the center of the frame are covered by a single bounding box, which decreases the recall of YOLO-v3 and of the proposed upright bounding box network. SSD-lite (Fig. \ref{fig:all_output_4_2}) and the proposed rotating box network (Fig. \ref{fig:all_output_4_4}) both assign separate boxes to the two green bricks and thus achieve a higher recall. However, the two bounding boxes predicted by SSD-lite have a significant overlap area, giving SSD-lite a lower precision than the rotating box network.
\begin{figure}
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/yolo_o1.jpg}
\caption{}
\label{fig:all_output_1_1}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/ssd_1.jpg}
\caption{}
\label{fig:all_output_1_2}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/my_o1.jpg}
\caption{}
\label{fig:all_output_1_3}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/bb_box1.jpg}
\caption{}
\label{fig:all_output_1_4}
\end{subfigure}
\medskip
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/yolo_o2.jpg}
\caption{}
\label{fig:all_output_2_1}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/ssd_2.jpg}
\caption{}
\label{fig:all_output_2_2}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/my_o2.jpg}
\caption{}
\label{fig:all_output_2_3}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/bb_box2.jpg}
\caption{}
\label{fig:all_output_2_4}
\end{subfigure}
\medskip
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/yolo_o3.jpg}
\caption{}
\label{fig:all_output_3_1}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/ssd_3.jpg}
\caption{}
\label{fig:all_output_3_2}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/my_o3.jpg}
\caption{}
\label{fig:all_output_3_3}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[height = 0.66\textwidth, width =0.95\linewidth]{images/experiments/bb_box3.jpg}
\caption{}
\label{fig:all_output_3_4}
\end{subfigure}
\medskip
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/yolo_o4.jpg}
\caption{}
\label{fig:all_output_4_1}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/ssd_4.jpg}
\caption{}
\label{fig:all_output_4_2}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/my_o4.jpg}
\caption{}
\label{fig:all_output_4_3}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/bb_box4.jpg}
\caption{}
\label{fig:all_output_4_4}
\end{subfigure}
\medskip
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/yolo_o5.jpg}
\caption{}
\label{fig:all_output_5_1}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/ssd_5.jpg}
\caption{}
\label{fig:all_output_5_2}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/my_o5.jpg}
\caption{}
\label{fig:all_output_5_3}
\end{subfigure}%
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[scale=0.08]{images/experiments/bb_box5.jpg}
\caption{}
\label{fig:all_output_5_4}
\end{subfigure}
\caption{From Column 1 to 4: YOLO-v3 predictions, SSD-lite predictions, Proposed Bounding box predictions, Rotating box predictions.}
\label{fig:all_results}
\end{figure}
\subsection{Task Evaluation} To evaluate the system's performance, we repeat the task of wall construction for $25$ rounds. In each round, the robotic system has to place bricks up to a height of $6$ layers, with the same wall pattern as before, i.e., a blue brick on the previously placed blue brick and a green brick on the previously placed green brick. The first layer of bricks is placed manually, and the system has to place the remaining layers, i.e., layers $2$-$6$, according to the pattern. For each round, we count the number of bricks (or layers) that the robotic system successfully placed on the wall. We define a brick placement as successful if the distance between the centroid of the currently placed brick and the centroid of the manually placed brick, projected on the ground plane, is $< 0.1$\,m, and the Euler angle difference between the calculated pose of the currently placed brick and that of the manually placed brick is less than \ang{15} for each axis (this criterion is sketched in code below). From our experiments, we observed that if these criteria are not satisfied, the wall becomes asymmetrical and collapses. Table-\ref{table_task_eval} shows the performance of the system over the $25$ rounds. The video link for the experiment is \url{https://www.youtube.com/watch?v=FvsCv-Pt58c}.
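The success test translates directly into a small check; representing each pose as a centroid plus per-axis Euler angles is our assumption.
\begin{verbatim}
# Sketch of the placement-success test: ground-plane centroid distance
# below 0.1 m and per-axis Euler angle error below 15 degrees.
import numpy as np

def placement_ok(c_auto, c_manual, euler_auto, euler_manual):
    """Centroids (x, y, z) in metres; Euler angles in degrees."""
    d = np.linalg.norm(np.asarray(c_auto[:2]) - np.asarray(c_manual[:2]))
    ang = np.abs(np.asarray(euler_auto) - np.asarray(euler_manual))
    return d < 0.1 and bool(np.all(ang < 15.0))
\end{verbatim}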
\vspace{-4mm}
\begin{center}
\begin{table}[h]
\caption{Task Evaluation}
\label{table_task_eval}
\centering
\begin{tabular}{ | C{1.2cm} | C{0.8cm}| C{0.8cm} | C{0.8cm} | C{0.8cm} | C{0.8cm} | }
\hline
& layer-2 & layer-3 & layer-4 & layer-5 & layer-6 \\
\hline
Successful rounds (max 25) & 25 & 25 & 22 & 19 & 17 \\
\hline
\end{tabular}
\vspace{-4mm}
\end{table}
\end{center}
\vspace{-4mm}
From Table-\ref{table_task_eval}, we observe that the robotic system successfully placed the layer-$2$ and layer-$3$ bricks in all $25$ rounds. However, accuracy decreases for the upper layers. This is because the position of a new brick on the wall is estimated from the pose of the previously placed brick, so a slight placement error at one step is transferred to the next. The error thus accumulates with height, resulting in lower accuracy for the higher layers.
\section{Introduction}
Manufacturing and construction are among the most widespread and continuously growing industries. The former has seen a dramatic increase in production capacity due to industrial automation, while the latter has adopted automation only marginally \cite{asadi2018real}. Construction automation is inherently challenging for several reasons. First, the workspace is highly unstructured; therefore, very precise and robust visual perception, motion planning, and navigation algorithms are required for autonomous solutions to adapt to different scenarios. Second, a mobile manipulator needs to move between multiple positions, compelling us to perform the computations for the various algorithms onboard. Limited memory, power, and computational resources therefore make this task more challenging.
Automation can have a broad impact on the construction industry. Construction work can continue without pause, which shortens the construction period and increases the economic benefit; further essential benefits are worker safety, quality, and job continuity. Towards this end, Construction Robotics, a New York-based company, recently developed a bricklaying robot called SAM100 (semi-automated mason) \cite{parkes2019automated}, which builds a wall six times faster than a human. However, the robot requires a systematic stack of bricks at regular intervals, making the system semi-autonomous, as the name suggests.
One of the primary construction tasks is to build a wall from a pile of randomly arranged bricks. To carry out a simplified version of this task, a human completes a sequence of operations: \textit{i)} selecting an appropriate brick from the pile, e.g., the topmost brick, \textit{ii)} finding an optimal grasp pose for the brick, and \textit{iii)} placing the brick at its desired place, i.e., on the wall. Humans can do this work quickly and efficiently, but a robot must perform a complex set of underlying operations to complete the above steps autonomously \cite{prakash2019learning}.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=0.965\linewidth,height=0.750\linewidth]{images/experiments/1.jpg}
\caption{}
\label{fig:hardware_setup}
\end{subfigure}
\hspace{2ex}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=0.965\linewidth,height=0.750\linewidth]{images/introduction/8.jpg}
\caption{}
\end{subfigure}
\caption{
(a) shows a simple scenario where a pile is located near the robotic system, (b) the robotic system mimics wall building task i.e. detects pile, selects a target brick and constructs a wall on its side in a fully autonomous way.
}
\label{fig: task} \vspace{-4mm}
\end{figure}
In this paper, we aim to deploy a robotic solution for the task of construction automation in a constrained environment (Fig.\ref{fig: task}) with limited computational resources (a single Intel i7 CPU with 4 cores and 8\,GB RAM). We assume that all bricks are of equal size and that their dimensions are known. We further assume that the wall assembly area and the brick pile are very close, exempting us from deploying any localization and navigation modules for the robot. Thus, the main challenge in this task is to detect and localize bricks in the clutter while handling multiple instances of the bricks. Once the bricks are localized in the clutter, we use this information to estimate the brick pose. The main contributions of this paper are:
\begin{itemize}
\item A computationally efficient object detection network for the detection and localization of bricks in a clutter is presented in this paper.
\item A light computational method for estimating brick pose using point cloud data is presented in this paper.
\item All the modules are integrated into a robotic system to develop a fully autonomous system.
\item Extensive experiments to validate the performance of our system.
\end{itemize}
In the next section, we briefly review the state-of-the-art algorithms related to this paper. In section-\ref{sec_prob}, we formulate the problem statement. The overall approach and its modules are explained in section-\ref{sec_network}. In section-\ref{sec_exp}, the experimental study of the algorithm is reported for various test cases. The paper is concluded in section-\ref{sec_con}.
\section{Problem Statement}
\label{sec_prob}
\subsection{Object Detection}
As mentioned in the previous sections, the main challenge is to identify the bricks in a clutter. All state-of-the-art object detectors predict upright or straight bounding boxes, which have three limitations:
\begin{itemize}
\item The bounding box corresponding to a rotated or tilted object contains a significant non-object region (Fig. \ref{fig:nms_prob_a}). Thus, an additional step, such as object segmentation within the bounding box region, is required to extract the object information.
\item If two or more objects are very close to each other, the corresponding bounding boxes will have non-zero intersecting regions (Fig. \ref{fig:nms_prob_b}). Additional steps are then required to handle the intersecting regions, as such a region may contain clutter for one box or an object part for another.
\item If the intersecting regions are significant, neighboring detections may be missed after applying non-maximal suppression (NMS) \cite{nms_prob}, as shown in Fig. \ref{fig:nms_prob_c}.
\end{itemize}
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\includegraphics[width=0.965\linewidth]{images/introduction/sample_output2.jpg}
\caption{Predicted Boxes}
\label{fig:nms_prob_a}
\end{subfigure}%
\begin{subfigure}[b]{0.33\linewidth}
\includegraphics[width=0.965\linewidth]{images/introduction/w_o_nms.jpg}
\caption{Boxes Overlap}
\label{fig:nms_prob_b}
\end{subfigure}%
\begin{subfigure}[b]{0.33\linewidth}
\includegraphics[width=0.965\linewidth]{images/introduction/w_nms.jpg}
\caption{After NMS}
\label{fig:nms_prob_c}
\end{subfigure}
\caption{Nearby predictions can be missed due to NMS}
\label{fig:nms_prob}
\end{figure}
To overcome the above limitations, we designed a CNN-based model that detects and localizes bricks by predicting rotated boxes. The additional degree of freedom, i.e., the box angle, allows the network to predict boxes that align more closely with the target object. Since most of the area inside a rotated bounding box corresponds to the target object, we can directly use the region of the rotated bounding box to extract the target object's information, avoiding additional computations. A detailed description of the proposed model is given in Section-\ref{sec_network}.
\subsection{6D Pose Estimation}
As mentioned earlier, the bricks used in the experiments have flat, textureless surfaces. Therefore, feature matching methods for pose estimation are unreliable in our setting, as the features are few and not very distinct. Since the bricks have a cuboidal shape, estimating the pose of at least one surface of a brick is sufficient to estimate the pose of the entire brick: the brick has six faces, and each face has a known relative pose with respect to the local brick frame. Hence, to estimate the brick pose, we identify the brick surface (out of the six surfaces), estimate the surface pose, and use the relative transformation to obtain the complete brick pose. A brief description of the pose estimation method is given in Section-\ref{sec_pose}.
\section{Related Works}
\label{sec_background}
\subsection{Object Detection}
As mentioned in the previous section, the first stage of the construction process is the localization of the target object, in our case a brick. In general, the bricks are arranged randomly; therefore, a brick must be localized before grasping. Brick localization falls under the category of object detection and is performed in image space. Several object detection algorithms exist in the literature; here, we limit our discussion to Convolutional Neural Network (CNN) based methods.
The RCNN \cite{RCNN} generates object proposals (rectangular regions) in the image plane, and a CNN is used to extract features from the proposed regions, followed by a classifier that classifies the proposals into N different classes, where N may vary according to the application. In RCNN, most of the time is consumed in proposal generation, as this step is performed on the CPU, and the inference time increases linearly with the number of proposals. SPPnets \cite{SPPNet} were proposed to speed up RCNN by extracting the features for the whole image at once and then cropping the feature map corresponding to each proposal. Due to the multistage nature of these algorithms, they could not be trained jointly. Fast-RCNN \cite{Fast-RCNN} proposed an improved approach that is relatively faster and requires only single-stage training. A further improved version, Faster-RCNN \cite{Faster-RCNN}, was also proposed, in which proposals are generated within the CNN by a Region Proposal Network (RPN). The RPN was the key to improving the overall real-time performance of the algorithm. All the methods discussed above predict an upright bounding box around the detected object, and in addition to the target object region, the predicted box may contain non-object regions or background. To minimize the background in the detected boxes, various solutions are present in the literature. For example, in \cite{li2018multiscale}, the authors predict a rotated bounding box from a set of prior rotated boxes (anchor boxes). Similarly, Mask-RCNN \cite{Mask-RCNN} can predict the bounding box and the mask of the object simultaneously, which is known as instance detection and segmentation.
All the algorithms mentioned above consist of two steps: \textit{i)} generation of object proposals or anchor boxes (axis-aligned or rotated), and \textit{ii)} classification (or regression) of the proposals using a CNN with a backbone such as VGG \cite{vgg} or ResNet \cite{resnet}. The performance of these algorithms thus depends on the proposal generation process. In contrast, the authors of You Only Look Once (YOLO-v1) \cite{yolov1} proposed a single network for object detection that divides the image into grid cells and directly predicts a fixed number of bounding boxes, the corresponding confidence scores, and the class probabilities for each grid cell. In the same direction, the Single Shot MultiBox Detector (SSD) \cite{SSD} is another variant of single-stage object detector, in which multi-resolution detection is performed, i.e., the presence of an object and its class score are predicted at several stages of different spatial resolutions.
\subsection{6D Pose Estimation}
After brick localization, a grasp operation needs to be performed by the manipulator. Choosing an optimal grasp configuration is a non-trivial task and remains an open problem; the grasp configuration depends on the 6D pose of the brick. Several neural-network-based pose estimation methods exist in the literature \cite{tekin2018real}, \cite{he2020pvn3d}, but limited memory resources compel us to use computationally light pose-estimation methods. Many algorithms for estimating an object's 6D pose require a high degree of surface texture on the object. In our case, estimating the brick pose is quite challenging because of the bricks' cuboidal shape with flat, textureless surfaces; therefore, feature point matching techniques \cite{vfh} \cite{do2016efficient} cannot be used. Other approaches \cite{drost2010model}, \cite{hinterstoisser2016going} rely on a prior model of the object; these methods require a preprocessing step followed by a correspondence matching process, which is the most time-consuming component of such algorithms. Besides, point-to-point matching methods (ICP \cite{icp}, GICP \cite{gicp}) are based on local geometric properties and can therefore get stuck in local minima when aligning the target model with the reference, due to the flat surfaces.
\section{Introduction}
Over the past few years, the landscape of computer vision has been noticeably changed from the engineered feature architecture to an end-to-end feature learning architecture, deep neural networks, by which many state-of-the-art work advanced the development of classical tasks such as object detection \cite{girshick2014rich}, semantic segmentation \cite{long2015fully}, and image retrieval \cite{li2015weakly}. Such a revolutionary change mainly results from several crucial elements, such as big datasets, high-performance hardware, new effective models, and regularization techniques. In this work, we focus on two notable elements, activation function and the corresponding initialization of network.
One well-known activation function is the Rectified Linear Unit (ReLU) \cite{nair2010rectified,krizhevsky2012imagenet}, which has had a profound effect on the development of deep neural networks. ReLU is a piecewise-linear function that keeps positive inputs and outputs zero for negative inputs. Owing to this form, it can alleviate the problem of vanishing gradients, allowing the supervised training of much deeper neural networks. However, it has a potential disadvantage: a unit will never activate again once its gradients reach zero. Seeing this, Maas \emph{et al.} \cite{maas2013rectifier} presented Leaky ReLU (LReLU), in which the negative part of the activation function is replaced with a linear function. He \emph{et al.} \cite{He_2015_ICCV} further extended LReLU to the Parametric Rectified Linear Unit (PReLU), which can learn the parameters of the rectifiers, leading to higher classification accuracy with little overfitting risk. In addition, Clevert \emph{et al.} \cite{clevert2015fast} presented the Exponential Linear Unit (ELU), leading to faster learning and better generalization performance than the rectified unit family on deep networks. The above rectified and exponential linear units are commonly adopted by recent deep learning architectures \cite{krizhevsky2012imagenet,simonyan2014very,Szegedy_2015_CVPR,he2015deep} to achieve good performance. However, there exists a gap in representation space between the two types of activation functions: for the negative part, ReLU and PReLU can represent the linear function family but not the non-linear one, while ELU can represent the non-linear function family but not the linear one. This representation gap to some extent undermines the representational power of architectures committed to a particular activation function. In addition, ELU is at a potential disadvantage when used with Batch Normalization \cite{DBLP:conf/icml/IoffeS15}: Clevert \emph{et al.} \cite{clevert2015fast} showed that using Batch Normalization with ELU can harm the classification accuracy, which is also verified in our experiments.
This work is mainly motivated by PReLU and ELU. Firstly, we present a new Multiple Parametric Exponential Linear Unit (MPELU), a generalization of ELU, to bridge this gap. In particular, an extra learnable parameter, $\beta$, is introduced into the inputs of ELU to control the shape of the negative part. By optimizing $\beta$ through stochastic gradient descent (SGD), MPELU can adaptively switch between the rectified and exponential linear units. Secondly, motivated by PReLU, we make the hyper-parameter $\alpha$ of ELU learnable, further improving its representational ability and tuning the function shape. This design makes MPELU more flexible than its antecedents ReLU, PReLU, and ELU, which can be seen as special cases of MPELU. Therefore, by learning $\alpha$ and $\beta$, both the linear and non-linear space of the negative part can be covered by a single activation function module, whereas its existing special cases do not have this property.
The introduction of learnable parameters into ELU may likely bring an additional benefit. This is inspired by the observation that Batch Normalization does not improve ELU networks but can improve ReLU and PReLU networks. To see this, MPELU can be inherently decomposed into a composition of PReLU and learnable ELU:
\begin{align}
\label{MPELU_decompose_no_BN}
MPELU = \widetilde{ELU}[PReLU(x)],
\end{align}
where $x$ is the input of the activation function, and $\widetilde{ELU}$ denotes ELU \cite{clevert2015fast} with a learnable parameter $\alpha$. Applying Batch Normalization to the input gives
\begin{align}
\label{MPELU_decompose}
MPELU = \widetilde{ELU}\{ PReLU[ BN(x) ] \}.
\end{align}
As we can see, the outputs of Batch Normalization flow into PReLU before ELU, which can result not only in improved classification performance, but also in the alleviation of the potential problem of combining Batch Normalization with ELU. Eqn.~(\ref{MPELU_decompose}) suggests that MPELU can also share the advantages of PReLU and ELU simultaneously, for example, the superior learning behavior of ELU compared to ReLU and PReLU, as described in \cite{clevert2015fast}. Our experimental results on CIFAR-10 and ImageNet 2012 demonstrate that, by introducing the learnable parameters, MPELU networks provide better classification performance and convergence properties than their counterparts.
Because of the introduction of extra parameters, overfitting could be a concern. To address this, we adopt the same strategy as PReLU to reduce the overfitting risk. For each MPELU layer, $\alpha$ and $\beta$ are initialized as the channel-share version or the channel-wise version. Therefore, the increment of parameters of the entire network is at most twice the total number of channels, which is negligible compared to the number of weights.
Although many activation functions, e.g., ELU \cite{clevert2015fast}, have been proposed recently, few works determine a weight initialization for networks using them. Improper initialization often hampers the learning of very deep networks \cite{simonyan2014very}. Glorot \emph{et al.} \cite{glorot2010understanding} proposed an initialization scheme, but it only considers linear activation functions. He \emph{et al.} \cite{He_2015_ICCV} derived an initialization method that considers the rectified linear units (e.g., ReLU) but makes no allowance for the exponential linear units (e.g., ELU). Even though Clevert \emph{et al.} \cite{clevert2015fast} applied it to networks using ELU, this lacks theoretical justification. Furthermore, none of these works is suitable for non-convex activation functions. Observing this, this paper presents a strategy of weight initialization that enables the training of networks using exponential linear units, including ELU and MPELU, and thus extends the current theory to a wider range of settings. In particular, since MPELU is non-convex, the proposed initialization also applies to non-convex activation functions.
The main contributions of this work are:
\begin{enumerate*}
\vspace{-10pt}
\item A new activation function MPELU that covers the solution space of both the rectified and exponential linear units.
\item A technique of weight initialization, allowing the training of extremely deep networks using ELU and MPELU.
\item A simple architecture of ResNet with MPELU, achieving state-of-the-art results on the CIFAR \cite{krizhevsky2009learning} dataset with comparable time/memory complexity and parameters to the original versions \cite{he2015deep,He2016}.
\end{enumerate*}
The remainder of this paper is organized as follows. Sec.~\ref{section-2:related_work} reviews the related work. In Sec.~\ref{Section-3: the proposed methods}, we propose our activation function and initialization method. The experiments and analysis are given in Sec.~\ref{Section-4: MPELU_experiments} to show their effectiveness. Utilizing the proposed methods, Sec.~\ref{Section-5: Deep MPELU Residual Networks} presents a deep MPELU residual architecture to provide state-of-the-art performance on CIFAR-10/100. Finally, Sec.~\ref{Section-6: conclusion} concludes. To keep the paper at a reasonable length, the implementation details of our experiments are given in appendix.
\section{Related Work}
\label{section-2:related_work}
This paper mainly focuses on activation functions and the weight initialization of deep neural networks. Therefore, we review the related work in the two fields. Note that training very deep networks can also be realized by developing new architectures such as introducing skip connection as in \cite{NIPS2015_5850,he2015deep}, but this is beyond the scope of the paper.
\noindent \\
\textbf{Activation Functions.} Even though activation functions are an early invention, they were not formally defined until recently \cite{gulcehre2016noisy}. Activation functions allow deep neural networks to learn a complex non-linear transformation, which is crucial to the power of modeling. From the feature point of view, the outputs of activation functions can be used as high-level semantic representations (can also be obtained by subspace learning, e.g., \cite{li2015robust}) that are more robust to variance than low-level ones, which facilitates recognition tasks.
Among the recent work is the Rectified Linear Unit (ReLU) \cite{nair2010rectified,krizhevsky2012imagenet}, one of the keys to the breakthrough of deep neural networks. ReLU keeps positive inputs unchanged and outputs zero for negative inputs; it can therefore avoid the problem of vanishing gradients, enabling the training of much deeper supervised neural networks, whereas the sigmoid nonlinearity cannot. LReLU \cite{maas2013rectifier} was proposed to multiply the negative inputs by a slope factor, aiming to avoid the zero gradients of ReLU. According to \cite{maas2013rectifier}, LReLU provides performance comparable to ReLU and is sensitive to the value of the slope. He \emph{et al.} \cite{He_2015_ICCV} found that the cost function is differentiable with respect to the slope factor and therefore proposed optimizing the slope through SGD. This parametric rectified linear unit is named PReLU. Experiments showed that PReLU can improve the performance of convolutional neural networks with little overfitting risk. They also proved that PReLU can push the off-diagonal blocks of the FIM closer to zero, which enables faster convergence than ReLU. None of the above activation functions can learn non-convex functions, owing to their intrinsically convex form. To address this, Jin \emph{et al.} \cite{Jin2016aaai} proposed the S-shaped rectified linear activation unit (SReLU) to learn both convex and non-convex functions, inspired by the Weber-Fechner law and the Stevens law. In addition to the above rectified linear units, Clevert \emph{et al.} \cite{clevert2015fast} presented a novel form of activation function, the Exponential Linear Unit (ELU). ELU is similar to a sigmoid for negative inputs and has the same form as ReLU for positive inputs. It has been proved that ELU is able to bring the gradient closer to the unit natural gradient, which accelerates learning and leads to higher performance. When used with Batch Normalization \cite{DBLP:conf/icml/IoffeS15}, however, ELU tends to expose an unexpected degradation problem; in this case, ELU has a negligible impact on the generalization capability and classification performance. In addition to the above deterministic activation functions, there are also stochastic versions. Recently, Xu \emph{et al.} \cite{xu2015empirical} proposed a randomized leaky rectified linear unit, RReLU. RReLU also has negative values, which helps to avoid zero gradients. The difference is that the slope of RReLU is neither fixed nor learnable but randomized, a strategy by which RReLU can reduce the overfitting risk to some extent. However, Xu \emph{et al.} only verified RReLU on small datasets like CIFAR-10/100; how RReLU performs on large datasets such as ImageNet still needs to be explored.
\noindent \\
\textbf{Initialization.} Initialization of parameters is very important, especially for deep networks and for large learning rates. If not initialized properly, a network may be very hard to train through SGD. Many efforts have concentrated on this subject. Hinton \emph{et al.} \cite{hinton2006fast} introduced a learning algorithm that utilizes layer-wise unsupervised pre-training to initialize all layers; before this, there were no suitable algorithms for training deep fully-connected architectures. Shortly after, Bengio \emph{et al.} \cite{bengio2007greedy} studied the pre-training strategy and conducted a series of experiments to substantiate and verify it. Erhan \emph{et al.} \cite{erhan2009difficulty} further performed a number of experiments to confirm and clarify the procedure, showing that it can place the starting point in parameter space in a better basin of attraction than picking starting parameters at random. Another important step in the development of deep learning is ReLU \cite{nair2010rectified}, which addresses the problem of vanishing gradients. With ReLU, deep networks are able to converge even when randomly initialized from a Gaussian distribution. Krizhevsky \emph{et al.} \cite{krizhevsky2012imagenet} applied ReLU to supervised convolutional neural networks with random initialization and won the ILSVRC 2012 challenge. Since then, deeper and deeper networks have been proposed, leading to a sequence of improvements in computer vision. However, Simonyan \emph{et al.} \cite{simonyan2014very} showed that deep networks still face the optimization problem once the number of layers reaches some value (e.g., 11 layers); this phenomenon is also mentioned in \cite{glorot2010understanding,Szegedy_2015_CVPR,He_2015_ICCV,NIPS2015_5850}. Glorot \emph{et al.} \cite{glorot2010understanding} proposed a method to initialize weights according to the size of a layer. This strategy rests on the assumption of linear activation functions, which works well in many cases but does not hold for rectified linear units (e.g., ReLU and PReLU). He \emph{et al.} \cite{He_2015_ICCV} extended this method to the case of rectified linear units and proposed a new initialization strategy, usually called the MSRA filler, which has been of great help for training very deep networks. Nevertheless, for exponential linear units, there is currently no appropriate strategy to initialize the weights. Observing this, we generalize the MSRA filler to a new initialization for deep networks using exponential linear units (e.g., ELU and MPELU), based on the first-order Taylor expansion of MPELU at zero.
\begin{figure}[t]
\centering
\subfloat[][shapes of activation functions]{
\includegraphics[width=0.4\textwidth]{fig1_activation_functions.eps}\label{fig:1-a}}
~~~~~~~~
\subfloat[][other activation functions are special cases of MPELU]{
\includegraphics[width=0.4\textwidth]{fig2_mpelu_as_others.eps}\label{fig1-b}}
\caption{The graphical depiction of activation functions. (a) shapes of activation functions. $a$ of
PReLU is initialized with 0.25. The hyper-parameter $\alpha$ of ELU is 1. $\alpha$ and $\beta$ of MPELU are initialized with 3 and 1, respectively. (b) other activation functions are special cases of MPELU. With $\alpha$ = 0, MPELU is reduced to ReLU. If $\alpha$ = 25.6302 and
$\beta$ =0.01, MPELU approximates to PReLU; When $\alpha$, $\beta$ = 1, MPELU becomes
ELU}
\label{fig:1}
\end{figure}
\section{The Proposed Activation Function and Weight Initialization}
\label{Section-3: the proposed methods}
This section first presents the Multiple Parametric Exponential Linear Unit (MPELU), then derives the weight initialization for networks using exponential linear units.
\subsection{Multiple Parametric Exponential Linear Unit}
PReLU and ELU have limited but complementary representational power in their negative parts. This work proposes a general form of activation function that unifies the existing ReLU, LReLU, PReLU, and ELU.
\noindent \\
\textbf{Forward Pass.} Formally, the definition of MPELU is:
\begin{align}
\label{MPELU_forward}
f(y_i)=\left\{\begin{matrix}
y_i & if & y_i>0\\
\alpha_c (e^{\beta_c y_i}-1) & if & y_i\leqslant0 &.
\end{matrix}\right.
\end{align}
Here, $\beta$ is constrained to be greater than zero, and $i$ is the index of the input $y$ corresponding to the $c_{th}$ ($c \in \{1, ... ,M\}$) $\alpha$ and $\beta$. Following PReLU, $\alpha_c$ and $\beta_c$ can be channel-wise ($M =$ the number of feature maps) or channel-shared ($M = 1$) learnable parameters, which control the saturation value and the saturation rate of MPELU, respectively. Fig.~\ref{fig:1}(a) shows the shapes of the four activation functions.
By adjusting $\beta_c$, MPELU can switch between the rectified and exponential linear units. To be specific, if $\beta_c$ is set to a small number, for example, 0.01, the negative part of MPELU approximates to a linear function. In this case, MPELU becomes the Parametric Rectified Linear Unit (PReLU). On the other side, if $\beta_c$ takes a large value, for example, 1.0, the negative part of MPELU is a non-linear function, making MPELU turn back into the exponential linear units.
Introducing $\alpha_c$ helps further control the form of MPELU, as shown in Fig.~\ref{fig:1}(b). If $\alpha_c$ and $\beta_c$ are set to 1, MPELU reduces to ELU. Decreasing $\beta_c$ in this case lets MPELU go to LReLU. Finally, MPELU is exactly equivalent to ReLU when $\alpha_c = 0$.
From the above analysis, it is easy to see that the flexible form of MPELU makes it cover the solution space of its special cases, and therefore grants it more powerful representation. We will show that ResNet \cite{he2015deep,He2016} could gain significant improvement merely by tuning the usage of activation functions, that is, from ReLU to MPELU.
Another benefit of MPELU is fast learning. Eqn.~(\ref{MPELU_decompose}) suggests that MPELU could potentially share the properties of PReLU and ELU. Thus, as an exponential linear unit, MPELU exhibits the same learning behavior as ELU; readers are referred to \cite{clevert2015fast} for more details. A numpy sketch of the forward pass is given below.
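The sketch assumes the channel-wise version; the function name is an illustrative assumption.
\begin{verbatim}
# Channel-wise MPELU forward pass, following the definition above.
import numpy as np

def mpelu_forward(y, alpha, beta):
    """y: (N, C, H, W); alpha, beta: (C,) with beta > 0."""
    a = alpha.reshape(1, -1, 1, 1)
    b = beta.reshape(1, -1, 1, 1)
    return np.where(y > 0, y, a * np.expm1(b * y))   # expm1(x) = e^x - 1
\end{verbatim}
Setting \texttt{alpha} to zero recovers ReLU, and \texttt{alpha = beta = 1} recovers ELU, consistent with the special cases discussed above.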
\noindent \\
\textbf{Backward Pass.} Since MPELU is differentiable almost everywhere, deep networks with MPELU can be trained end-to-end. We use chain rule to derive the update formulations of $\alpha_c$ and $\beta_c$:
\begin{align}
\label{Eqn: backward pass of MPELU 1}
top'&=f(y_i)+\alpha_c\\
\frac{\partial f(y_i)}{\partial \alpha_c}&=\left\{\begin{matrix}
0 \ \ \ & if & y_i>0\\
\label{Eqn: backward pass of MPELU 2}
e^{\beta_c y_i}-1 \ \ \ & if & y_i\leqslant0
\end{matrix}\right.\\
\label{Eqn: backward pass of MPELU 3}
\frac{\partial f(y_i)}{\partial \beta_c}&=\left\{\begin{matrix}
0 \ \ \ & if & y_i>0\\
y_i*top' \ \ \ & if & y_i\leqslant0
\end{matrix}\right.\\
\label{Eqn: backward pass of MPELU 4}
\frac{\partial f(y_i)}{\partial y_i}&=\left\{\begin{matrix}
1 \ \ \ & if & y_i>0\\
\beta_c*top' \ \ \ & if & y_i\leqslant0 &.
\end{matrix}\right.
\end{align}
Note that $\frac{\partial f(y_i)}{\partial \alpha_c}$ and $\frac{\partial f(y_i)}{\partial \beta_c}$ are the gradients of the activation function with respect to $\alpha_c$ and $\beta_c$ for a single unit. When computing the gradients of the loss function for the entire layer, the gradients of $\alpha_c$ and $\beta_c$ are:
\begin{align}
\frac{\partial L}{\partial \alpha_c}&=\sum_{y_i}\frac{\partial L}{\partial f(y_i)}*\left\{\begin{matrix}
0 \ \ \ & if & y_i>0\\
e^{\beta_c y_i}-1 \ \ \ & if & y_i\leqslant0
\end{matrix}\right.\\
\frac{\partial L}{\partial \beta_c}&=\sum_{y_i}\frac{\partial L}{\partial f(y_i)}*\left\{\begin{matrix}
0 \ \ \ & if & y_i>0\\
y_i*top'_i \ \ \ & if & y_i\leqslant0 &,
\end{matrix}\right.
\end{align}
where $\Sigma$ sums over all positions corresponding to $\alpha_c$ and $\beta_c$. Throughout this paper, we employ the channel-wise version for all experiments. With this strategy, the increase in the number of parameters of the entire network is at most twice the total number of channels, which is negligible compared to the number of weights. We show in Sec.~\ref{Section-5: Deep MPELU Residual Networks} that the model size of the proposed MPELU ResNet architectures is comparable to (or even less than) that of the ReLU architectures.
In terms of actual running time, MPELU is roughly comparable to PReLU if the code is carefully optimized; this is analyzed in Section \ref{Section: experiments on ImagenNet}.
Initializing $\alpha$ and $\beta$ with different values has a small but non-negligible impact on classification accuracy. We recommend using $\alpha = 1$ or $0.25$ and $\beta = 1$ as the initial values, with five times the base learning rate for both. Moreover, we highlight that it is important to use weight decay ($l_2$ regularization) on both $\alpha$ and $\beta$, which is the opposite of the case for rectified linear units such as PReLU \cite{He_2015_ICCV} and SReLU \cite{Jin2016aaai}.
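For reference, the backward formulas above translate directly into the following numpy sketch; the function name and the incoming-gradient argument are illustrative assumptions.
\begin{verbatim}
# Gradients of MPELU w.r.t. the input and the per-channel parameters,
# following the backward-pass equations above; dL is the gradient
# flowing in from the next layer, with the same shape as y.
import numpy as np

def mpelu_backward(y, alpha, beta, dL):
    a = alpha.reshape(1, -1, 1, 1)
    b = beta.reshape(1, -1, 1, 1)
    neg = y <= 0
    top = a * np.expm1(b * y) + a                # top' = f(y) + alpha
    d_alpha = np.where(neg, np.expm1(b * y), 0.0) * dL
    d_beta  = np.where(neg, y * top, 0.0) * dL
    d_y     = np.where(neg, b * top, 1.0) * dL
    # parameter gradients sum over batch and spatial positions
    return d_y, d_alpha.sum(axis=(0, 2, 3)), d_beta.sum(axis=(0, 2, 3))
\end{verbatim}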
\subsection{The Proposed Weight Initialization for Networks with MPELU}
\label{Section: weight initialization}
The previous works \cite{hinton2006fast,bengio2007greedy,glorot2010understanding,He_2015_ICCV} have laid a solid foundation for the initialization of deep neural networks. This paper complements the current theory and extends it to the wider range.
\noindent \\
\textbf{Brief Review of the MSRA Filler.} The MSRA filler covers two cases of initialization, the forward propagation case and the backward propagation case. He \emph{et al.} \cite{He_2015_ICCV} proved that both cases are able to properly scale the backward signal; therefore, it is sufficient to investigate only the forward propagation case.
For the $l_{th}$ convolutional layer, a pixel in the output channel is expressed as:
\begin{align}
y_l = \bm{w_l} * \bm{x_l} + b_l,
\end{align}
where $y_l$ is a random variable, $\bm{w_l}$ and $\bm{x_l}$ are random vectors that are independent of each other, and $b_l$ is initialized with zero. The goal is to explore the relationship between the variance of $y_{l-1}$ and the variance of $y_l$:
\begin{align}
Var(y_l) = Var(\bm{w_l} \bm{x_l} + b_l) = Var(\bm{w_l} \bm{x_l}) = k_l^{2} c_l Var(w_l x_l ),
\label{dyl}
\end{align}
where $k_l$ is the kernel size and $c_l$ is the number of input channels. Here, both $w_l$ and $x_l$ are random variables. Eqn.~(\ref{dyl}) holds under the assumption that the elements in $\bm{w_l}$ and in $\bm{x_l}$ are independent and identically distributed, respectively.
Usually, the weights of a deep network are initialized with zero mean, so Eqn.~(\ref{dyl}) becomes:
\begin{align}
Var(y_l) = k_l^2 c_l Var(w_l)E(x_l^2).
\label{Eqn: variance of y_l}
\end{align}
Next, we need to find the relationship between $E(x_l^2)$ and $Var(y_{l-1})$. Note that there exists an activation function between $x_l$ and $y_{l-1}$,
\begin{align}
x_l = f(y_{l-1}).
\end{align}
For different activation functions $f$, we may derive different relationships, and thus different initialization methods. Specifically, for symmetric activation functions such as the sigmoid non-linearity, Glorot \emph{et al.} \cite{glorot2010understanding} assumed they are linear at initialization and therefore proposed the Xavier method. For the rectified linear units ReLU and PReLU, He \emph{et al.} \cite{He_2015_ICCV} removed the linear assumption and extended the Xavier method to the MSRA filler. In the next section, we further extend the MSRA filler to a more general form by taking the first-order Taylor expansion of MPELU at zero and keeping only its linear part.
\noindent \\
\textbf{The Proposed Initialization.} This section mainly follows the derivations in \cite{glorot2010understanding,He_2015_ICCV}. Since ELU is a special case of MPELU, we focus on MPELU. As we can see from Eqn.~(\ref{MPELU_forward}), it is very difficult to obtain the exact relationship between $E(x_l^2)$ and $Var(y_{l-1})$. Instead, we use its Taylor series at zero. For the negative part, MPELU can be expressed as:
\begin{align}
\alpha (e^{\beta y}-1)=\alpha \beta y+\frac{1}{2}\alpha(\beta y)^{2}+\frac{1}{3!}\alpha(\beta y)^{3}+... \ .
\label{taylor_express}
\end{align}
Then, the left side of Eqn.~(\ref{taylor_express}) is approximated by its Taylor polynomial of degree 1.
\begin{align}
\alpha(e^{\beta y}-1) =\alpha \beta y+R_n(y) \approx \alpha \beta y
\label{assumption_linearity}
\end{align}
Eqn.~(\ref{assumption_linearity}) introduces the linear approximation only for the negative regime. We call this semi-linear assumption with which we have:
\begin{align}
x_l &\approx \max (0,y_{l-1})+ \min (0,\alpha \beta y_{l-1}) \\
E(x_l^2) &= \int_{-\alpha}^{\infty }x_l^2 p(x_l) dx_l \approx \frac{1}{2} (1 + \alpha_{l-1}^2 \beta_{l-1}^2) E(y_{l-1}^2),
\end{align}
where $p(x)$ is the probability density function. Following \cite{glorot2010understanding,He_2015_ICCV}, if $w_{l-1}$ has a symmetric distribution with zero mean, the same holds for $y_{l-1}$. Then,
\begin{align}
E(x_l^2) \approx \frac{1}{2} (1 + \alpha_{l-1}^2 \beta_{l-1}^2) Var(y_{l-1}).
\label{Exl_Dyl-1}
\end{align}
By Eqn.~(\ref{Exl_Dyl-1}) and (\ref{Eqn: variance of y_l}), we obtain:
\begin{align}
Var(y_l) \approx \frac{1}{2}k_l^2 c_l (1 + \alpha_{l-1}^2 \beta_{l-1}^2) Var(w_l) Var(y_{l-1}).
\end{align}
Through this, it is easy to derive the relationship between $y_{l-1}$ and $y_1$:
\begin{align}
Var(y_l) \approx Var(y_1)\prod_{i=2}^{l}\frac{1}{2}k_i^2 c_i (1 + \alpha_i^2 \beta_i^2) Var(w_i).
\end{align}
Following \cite{glorot2010understanding,He_2015_ICCV}, to keep the signals of the forward and backward pass flowing correctly, we expect that $Var(y_1)$ is equal to $Var(y_l)$, which leads to:
\begin{align}
\frac{1}{2}k_i^2 c_i (1 + \alpha_i^2 \beta_i^2) Var(w_i)=1, \forall i.
\end{align}
Therefore, for each layer in deep networks using MPELU, we can initialize weights from a Gaussian distribution
\begin{align}
\mathcal{N}\left ( 0,\ \sqrt{\frac{2}{k_i^2 c_i (1 + \alpha_i^2 \beta_i^2)}}\ \right ),
\label{taylor_result}
\end{align}
where $i$ is the index of the layer. Eqn.~(\ref{taylor_result}) applies to deep networks using the rectified or exponential linear units.
Note that when $\alpha = 1$ and $\beta = 1$, Eqn.~(\ref{taylor_result}) becomes the initialization for ELU networks. When $\alpha = 0$, Eqn.~(\ref{taylor_result}) corresponds to the initialization for ReLU networks. Furthermore, when $\alpha = 0.25$ and $\beta = 1$, Eqn.~(\ref{taylor_result}) can be used to initialize PReLU networks. From this point of view, MSRA filler is a special case of the proposed initialization.
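For illustration, the proposed filler can be implemented in a few lines. The sketch below is ours (the function name and tensor layout are chosen for exposition only) and draws the weights of a convolutional layer with spatial kernel size $k$ and $c$ input channels according to Eqn.~(\ref{taylor_result}):
\begin{verbatim}
import numpy as np

def mpelu_init(k, c_in, c_out, alpha=1.0, beta=1.0, seed=0):
    # std from Eqn. (taylor_result); alpha = 0 recovers the MSRA
    # filler, alpha = beta = 1 covers ELU, alpha = 0.25 covers PReLU.
    std = np.sqrt(2.0 / (k * k * c_in * (1.0 + (alpha * beta) ** 2)))
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, std, size=(c_out, c_in, k, k))

w = mpelu_init(k=3, c_in=64, c_out=128, alpha=1.0, beta=1.0)
\end{verbatim}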
\noindent \\
\textbf{Comparison with Xavier, MSRA, and LSUV.} The Xavier method is designed for symmetric activation functions under the hypothesis of linearity, and the MSRA filler only applies to the rectified linear units (ReLU and PReLU), while the proposed method addresses the initialization for both rectified and exponential linear units. Recently, Mishkin \emph{et al.} \cite{mishkin2015all} proposed the LSUV initialization, which is data-driven and thus avoids solving the relationship between $E(x_l^2)$ and $Var(y_{l-1})$; however, Eqn.~(\ref{taylor_result}) is an analytic solution for ELU and MPELU and therefore runs faster than LSUV.
\section{Experiment}
\label{Section-4: MPELU_experiments}
This section explores the usage of MPELU in a number of architectures. In Sec.~\ref{Section: experiments on cifar10}, we begin with experiments with Network in Network (NIN) \cite{2013arXiv1312.4400L} on CIFAR-10, showing the benefit of introducing learnable parameters into ELU. Sec.~\ref{Section: experiments on ImagenNet} further substantiates this benefit in deeper networks and on a larger dataset, ImageNet 2012. Finally, Sec.~\ref{sect:initialization experiments} verifies the proposed initialization with a very deep network on ImageNet, showing the ability to train very deep ELU/MPELU networks. In Sec.~\ref{Section: experiments on cifar10} and Sec.~\ref{sect:initialization experiments}, we also provide a convergence analysis, showing that MPELU, like ELU, possesses convergence properties superior to those of ReLU and PReLU.
\subsection{Experiments with NIN on CIFAR-10}
\label{Section: experiments on cifar10}
This section presents experiments with Network in Network using different activation functions on the CIFAR-10 dataset. The goal is to investigate the benefits of introducing learnable parameters into ELU.
This architecture has nine convolutional layers, six of which have $1\times1$ kernels, and no Fully Connected (FC) layers; it is easy to train and sufficient for a comprehensive evaluation of the effectiveness of the learnable parameters. The implementation details are given in the appendix.
\renewcommand{\arraystretch}{0.7}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Test error rate (\%) of classification on CIFAR-10. $\alpha$ and $\beta$ in MPELU are initialized with 1 or 0.25, and they are updated by SGD without weight decay. As in \cite{NIPS2015_5850,he2015deep}, the best (mean $\pm$ std) results over five runs are reported for each network}
\label{table:cifar10}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
NIN & parameter(s) & CIFAR-10 & CIFAR-10 (augmented)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
ReLU \cite{2013arXiv1312.4400L} & - & 10.41 & 8.81\\
PReLU & $\alpha = 0.25$ & \textbf{9.02 (9.19 $\pm$ 0.15)} & \textbf{7.28 (7.49 $\pm$ 0.14)}\\
ELU & $\alpha = 1$ & 9.39 (9.63 $\pm$ 0.23) & 7.77 (7.83 $\pm$ 0.05)\\
MPELU & $\alpha = 1$; $\beta = 1$ & 9.06 (9.19 $\pm$ 0.11) & 7.37 (7.57 $\pm$ 0.16) \\
MPELU & $\alpha = 0.25$; $\beta = 1$ & 9.10 (9.27 $\pm$ 0.12) & 7.30 (7.52 $\pm$ 0.18)
\\
\hline
\end{tabular}
\end{center}
\end{table}
For fair comparison, we train networks using ReLU, PReLU, ELU, and MPELU from scratch with the same settings. Tab.~\ref{table:cifar10} shows that MPELU consistently outperforms ELU (e.g., a 9.06\% vs. 9.39\% test error rate without data augmentation, and 7.30\% vs. 7.77\% with data augmentation). This improvement over ELU comes entirely from $\alpha$ and $\beta$, verifying the benefit of the learnable parameters.
\setlength{\tabcolsep}{1.4pt}
\begin{figure}[t]
\centering
\subfloat[][training loss]{
\label{fig: nin training loss on cifar}
\includegraphics[width=1\textwidth]{fig3a_nin_training_loss_v3.eps}\label{fig:2-a}}\\
\subfloat[][test error rate]{
\label{fig: nin test error on cifar}
\includegraphics[width=1\textwidth]{fig3b_nin_test_acc_v3.eps}\label{fig:2-b}}
\caption{Comparison of convergence on CIFAR-10. All the models learn very quickly on this small dataset, and so we adopt an evaluation method similar to \cite{clevert2015fast}, according to which the number of iterations needed to reach 15\% test error is measured.
(a) indicates that MPELU can reduce the loss earlier. (b) shows that MPELU reaches the 15\% error after 9k iterations, while ReLU and PReLU need 25k and 15k iterations to reach the same error rate}
\label{fig:2}
\end{figure}
Some interesting phenomena can be observed in Tab.~\ref{table:cifar10} and Fig.~\ref{fig:2}. Firstly, Tab.~\ref{table:cifar10} shows that MPELU ($\alpha = 0.25$, $\beta = 1$) performs like PReLU (a negligible difference of 0.03\% mean test error when using data augmentation). Secondly, Fig.~\ref{fig:2}(a)(b) show that its learning curves are closer to ELU's, suggesting a potentially superior learning behavior compared to the rectified linear units, as described in \cite{clevert2015fast}. Note that all the models learn very quickly on this small dataset and reach the same test error rate (15\%) within 25k iterations, which makes it very hard to compare learning speed. To deal with this, we adopt an evaluation criterion similar to \cite{clevert2015fast}, that is, the number of iterations needed to reach the 15\% test error rate. Fig.~\ref{fig:2}(b) shows that MPELU starts reducing the error (and the loss) earlier and reaches the 15\% error after 9k iterations, while ReLU and PReLU need 25k and 15k iterations, respectively, to reach the same error rate. This better performance arises from combining the advantages of PReLU and ELU, as suggested in Eqn.~(\ref{MPELU_decompose_no_BN}).
It is also worth noting that MPELU achieves a performance comparable to PReLU with slightly more parameters. This is not caused by overfitting, since ELU performs much worse than PReLU and MPELU. The underlying reason is still unclear and will be studied in the future. Even though MPELU is slightly less effective than PReLU in this shallower architecture, we will show that MPELU outperforms PReLU in deeper architectures.
\subsection{Experiments on ImageNet}
\label{Section: experiments on ImagenNet}
\renewcommand{\arraystretch}{0.7}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Top-1 error rate (single-view test) on the validation set of ImageNet 2012 with data augmentation. The comparison is under the same initial values of $\alpha$. $\beta$ in MPELU is initialized with 1 for all cases. $\alpha$ and $\beta$ in MPELU are updated by SGD with/without weight decay. MPELU outperforms its counterparts consistently and obtains the overall best result}
\label{table:modele15_imagenet}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
$\alpha$ & $\beta$ & \multicolumn{4}{c|}{Gaussian initialization} & \multicolumn{2}{c|}{MSRA} & \multicolumn{2}{c|}{our initialization} \\
\hline
\multicolumn{2}{|l|}{$\beta$ for MPELU} & ReLU & PReLU & ELU & MPELU & ReLU & PReLU & ELU & MPELU \\
\hline
0/0 & 1/0 & 37.66 & - & - & 39.40 & 37.45 & - & - & - \\
0/1 & 1/0 & - & - & - & 37.92 & - & - & - & - \\
0/1 & 1/1 & - & - & - & \textbf{37.61} & - & - & - & \textbf{37.41} \\
\hline
0.25/0 & 1/0 & - & 39.48 & - & 40.94 & - & 38.72 & - & 39.46 \\
0.25/1 & 1/1 & - & 39.53 & - & \textbf{37.81} & - & 38.57 & - & \textbf{37.47} \\
\hline
1/0 & 1/0 & - & - & 40.36 & 39.53 & - & - & 39.83 & 38.42 \\
1/1 & 1/1 & - & - & - & \textbf{38.04} & - & - & - & \textbf{\color{blue}{37.33}} \\
\hline
\multicolumn{6}{l}{$\alpha$, $\beta$: initial value / weight decay multiplier } \\
\end{tabular}
\end{center}
\end{table}
This section evaluates MPELU on the ImageNet 2012 classification task. ImageNet 2012 contains about 1.28 million training examples, 50k validation examples, and 100k test examples belonging to 1000 classes. This enables us to utilize a deeper network with little risk of overfitting. Therefore, we build a 15-layer network modified from the model-E in \cite{He_2015_CVPR}. The models evaluated in this section are trained on the training set and tested on the validation set. \\
\noindent \\
\textbf{Network Structure.} Based on the model-E, we add one more convolutional layer, insert Batch Normalization \cite{DBLP:conf/icml/IoffeS15} immediately before the activation functions, and remove the dropout \cite{JMLR:v15:srivastava14a} layers. Following \cite{He_2015_CVPR,krizhevsky2012imagenet,sppnet}, the networks are divided into three stages by max-pooling layers. The first stage contains only one convolutional layer with a kernel size of $7\times7$ pixels and 64 filters. The second stage consists of four convolutional layers with a kernel size of $2\times2$ pixels and 128 filters. We set stride and pad accordingly so as to maintain a feature map size of $36\times36$ pixels. The third stage consists of seven convolutional layers with a kernel size of $2\times2$ pixels and 256 filters; in this stage, the feature map size is reduced to $18\times18$ pixels. The next layer is an SPP layer \cite{sppnet}, which is followed by two 4096-d FC layers, one 1000-d FC layer, and one softmax successively. The networks are initialized by three methods: a Gaussian distribution with zero mean and 0.01 standard deviation, the MSRA filler \cite{He_2015_ICCV}, and the proposed initialization (see Sec.~\ref{Section: weight initialization}). The bias terms are initialized with 0 as usual. $\alpha$ and $\beta$ in MPELU are initialized with varying values and updated by SGD with weight decay. Other implementation details are given in the appendix.
For fair comparison, the candidate activation functions are evaluated under the same initial values of $\alpha$, and Tab.~\ref{table:modele15_imagenet} lists the results. For clarity, the results that outperform the others are marked in boldface and the overall best result is marked in blue.\\
\noindent
\textbf{Gaussian Initialization.} For the comparison with ELU, all the MPELU layers are initialized with $\alpha = \beta = 1$. As we can see, the MPELU network outperforms the ELU network by 0.83\% top-1 error rate. If weight decay is used, it outperforms the ELU network by a significant 2.32\%. Since the only difference between them lies in the activation function, this improvement over ELU indeed demonstrates the advantage of the learnable parameters, $\alpha$ and $\beta$.
To further examine MPELU, we also compare it with PReLU. In this case, $\alpha$ in MPELU is initialized with 0.25. Tab.~\ref{table:modele15_imagenet} shows that the MPELU network achieves a top-1 error rate of 40.94\%, which is worse than the 39.48\% provided by the PReLU network. Nevertheless, using weight decay considerably improves the performance of the MPELU network by 3.13\%, reducing the top-1 error rate to 37.81\%, which is better than that of the PReLU network by 1.72\%.
\noindent \\
\textbf{Other Initialization Methods.} Experiments are also conducted with other initialization methods (see Tab.~\ref{table:modele15_imagenet}). The experimental results are in line with the Gaussian initialization case. MPELU surpasses all the counterparts. The overall best top-1 error rate 37.33\% achieved by MPELU is significantly lower than those achieved by PReLU and ELU. It is interesting to see that the MPELU networks initialized from the proposed method consistently outperform those initialized from Gaussian method, demonstrating that our initialization can lead to better generalization capability, which is also verified in Sec.~\ref{sect:initialization experiments}.
Note that MPELU only provides slight improvement over ReLU, and using weight decay in MPELU tends to decrease the top-1 test error in all three cases. This result is not caused, however, by overfitting, since adding more layers (more parameters) to the 15-layer network leads to lower test error, as shown in Sec.~\ref{sect:initialization experiments}. A possible reason is that using weight decay tends to push $\alpha$ and $\beta$ to zero, resulting in smaller scale activations or sparser representations, like ReLU, that are more likely to be linearly separable in a high-dimensional space \cite{glorot2011deep}. Another explanation may come from the sparse feature selection \cite{li2014clustering}.
To provide an empirical interpretation, we performed four extra experiments using LReLU with different slopes, gradually decreasing the scale of the negative activations. All five models (ReLU and LReLU A-D) have the same number of parameters, which eliminates the influence of overfitting. The only difference among them is the scale of the negative activations. A noticeable trend is illustrated in Tab.~\ref{table:4-leaky_relu_experiments}: the top-1/top-5 test error grows as the slope increases, which explains why applying weight decay to MPELU leads to better results and why ReLU performs better than PReLU and ELU. Nevertheless, this phenomenon is not observed in Sec.~\ref{Section: experiments on cifar10}, which might be because small scale or sparsity is less important for the shallower architecture (the ReLU NIN performs worst).
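The semi-linear calculation above makes this trend quantitative: for a symmetric zero-mean input $y$, LReLU with slope $a$ satisfies $E(x^2) = \frac{1}{2}(1+a^2)E(y^2)$, so the scale of activations grows monotonically with the slope. A one-line check (ours, illustrative only):
\begin{verbatim}
# scale factor (1 + a^2) / 2 of E(x^2) for LReLU with slope a
for a in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(a, (1.0 + a * a) / 2.0)
# 0.0 0.5 / 0.1 0.505 / 0.25 0.53125 / 0.5 0.625 / 1.0 1.0
\end{verbatim}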
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Classification comparison among different slopes on the ImageNet validation set. The trend is that the performance increases with the decrease of slope}
\label{table:4-leaky_relu_experiments}
\begin{tabular}{llcc}
\hline\noalign{\smallskip}
15-layer network with & slope & top-1 error rate(\%) & top-5 error rate(\%) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
ReLU & $a = 0$ & 37.66 & 15.98 \\
LReLU (A) & $a = 0.1$ & 37.92 & 16.26 \\
LReLU (B) & $a = 0.25$ & 38.54 & 16.65 \\
LReLU (C) & $a = 0.5$ & 42.76 & 20.18 \\
LReLU (D) & $a = 1$ & 60.27 & 36.60
\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\noindent \\
\textbf{Convergence Comparison.} Since Batch Normalization has a great influence on the convergence of networks, we leave the comparison of convergence among activation functions to Sec.~\ref{sect:initialization experiments}.
\noindent \\
\textbf{Running Time.} The running time refers to the time consumed by one iteration with batch size 64 during training. Essentially, the computational cost of MPELU is greater than that of its counterparts. This problem can, however, be properly addressed by a carefully engineered implementation (e.g., faster exponential functions). In our Caffe \cite{Jia:2014:CCA:2647868.2654889} implementation, the backward pass utilizes the outputs of the forward pass, as shown in Eqn.~(\ref{Eqn: backward pass of MPELU 1})(\ref{Eqn: backward pass of MPELU 3})(\ref{Eqn: backward pass of MPELU 4}), which saves a lot of computation. Furthermore, the gradients with respect to the parameters and the inputs can be computed together in each loop. Consequently, the actual running time of MPELU is only slightly longer than that of PReLU, as summarized in Tab.~\ref{table:runing_time}.
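To make this reuse concrete, here is a minimal NumPy sketch (ours, not the actual Caffe code) of the MPELU forward pass together with a backward pass that recovers $e^{\beta y}$ from the stored output $o = \alpha(e^{\beta y}-1)$ instead of recomputing the exponential; it assumes scalar $\alpha \neq 0$ and $\beta$:
\begin{verbatim}
import numpy as np

def mpelu_forward(y, alpha, beta):
    neg = alpha * (np.exp(beta * np.minimum(y, 0.0)) - 1.0)
    return np.where(y > 0, y, neg)

def mpelu_backward(y, o, alpha, beta, grad_out):
    # For y <= 0: exp(beta*y) = o/alpha + 1, hence
    # do/dy = beta*(o + alpha), do/dalpha = o/alpha,
    # do/dbeta = y*(o + alpha): no exponential is recomputed.
    neg = y <= 0
    dy = np.where(neg, beta * (o + alpha), 1.0) * grad_out
    dalpha = np.sum(np.where(neg, o / alpha, 0.0) * grad_out)
    dbeta = np.sum(np.where(neg, y * (o + alpha), 0.0) * grad_out)
    return dy, dalpha, dbeta
\end{verbatim}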
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{The running time (seconds/iteration) of ReLU, PReLU, ELU, and MPELU based on Caffe implementation. The experiments are performed on a NVIDIA Titan X GPU. The running time below is the mean value of 600k iterations}
\label{table:runing_time}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
& ReLU & PReLU & ELU & MPELU \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
running time & 0.2310 & 0.2417 & 0.2299 & 0.2441
\\
\hline
\end{tabular}
\end{center}
\vspace{-20pt}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Experiments of Initialization}
\label{sect:initialization experiments}
This section conducts experiments on ImageNet 2012. The task is to examine whether the proposed initialization is able to help with convergence of very deep networks using exponential linear units. To this end, we add extra 15 convolutional layers to the network in Sec.~\ref{Section: experiments on ImagenNet}, resulting in a 30-layer network that suffices for investigating the effect of the initialization. Note that the network is similar to the 30-layer ReLU network in \cite{He_2015_ICCV} but differs from it in several aspects such as batch size, pad, and feature map size.
Since BN has a great influence on the convergence of deep networks, it is natural to take it into account. Following \cite{DBLP:conf/icml/IoffeS15}, we remove the dropout layers when using BN. Finally, four methods are compared: the baseline Gaussian initialization, our initialization, BN + Gaussian initialization, and BN + our initialization. $\alpha$ and $\beta$ in MPELU are initialized with 1 and updated by SGD without weight decay, with other settings identical to Sec.~\ref{Section: experiments on ImagenNet}.
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Comparison of initialization. The top-1 test error (\%) on the validation set of ImageNet 2012 is reported. The 30-layer ELU and MPELU networks with the Gaussian method totally stop learning. On the contrary, the proposed method makes them converge, verifying the effectiveness of Eqn.~(\ref{taylor_result}). When BN is used, the performance can still be boosted by the proposed method. Note that the error rates, 44.28\% and 42.96\%, achieved by the 30-layer MPELU networks with BN are considerably higher than the 39.53\% and 38.42\% achieved by the 15-layer counterparts, suggesting the emergence of the degradation problem \cite{he2015deep}}
\label{Comparison of initialization}
\scalebox{0.9}{
\begin{tabular}{|l|c|c|c|c|} \hline
\multirow{2}{*}{initialization methods}& \multicolumn{2}{|c|}{30-layer networks} & \multicolumn{2}{|c|}{15-layer networks}\\ \cline{2-5}
& ELU & MPELU & ELU & MPELU \\ \hline
Gaussian & $\times$ & $\times$ & - & - \\ \hline
ours & 37.08 & \textbf{\color{blue}{36.49}} & - & - \\ \hline
Gaussian + BN & - & 44.28 & 40.36 & 39.53 \\ \hline
ours + BN & - & \textbf{42.96} & \textbf{39.83} & \textbf{38.42} \\ \hline
\multicolumn{5}{l}{$\times$: fails to converge} \\
\end{tabular}}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Comparison between LSUV and ours through the 15-layer networks. Although the improvement over LSUV is slight, it is consistent}
\label{table:comparison_for_15_lsuv}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
15 layers & \multicolumn{5}{c|}{MPELU} & ELU \\
\hline
$\alpha$, $\beta$ & 0/1, 1/1 & 0.25/0, 1/0 & 0.25/1, 1/1 & 1/1, 1/1 & 1/0, 1/0 & 1/0, 1/0 \\
\hline
LSUV \cite{mishkin2015all} & 37.72 & 39.93 & 37.67 & 37.62 & 38.57 & 39.85 \\ \hline
ours & \textbf{37.41} & \textbf{39.46} & \textbf{37.47} & \textbf{37.33} & \textbf{38.42} & \textbf{39.83} \\
\hline
\multicolumn{7}{l}{$\alpha$, $\beta$: initial value / weight decay multiplier}
\end{tabular}
\end{center}
\vspace{-10pt}
\end{table}
\noindent \\
\textbf{Comparison to Gaussian.} Tab.~\ref{Comparison of initialization} shows that the Gaussian initialization fails to train the 30-layer ELU/MPELU networks, while our method makes them learn, which justifies the effectiveness of Eqn.~(\ref{taylor_result}). Furthermore, the 37.08\%/36.49\% top-1 test error rates achieved by the 30-layer ELU/MPELU networks are clearly lower than those achieved by the 15-layer counterparts, meaning that the proposed method indeed addresses the diminishing gradients caused by improper initialization of very deep networks and hence lets them enjoy the benefit of increased depth. When BN is adopted, the proposed method reduces the error consistently compared to the Gaussian initialization, showing its benefit to the generalization capability. In addition, the MPELU networks always perform better than the ELU networks and obtain the overall best result, a 36.49\% top-1 test error rate, demonstrating the benefit of introducing learnable parameters. These results indicate that although Eqn.~(\ref{taylor_result}) derives from a first-order Taylor approximation of Eqn.~(\ref{taylor_express}), it works rather well in practice.
\noindent \\
\textbf{Comparison to LSUV.} Mishkin \emph{et al.} \cite{mishkin2015all} verified LSUV in the 22-layer GoogLeNet \cite{Szegedy_2015_CVPR} using ReLU. To examine LSUV in deeper networks with exponential linear units, we build another 52-layer ELU network and initialize the 30- and 52-layer ELU networks with LSUV. Without BN, LSUV makes both ELU networks explode within only a few iterations, while our method makes them converge. More experiments are also conducted with the 15-layer networks from Sec.~\ref{Section: experiments on ImagenNet}, and the results are given in Tab.~\ref{table:comparison_for_15_lsuv}. The proposed initialization leads to a marginal, but consistent, decrease in top-1 test error. In addition, Eqn.~(\ref{taylor_result}) is an analytic solution, while LSUV is a data-driven method, which means that the proposed method runs faster than LSUV.
\noindent \\
\textbf{Degradation Analysis.} It should be noted in Tab.~\ref{Comparison of initialization} that while the 30-layer network without BN obtains the overall best result, the 30-layer networks with BN perform considerably worse than the 15-layer counterparts. To explain this, we analyze their learning behaviors.
\setlength{\tabcolsep}{1.4pt}
\begin{figure}[t]
\centering
\subfloat[][training loss (end)]{
\includegraphics[width=\textwidth]{degradation_training_loss_last.eps}}
\\
\subfloat[][training error]{
\includegraphics[width=0.5\textwidth]{degradation_training_error.eps}}
\subfloat[][test error]{
\includegraphics[width=0.5\textwidth]{degradation_test_error.eps}}
\caption{Learning curves of the 15/30-layer MPELU networks on ImageNet. (a) training loss: all the 30-layer networks tend to converge. (b) top-1 training error (\%). (c) top-1 test error (\%). The 30-layer networks with BN have higher training/test error than the 15-layer network, suggesting the emergence of the degradation problem \cite{he2015deep}. Somewhat surprisingly, if BN is removed, the problem is eliminated (see the red dashed line)}
\label{fig:degradation phenomenon}
\end{figure}
Firstly, Fig.~\ref{fig:degradation phenomenon}(a) shows the training loss of all the 30-layer networks at the end of training. As we can see, the networks with BN have a training loss comparable to that of the network without BN, demonstrating that they all converge well. Thus, it is very unlikely that the decrease of accuracy is caused by vanishing gradients. Secondly, Fig.~\ref{fig:degradation phenomenon}(b)(c) show the top-1 training/test error rates. Obviously, the 30-layer networks with BN have higher training/test error than the 15-layer counterpart, suggesting the emergence of the degradation problem described in \cite{he2015deep}. Interestingly, the 30-layer network without BN does not suffer from this problem and can enjoy the benefit of increased depth. Note that the only difference among these networks is the usage of BN. Therefore, BN might be an underlying factor causing the degradation problem.
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{The statistics (mean and variance) of the activations of conv\{1, 7, 14, 20, 27\}. As described in \cite{He_2015_ICCV}, the ReLU network roughly preserves its variance, which leads to outputs of large magnitude and thus to divergence. As a comparison, the MPELU network gradually reduces the magnitude and thus avoids overflow}
\label{tab:30-layer_convergence}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
& & conv1 & conv7 & conv14 & conv20 & conv27 \\ \hline
\multirow{2}{*}{Mean} & ReLU & 38.95 & 41.25 & 28.37 & 22.52 & 19.61 \\ \cline{2-7}
& MPELU & 25.31 & 4.77 & 0.13 & 0.03 & 0.003 \\ \hhline{*7-}
\multirow{2}{*}{Var} & ReLU & 4196.36 & 4603.98 & 2594.84 & 2381.22 & 2627.62 \\ \cline{2-7}
& MPELU & 1840.65 & 74.43 & 0.71 & 0.07 & 0.01 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\noindent \\
\textbf{Comparison of Convergence.} Since deeper networks are harder to train, it is instructive to examine the convergence behavior of the activation functions through the 30-layer networks without BN. To this end, four such networks are constructed and initialized by the corresponding method with the FAN\_IN, FAN\_OUT, and AVERAGE modes. Experimental results show that the ReLU network fails to converge in all three modes. The PReLU network converges only in the FAN\_OUT mode. On the contrary, the ELU/MPELU networks are able to converge in all three modes. These results may be due to the robustness to input variations introduced by the left saturation of ELU/MPELU. To verify this, the statistics (mean and variance) are computed. Tab.~\ref{tab:30-layer_convergence} shows that the ReLU network roughly preserves the variance of its inputs, which results in very large activations at higher layers and overflow of the softmax, as discussed in \cite{He_2015_ICCV}. The MPELU network does not suffer from this since it saturates on the left to a small negative value and thereby gradually decreases the variance during forward propagation.
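This effect can be reproduced with a toy propagation experiment. The sketch below (ours; fully connected layers instead of convolutions, and unit-variance Gaussian inputs, so only the qualitative trend carries over) pushes random data through 30 layers initialized with the matching filler and prints the final activation variances:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, depth, alpha, beta = 512, 30, 1.0, 1.0
x_relu = rng.normal(size=(1024, n))
x_mpelu = x_relu.copy()
for _ in range(depth):
    w1 = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))  # MSRA
    w2 = rng.normal(0.0, np.sqrt(2.0 / (n * (1 + (alpha * beta) ** 2))),
                    size=(n, n))                         # proposed filler
    x_relu = np.maximum(0.0, x_relu @ w1)
    y = x_mpelu @ w2
    x_mpelu = np.where(y > 0, y,
                       alpha * (np.exp(beta * np.minimum(y, 0.0)) - 1.0))
print(np.var(x_relu), np.var(x_mpelu))  # ReLU stays O(1); MPELU shrinks
\end{verbatim}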
\subsection{Residual Analysis of the Proposed Initialization}
The left side of Eqn.~(\ref{assumption_linearity}) is approximated by the first order Taylor expansion. This section estimates the residual term $R_n(y)$,
\begin{align}
R_n(y) = \frac{e^{\theta \beta y}}{2!} \alpha (\beta y)^2 \ \ \ (0< \theta <1).
\end{align}
To this end, two cases with and without BN will be considered.
\noindent \\
\textbf{With BN.} BN is usually adopted immediately before MPELU. Therefore, it is reasonable to assume that the input of MPELU, $y$, has a Gaussian distribution with zero mean at the initialization stage. According to probability theory, over 99.73\% of the inputs fall into the range $[-3\sqrt{Var(y)},\ \ 3\sqrt{Var(y)}]$, and in this range only half of them contribute to the residuals. We consider three inputs taking the values $-\sqrt{Var(y)}$, $-2\sqrt{Var(y)}$, and $-3\sqrt{Var(y)}$, whose corresponding residuals are:
\begin{align}
\label{Eqn: three sigma rule-1}
R_n(\ -3\sqrt{Var(y)}\ ) & = \frac{9e^{-3\theta \beta \sqrt{Var(y)}}}{2} \alpha \beta^2 Var(y) < \frac{9}{2} \alpha \beta^2 Var(y), \\
\label{Eqn: three sigma rule-2}
R_n(\ -2\sqrt{Var(y)}\ ) & = \frac{4e^{-2\theta \beta \sqrt{Var(y)}}}{2} \alpha \beta^2 Var(y) < \frac{4}{2} \alpha \beta^2 Var(y), \\
\label{Eqn: three sigma rule-3}
R_n(\ -\sqrt{Var(y)}\ ) & = \frac{e^{-\theta \beta \sqrt{Var(y)}}}{2} \alpha \beta^2 Var(y)\ \ < \frac{1}{2} \alpha \beta^2 Var(y).
\end{align}
Eqn.~(\ref{Eqn: three sigma rule-1}), (\ref{Eqn: three sigma rule-2}), and (\ref{Eqn: three sigma rule-3}) show that at initialization, more than 99.865\%, 97.725\%, and 84.135\% of the inputs (the probability of $y$ falling in [$-3\sqrt{Var(y)}$, $+\infty$], [$-2\sqrt{Var(y)}$, $+\infty$], and [$-\sqrt{Var(y)}$, $+\infty$], respectively) have residuals less than $\frac{9}{2} \alpha \beta^2 Var(y)$, $2 \alpha \beta^2 Var(y)$, and $\frac{1}{2} \alpha \beta^2 Var(y)$, respectively. Here, $y$ has unit variance. If $\alpha$ and $\beta$ are initialized with 1, more than 84.135\% of the inputs have residuals less than 0.5. Furthermore, consider a negative input $\hat{y}$ whose residual is less than $10^{-2}$. For $\hat{y}$,
\begin{align}
\label{estimate_residual_y}
R_n(\hat{y}) &= \frac {e^{\theta \beta \hat y}}{2} \alpha \beta^2 \hat y^2 \ < \ \frac {1}{2} \alpha \beta^2 \hat y^2 \ < \ 0.01.
\end{align}
If $\alpha$ and $\beta$ are initialized with 1, then we obtain:
\begin{align}
\hat y > -\frac {\sqrt{2}}{10 \sqrt{\alpha} \beta} = -0.1414.
\end{align}
This means that about 55.57\% of the inputs have residuals less than 0.01. Although the residuals are non-negligible, Eqn.~(\ref{taylor_result}) still works well in practice. This analysis is side-verified by the work of Clevert \emph{et al.} \cite{clevert2015fast}, who observed that ELU does not show better performance when used with BN: ELU ($\alpha = 1$) behaves more like LReLU ($a = 1$), a linear function, during the whole period of training, since most residuals are small; see Tab.~\ref{table:4-leaky_relu_experiments}, LReLU (D).
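The figure of about 55.57\% is simply $\Phi(\sqrt{2}/10)$ for a standard normal input; a two-line computation (ours, illustrative) gives approximately 0.556:
\begin{verbatim}
from math import erf, sqrt
z = sqrt(2.0) / 10.0                     # threshold -0.1414 for y
print(0.5 * (1.0 + erf(z / sqrt(2.0))))  # P(y > -z) ~= 0.556
\end{verbatim}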
\noindent \\
\textbf{Without BN.} In this case, it is difficult to estimate the residuals analytically. Fortunately, the residuals can easily be computed from the outputs of a convolutional layer. For this purpose, the 30-layer MPELU network without BN from Sec.~\ref{sect:initialization experiments} is adopted. By Eqn.~(\ref{estimate_residual_y}), we consider the inputs with residuals less than \{0.01, 0.5, 2, 4.5\}, or equivalently $\{y\ |\ y > -0.1414\}$, $\{y\ |\ y > -1\}$, $\{y\ |\ y > -2\}$, and $\{y\ |\ y > -3\}$.
\renewcommand{\arraystretch}{0.7}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{The percentage (\%) of units whose residuals fall into the bins (0, 0.01), (0, 0.5), (0, 2), and (0, 4.5). Conv\{1, 7, 14, 20, 27\} are picked from the 27 convolutional layers. For each bin (each row), the deeper the layer, the higher the percentage of units falling into it. Once the depth reaches 14, most units have residuals of 0.5 or less. It is interesting to note that the outputs of the median layer, conv14, approximately follow a standard normal distribution }
\label{table:residual_analysis}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
residual & conv1 & conv7 & conv14 & conv20 & conv27 \\
\hline
0.01 & 51.24 & 46.83 & \textbf{56.70} & 78.18 & 89.45 \\
0.5 & 51.65 & 50.00 & \textbf{84.75} & 99.60 & 100 \\
2 & 52.11 & 53.78 & \textbf{97.09} & 100 & 100 \\
4.5 & 52.53 & 57.54 & \textbf{99.71} & 100 & 100 \\
\hline
\end{tabular}
\end{center}
\end{table}
For simplicity, the statistics are computed every 7 layers. As shown in Tab.~\ref{table:residual_analysis}, the deeper layers have a better approximation of Eqn.~(\ref{assumption_linearity}). Also, once the depth reaches the median layer, e.g., conv14, most units have residuals less than 0.5. In addition, the statistics of conv14 are very close to those of a standard normal distribution, which suggests that this layer plays the role of BN, ensuring that gradients can be properly propagated to the lower layers at initialization. We argue that the residuals are acceptable for initialization purposes; Sec.~\ref{sect:initialization experiments} has demonstrated the effectiveness of the proposed initialization.
\section{Deep MPELU Residual Networks}
\label{Section-5: Deep MPELU Residual Networks}
Sec.~\ref{Section-4: MPELU_experiments} shows that MPELU and the proposed initialization can bring benefits to the plain networks. This section gives a deep MPELU ResNet to show that the proposed methods are especially suitable for the ResNet architecture \cite{he2015deep} and provides state-of-the-art performance on the CIFAR-10/100 datasets.
\subsection{MPELU and Batch Normalization}
\label{Section: MPELU with BN in ResNet}
This section demonstrates that MPELU, as opposed to ELU, can be used with BN. Clevert \emph{et al.} \cite{clevert2015fast} found that BN can improve ReLU networks but not (and can even be harmful to) ELU networks. Observing this, Shah \emph{et al.} \cite{shah2016deep} proposed to remove most BN layers when constructing ResNets with ELU. While removing BN could lower the barrier between them, it tends to diminish the desired regularization properties, which may have an unexpected negative effect on the generalization capability. We argue that a proper way to alleviate the problem is to introduce the learnable parameters $\alpha$ and $\beta$.
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{
Classification error on CIFAR-10. ReLU is simply replaced with ELU or MPELU. The mean test error over 5 runs is reported, except that we show the best (mean $\pm$ std) for depth 110. In MPELU ResNet (A), $\alpha$ and $\beta$ are initialized with 1 and updated by SGD with weight decay. For (B), we pay special attention to the MPELU after addition and initialize $\alpha$ and $\beta$ with 98 and 0.01, respectively}
\label{Table: ReLU-ELU-MPELU-ResNet}
\scalebox{0.9}{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
\# layers / \# params & 20 & 32 & 44 & 56 & 110 & \# params \\ \hline
ResNet \cite{he2015deep}& 8.75 & 7.51 & 7.17 & 6.97 & 6.43 (6.61 $\pm$ 0.16) & 1.73M \\ \hline
ELU ResNet & \textbf{7.98} & 7.87 & 7.71 & 7.84 & 8.11 (8.36 $\pm$ 0.29) & 1.73M \\ \hline
MPELU ResNet (A) & 8.12 & 7.35 & 6.90 & 6.72 & 6.21 (6.89 $\pm$ 0.47) & 1.74M \\ \hline
MPELU ResNet (B) & 8.16 & \textbf{7.12} & \textbf{6.67} & \textbf{6.27} & \textbf{5.64 (5.77 $\pm$ 0.15)} & 1.74M \\ \hline
\end{tabular}}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
To examine this, we simply replace ReLU with ELU and MPELU in ResNet, keeping all other settings unchanged. $\alpha$ and $\beta$ in MPELU (A) are initialized with 1 and updated by SGD with weight decay. Tab.~\ref{Table: ReLU-ELU-MPELU-ResNet} shows that the ELU ResNet performs worse than the original ResNet, which suggests that BN does not improve ELU ResNets. On the contrary, the MPELU ResNets (A) consistently reduce the test error for different depths.
The improvement over ELU may be explained by Eqn.~(\ref{MPELU_decompose}) and originates from the learnable parameters in MPELU. Eqn.~(\ref{MPELU_decompose}) suggests that the outputs of BN directly flow into the PReLU submodule of MPELU and therefore bypass the ELU submodule. Another possible reason comes from the principle of ResNet, the hypothesis that it is easier to optimize the residual mapping than the original mapping. The ResNet architecture is derived from the extreme case of this hypothesis, in which the identity mapping is optimal. Compared to ReLU and ELU, MPELU covers a larger solution space, which gives the solvers more opportunities to approximate identity mappings and therefore improves the performance. To verify this, we pay special attention to the MPELU layers after addition, where $\alpha$ and $\beta$ are initialized with 98 and 0.01, respectively. By doing so, the shortcut connection and the MPELU layer after addition combine into an approximate identity mapping. Following the philosophy in \cite{he2015deep}, if an identity mapping were optimal, it would be easier to learn it by a shortcut connection plus such an MPELU layer than plus a ReLU or ELU layer, since neither ReLU nor ELU covers the identity mapping. The results are given as MPELU ResNets (B). Tab.~\ref{Table: ReLU-ELU-MPELU-ResNet} shows that the MPELU ResNets (B) consistently outperform their counterparts by a large margin, demonstrating the benefit of the larger solution space introduced by the learnable parameters.
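As a quick numerical illustration (ours), with $\alpha = 98$ and $\beta = 0.01$ the negative branch of MPELU stays close to the identity over the typical activation range, since $\alpha(e^{\beta y}-1) \approx \alpha\beta y = 0.98y$ for small $|\beta y|$:
\begin{verbatim}
import numpy as np
y = np.linspace(-5.0, 0.0, 6)
print(98.0 * (np.exp(0.01 * y) - 1.0))  # negative branch of MPELU
print(0.98 * y)                         # near-identity reference
\end{verbatim}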
\subsection{Network Architectures}
\label{Section: MPELU ResNet Architectures}
\begin{figure}[t]
\centering
\subfloat[][non-bottleneck]{\includegraphics[width=0.25\textwidth]{ResNet.eps}\label{fig: resnet}} ~
\subfloat[][MPELU non-bottleneck]{\includegraphics[width=0.25\textwidth]{MPELU-ResNet.eps}\label{fig: mpelu-resnet}} ~
\subfloat[][full pre-activ. bottleneck]{\includegraphics[width=0.25\textwidth]{Pre-ResNet.eps}\label{fig: full pre-activation}} ~
\subfloat[][MPELU full pre-activ. bottleneck]{\includegraphics[width=0.25\textwidth]{MPELU-Pre-ResNet-origin.eps}\label{fig: mpelu full pre-activation}}
\caption{Various residual blocks. (a) the non-bottleneck block in \cite{he2015deep}, (b) MPELU non-bottleneck block, (c) the full pre-activation bottleneck block in \cite{He2016}, (d) MPELU full pre-activation bottleneck block}
\label{fig: Various residual blocks}
\end{figure}
He \emph{et al.} \cite{he2015deep,He2016} investigated the usage of activation functions in deep residual networks. The resulting ResNet and Pre-ResNet architectures are highly optimized for ReLU. Even though the performance can be improved by simply replacing ReLU with MPELU, as shown in Sec.~\ref{Section: MPELU with BN in ResNet}, we expect that it would benefit from an adjusted deployment. For this reason, this section proposes a variant of the residual architecture, MPELU ResNet, which includes two types of blocks, non-bottleneck and bottleneck, as described in the following. \\
\noindent
\textbf{MPELU Non-bottleneck Residual Block.} This block, (Fig.~\ref{fig: Various residual blocks}(b)), is a simplification of the original non-bottleneck residual block in ResNet \cite{he2015deep} (Fig.~\ref{fig: Various residual blocks}(a)). The experimental results from Sec.~\ref{Section: MPELU with BN in ResNet} suggest that ResNet using MPELU gains more opportunities for finding a better solution than using ReLU or ELU. However, introducing nonlinear units (e.g., MPELU) after addition would still affect the optimization. For example, if an identity mapping were optimal, to the extreme, it would require the solvers to fit an identity mapping by a stack of nonlinear units in addition to pushing the residual functions to zero. Inspired by \cite{He2016,gross2016training}, the identity mapping is directly constructed, as shown in Fig.~\ref{fig: Various residual blocks}(b), by removing the MPELU after addition instead of being fit by the solvers.
\begin{figure}[t]
\centering
\subfloat[][MPELU-only pre-activ. with BN]{\includegraphics[width=0.2\textwidth]{MPELU-Halfpre-ResNet-BN-end.eps}\label{fig: mpelu-halfpre-bn-end}} ~
\subfloat[][MPELU-only pre-activ.]{\includegraphics[width=0.2\textwidth]{MPELU-Halfpre-ResNet.eps}\label{fig: mpelu-only pre-activation}} ~
\subfloat[][nopre with BN before addition]{\includegraphics[width=0.2\textwidth]{MPELU-NoPre-ResNet-BN-end.eps}\label{fig: mpelu bn before addition}} ~
\subfloat[][nopre-activ.]{\includegraphics[width=0.2\textwidth]{MPELU-NoPre-ResNet.eps}\label{fig: mpelu-nopre}}
\subfloat[][nopre-activ. without BN]{\includegraphics[width=0.2\textwidth]{MPELU-NoPre-NoBN-ResNet.eps}\label{fig: mpelu-nopre-nobn-resnet}}
\caption{Alternatives of residual function. (a) MPELU-only pre-activation block ending with a BN, (b) MPELU-only pre-activation block, (c) nopre-activation with a BN, (d) nopre-activation bottleneck, (e) nopre-activation without BN.}
\label{fig: other nopre alternatives}
\end{figure}
\noindent \\
\textbf{MPELU Bottleneck Residual Block.} A naive MPELU bottleneck block can be obtained by simply replacing ReLU (Fig.~\ref{fig: Various residual blocks}(c)) with MPELU (Fig.~\ref{fig: Various residual blocks}(d)). This full pre-activation structure is highly optimized for ReLU.
This section presents a nopre-activation bottleneck block optimized for MPELU (see Fig.~\ref{fig: other nopre alternatives}(d)). Since the pre-activation part is removed, the complexity and the number of parameters of this block are largely reduced. As a consequence, the final complexity and the number of parameters of the entire network are comparable to the original. Besides, we adopt a BN (denoted by BN$_1$) plus an MPELU right after the first convolutional layer, and a BN (denoted by BN$_{end}$) plus an MPELU right after the last element-wise addition of the entire network. BN$_1$ and BN$_{end}$ are important for the nopre-activation bottleneck block, as we demonstrate empirically below. In addition to this structure, other alternatives (see Fig.~\ref{fig: other nopre alternatives}) are also investigated.
\subsection{Results on CIFAR}
\label{Section: MPELU ResNet results}
This section firstly evaluates the variants and alternatives of the proposed MPELU ResNet, then compares it to the state-of-the-art architectures. The implementation details are given in appendix.
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Test error (\%) of non-bottleneck architectures on CIFAR-10. We try different learning rate and weight decay multipliers for $\alpha$ and $\beta$, and pick the one that gets the best performance. We retrained the original ResNet for 200 epochs and denote the results by *}
\label{Table: MPELU non-bottleneck on cifar10}
\scalebox{0.9}{
\hspace*{-22pt}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
Fig. / \# layers / \# params & Fig. & 20 & 32 & 44 & 56 & 110 & \# params \\ \hline
ResNet \cite{he2015deep} & Fig.~\ref{fig: Various residual blocks}(a) & 8.75 & 7.51 & 7.17 & 6.97 & 6.43 (6.61 $\pm$ 0.16) & 1.73M \\ \hline
ResNet \cite{he2015deep}* & Fig.~\ref{fig: Various residual blocks}(a) & 8.16 & 7.06 & 6.99 & 6.58 & 6.27 (6.40 $\pm$ 0.18) & 1.73M \\ \hline
MPELU ResNet (non-bottle.) & Fig.~\ref{fig: Various residual blocks}(b) & \textbf{7.71} & \textbf{6.73} & \textbf{6.26} & \textbf{5.95} & \textbf{5.35 (5.47 $\pm$ 0.14)} & 1.74M \\ \hline
\end{tabular}}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Test error (\%) of bottleneck architectures on CIFAR-10. $\alpha$ and $\beta$ are initialized with 0.25 and 1, respectively, and updated by SGD with weight decay}
\label{Table: MPELU bottleneck variants on cifar10}
\scalebox{0.9}{
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
Fig. / \# layers / \# params & Fig. & 164 & \# params \\ \hline
the original Pre-ResNet \cite{He2016} & Fig.~\ref{fig: Various residual blocks}(c) & 5.46 & 1.703M \\ \hline
MPELU full pre-activ. & Fig.~\ref{fig: Various residual blocks}(d) & 5.20 (5.32 $\pm$ 0.13) & 1.728M \\ \hline
MPELU-only pre-activ. with BN & Fig.~\ref{fig: other nopre alternatives}(a) & diverged within a few steps & 1.727M \\ \hline
MPELU-only pre-activ. & Fig.~\ref{fig: other nopre alternatives}(b) & 5.49 & 1.712M \\ \hline
MPELU nopre with BN & Fig.~\ref{fig: other nopre alternatives}(c) & diverged within a few steps & 1.713M \\ \hline
MPELU nopre & Fig.~\ref{fig: other nopre alternatives}(d) & \textbf{4.87 (5.04 $\pm$ 0.14)} & 1.696M \\ \hline
MPELU nopre (no BN$_1$ and BN$_{end}$)& - & diverged within a few steps & 1.696M \\ \hline
MPELU nopre (no BN$_{1}$)& - & 5.29 & 1.696M \\ \hline
MPELU nopre without BN & Fig.~\ref{fig: other nopre alternatives}(e) & diverged within a few steps & 1.688M \\ \hline
\end{tabular}}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\noindent \\
\textbf{Classification Results.} For the shallower architectures, the MPELU ResNets (non-bottle.) are considered. Tab.~\ref{Table: MPELU non-bottleneck on cifar10} shows that the MPELU ResNets (non-bottle.) achieve consistent improvement with a negligible increase of parameters. For example, the 110-layer MPELU ResNet reduces the mean test error rate to 5.47\%, which is 1.14\% lower than the original ResNet-110. Note that this improvement is obtained merely by changing the usage of the activation functions, demonstrating the benefit of MPELU.
When the networks go deeper (164 layers), we focus on the bottleneck architectures to reduce the time/memory complexity, as done in \cite{he2015deep}. Tab.~\ref{Table: MPELU bottleneck variants on cifar10} shows that the MPELU full pre-activ. block, Fig.~\ref{fig: Various residual blocks}(d), provides a marginal decrease in the mean test error rate from 5.46\% to 5.32\% compared to the original Pre-ResNet, Fig.~\ref{fig: Various residual blocks}(c). This is done by simply replacing ReLU with MPELU. For the MPELU-only pre-activ. with BN (Fig.~\ref{fig: other nopre alternatives}(a)), the network fails to converge under the initial learning rate 0.1. Following \cite{he2015deep}, we warm up the training with learning rate 0.01 for one epoch and then switch back to 0.1. With this policy, the network is able to converge, but to a worse solution than the full pre-activ. architecture. Based on this observation, we keep the pre-activation part and remove the BN before addition (see Fig.~\ref{fig: other nopre alternatives}(b)). Interestingly, the network can converge without warming up, leading to a mean test error of 5.49\%, which is also worse than the full pre-activ. architecture. Given these results, the MPELU-only pre-activ. architectures are not considered in the rest of the paper.
We now focus on the MPELU nopre architecture (Fig.~\ref{fig: other nopre alternatives}(d)) and its variants. Somewhat surprisingly, as shown in Tab.~\ref{Table: MPELU bottleneck variants on cifar10}, simply removing the pre-activation part yields a lower test error rate with fewer parameters and lower complexity, which suggests that deep residual architectures have the potential to benefit from MPELU. In addition, the performance is also examined by adding more BN layers to, and removing BN layers from, the MPELU nopre architecture. For the former case (Fig.~\ref{fig: other nopre alternatives}(c)), as demonstrated in Tab.~\ref{Table: MPELU bottleneck variants on cifar10}, adding one more BN before addition makes the network diverge within a few steps. We therefore tried warming up and found that the network then converged well. Combining this phenomenon with the observations on Fig.~\ref{fig: other nopre alternatives}(a) and ResNet-110 \cite{he2015deep}, we suspect that a BN before addition exerts a negative impact on the gradient signals, so that the initial learning rate has to be lowered to warm up the training. For the latter case, removing all BN from the residual function (see Fig.~\ref{fig: other nopre alternatives}(e)) also leads to divergence. The same happens when BN$_1$ and BN$_{end}$ are removed from the MPELU nopre architecture. However, if BN$_{end}$ is kept, the network still converges, though it performs slightly worse (5.29\% $vs.$ 5.04\% mean test error). These results suggest that BN$_1$ and BN$_{end}$ are important to the nopre architecture.
Considering the time/memory complexity and the model size, the MPELU nopre block is picked as the proposed bottleneck architecture of this paper and is used for the comparison with other state-of-the-art methods.
\renewcommand{\arraystretch}{0.7}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
\caption{Comparison to state-of-the-art methods on CIFAR-10/100. MPELU is initialized with $\alpha = 0.25$ or $1$ and $\beta = 1$, updated by SGD with weight decay. $\dagger$ denotes that the hyper-parameter settings follow \cite{huang2016densely} (see appendix). Our results are based on the best of 5 runs, with mean $\pm$ std}
\label{Table: compared to state-of-the-arts}
\scalebox{0.9}{
\begin{tabular}{l|l|c|c|c|c} \hline
Method & settings & depth & \# params & CIFAR-10 & CIFAR-100 \\ \hline
NIN \cite{2013arXiv1312.4400L} & - & - & - & 8.81 & - \\
DSN \cite{lee2015deeply} & - & - & - & 7.97 & 34.57 \\
All-CNN \cite{springenberg2014striving} & - & - & - & 7.25 & 33.71 \\
Highway \cite{NIPS2015_5850} & - & - & - & 7.72 & 32.39 \\
ELU \cite{clevert2015fast} & - & - & - & 6.55 & 24.28 \\
Fitnets \cite{romero2014fitnets} & - & - & - & 8.39 & 35.04 \\ \hline
\multirow{2}{*}{ResNet \cite{he2015deep}} & - & 110 & 1.7M & 6.61 & - \\
& - & 1202 & 19.4M & 7.93 & - \\ \hline
\multirow{2}{*}{sto. ResNet \cite{huang2016deep}} & - & 110 & 1.7M & 5.23 & 24.58 \\
& - & 1202 & 10.2M & 4.91 & - \\ \hline
\multirow{2}{*}{Wide ResNet \cite{zagoruyko2016wide}} & k = 8 & 16 & 11.0M & 4.81 & 22.07 \\
& k = 10 & 28 & 36.5M & 4.17 & 20.50 \\ \hline
\multirow{2}{*}{Pre-ResNet \cite{He2016}} & - & 164 & 1.7M & 5.46 & 24.33 \\
& - & 1001 & 10.2M & 4.62 (4.69 $\pm$ 0.20) & 22.71 (22.68 $\pm$ 0.22) \\ \hline
& $\alpha = 1$ & 164$^\dagger$ & 1.696M & 4.58 (4.67 $\pm$ 0.06) & 21.35 (21.78 $\pm$ 0.33) \\
MPELU nopre& $\beta = 1$ & 1001$^\dagger$ & 10.28M & \textbf{3.63 (3.78 $\pm$ 0.09)} & \textbf{18.96 (19.08 $\pm$ 0.16)}\\ \cline{2-6}
ResNet& $\alpha = 0.25$ & 164 & 1.696M & 4.87 (5.06 $\pm$ 0.14) & 23.16 (23.29 $\pm$ 0.11) \\
(Fig. 5(d))& $\beta = 1$ & 164$^\dagger$ & 1.696M & 4.43 (4.53 $\pm$ 0.12) & 21.69 (21.88 $\pm$ 0.19) \\
& & 1001$^\dagger$ & 10.28M & \textbf{\color{blue}{3.57 (3.71 $\pm$ 0.11)}} & \textbf{\color{blue}{18.81 (18.98 $\pm$ 0.19)}} \\ \hline
\end{tabular}}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\noindent \\
\textbf{Comparison to state-of-the-art methods.} To compare to the state-of-the-art methods, we adopt an aggressive training strategy from \cite{huang2016densely} (See appendix for details), denoted by the symbol $\dagger$.
The test error rates are given in Tab.~\ref{Table: compared to state-of-the-arts}. It is easy to see that with the training strategy $\dagger$, the mean test error of the MPELU nopre ResNet-164 ($\alpha = 0.25$) is considerably reduced, especially on the CIFAR-100 dataset (21.88\% $vs.$ 23.29\%). This might be because CIFAR-100 is more challenging than CIFAR-10, and training for more epochs with a large learning rate helps the model learn the elusive underlying concepts. Interestingly, changing the initial value of $\alpha$ to 1 in MPELU further improves the test error on CIFAR-100 (21.78\%) but not on CIFAR-10 (4.67\%). For comparison, we also trained the 1001-layer MPELU nopre ResNet. Tab.~\ref{Table: compared to state-of-the-arts} shows that even though more parameters are introduced, the MPELU ResNet architectures do not suffer from overfitting and still enjoy performance gains from the increased parameters and depth. The best results of the proposed MPELU nopre ResNet-1001 are 3.57\% test error on CIFAR-10 and 18.81\% on CIFAR-100, which are considerably lower than those of the original Pre-ResNet \cite{He2016}.
\section{Conclusions}
\label{Section-6: conclusion}
The activation function is a pivotal component of deep neural networks, and several recent works have studied it. This paper generalized the existing work to a new activation function, the Multiple Parametric Exponential Linear Unit (MPELU). By introducing learnable parameters, MPELU can become either the rectified or the exponential linear units and combine their advantages. Comprehensive experiments with networks of varying depth (from the 9-layer NIN \cite{2013arXiv1312.4400L} to the 1001-layer ResNet \cite{he2015deep}) were conducted to examine the performance of MPELU. The results showed that MPELU benefits both the classification performance and the convergence of deep networks. In addition, MPELU, as opposed to ELU, can work with Batch Normalization. Weight initialization is also an important factor in deep neural networks. This paper proposed an initialization for networks using exponential linear units, which complements the current theory in this field. To our knowledge, this is the first method that gives an analytic solution for networks using exponential linear units. Experimental results demonstrated that the proposed initialization not only enables the training of very deep networks using exponential linear units but also leads to better generalization performance. In addition, these experiments suggested that Batch Normalization might be one of the factors causing the degradation problem. Finally, this paper investigated the usage of MPELU in ResNet and presented deep MPELU residual networks that achieve state-of-the-art accuracy on the CIFAR-10/100 datasets.
\section*{Acknowledgement}
We would like to acknowledge NVIDIA Corporation for donating the Titan X GPU and supporting this research. This work was supported by the National Natural Science Foundation of China
(Grants No., NSFC-61402046, NSFC-61471067, NSFC-81671651),
Fund for Beijing University of Posts and
Telecommunications (Grants No., 2013XD-04, 2015XD-02),
Fund for National Great Science Specific Project (Grants No. 2014ZX03002002-004),
Fund for Beijing Key Laboratory of Work Safety and Intelligent Monitoring.
\section*{Appendix: Implementation Details}
\noindent
\textbf{NIN on CIFAR-10 (Sec.~\ref{Section: experiments on cifar10}).} During training, all the models are trained using SGD with batch size 128 for 120k iterations (around 307 epochs). The learning rate is initially set to 0.1, and then decreased by a factor of 10 after 100k iterations. The weight decay and momentum are 0.0001 and 0.9. The weights are initialized from a zero-mean Gaussian distribution with 0.01 standard deviation. $\alpha$ and $\beta$ in MPELU are initialized with 0.25 or 1, and updated by SGD without weight decay. During test, we adopt the single-view test. Following \cite{goodfellow2013maxout,2013arXiv1312.4400L,NIPS2015_5850}, the data is preprocessed with global contrast normalization and ZCA whitening. When data augmentation is used, the $28\times28$ patches are randomly cropped from the preprocessed images, and then flipped with a probability of 50\%.
\noindent
\textbf{The 15-layer networks on ImageNet (Sec.~\ref{Section: experiments on ImagenNet}).} The models are trained by SGD with mini-batch size of 64 for 750k iterations (37.5 epochs). The learning rate is 0.01 initially, then divided by 10 at 100k and 600k iterations. The weight decay and momentum are 0.0005 and 0.9, respectively. All of images are scaled to $256\times256$ pixels. During training, a $224\times224$ sub image is randomly sampled from the original image or its flipped version. No further data augmentation is used. During test, we adopt the single-view test.
\noindent
\textbf{MPELU ResNet on CIFAR-10/100 (Sec.~\ref{Section: MPELU ResNet results}).} The implementation details mainly follow \cite{he2015deep} and fb.resnet.torch \cite{gross2016training}. Specifically, the models are trained by SGD with a batch size of 128 for 200 epochs (no warming up). The learning rate is initially set to 0.1, then decreased by a factor of 10 at epochs 81 and 122. The weight decay is set to 0.0001, and the momentum is set to 0.9. MPELU is initialized with $\alpha = 0.25$ or $1$ and $\beta = 1$, which are updated by SGD with weight decay. All the MPELU models are initialized by the proposed method (Sec.~\ref{Section: weight initialization}). For comparison, we follow the standard data augmentation implemented in fb.resnet.torch \cite{gross2016training}: each image is padded with 4 pixels, and a 32$\times$32 patch is randomly cropped from it or its horizontal flip. When the aggressive training strategy $\dagger$ from \cite{huang2016densely} is adopted, the models are trained for 300 epochs. The batch size is 64 on two Titan X GPUs (32 each). The learning rate starts at 0.1 and is decreased by a factor of 10 at epochs 150 and 225.
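For reference, the 200-epoch learning-rate schedule above can be written as a small helper (a sketch, not the actual training code):
\begin{verbatim}
def learning_rate(epoch, base_lr=0.1, milestones=(81, 122)):
    # divide the rate by 10 at each milestone epoch
    return base_lr * (0.1 ** sum(epoch >= m for m in milestones))
\end{verbatim}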
\bibliographystyle{splncs}
\setcounter{equation}{0}
On October 11, 2019, Abram Aronovich Zinger, a well-known specialist in probability and statistics, passed away. One of the main fields of his scientific interests was the theory of characterizations of probability distributions. He published a number of outstanding results in this field (see \cite{Z51},\cite{ZL57},\cite{Z69},\cite{ZL70}). Many of these results concern characterizations of the normal law by the independence and/or identical distribution of suitable statistics. In cooperation with professor Yuri Vladimirovich Linnik, he was the first to provide such a characterization using linear forms with random coefficients \cite{ZL70}. However, in our discussions, professor Zinger expressed the opinion that slightly different properties may characterize other classes of distributions. We speak here of independence properties of linear forms with random coefficients, because characterizations of various classes of probability distributions by the independence of non-linear statistics have been known for a very long time (see, for example, \cite{ZL57}, \cite{Z58}, \cite{ZKM}). Some characterizations by the identical distribution property and the constancy of regression are known as well \cite{Z69},\cite{ZK90}. In two recent publications (\cite{K19a}, \cite{K19b}) the author tried to show that professor Zinger's opinion on the possibility of using the independence of linear forms with random coefficients to characterize non-normal distributions is true; namely, such properties are suitable for characterizing the two-point and hyperbolic secant distributions. The aim of this paper is to show that the exponential distribution may be characterized in a similar way.
\section{Exponential distribution and linear form with random coefficients}\label{s2}
\setcounter{equation}{0}
\begin{thm}\label{th1}
Let $\varepsilon_p$ be a random variable taking the value $1$ with probability $p \in (0,1)$ and the value $0$ with probability $1-p$. Suppose that $X,Y$ are independent identically distributed (i.i.d.) random variables, positive almost surely (a.s.) and independent of $\varepsilon_p$.
Consider linear forms
\begin{equation}\label{eq1}
L_1=(1-p)a X+\varepsilon_p a Y \quad \text{and} \quad L_2= pb X+(1-\varepsilon_p) b Y,
\end{equation}
where $a,b$ are positive constants. Then the forms $L_1$ and $L_2$ are independent if and only if $X$ and $Y$ have an exponential distribution.
\end{thm}
\begin{proof}
The Laplace transform of the random vector $(L_1,L_2)$ has the following form
\begin{equation}\label{eq2}
\E \exp\{-sL_1 - tL_2\} = f(a(1-p)s+bpt)\Bigl(f(as)p +f(bt)(1-p)\Bigr),
\end{equation}
where $f$ is the Laplace transform of the random variable $X$ (formula (\ref{eq2}) follows by conditioning on $\varepsilon_p$ and using the independence of $X$, $Y$ and $\varepsilon_p$). The forms $L_1$ and $L_2$ are independent if and only if
\[ \E \exp\{-sL_1 - tL_2\} = \E \exp\{-sL_1\} \E \exp\{-tL_2\}, \]
which is equivalent to
\[f(a(1-p)s+bpt)\Bigl(f(as)p +f(bt)(1-p)\Bigr)= \]
\begin{equation}\label{eq3}
= f(a(1-p)s)\Bigl(f(as)p +(1-p)\Bigr)f(bpt)\Bigl(p +f(bt)(1-p)\Bigr).
\end{equation}
Replace $as$ and $bt$ by new variables, which we again denote by $s$ and $t$. Instead of (\ref{eq3}) we obtain
\[f((1-p)s+pt)\Bigl(f(s)p +f(t)(1-p)\Bigr) = \]
\begin{equation}\label{eq4}
= f((1-p)s)\Bigl(f(s)p +(1-p)\Bigr)f(pt)\Bigl(p +f(t)(1-p)\Bigr).
\end{equation}
By substituting $f(s) = 1/(1+\lambda s)$ ($\lambda >0$) into (\ref{eq4}) we see that the Laplace transform of the exponential distribution satisfies this equation, that is, $L_1$ and $L_2$ are independent for exponentially distributed $X$ and $Y$.
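For the reader's convenience, here is the verification for $\lambda=1$ (the general case follows by rescaling $s$ and $t$). For $f(s)=1/(1+s)$ one has
\[
pf(s)+(1-p)=\frac{1+(1-p)s}{1+s}, \qquad p+(1-p)f(t)=\frac{1+pt}{1+t},
\]
so the right-hand side of (\ref{eq4}) equals $1/\bigl((1+s)(1+t)\bigr)$; the left-hand side equals
\[
\frac{1}{1+(1-p)s+pt}\cdot\frac{p(1+t)+(1-p)(1+s)}{(1+s)(1+t)}=\frac{1}{(1+s)(1+t)}
\]
as well, since $p(1+t)+(1-p)(1+s)=1+(1-p)s+pt$.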
Let us show the converse statement: if $L_1$ and $L_2$ are independent, then $X$ and $Y$ have an exponential distribution. To this end, put $t=s$ in (\ref{eq4}). We obtain
\begin{equation}\label{eq5}
f^2(s)= f(ps)f((1-p)s)\Bigl( p(1-p)f^2(s) +\bigl(p^2+(1-p)^2\bigr)f(s) +p(1-p)\Bigr).
\end{equation}
Now we would like to show that the only solutions of equation (\ref{eq5}) which are Laplace transforms of probability distributions are $\varphi(s;\lambda) = 1/(1+\lambda s)$ ($\lambda >0$).
A suitable tool for this aim is the method of intensively monotone operators developed in \cite{KKM}. However, equation (\ref{eq5}) does not have a form convenient for applying any of the theorems from \cite{KKM} directly, and, therefore, we apply the method itself. It is clear that:
\begin{enumerate}
\item $\varphi(s;\lambda)$ satisfies (\ref{eq5}) for any $\lambda >0$;
\item $\varphi(s;\lambda)$ is analytic in $s$ in some neighborhood of the point $s=0$;
\item for any positive $s_o$ there is $\lambda_o$ such that $f(s_o) = \varphi (s_o,\lambda_o)$, where $f$ is a fixed solution of (\ref{eq5}).
\end{enumerate}
Let us consider the difference between $f(s)$ and $\varphi(s,\lambda_o)$. From 3.\ it follows that $\varphi(s_o,\lambda_o) = f(s_o)$. Define the set $S=\{ s: 0<s \leq s_o,\ f(s)=\varphi(s,\lambda_o)\}$ and denote $s^*=\inf S$. Because both $f(s)$ and $\varphi(s,\lambda_o)$ are continuous, we have $f(s^*)=\varphi(s^*,\lambda_o)$. Let us show that the case $s^*>0$ is impossible. Indeed, in this case we would have
\[ f^2(s^*)= f(ps^*)f((1-p)s^*)\Bigl( p(1-p)f^2(s^*) +\bigl(p^2+(1-p)^2\bigr)f(s^*) +p(1-p)\Bigr)\]
and
\[ \varphi^2(s^*,\lambda_o)= \varphi(ps^*,\lambda_o)\varphi((1-p)s^*,\lambda_o)\Bigl( p(1-p)\varphi^2(s^*,\lambda_o) +\bigl(p^2+(1-p)^2\bigr)\varphi(s^*,\lambda_o) +p(1-p)\Bigr).\]
We have $f(s^*)=\varphi(s^*,\lambda_o)$, and the two previous relations give us
\begin{equation}\label{eq6}
f(ps^*)f((1-p)s^*) = \varphi(ps^*,\lambda_o)\varphi((1-p)s^*,\lambda_o).
\end{equation}
However, $f(s) \neq \varphi(s,\lambda_o)$ for all $s\in (0,s^*)$. Therefore, either
$f(ps^*)f((1-p)s^*) > \varphi(ps^*,\lambda_o)\varphi((1-p)s^*,\lambda_o)$ or $f(ps^*)f((1-p)s^*) < \varphi(ps^*,\lambda_o)\varphi((1-p)s^*,\lambda_o)$, in contradiction with (\ref{eq6}). This leads us to the fact that $s^*=0$. The latter is possible only if there exists a sequence of points $\{s_j,\; j=1,2, \ldots \}$ such that $s_j \to 0$ as $j \to \infty$ and $f(s_j)=\varphi(s_j,\lambda_o)$. It follows that $f(s)=\varphi(s,\lambda_o)$ for all $s \geq 0$ (see Example 1.3.2 from \cite{KKM}).
\end{proof}
{\it Let us note that under conditions of Theorem \ref{th1} we have
\begin{equation}\label{eq7}
((1-p)X+\varepsilon_p Y, pX+(1-\varepsilon_p)Y ) \stackrel{d}{=}(X,Y)
\end{equation}
if and only if $X$ has an exponential distribution}. Indeed, from Theorem \ref{th1} it follows that $X$ has an exponential distribution. It is easy to verify that $X \stackrel{d}{=}(1-p)X +\varepsilon_p Y$ for independent exponentially distributed $X$ and $Y$.
The relation (\ref{eq7}) leads us to a number of questions. Let us mention some of them:
\begin{enumerate}
\item[1)] Is the property $aX+bY \stackrel{d}{=}\bigl((1-p)a+pb\bigr)X +\bigl(a\varepsilon_p+b(1-\varepsilon_p)\bigr)Y$ characteristic for an exponential distribution?
\item[2)] Let $X_1, \ldots ,X_n, \ldots$ be a sequence of independent random variables, each distributed as $X$. Suppose that $Y,\; \varepsilon_p$ are as in Theorem \ref{th1} and that $\nu_q$, $q \in (0,1)$, has a geometric distribution with parameter $q$, independent of the sequence $X_1, \ldots ,X_n, \ldots$. When does $(1-p)X+\varepsilon_p Y \stackrel{d}{=} q \sum_{j=1}^{\nu_q}X_j$ hold?
\item[3)] Does the relation
\[\E\{ pX+(1-\varepsilon_p)Y | (1-p)X+\varepsilon_p Y\} =\text{const} \]
characterize an exponential distribution?
\end{enumerate}
\vspace{-0.2cm} Below we shall try to answer these questions for some particular cases.
\vspace{0.1cm}
{\it Let us start with question 1)}. Below we give two results in connection with this question.
\begin{thm}\label{th2}
Let $\varepsilon_p$ be a random variable taking the value $1$ with probability $p \in (0,1)$ and $0$ with probability $1-p$. Suppose that $X,Y$ are independent identically distributed (i.i.d.) random variables, a.s. positive and independent of $\varepsilon_p$. The random variables $X$ and $(1-p)X+\varepsilon_p Y$ are identically distributed, i.e.
\begin{equation}\label{eq8}
X \stackrel{d}{=}(1-p)X +\varepsilon_p Y
\end{equation}
if and only if $X$ has an exponential distribution.
\end{thm}
\begin{proof}
The forms are identically distributed if and only if their Laplace transforms satisfy the equation
\[ f(t)=f((1-p)t)\bigl(p f(t) +(1-p) \bigr)\]
or, equivalently,
\[ f(t)= \frac{(1-p)f((1-p)t)}{1-p f((1-p)t)}. \]
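As a quick check, for $f(t)=1/(1+\lambda t)$ the right-hand side of the last formula equals
\[
\frac{(1-p)/\bigl(1+\lambda(1-p)t\bigr)}{1-p/\bigl(1+\lambda(1-p)t\bigr)}
=\frac{1-p}{1+\lambda(1-p)t-p}=\frac{1}{1+\lambda t},
\]
as it should.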
Now the result follows from \cite{KMM, KKRT}.
\end{proof}
\begin{thm}\label{th3} Let $\varepsilon_p$ be a random variable taking the value $1$ with probability $p \in (0,1)$ and $0$ with probability $1-p$. Suppose that $X,Y$ are independent identically distributed (i.i.d.) random variables, a.s. positive and independent of $\varepsilon_p$. Let $0<a<b<1$. Define $V=(b-a)/\bigl(pb+(1-p)a\bigr)$ and $k=[\log(1/p)/\log(V)]+2$, where square brackets denote the integer part. Suppose that $X$ has a finite moment of order $k$. Then the relation
\begin{equation}\label{eq9}
aX+bY \stackrel{d}{=}\bigl((1-p)a+pb\bigr)X +\bigl(a\varepsilon_p+b(1-\varepsilon_p)\bigr)Y
\end{equation}
holds, i.e. the two linear forms are identically distributed, if and only if $X$ has an exponential distribution.
\end{thm}
\begin{proof} In terms of Laplace transforms, the relation (\ref{eq9}) takes the form
\begin{equation}\label{eq10}
f(as)f(bs)=f(cs)\Big(p f(as) + (1-p) f(bs)\Bigr), \quad c=(1-p)a+pb.
\end{equation}
It is easy to verify that the Laplace transform of the exponential distribution, $g(s)=1/(1+\lambda s)$, is a solution of (\ref{eq10}) for arbitrary $\lambda>0$.
Therefore, (\ref{eq9}) holds for the exponential distribution with arbitrary scale parameter. Introduce new functions $\varphi(s) =1/f(s)$ and $\psi(s)=1/g(s)$. It is clear that $\varphi(0) = \psi(0)=1$ and
\begin{equation}\label{eq11}
\varphi(s)=p\varphi(Bs)+(1-p)\varphi(As),
\end{equation}
where $A=a/c<1$ and $B=b/c>1$. Obviously, the function $\psi$ satisfies equation (\ref{eq11}) as well. Note that (\ref{eq11}) is not a Cauchy equation, because $a,b$ and $c$ are fixed numbers. The function $f(s)$ is the Laplace transform of a probability distribution which is not degenerate at zero. Therefore, $\varphi(s)$ is greater than or equal to $1$ for all positive values of $s$ and tends monotonically to infinity as $s \to \infty$. Because $X$ has moments up to order $k$, the functions $f(s)$ and $\varphi (s)$ are at least $k$ times differentiable for $s \in [0,\infty)$. It is easy to see that $\log(1/p)/\log(V) >1$ and, therefore, $k \geq 3$. It is also clear that
\begin{equation}\label{eq12}
pB^j+(1-p)A^j \neq 1
\end{equation}
for $j=2,3,\ldots$, while the left-hand side of (\ref{eq12}) equals $1$ for $j=0,1$. Therefore, the derivative $\varphi^{(j)}(0)$ may be arbitrary for $j=0,1$ and equals zero for $j=2, \ldots ,k$ (to obtain this, it is sufficient to take the $j$-th derivative of both sides of (\ref{eq11}) and set $s=0$).
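In detail: differentiating (\ref{eq11}) $j$ times gives $\varphi^{(j)}(s)=pB^{j}\varphi^{(j)}(Bs)+(1-p)A^{j}\varphi^{(j)}(As)$, whence at $s=0$
\[
\varphi^{(j)}(0)=\bigl(pB^{j}+(1-p)A^{j}\bigr)\varphi^{(j)}(0),
\]
so that $\varphi^{(j)}(0)=0$ whenever (\ref{eq12}) holds.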
Because the function $f(s)$ is the Laplace transform of a probability distribution which is not degenerate at zero, we have $\varphi (0)=1$ and $\varphi^{\prime}(0)>0$. From (\ref{eq11}) we have
\[ \varphi(B s) =\frac{1}{p}\Bigl( \varphi(s)-(1-p)\varphi(A s)\Bigr) <\frac{1}{p}\varphi(s). \]
Therefore,
\[ \varphi (B^m s) < \frac{1}{p^m}\varphi (s), \quad m=1,2, \ldots , \]
and, consequently,
\[ \varphi (s) < C s^{\gamma} \]
for sufficiently large values of $s$. Here $C>0$ is a constant and $\gamma = \log(1/p)/\log(B)$. The difference $\xi (s)=\varphi (s)- \psi (s)$ satisfies equation (\ref{eq11}) and the conditions:
\begin{enumerate}
\item[$a)$] $\xi^{(j)}(0)=0$ for $j=0,1, \ldots k$;
\item[$b)$] $|\xi (s)| < C s^{\gamma}$ for sufficiently large $s$.
\end{enumerate}
Introduce the space ${\go F}$ of real continuous functions $\zeta (s)$ on $[0,\infty)$ for which the integral $\int_{0}^{\infty}|\zeta(s)|/s^{k+1}\, ds$ converges. According to properties $a)$ and $b)$, we have $\xi \in {\go F}$. Define a distance $d$ on ${\go F}$ by
\[ d(\zeta_1,\zeta_2) =\int_{0}^{\infty}\bigl|\zeta_1(s) - \zeta_2(s)\bigr| \frac{ds}{s^{k+1}}. \]
It is clear that $({\go F},d)$ is a complete metric space. Introduce the following operator
\[ \mathcal{A}(\zeta) =\frac{1}{p}\Bigl( \zeta (s/B )-(1-p)\zeta (sA/B)\Bigr) \]
from ${\go F}$ to ${\go F}$. For $\zeta= \zeta_1 - \zeta_2$, where $\zeta_1, \zeta_2 \in {\go F}$, we have
\[ d(\mathcal{A}(\zeta_1), \mathcal{A}(\zeta_2)) =\int_{0}^{\infty} \Bigl|\frac{1}{p}\Bigl( \zeta (s/B )-(1-p)\zeta (sA/B)\Bigr)\Bigr|\frac{d s}{s^{k+1}} \leq \]
\[ \leq \int_{0}^{\infty} \Bigl|\frac{1}{p}\zeta (s/B )\Bigr|\frac{d s}{s^{k+1}} + \int_{0}^{\infty}\Bigl| \frac{1}{p}(1-p)\zeta (sA/B)\Bigr|\frac{d s}{s^{k+1}} \leq \]
\[ \leq \frac{1+(1-p)A^{k}}{p B^{k}} d(\zeta_1,\zeta_2) = \rho d(\zeta_1,\zeta_2),\]
where
\[ \rho = \frac{1+(1-p)A^{k}}{p B^{k}} <1\]
(the two integrals above are evaluated with the substitutions $u=s/B$ and $u=sA/B$, which produce the factors $1/B^{k}$ and $(A/B)^{k}$, respectively).
Now we see that $\mathcal{A}$ is a contraction operator, and $\xi$ is one of its fixed points. It has only one fixed point in the space $({\go F},d)$, which is obviously $\zeta(s) = 0$ for all $s \geq 0$. In other words, $\xi (s) =0$ for all $s \geq 0$ and $\varphi = \psi$.
\end{proof}
Let us mention that Theorems \ref{th2} and \ref{th3} give partial answers to question 1) above.
\vspace{0.2cm}
{\it Let us now turn to question 2)}. Namely, let $X_1, \ldots ,X_n, \ldots$ be a sequence of independent random variables, each distributed as $X$. Suppose that $Y,\; \varepsilon_p$ are as in Theorem \ref{th1} and that $\nu_q$, $q \in (0,1)$, has a geometric distribution with parameter $q$, independent of the sequence $X_1, \ldots ,X_n, \ldots$. When does $(1-p)X+\varepsilon_p Y \stackrel{d}{=} q \sum_{j=1}^{\nu_q}X_j$ hold? In terms of Laplace transforms, we have to solve the following equation
\begin{equation}\label{eq13}
f((1-p)s)\Bigl((1-p)+p f(s)\Bigr)\Bigl(1-(1-q)f(qs) \Bigr)=q f(qs).
\end{equation}
The case $q=1-p$ appears to be very simple.
\begin{thm}\label{th4}
Under the conditions above, if $q=1-p$, equation (\ref{eq13}) holds if and only if $f(s)$ is the Laplace transform of an exponential distribution.
\end{thm}
\begin{proof} For $q=1-p$, let us make a change of function: set $1/f(s)=1+\xi(s)$. After simple transformations (detailed below) we come to
\[ \xi (s) = \frac{1}{1-p}\xi ((1-p)s). \]
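For completeness, here are the transformations. Dividing (\ref{eq13}) with $q=1-p$ by $f((1-p)s)>0$ gives
\[
\Bigl((1-p)+p f(s)\Bigr)\Bigl(1-p f((1-p)s)\Bigr)=1-p.
\]
Substituting $f=1/(1+\xi)$ and clearing denominators, the constant terms and the terms containing the product $\xi(s)\,\xi((1-p)s)$ cancel, and we are left with $p\,\xi((1-p)s)=p(1-p)\,\xi(s)$, which is the displayed relation.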
The statement now follows from \cite{KKM}, similarly to the proof of Theorem \ref{th2}.
\end{proof}
\begin{thm}\label{th5}
Under the conditions above, suppose that the distribution of $X$ has moments of all orders. Let $q^{k-1}\neq p+(1-p)^k$ for all $k=2,3, \ldots$. Then equation (\ref{eq13}) holds if and only if $f(s)$ is the Laplace transform of an exponential distribution.
\end{thm}
\begin{proof} Let us rewrite (\ref{eq13}) in the form
\begin{equation}\label{eq14}
f((1-p)s)\Bigl((1-p)+p f(s)\Bigr)\Bigl(1-(1-q)f(qs) \Bigr)-q f(qs)=0.
\end{equation}
Because $X$ has moments of all orders, its Laplace transform is infinitely differentiable for all $s \geq 0$. Let us differentiate both sides of (\ref{eq14}) $k$ times with respect to $s$ and put $s=0$. The coefficient of
$f^{(k)}(0)$ is
\begin{equation}\label{eq15}
(1-p)^kq+qp-(1-q)q^k-q^{k+1} = q\bigl((1-p)^k+p-q^{k-1}\bigr) \neq 0
\end{equation}
for $k=2,3, \ldots$. For $k=1$ the coefficient (\ref{eq15}) is zero. This means that the derivatives of $f$ of order $k>1$ calculated at zero are uniquely determined by the value of the first derivative $f^{\prime}(0)<0$. However, the Laplace transform of the exponential distribution satisfies (\ref{eq14}), and its derivative at zero may be taken to be an arbitrary negative number. Hence all moments of $X$ coincide with the moments of an exponential distribution; since the exponential moment problem is determinate (its moments satisfy Carleman's condition), $X$ has an exponential distribution.
\end{proof}
\vspace{0.2cm}{\it Let us now turn to question 3)}.
\begin{thm}\label{th6}
Let $X,Y$ be i.i.d. positive random variables possessing finite moments of all orders. Suppose that $\varepsilon_p$ is a Bernoulli random variable, independent of $X,Y$, taking the value $1$ with probability $p$ and $0$ with probability $1-p$, $0<p<1$. The relation
\begin{equation}\label{eq16}
\E\{ pX+(1-\varepsilon_p)Y | (1-p)X+\varepsilon_p Y\} =\text{const}
\end{equation}
holds if and only if $X$ has an exponential distribution.
\end{thm}
\begin{proof}
Equation (\ref{eq16}) may be written in terms of Laplace transforms as
\begin{equation}\label{eq17}
-p f^{\prime}((1-p)t)\Bigl(pf(t) + (1-p)\Bigr) = \lambda p f((1-p)t)f(t),
\end{equation}
where $\lambda =\E X >0$. After changing the function $f$ to $\varphi = 1/f$, we obtain
\begin{equation}\label{eq18}
\varphi^{\prime}\left((1-p)t \right)\Bigl( p+(1-p)\varphi (t) \Bigr) = \lambda \varphi \left((1-p)t \right).
\end{equation}
Putting $t=0$ here and taking into account $\varphi (0) =1$, we obtain $\varphi^{\prime}(0)=\lambda$ (which is obvious).
Differentiating both sides of (\ref{eq18}) with respect to $t$, we obtain
\begin{equation}\label{eq19}
\varphi^{\prime \prime}\bigl((1-p)t\bigr) (1-p) \Bigl( p+(1-p)\varphi(t)\Bigr) +(1-p)\varphi^{\prime}\bigl((1-p)t\bigr) \varphi^{\prime}(t) = \lambda (1-p) \varphi^{\prime}\bigl((1-p)t\bigr).
\end{equation}
Putting $t=0$ here, we obtain $\varphi^{\prime \prime}(0) =0$. Induction shows that $\varphi^{(m)}(0) = 0$ for all $m=2,3, \ldots$. Hence all derivatives of $\varphi$ at zero coincide with those of the linear polynomial $1+\lambda t$; by the same moment argument as in the proof of Theorem \ref{th5}, $\varphi(t)=1+\lambda t$ and, therefore, $f(t)$ is the Laplace transform of an exponential distribution.
\end{proof}
\section{Few words in conclusion}\label{sec3}
\setcounter{equation}{0}
Of course, there are many other problems connected to characterizations of the exponential distribution by properties of linear forms with random coefficients. Let us mention a few: questions on the identical distribution of quadratic forms, constancy of regression of a quadratic statistic on a linear form with random coefficients, and reconstruction of a distribution from the joint distribution of a set of linear forms with random coefficients.
Let us give an example of a somewhat nonstandard characteristic property of the exponential distribution. Namely, the relation (\ref{eq8}) shows that the distribution of the linear form $(1-p)X+\varepsilon_p Y$ does not depend on the parameter $p \in (0,1)$. It appears that this is a characteristic property of the exponential distribution.
\begin{thm}\label{th7}
Let $X,Y$ be i.i.d. positive random variables. Suppose that $\{ \varepsilon_p, \; p\in (0,1)\}$ is a family of Bernoulli random variables independent of $X,Y$, with $\p\{\varepsilon_p =1\}=p$ and $\p\{\varepsilon_p =0\}=1-p$. The distribution of the linear form $L=(1-p)X + \varepsilon_p Y$ does not depend on the parameter $p \in (0,1)$ if and only if $X$ has an exponential distribution.
\end{thm}
\begin{proof} If the distribution of the linear form $L=(1-p)X + \varepsilon_p Y$ does not depend on the parameter $p \in (0,1)$, then its Laplace transform
\begin{equation}\label{eq20}
f\bigl((1-p)t\bigr)\Bigl( 1-p+pf(t)\Bigr) = \psi (t)
\end{equation}
possesses the same property. The relation (\ref{eq20}) shows that the function $f\bigl((1-p)t\bigr)$ is differentiable with respect to $p$ and, therefore, $f$ is differentiable with respect to $t$ for $t\geq 0$. The latter implies that $X$ possesses a finite first moment.
Let us differentiate both sides of (\ref{eq20}) with respect to $p$; since $\psi(t)$ does not depend on $p$, we obtain
\[ -t f^{\prime}\bigl((1-p)t \bigr) \Bigl( (1-p)+pf(t)\Bigr) - f\bigl((1-p)t \bigr) \bigl(1-f(t)\bigr) = 0. \]
Passing to the limit as $p \to 1$ and using $f(0)=1$, we obtain $-t f^{\prime}(0) f(t) = 1-f(t)$, i.e.
\[ f(t) = \frac{1}{1+t \E X}. \]
\vspace{-0.2cm}
\end{proof}
In the book (\cite{KKM}, pp.~153--157) a characterization of the Marshall--Olkin law was given by the property of identical distribution of a monomial and a linear form with a random matrix coefficient. It would be interesting to study the possibility of characterizing the Marshall--Olkin distribution by the independence of suitable statistics.
Some of the properties mentioned above will be the subject of our future work.
\section*{Acknowledgment}
The study was partially supported by grant GA\v{C}R 19-04412S (Lev Klebanov).
\section[The Mathematics of Painting: the Birth of Projective Geometry in the Italian Renaissance]{The Mathematics of Painting:\\ the Birth of Projective Geometry in the Italian Renaissance}
\begin{center}
{\large Graziano Gentili, Luisa Simonutti and Daniele C. Struppa}\footnotetext{The first author was partially supported by INdAM and by Chapman University.
The second author was partially supported by ISPF-CNR and by Chapman University}
\end{center}
\bigskip
\begin{flushright}
«Porticus aequali quamvis est denique ductu\\
stansque in perpetuum paribus suffulta columnis,\\
longa tamen parte ab summa cum tota videtur,\\
paulatim trahit angusti fastigia coni,\\
tecta solo iungens atque omnia dextera laevis\\
donec in obscurum coni conduxit acumen.»

\medskip

Titus Lucretius Carus, \textit{De rerum natura}, IV 426--431
\end{flushright}
\bigskip
{\selectlanguage{english}
\textbf{Abstract.} We show how the birth of perspective painting in the Italian Renaissance led to a new way of
interpreting space that resulted in the creation of projective geometry. Unlike other works on this subject, we
explicitly show how the craft of the painters implied the introduction of new points and lines (points and lines at
infinity) and their projective coordinates to complete the Euclidean space to what is now called projective space. We
demonstrate this idea by looking at original paintings from the Renaissance, and by carrying out the explicit analytic
calculations that underpin those masterpieces.}
\bigskip
{\selectlanguage{english}
\textbf{Keywords.} Renaissance, Piero della Francesca, painting, perspective, analytic projective geometry, points and
lines at infinity.}
\section{1. Introduction}
{\selectlanguage{english}
The birth of projective geometry through the contribution of Italian Renaissance painters is a topic that has given rise to a large and very interesting bibliography, some of which is referred to in this article. Most of the existing
literature dwells on the evolution of the understanding of the techniques that painters and artists such as Leon
Battista Alberti and Piero della Francesca developed to assist them (and other painters) in creating realistic
representations of scenes. These techniques, of course, are a concrete translation of ideas that slowly germinated and
were only later completely developed into a new branch of geometry that goes under the name of projective geometry. }
{\selectlanguage{english}
The point of view that we are taking in this article, however, is to strengthen the linkage between the pictorial ideas
and the mathematical underpinnings. More to the point, the entire architecture of perspective painting consists in
realizing that the space of vision cannot be represented through the usual Euclidean space, but requires the inclusion
of new geometrical objects that, properly speaking, do not exist in the Euclidean space. We are referring here to what
mathematicians call improper points and improper lines or, with a more suggestive term, points and lines at infinity.
Unlike most other studies, for example [4] and [12], we use here the approach and the terminology from analytic
projective geometry, rather than the proportion theory from Euclidean geometry, by introducing the notion of projective
coordinates. Just as the birth of projective geometry was stimulated by pictorial necessities, we show here how the
language of this new geometry can be applied to those necessities.}
{\selectlanguage{english}
There are two main reasons for this approach. On one hand, we believe the projective terminology allows a simpler way to
treat the technical task at hand, namely the identification of the technical processes that a painter needs to
represent a scene. But there is a deeper reason: perspective is not simply a technique; rather it is a radical change
of perspective (pun intended) on what space is. In order to formally perfect the process of representation, the
mathematicians had to introduce new objects, new points, new lines, new planes. It is by introducing these objects that
mathematicians were able to create a logically consistent view of the pictorial space, that allowed them a formally
unimpeachable process through which what we see can be translated into what we draw. The new line, plane, space (which
are now the projective line, the projective plane, the projective space) resemble (and contain) the old Euclidean line,
plane, space, but perfect the nature of their properties. So, for example, while in the Euclidean plane we say that any
two distinct lines intersect in a point \textit{unless} they are parallel, in the new projective plane we can say that
any two distinct lines intersect in a point, without exception. Projective geometry is not just a new and useful
technique, it is a radically different way of representing the space around us.}
{\selectlanguage{english}
We should add a couple of notes for the reader. Projective geometry is born of the necessity to understand the
phenomenon of apparent intersection between parallel lines, and most of our article is devoted to this aspect. However,
once the mathematics is clear, projective geometry allows the study of much more complex situations. For example, the
same techniques that we illustrate in our article, can be utilized to determine how to represent the halo of a saint,
or the shadow of a lamp against the wall of a church. This topic goes beyond the purposes of this article, but we did
not want the reader to think that projective geometry exhausts its role with the study of points and lines. We should
add that, as often happens in mathematics, the theory of projective geometry and its developments has taken on a life
of its own, and it is now one of the most fertile and successful fields in all of mathematics.}
{\selectlanguage{english}
To begin our analysis of the evolution of the perspective point of view in painting, we will look at a few paintings
from the early renaissance. Specifically, in section 2, we will look at two Tuscan painters: Giotto, whose worldwide
fame rests on his fresco cycle in Padova (where he depicted the life of Jesus and the life of the Virgin Mary), and
possibly (attribution is disputed) on his frescos in Assisi (where he depicted the life of San Francesco), and the
equally important Duccio di Buoninsegna, whose \textit{Maestà} is visible at the Duomo in Siena.}
{Giotto was considered, at the time, the greatest living painter, and he is usually credited with being the link between the Byzantine style and the Renaissance, and with being the first to adopt a more naturalistic style. Giotto was an attentive observer of reality, as we can see by looking at the faces and figures in his paintings, but because of the lack of an appropriate technique, his approach to architecture appears a mixture of the artificial and the fantastic.\footnotemark{} In this section, we consider some of his works, as well as Duccio's paintings, to highlight both their early understanding of the need for new ideas, and their insufficient clarity on what those ideas would need to be.}
\footnotetext{\textrm{The reader is referred to [18], [26] for a careful reconstruction of the path from natural to
artificial perspective in the Middle Ages and the Renaissance. See also the bibliographies [19], [21], [25].}}
{Section 3 is devoted to the mathematical description of the
process that is necessary for a faithful representation of a three-dimensional scene on a canvas. This section is where
we are able to introduce the basic ideas that will lead to the projective space. How Leon Battista Alberti and Piero
della Francesca understood such ideas is the subject of Section 4, where we go back to the original texts, and
paintings, to illustrate the way in which the theory of projective geometry was applied in these more advanced works
from the Renaissance. To be precise, we will show that in fact Leon Battista Alberti did not fully justify his
technique (\textit{costruzione legittima}), and so we have an example of a process which seems to work, while its own
developers are not yet fully aware of its theoretical justification. The last Section, before our final conclusions,
inverts the process, so to speak. Instead of discussing how to use geometry to represent a scene on the canvas, we will
take a painting as a starting point, to reconstruct what the scene that the painter had in mind must have been. This is
an interesting exercise, not only for the mathematician, but for the art historian as well, since this reconstruction
can help shed light on some interpretation issues, as we will discuss in more detail in the section. }
\section{2. Early steps: Giotto (1267--1337) and Duccio di Buoninsegna (1255/60--1318/19)}
{\selectlanguage{english}
If one takes a look at any of Giotto's frescos, the first thing that jumps to the eye is a really distorted sense of
distances, positions, and sizes of the elements of the pictorial composition. As we see below (figure 1) in a fresco
that represents San Francesco who chases away the devils from the city of Arezzo, the buildings have odd angles, the
figures are too big, and it looks like we are watching the scene both from the top and from the side (note how we see
the side of the walls surrounding Arezzo, but also the buildings inside the walls themselves). What is going on?}
{\selectlanguage{english}
The answer to this question lies in the fact that Giotto is one of those painters who found themselves in a moment of
epochal transformation. A moment in which painters understood that the way we see objects, and the way objects are, do
not coincide. More specifically, when we think of a table, when we touch a table, we deal with a rectangle. This is
what most tables are, and if we close our eyes and simply touch the table, we perceive a rectangle. Opposite sides are
parallel, and the angles between contiguous sides are right angles (ninety degrees). But when we look at a table, or
when we try to draw a table, something completely different appears. Now the angles become acute or obtuse, depending
on where we are looking, and the parallel sides may not appear parallel anymore.}
\bigskip
{\centering
\includegraphics[width=2.8799in,height=3.3335in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img001.jpg}
\par}
{\centering\selectlanguage{english}
\foreignlanguage{italian}{Figure 1. Giotto, }\foreignlanguage{italian}{\textit{La cacciata dei diavoli da
Arezzo}}\foreignlanguage{italian}{, scene from ``Storie di San Francesco'', (1295-1299), fresco, Basilica Superiore di
Assisi.}
\par}
\bigskip
{\selectlanguage{english}
So, the painter has to recognize a complex shift: if the table has to look right, it has to be drawn wrong. Instead of a
rectangle, something else has to be drawn, in order to trick the viewer's brain into recognizing a properly positioned
table. If the painter were a mathematician, he would recognize that there are two geometries that conflict with each
other: the geometry of touching (the geometry of sculpture), and the geometry of seeing (the geometry of painting). But
this must have been incredibly difficult for Giotto and his contemporaries back in the fourteenth century. This
difficulty explains why his frescos appear so odd, and why the angles in the buildings that he depicts are so
un-lifelike. It is because Giotto understood that right angles do not always appear as right, but in fact they need to
be depicted as acute or obtuse. But he did not grasp, for example, the fact that parallel lines don't always appear
parallel. The unrealistic sizes of the figures in his frescos are a consequence of a similar cognitive dissonance.
Giotto realized that objects that are closer to us appear larger than objects at a distance. But he lacked the
mathematics to figure out the precise proportions that should be used. As we will see in Section 4, it will only be
with Leon Battista Alberti (1404-1472) that a precise method to address this issue will be developed.}
{\selectlanguage{english}
This conflict is quite apparent in another great contemporary of Giotto, namely Duccio di Buoninsegna. In his
\textit{Maestà}, there is a section where Duccio paints a \textit{Last Supper} (figure 2). }
\bigskip
{\centering
\includegraphics[width=2.9736in,height=2.6957in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img002.jpg}
\par}
{\centering\selectlanguage{english}
\foreignlanguage{italian}{Figure 2. Duccio di Buoninsegna, }\foreignlanguage{italian}{\textit{L'ultima
Cena}}\foreignlanguage{italian}{, scene from the back of the ``Maestà'', (1308-11), tempera on wood, Museo dell'Opera
del Duomo, Siena}
\par}
\bigskip
{\selectlanguage{english}
The central object in any such painting is the table, and when we look at this representation, we have the impression
that the plates on the table are on the verge of falling on the floor. The reason for such an impression is that the
table is represented not as a rectangle (Duccio like Giotto realized that this would not have worked), nor as a
trapezoid (which is the correct representation). Rather, it is a parallelogram, in which the right angles are
eliminated (as they should), but the parallelism among sides is preserved, thus offering a totally inadequate
representation. One should also look at the ceiling and the beams in the ceiling itself. In the room, such beams are
clearly parallel, and we know (we will get into more details later) that parallel lines must be represented as
converging to a point. But, as we see in the modified picture below, while Duccio understands this, he seems not to
know that all lines parallel to each other must converge in the same point. Instead, as we see (figure 3), the internal
beams converge on the figure of Christ, while the external beams converge on the table. The outcome is a ceiling that
is clearly wrong.}
\bigskip
{\centering
\includegraphics[width=3.1563in,height=2.8173in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img003.jpg}
\par}
{\centering\selectlanguage{english}
Figure 3
\par}
\bigskip
\bigskip
{\selectlanguage{english}
These two examples are not offered to demean these great painters, but rather to suggest how complex it must have been for those living in the XIII and XIV centuries to work out how to go from Euclidean geometry to projective geometry. As we will see in the next sections, this process will lead to the understanding that a fundamental mathematical truth is hidden somewhere. And it was thanks to a few artists with a strong mathematical background that this was finally understood. Before we get to that point, however, we will take a brief mathematical detour.}
\section{3. The mathematics of perspective}
{\selectlanguage{english}
It is an interesting challenge to illustrate in modern terms how the effort to understand vision and painting leads to a
new geometry, which mathematicians call projective geometry.}
{\selectlanguage{english}
First of all, notice that the main approach of a painter to this problem does not concern how the eye and the brain allow us to see the surrounding world, but only how the light and the colors reach the eye: this is in fact the environment in which a painter can mainly intervene. It is therefore reasonable to think of the eye as a point of our 3-dimensional space, and to set its position as the origin $O$ of a system of Cartesian coordinates $(x,y,z)$.\footnote{\textrm{ This is clearly a simplification that does not model the anatomical aspects of vision.}}}
{\selectlanguage{english}
We can then base our study on the experimental fact that light rays essentially propagate along straight lines in space,
and hence assume that the vision relies upon what all the infinite rays entering $O$ bring to the
eye.\footnote{\textrm{ This interpretation of vision, a revolutionary one in the Renaissance, was due mainly to ibn Al-Haytham, also known as Alhazen, a well-known authority of the 11th century. Indeed, this visual theory was based on his }\textrm{\textit{Book of Optics}}\textrm{ (}\textrm{\textit{Kitab al-Manazir}}\textrm{) [3]. On this topic see [5], [6], [7].}} Every ray entering $O$ brings a colored point, which comes from the object being viewed (possibly the sky). Therefore each ray entering the origin contributes to the vision with a colored point. Of course $O$
brings no contribution to the vision. }
{\selectlanguage{english}
Let us now imagine being able to insert, between the observer and the object which is observed, a canvas, possibly a
transparent one. Then, it is clear that the ray that joins the object to the observer will intersect the canvas in one
point and one alone. That point, with the color that the object has, becomes like a pixel on the canvas, and the
entirety of the pixels that are generated in this way, is a faithful representation of the object itself. Note that if
we were to take two different canvases, the eye would not be able to distinguish a difference in the images (we hope
the reader will forgive our use of relatively modern terms such as pixel).}
{\selectlanguage{english}
The beautifully simple, but not at all easy, mathematical idea that we have just described can be expressed by saying
that we have represented each spatial ray $r$ entering the origin $O$ by means of one of its points only,
$P(r)$, that lies on the chosen canvas and contains all the information that the ray brings to the eye. Of
course, if we move the representing point $P(r)$ (with all the information that it carries) along the ray
$r$, (in other words we change the canvas) the resulting view will not be affected at all. \ In principle, the
representing point of a ray can be chosen to be \textit{any }point of the ray. In practice, it is usually chosen to
stay on a plane - the plane of the painting, its canvas -- or, in different, more mathematical contexts, on a sphere
(we will not discuss this more complex type of representation).}
{\selectlanguage{english}
To help the reader understand what follows, we suggest a simple experiment. As you go through the next few lines, we would ask that you sit in front of a window, looking at whatever lies in front of you.\footnote{\textrm{ The metaphor of the window was used by Leon Battista Alberti to explain his }\textrm{\textit{costruzione legittima}}\textrm{. According to Gerard Simon, without the new ideas of ibn Al-Haytham on vision, Alberti's window would not have been thinkable: one of the many examples of historical encounters between Western and Arab cultures [20]. }} And now, as you sit, imagine your eye to be the origin $O$ of a system of coordinates (figure 4). The $x$-axis of the system exits from your eye (the origin) and points to your right, the $y$-axis exits from $O$ and points forward towards the window, and finally the $z$-axis is the vertical line from $O$ up. Using the Cartesian coordinates so established, we will call ${\pi}$ the plane of the window (we assume you are sitting upright, and therefore the window is perpendicular to the $y$-axis). If we assume the distance from the reader to the window to be one unit, we would mathematically express the equation of this plane as $y=1$ (the mathematical notations will be useful in the sequel when we write the equations of the transformations, but are not necessary for the understanding of the basic ideas). We will also identify the ceiling of the room with the plane of equation $z=1$, i.e. with the horizontal plane located at a distance of $1$ unit from the eye, above the head of the reader.}
\includegraphics[width=5.24in,height=3.65in]{Figura4.pdf}
{\centering\selectlanguage{english}
Figure 4
\par}
\bigskip
\bigskip
{\selectlanguage{english}
Now, two points of Cartesian coordinates $(x,y,z)$ and $(u,v,w)$ give the same contribution (pixel) $P(r)$ to the
vision if they belong to the same ray $r$ entering the origin (where the eye is located)\footnote{\textrm{
Mathematically, this means that there is a nonzero real number $t$ such that $(u,v,w)=t(x,y,z)=(tx, ty, tz)$. \ }}.
Therefore we will denote the contribution (pixel) $P(r)$ to the vision, given by the ray $r$ containing
the point $(x,y,z)$ and entering the origin, with the symbol $[x,y,z]$, and establish that $[x,y,z]=[tx,ty,tz]$ for all nonzero real numbers $t$. The idea is that $[x,y,z]$ and $[tx,ty,tz]$ will indicate the same pixel, positioned on different canvases.}
\includegraphics[width=5.24in,height=3.65in]{Figura5.pdf}
{\centering\selectlanguage{english}
Figure 5
\par}
\bigskip
\bigskip
{\selectlanguage{english}
Since we have taken the window to be represented by the equation $y=1$, the pixel $[x,y,z]$ generated by $(x,y,z)$ will
correspond, on the window, to a point with $y=1$; this can only be obtained by taking $t=1/y$, and therefore the coordinates of the pixel on the window will be $(x/y,1,z/y)$ (figure 5).}
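{\selectlanguage{english}
For instance (a purely illustrative choice of numbers), the point $(2,4,1)$ of the ceiling lies on the ray through $O$ formed by the points $(2t,4t,t)$; taking $t=1/4$ we land on the window $y=1$, so the pixel $[2,4,1]$ has window coordinates $(1/2,1,1/4)$.}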
{\selectlanguage{english}
Rays that enter the eye at $O$ will intersect the plane ${\pi}$ (just like when you are watching the countryside
from inside your home, the entering rays would all intersect the glass of the big window). If for all rays $r$
we place the representative point $P(r)$ on the plane ${\pi}$, then we have made the perfect theoretical painting that represents the landscape the eye is watching.}
{\selectlanguage{english}
And here are a few surprises. A straight line $s$ of the observed landscape will be seen by the eye through all the rays of the plane $L$ that contains $O$ and the line $s$. We can then represent $s$ on the painting ${\pi}$ as the intersection of $L$ and ${\pi}$. This demonstrates (in an empirical way) one of the first results of projective geometry, namely the fact that a line is transformed (by projections) into another line (the reader is invited to
reflect on what would happen, however, if the line were one of the rays).}
{\selectlanguage{english}
With this in mind, if we are given the equations of a few beams of the ceiling of our room in the $3$-space, we can for example compute the equations of their images, i.e.\ how they will appear in the painting ${\pi}$. As we have seen in
Duccio's example in the previous section, the issue of representing ceiling beams was in fact one of the most difficult
to understand.}
In our Cartesian environment, let us consider $3$ parallel beams in the ceiling $z=1$, of equations
\begin{eqnarray*}
&&x=-1 \quad \hbox{and} \quad z=1\\
&&x=0 \quad \hbox{and} \quad z=1\\
&&x=1 \quad \hbox{and} \quad z= 1,
\end{eqnarray*}
{\selectlanguage{english}
respectively\footnote{\textrm{ The reader will note that we use two equations to represent a line. This is because one can think of a line as the intersection of two planes, each with its own equation. In this particular case, we are looking at lines which lie on the ceiling (so all of their points have $z=1$), but are also perpendicular to the $x$-axis and therefore have a fixed value of $x$ (in the three cases, respectively, $x=-1$, $x=0$, $x=1$).}}. These three beams are all parallel to the $y$-axis, as indicated in figure 6.}
\bigskip
{\centering
\includegraphics[width=5.24in,height=3.6445in]{Figura6.pdf}
\par}
{\centering\selectlanguage{english}
Figure 6
\par}
\bigskip
\bigskip
{\selectlanguage{english}
Note that a point on the first beam (the one with equations $x=-1$, $z=1$) will have coordinates $(-1, y, 1)$, where the $x$ and $z$ coordinates are fixed because of the two planes, and the $y$-coordinate is free to range over all real numbers.}
{\selectlanguage{english}
If we now search for the three sets of points that represent the contributions to the vision (pixels), coming from the
three beams, we get (with arbitrary $y$)}
\[
[-1,y,1]
\]
\[
[0,y,1]
\]
\[
[1,y,1].
\]
As we have already pointed out, to place them on the painting ${\pi}$ of equation $y=1$, we just divide the (so-called
homogeneous) coordinates by (the arbitrary nonzero) $y$ and get, with the established notations:
\begin{eqnarray}\label{(1)}
&&[-1/y, 1, 1/y]\\
&&[0, 1, 1/y]\nonumber \\
&&[1/y, 1, 1/y],\nonumber
\end{eqnarray}
i.e., by putting $u=1/y$,
\bigskip
\begin{eqnarray}\label{(2)}
&&[-u,1, u]\\
&&[0,1, u]\nonumber \\
&&[u, 1, u].\nonumber
\end{eqnarray}
{\selectlanguage{english}
These are the equations of three straight lines in ${\pi}$ that are not only nonparallel, but actually all meet at the point $V=(0,1,0)$\footnote{ Strictly speaking, the point $V$ is not attained, because $u$ is never zero, but we think it is clear what we are describing here.} (figure 6).}
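{\selectlanguage{english}
Explicitly, writing a point of the painting as $(x,1,z)$, the three lines in (2) are
\[
x+z=0, \qquad x=0, \qquad x-z=0,
\]
and all three pass through $V=(0,1,0)$, even though the three beams they represent are parallel.}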
{\selectlanguage{english}
If the reader has followed the process, (s)he should notice that this process establishes a certain correspondence between the ceiling and the canvas. Every point of the ceiling has a corresponding point on the canvas, but not every point on the canvas comes from a point on the ceiling. Indeed, it is apparent that the points of the canvas $y=1$ with a negative coordinate $z<0$ cannot come from the ceiling: the reader will immediately see that these points come from the floor (ground) of the observed landscape (the floor of equation, say, $z=-1$). Well, now all seems to be well understood\dots, but where is the point $V$ coming from? If one tries to reconstruct the process we just described, one will realize that in fact the point $V$ does not come either from any point of the ceiling or from any point of the floor, and this throws a monkey-wrench into our construction. What can be done to fix this apparent irregularity? What is the meaning of this surprising difficulty?}
{\selectlanguage{english}
If we analyze carefully what we have done so far, we will notice that the point $V$ is approached on the painting ${\pi}$ by the pixels contributed by the rays coming from points of any of the three beams very far from the origin: when $y$ becomes arbitrarily big in (1), $1/y= u$ approaches $0$ (see (2)). If we now think of a few lines of the floor, parallel to the $y$-axis, and repeat for the floor $z=-1$ the procedure used for the ceiling, we will see that the point $V$ is approached on the painting ${\pi}$ by the pixels contributed by the rays coming from points of any of these lines of the floor, very far from the origin.}
{\selectlanguage{english}
In some sense, the point $V$ (which in the painters' terminology is called \textit{the vanishing point of the painting}\footnote{\textrm{ Lucretius, in his }\textrm{\textit{De rerum natura}}\textrm{, describes the vision of a colonnade which extends in front of us in the passage we used as incipit to this article: «Again, a colonnade may be of equal line from end to end and supported by columns of equal height throughout, yet, when its whole length is surveyed from one end, it gradually contracts into the point of a narrowing cone, completely joining roof to floor and right to left, until it has gathered all into the vanishing point of the cone.» Lucretius, }\textrm{\textit{On the Nature of Things}}\textrm{ [17, IV, 426--431]. This fascinating piece of poetry constitutes the first description of the vanishing point that has reached us.}}) is the
image of the point on the ceiling that would belong to each of the beams, if they could continue to
infinity.\footnote{\textrm{ For an extensive explanation, see e.g. [11].}} In the same way, the point $V$ is the
image of the point on the floor that would belong to each of the chosen parallel lines, if they could continue to
infinity. \ But of course the three beams, and the chosen lines of the floor, are parallel, and they have no point in
common. What is happening? The answer (whose mathematical formalization we will describe shortly) is that in order for
the correspondence between the canvas and the system ceiling-floor to be complete, we need to add a point (which
doesn't belong either to the ceiling or to the floor), which is the intersection both of the parallel beams of the
ceiling and of the chosen parallel lines of the floor. In fact, as we will discover shortly, even this addition will
not be enough. Indeed, we will need to add to the system an entire line, in order to reconstruct a perfect
correspondence.}
{\selectlanguage{english}
To understand this last point, consider now a family of parallel beams on the ceiling that are not parallel to the $y$-axis. Consider for instance the three beams of equations (figure 7)
\begin{eqnarray}\label{(3)}
&&x=y \quad \hbox{and}\quad z=1\\
&&x=y-1 \quad \hbox{and}\quad z=1\nonumber \\
&&x=y-2 \quad \hbox{and}\quad z=1.\nonumber
\end{eqnarray}
{\selectlanguage{english}
These three beams contribute to the vision with the pixels denoted by, for arbitrary $y$}
\begin{eqnarray}\label{(4)}
&&[y, y, 1]\\
&&[y-1, y, 1]\nonumber \\
&&[y-2, y, 1],\nonumber
\end{eqnarray}
{\selectlanguage{english}
which placed on the painting ${\pi}$ (of equation $y=1$) become, for arbitrary nonzero $y$}
\begin{eqnarray}\label{(5)}
&&[y/y, 1, 1/y] = [1, 1, 1/y]\\
&&[(y-1)/y, 1, 1/y] = [1-1/y,1, 1/y]\nonumber \\
&&[(y-2)/y, 1, 1/y] = [1-2/y, 1,1/y],\nonumber
\end{eqnarray}
{\selectlanguage{english}
i.e., for an arbitrary nonzero $u$}
\begin{eqnarray}\label{(6)}
&&[1, 1, u]\\
&&[1-u, 1, u]\nonumber \\
&&[1-2u, 1, u].\nonumber
\end{eqnarray}
{\selectlanguage{english}
These are the equations of three straight lines in ${\pi}$ that, again, are not only nonparallel, but all meet at the point $W=(1,1,0)$ (figure 7). Again, the point $W$ does not receive a pixel coming from the ceiling, yet it belongs to the painting ${\pi}$. $W$ is called the \textit{vanishing point for the given family of parallel beams}. The point $W$ is approached on the painting ${\pi}$ by the pixels contributed by the rays coming from points of any of the three beams listed in (3), very far from the origin: when $y$ becomes arbitrarily big in (5), $1/y= u$ approaches $0$ in (6). If we now think of a few lines of the floor, obtained by substituting $z=-1$ in place of $z=1$ in formulas (3), and repeat the procedure used for the ceiling, we will see that the point $W$ is approached on the painting ${\pi}$ by the pixels contributed by the rays coming from points of any of these lines of the floor, very far from the origin.}
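{\selectlanguage{english}
Explicitly, in the coordinates $(x,1,z)$ of the painting, the three lines in (6) are
\[
x=1, \qquad x+z=1, \qquad x+2z=1,
\]
and they all pass through $W=(1,1,0)$.}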
{\selectlanguage{english}
The family of parallel lines that we considered in (3) is actually parallel to the bisector of the first and third quadrants of the plane of equation $z=1$ (the ceiling): these lines could be the diagonals of a square tessellation of the ceiling. In this situation, we see that the distance between the vanishing point of the painting $V=(0,1,0)$ and the vanishing point $W=(1,1,0)$ coincides with the distance of the eye of the observer from the plane of the painting ${\pi}$ (!!). The distance of the eye of a painter from his painting can thus be encoded in the painting itself. This is the reason why Leon Battista Alberti and Piero della Francesca called $W$ the \textit{distance point}. Notice that the projective approach made the identification of the point $W$ immediate.}
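{\selectlanguage{english}
The same computation works for any direction in the ceiling. A beam through the point $(x_0,0,1)$ with direction $(a,b,0)$, $b\neq 0$, consists of the points $(x_0+ta,tb,1)$; their pixels on the window are $\bigl((x_0+ta)/(tb),\,1,\,1/(tb)\bigr)$, and letting $t\to\infty$ we approach the vanishing point $(a/b,1,0)$, independently of $x_0$. Its distance from $V=(0,1,0)$ is $|a/b|$, which equals the eye-canvas distance (one unit) precisely when the beams form an angle of $45$ degrees with the $y$-axis, as in (3).}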
\bigskip
\includegraphics[width=5.24in,height=3.65in]{Figura7.pdf}
{\centering\selectlanguage{english}
Figure 7
\par}
\bigskip
{\selectlanguage{english}
The phenomenon we described above is not limited to a particular family of parallel lines. More generally, we will show
that every family of parallel lines on the ceiling is represented, on the canvas, by a family of lines that converge to
a point that doesn't come either from a point of the ceiling, or from a point of the floor, and that needs to be added
in order to complete the correspondence of the canvas with the system ceiling-floor. And all these new points that we
will add (which we call \textit{improper points} or \textit{points at infinity}) will eventually lie on a line
(\textit{improper line} or \textit{line at infinity}), whose pictorial meaning we will describe in the next few pages.}
{\selectlanguage{english}
Let us therefore consider an arbitrary family of parallel lines in the ceiling of equation $z=1$. Three beams from this
family have equations, for any nonzero $m$, }
\begin{eqnarray}\label{(7)}
&&x=(y-1)/m \quad \hbox{and} \quad z=1\\
&&x=(y-2)/m \quad \hbox{and} \quad z=1\nonumber \\
&&x=(y-3)/m \quad \hbox{and} \quad z=1.\nonumber
\end{eqnarray}
{\selectlanguage{english}
These three beams contribute to the vision with the pixels denoted by, for arbitrary $y$}
\begin{eqnarray}\label{(8)}
&&[(y-1)/m, y, 1]\\
&&[(y-2)/m, y, 1]\nonumber \\
&&[(y-3)/m, y, 1],\nonumber
\end{eqnarray}
{\selectlanguage{english}
which on the painting ${\pi}$ (of equation $y=1$) become, for arbitrary nonzero $y$}
\begin{eqnarray}\label{(9)}
&&[(y-1)/my, 1, 1/y] = [1/m-1/my, 1, 1/y]\\
&& [(y-2)/my, 1, 1/y] = [1/m-2/my,1, 1/y]\nonumber \\
&&[(y-3)/my, 1, 1/y] = [1/m-3/my, 1,1/y],\nonumber
\end{eqnarray}
{\selectlanguage{english}
i.e., for an arbitrary nonzero $u$}
\begin{eqnarray}\label{(10)}
&&[1/m-u/m, 1, u]\\
&&[1/m-2u/m, 1, u]\nonumber \\
&&[1/m-3u/m, 1, u].\nonumber
\end{eqnarray}
{\selectlanguage{english}
These are the equations of three straight lines in ${\pi}$ that, again, are not only nonparallel, but all meet at the point $U= (1/m,1,0)$. Again, the point $U$ does not receive a pixel coming from the ceiling, yet it belongs to the painting ${\pi}$. The point $U$ is called the \textit{vanishing point for the given family of parallel lines}. If we now consider on the floor the family of lines analogous to the family described in (7), but with $z=-1$, we still find the point $U$, which does not receive a pixel coming from the floor, and yet belongs to the painting ${\pi}$.}
{\selectlanguage{english}
It is clear now that (since $m$ is arbitrary) the collection of all vanishing points of families of parallel lines of the ceiling or of the floor contributes to the vision with all the pixels that on the painting ${\pi}$ of equation $y=1$ are of the form}
\[
[u,1,0],
\]
{\selectlanguage{english}
i.e. with the line of the painting ${\pi}$ of equation ($y=1$ and) $z=0$. For obvious and charming reasons, this line is called the \textit{horizon}! Note that it lies exactly at the eye level of the observer: it is the intersection of the painting with the horizontal plane $z=0$ through the eye (figure 8).}
\includegraphics[width=5.24in,height=3.65in]{Figura8.pdf}
{\centering\selectlanguage{english}
Figure 8
\par}
\bigskip
{\selectlanguage{english}
This construction is beautifully illustrated in the following painting by Domenico Ghirlandaio (figure 9), in which we
have highlighted (figure 10) three families of parallel lines, with the corresponding vanishing points and the
resulting horizon (the reader is advised not to highlight such lines when visiting an art museum!). }
\bigskip
\includegraphics[width=5.461in,height=2.2264in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img009.jpg}
{\centering\selectlanguage{english}
\foreignlanguage{italian}{Figure 9. Domenico Ghirlandaio, }\foreignlanguage{italian}{\textit{Ultima cena}}\foreignlanguage{italian}{, (c.~1476), fresco, Cenacolo della Badia di Passignano, Abbazia di San Michele Arcangelo a Passignano, Tavarnelle Val di Pesa, Firenze.}
\par}
\bigskip
\bigskip
\includegraphics[width=5.5827in,height=2.2264in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img010.jpg}
{\centering\selectlanguage{english}
Figure 10
\par}
\bigskip
{\selectlanguage{english}
In order for the correspondence that we have described to hold for every point, we need to add an improper point for
every direction of lines. So, we now have an entire line of improper points on the system ceiling-floor, usually called
the improper line. The image of the improper line under the correspondence we have described is thus the horizon on the
painting. }
{\selectlanguage{english}
In this way we have presented a formalized mathematical method to move the representative point of each ray of light
(originating at the ceiling or at the floor) to the painting ${\pi}$. }
{\selectlanguage{english}
The reader will appreciate the beautiful symmetry that is emerging. Just like the images of two parallel lines (whether
on the ceiling or on the floor) intersect in a point, the vanishing point, that we imagine to be the image of the
improper point shared by the parallel lines, so the images of two parallel planes (the ceiling and the floor) intersect
in a line, the horizon, that is the image of the improper line that ceiling and floor share. A marvelous symmetry
indeed!}
{\selectlanguage{english}
The attentive reader will, however, note that while we have added a line (an improper one) to the system ceiling-floor, the full correspondence will require the addition of a line to the (infinite) canvas as well. Indeed, if we now try to describe (on the canvas) the line which is represented on the ceiling $z=1$ by $y=0$, we easily see that this is not possible. Pictorially, this is a consequence of the fact that the painter cannot represent, in the painting, the points that are vertically above his head. Mathematically, this is a consequence of the fact that the plane $y=0$ does not intersect the plane ${\pi}$ given by $y=1$. Finally, if one looks at the coordinates $(x,y,1)$ of a point on the ceiling, and allows $y$ to become zero, one obtains the point $(x,0,1)$, which does not belong to the painting. Just like we did before, we would now need to add all these points (for all values of $x$) to the canvas, and thus complete the plane of the painting with an improper line.}
{\selectlanguage{english}
In this new correspondence the following happens:}
\begin{itemize}
\item {\selectlanguage{english}
Every point on the ceiling and on the floor (except those with $y=0$) is represented by a point on the canvas.}
\item {\selectlanguage{english}
Every point on the canvas (except those on the horizon) is the representation of a point on the ceiling or on the floor.}
\item {\selectlanguage{english}
The points of the horizon can be thought of as images of the improper line that we have added to the system
ceiling-floor.}
\item {\selectlanguage{english}
The points on the ceiling with $y=0$ are represented on the improper line that we have added to the canvas.}
\end{itemize}
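{\selectlanguage{english}
In modern terms, the pixels $[x,y,z]$ are nothing but homogeneous coordinates on the projective plane of rays through $O$; once the two improper lines are added, the correspondence summarized above becomes everywhere defined, and it is exactly the kind of transformation (a projectivity) that projective geometry was created to study.}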
\section{4. Leon Battista Alberti (1404--1472) and Piero della Francesca (1416/17--1492)}
{\selectlanguage{english}
Section 2 described some of the uncertainties that were plaguing the painters of the early Renaissance. Despite these
uncertainties, these painters felt a strong need to change the nature and the subjects of their work. The
interest was slowly shifting away from the ascetic body of the teachers of the medieval scholastics, and was turning to
three-dimensional figures, the divine \textit{Maestà }inside gothic churches, or the suggestive backgrounds of battles
where the powerful soldiers and the vigor of the horses could find an effective representation. A philosophical
development was forcing the painters towards a new understanding of their art as evidenced in the work of artists such
as Paolo Uccello (figure 13), Mantegna (figure 11), Masaccio, and the Giambellino (Giovanni Bellini) (figure 12).}
\bigskip
{\centering
\includegraphics[width=3.4866in,height=2.9555in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img011.jpg}
\par}
{\centering\selectlanguage{english}
\foreignlanguage{italian}{Figure 11. Andrea Mantegna, }\foreignlanguage{italian}{\textit{Cristo morto
}}\foreignlanguage{italian}{(1475 {}- 1478), Tempera on canvas, Pinacoteca di Brera, Milano}
\par}
{\centering
\includegraphics[width=3in,height=3.9307in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img012.png}
\par}
{\centering\selectlanguage{english}
Figure 12. Giovanni Bellini, \textit{The Blood of the Redeemer}, (1460{}-1465), The National Gallery, London.
\par}
\bigskip
\bigskip
{\centering
\includegraphics[width=5.3134in,height=2.939in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img013.jpg}
\par}
{\centering\selectlanguage{english}
{Figure 13. Paolo Uccello, }\foreignlanguage{italian}{\textit{Predella del Miracolo dell'ostia profanata}}\foreignlanguage{italian}{, (1467-1468), tempera on wood, Galleria Nazionale delle Marche, Urbino.}
\par}
\bigskip
{\selectlanguage{english}
Among them stand out Leon Battista Alberti (who in 1435 wrote the treatise \textit{De pictura praestantissima} [1], where he offers a practical
guide to perspective drawing) and the great painter and mathematician Piero della Francesca, who built on his knowledge
of Euclid and Alberti to write (towards the end of the XV century) \textit{De prospectiva pingendi} [15], probably the
ultimate text on perspective in painting.\footnote{\textrm{ Field's essay [10], an extensive comparison of the
treatment of perspective in Alberti's }\textrm{\textit{De pictura praestantissima}}\textrm{ [1] and Piero della Francesca's
}\textrm{\textit{De prospectiva pingendi }}\textrm{[15], contains a historically
contextualized presentation of the main mathematical tools on which the theory and practice of perspective (and the
very initial basis of projective geometry) relied. See also [8]}}}
\bigskip
\hfill \parbox[r]{12cm}{For theory, when separated from practice, is generally of very little use; but when the two chance to come
together, there is nothing that is more helpful to our life, both because art becomes much richer and more perfect by
the aid of science, and because the counsels and the writings of learned craftsmen have in themselves greater efficacy
and greater credit than the words or works of those who know nothing but mere practice, whether they do it well or ill.
And that all this is true is seen manifestly in Leon Batista Alberti, who, having studied the Latin tongue, and having
given attention to architecture, to perspective, and to painting, left behind him books written in such a manner, that,
since not one of our modern craftsmen has been able to expound these matters in writing, although very many of them in
his own country have excelled him in working, it is generally believed; such is the influence of his writings over the
pens and speech of the learned; that he was superior to all those who were actually superior to him in
work.\textstyleRimandonotaapiiii{ }\footnotemark{}}
\footnotetext{\textrm{ }\textrm{\textit{Life of Leon Battista Alberti}}\textrm{ in: }\textrm{Vasari's
}\textrm{\textit{Lives of the Artists}}\textrm{, [23]. }\foreignlanguage{italian}{\textrm{See Vasari's
}}\foreignlanguage{italian}{\textrm{\textit{Le
vite}}}\foreignlanguage{italian}{\textrm{:}}\foreignlanguage{italian}{\textrm{
``}}\foreignlanguage{italian}{\textrm{{\dots} Non è cosa che più si convenga alla vita nostra, sì perché l'arte col
mezzo della scienza diventa molto più perfetta e più ricca, sì perché gli scritti et i consigli de' dotti artefici
hanno in sé molto maggiore efficacia et acquistansi maggior credito che le parole o l'opere di coloro che non sanno
altro che il semplice esercizio, o bene o male che essi lo facciano: ché invero leggendo le istorie e le favole et
intendendole, un capriccioso maestro megliora \ continovamente e fa le sue cose con più bontà e con maggiore
intelligenza che non fanno gli illetterati. \ E che questo sia il vero si vede manifestamente in
Leon}}\foreignlanguage{italian}{\textrm{\textbf{\textit{ }}}}\foreignlanguage{italian}{\textrm{Batista Alberti, il
quale, per avere atteso alla lingua latina e dato opera alla architettura, alla prospettiva et alla pittura, lasciò i
suoi libri scritti di maniera che, per non essere stato fra gli artefici moderni chi le abbia saputo distendere con la
scrittura, ancora che infiniti ne abbiamo avuti più eccellenti di lui nella pratica'',
}}\foreignlanguage{italian}{\textrm{[22, }}\foreignlanguage{italian}{\textrm{3
voll}}\foreignlanguage{italian}{\textrm{., vol. III, pp.
283-284}}\foreignlanguage{italian}{\textrm{].}}\foreignlanguage{italian}{\textrm{ See also [24, ``Leon Battista
Alberti'', pp.178-184]. \ }}}
\bigskip
{This is how Vasari, in his \textit{Vite} [22] (essentially a collection of biographies
of painters, sculptors, and architects), described the multifaceted character of Leon Battista Alberti (1404-1472),
architect, mathematician, humanist, musician, who was born in Genova but split his life between the papal court of
Rome and the courts of the Este in Ferrara, of the Malatesta in Rimini, and of the Gonzaga in Mantua, and who belonged to the
circle of the Florentine humanists. It is important to note that his reflections on painting and sculpture were not
simply the byproduct of his interest in painting techniques and in how to represent the human body in perspective, but
rather were the consequence of a deeper intellectual research.}
{\selectlanguage{english}
In the opening of his treatise \textit{De pictura praestantissima }[1]\textit{, }Leon Battista Alberti explicitly declares that he is
not writing as a mathematician, but as a painter. And in fact it is clear that the aim of his treatise is to provide
painters with a practical guide to the use of perspective, rather than to investigate and discuss in detail the theoretical
aspects and features of that subject, which, as he explicitly states, was certainly quite difficult and not yet well
discussed by any author.\footnote{\foreignlanguage{italian}{\textrm{ Leon Battista Alberti,
}}\foreignlanguage{italian}{\textrm{\textit{The Architecture }}}\foreignlanguage{italian}{\textrm{[2, p. 241]. See also
Leon Battista Alberti, }}\foreignlanguage{italian}{\textrm{\textit{On Painting}}}\foreignlanguage{italian}{\textrm{ [2,
p. 37]. }}}}
\bigskip
{\selectlanguage{english}
\hfill \parbox[r]{12cm}{ But throughout this whole Treatise I must beg my Reader to take Notice, that I speak of these Things, not as a
Mathematician, but as a Painter; for the Mathematician considers the Nature and Forms of Things with the Mind only,
absolutely distinct from all Kind of Matter: whereas it being my Intention to set Things in a Manner before the Eyes,
it will be necessary for me to consider them in a Way less refined. And indeed I shall think I have done enough, if
Painters, when they read me, can gain some Information in this difficult Subject, which has not, as I know of, been
discussed hitherto by any Author.} }
\bigskip
{\selectlanguage{english}
In modern terms, we could say that what Leon Battista Alberti was actually doing was to present, in a scientific way, an
algorithm that any painter could use to correctly set up the basics of his paintings from the point of view of
perspective. More specifically, Alberti wanted to give a practical tool to correctly set up the floor, the vanishing
point, and the horizon of a painting (see Section 3). Once this was done, it was easier for a painter to fill in the
painting with all the rest in a reasonably coherent form. This point of view also explains why Alberti is in
fact teaching how to represent in perspective a ground floor with square tiles: even in the case of a possibly
uniform ground floor, the hidden presence of a fine square grid would greatly help the skilled painter to place objects
and human figures properly and proportionally in the painting.}
{\selectlanguage{english}
It is of great interest to examine the three steps of the algorithm proposed by Leon Battista Alberti to paint a
square-tile floor in perspective. In his treatise \textit{De pictura praestantissima,} Alberti considers a square painting ${\pi}$
whose side measures six \textit{braccia fiorentine} (in modern terms, approximately 348--354 cm). Since the established
standard height of a human figure for a painter in those years was three braccia fiorentine, in practice Alberti chose
to place:}
\liststyleWWNumi
\begin{itemize}
\item {\selectlanguage{english}
the horizontal base of the painting on the floor;}
\item {\selectlanguage{english}
the point of view of the painter on the straight line orthogonal to the center of the painting ${\pi}$;}
\item {\selectlanguage{english}
the vanishing point at the center of the painting itself.}
\end{itemize}
{\selectlanguage{english}
One other datum is that the square tiles of the floor to be painted have two sides parallel, and two orthogonal, to the
painting $\pi $. Finally, here is Alberti's \textit{costruzione legittima}.\footnote{\textrm{ The Renaissance texts on
perspective are normally didactic manuals, whose authors take it for granted that perspective is a
}\textrm{\textit{vera scientia}}\textrm{. Alberti mentions but does not give a proof for his
}\textrm{\textit{costruzione legittima}}\textrm{. In his article, Elkins [9] presents an annotated (incomplete) proof
of Alberti's }\textrm{\textit{costruzione,}}\textrm{ taken from two propositions of Piero's
}\textrm{\textit{De prospectiva pingendi}}\textrm{. }}}
{\selectlanguage{english}
\textit{Step 1. \ Design the projections of the ``orthogonal'' straight lines of the floor on the painting
}${\pi}$\textit{. }\ This step can be done formally as explained in Section 3. It can be practically performed as
follows: it is enough to join the vanishing point with each intersection of an orthogonal line of the floor with the
base of the painting (figure 14).}
\includegraphics[width=5.5134in,height=2.7654in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img014.png}
{\centering\selectlanguage{english}
Figure 14. Leon Battista Alberti, \textit{Of Painting} \textit{in three books}, `Book I', in [2]\textit{.}
\par}
\bigskip
{\selectlanguage{english}
\textit{Step 2. Design the heights of the projections of the ``parallel'' straight lines of the floor} \textit{on the
painting }$\pi $\textit{. } The distance of the point of view of the painter from the vanishing point comes into play
in this step. Consider the system painter-painting-floor as seen by someone on the right, standing on the plane of the
painting $\pi $. Figure 15 shows how to construct these heights.}
\includegraphics[width=5.3736in,height=2.5827in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img015.png}
{\centering\selectlanguage{english}
Figure 15
\par}
\bigskip
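{\selectlanguage{english}
In modern notation (ours, not Alberti's), Step 2 amounts to a single formula. Assume the eye is at height $H$ above the floor and at distance $D$ from the plane of the painting, and let the tiles have side $s$. By similar triangles (the same Thales-style argument used in Section 5 below), the transversal bounding the $n$-th row of tiles must be drawn at height}
\[
h_n=\frac{nsH}{D+ns},\qquad n=1,2,3,\dots
\]
{\selectlanguage{english}
above the base of the painting. Note that $h_n$ tends to $H$ as $n$ grows: the rows of tiles accumulate, as they should, at the horizon, which lies at the height of the vanishing point.}
\bigskip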
{\selectlanguage{english}
\textit{Step 3. Put together steps 1 and 2, and design the projection of the entire square-tile ground floor} \textit{on
the painting }$\pi $\textit{. } As shown in figure 16, it is enough to add to the painting obtained in Step 1 a
horizontal line at each of the heights constructed in Step 2.}
\includegraphics[width=5.6437in,height=2.3736in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img016.jpg}
{\centering\selectlanguage{english}
Figure 16
\par}
\bigskip
{\selectlanguage{english}
As the reader can see, the algorithmic construction illustrated by Leon Battista Alberti is very simple and does not
require any knowledge of sophisticated mathematical theories: this is precisely what made it so innovative at the time.}
{\selectlanguage{english}
Alberti, in his treatise \textit{De pictura praestantissima }[1]\textit{,} gives several other interesting and useful techniques for the
painters of his age, some of which are applications of the \textit{costruzione legittima}. The process that we describe below
was meant to help the painter to identify the appropriate size of figures at different places on the square-tile floor,
exactly what Giotto would have needed in order to represent correctly the figure of San Francesco in the fresco we
described in Section 2. }
{\selectlanguage{english}
Note that the decision to place the canvas on the floor implies that only objects and figures placed on the floor
along the base of the painting are represented in 1-1 scale. One can then use the projection of the side of a
square tile parallel to the painting as a unit to give the measures of any object that is placed in the painting
precisely on this side (figure 17).}
\bigskip
{\centering
\includegraphics[width=5.10in,height=3.20in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img017.png}
\par}
{\centering\selectlanguage{english}
Figure 17. Leon Battista Alberti, \textit{Of Painting} \textit{in three books}, `Book II', in [2]\textit{.}
\par}
\bigskip
{\selectlanguage{english}
We now show how to use this representation to calculate the distance of the eye of the painter from the painting (figure
18). Note that a horizontal straight line $L$ exiting from the eye and making an angle of 45 degrees with the plane of
the painting is parallel to one of the diagonals of the square tiles. Therefore the line $L$, and all of the parallel
diagonals, encounter the horizon of the painting at the same point $A$ (see Section 3). Hence, by extending
a diagonal of a tile in the painting until it encounters the horizon, one can find the point $A$. And now, since
the triangle with vertices the eye, the vanishing point, and the point $A$ is isosceles (its two acute angles are equal
to 45 degrees), the distance between $A$ and the vanishing point is equal to the distance of the eye from
the painting. Therefore, by using the side of a tile as a unit of measure, one can find the desired distance.}
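{\selectlanguage{english}
In modern notation (ours): let $V$ be the vanishing point, so that the segment joining the eye $O$ to $V$ is orthogonal to the plane of the painting. Since the line $L$ makes an angle of 45 degrees with that plane, it makes an angle of 45 degrees with the segment $OV$ as well, so that the triangle $OVA$ has a right angle at $V$ and an angle of 45 degrees at $O$, whence}
\[
|VA|=|OV|\tan 45^{\circ}=|OV|=D.
\]
{\selectlanguage{english}
This is precisely the recipe stated above: the distance of the eye from the painting can be read off, in tile-side units, as the distance between the vanishing point and the point $A$.}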
\vskip .5cm
{\centering
\includegraphics[width=5.4957in,height=3.6264in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img018.png}
\par}
{\centering\selectlanguage{english}
Figure 18. Leon Battista Alberti, \textit{Of Painting} \textit{in three books}, `Book I', in [2]\textit{.}
\par}
\bigskip
{\selectlanguage{english}
But Alberti, faithful to his promise to provide a practical manual and not a mathematical one, simply says to the
painter: \ extend one of the diagonals of a tile of the floor until it encounters the horizon of the painting at
$A$. Measure the distance between $A$ and the vanishing point. That distance is equal to the distance of
the eye of the painter from the painting itself. }
{\selectlanguage{english}
\section{5. Reconstructing a scene from a painting.}
{\selectlanguage{english}
As we have shown in the previous sections, the entire purpose of perspective is to take a three-dimensional scene, and
translate it into a two-dimensional image (the painting) in a way that would fool the viewer into believing he is
actually looking at the original scene. }
{\selectlanguage{english}
But one could ask the inverse question. Can we reconstruct a real life scene by just looking at its painting?
We immediately suspect that the answer must be negative. In fact it is clear that when we go from three dimensions down
to two dimensions we must lose some piece of information: the simplest way to convince ourselves of this consists in
closing one eye and starting to walk around. It quickly becomes apparent that a single eye provides so little depth
perception that some common chores become difficult. This is essentially because the eye acts like a projection
mechanism, and the image of a three-dimensional object is represented there as a flat picture on the
retina. To remedy this difficulty, most animals have developed a system with two eyes.}
{\selectlanguage{english}
We could say that a properly designed painting is like a 2D compression of the data of a 3D scene. And, at least in
special situations, the originating scene can be appropriately reconstructed.}
{\selectlanguage{english}
The first situation in which a reconstruction is possible is the one in which a painting has a floor. If this is the
case, and if one knows both the distance $D$ of the point of view O from the plane of the painting ${\pi}$, and the
height H of the point of view from the floor, then all the vertical figures and objects that are standing on the floor
can be well placed in 3D. This is made clear by the following Thales-style\footnote{\textrm{ By this we mean a drawing
that utilizes a theorem that is often referred to as Thales' Theorem, namely an important result in elementary geometry
about the ratios of different line segments that arise if two intersecting lines are intercepted by two parallel
lines.}} drawings (figures 19, 20):}
\includegraphics[width=5.20in,height=2.75in]{Figura19.pdf}
{\centering\selectlanguage{english}
Figure 19
\par}
\bigskip
\includegraphics[width=5.20in,height=2.75in]{Figura20.pdf}
{\centering\selectlanguage{english}
Figure 20
\par}
\bigskip
{\selectlanguage{english}
For those objects and figures that touch the floor, reconstruction is very easy: one has only to project from the point
of view through the painting until reaching the ground on the other side of the painting. The height of each vertical
figure or object then becomes clear.}
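{\selectlanguage{english}
The Thales-style argument can be made explicit (a sketch, in our notation). A point painted at height $y<H$ above the base of the painting corresponds, on the real floor, to a point at depth}
\[
d=\frac{Dy}{H-y}
\]
{\selectlanguage{english}
behind the painting (equivalently, $y=Hd/(D+d)$, which is the formula underlying Alberti's construction of Section 4). Moreover, a vertical segment standing on the floor at depth $d$ and painted with height $\ell$ has real height $\ell\,(D+d)/D$. Thus, once $H$ and $D$ are known, every figure standing on the floor can indeed be placed and measured in 3D.}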
{\selectlanguage{english}
But how can one recover the measures of $H$ and $D$, which are clearly fundamental for the reconstruction?
These key measures belong to the world that was external to the painting at the time it was painted: after several
centuries, the external world and the participating characters have all disappeared. The only hope is to find pieces of
information concerning that world encoded inside the painting. It is as if we needed to enter the painting, in a new
Mary Poppins-style walk. }
{\selectlanguage{english}
Let us now see how this was done in a specific case [16], for Piero della Francesca's \textit{Flagellazione} (figure
21), a first example (we should say \textit{the} example) of the mathematically well-constructed theory of perspective
contained in his \textit{De prospectiva pingendi}\footnote{ \textrm{In the extensive literature dedicated to the
}\textrm{\textit{Flagellazione }}\textrm{the article of Wittkower and Carter [27] offers an analysis made with
particular attention to the technical aspects of the perspective and to the historical language used. In this essay the
authors also trace the influence of the painter's perspective on real architecture.}\par }. }
\bigskip
{\centering
\includegraphics[width=5.8835in,height=4.178in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img021.jpg}
\par}
{\centering\selectlanguage{english}
\foreignlanguage{italian}{Figure 21. Piero della Francesca, }\foreignlanguage{italian}{\textit{Flagellazione di Cristo,
}}\foreignlanguage{italian}{(1444-1470)}\foreignlanguage{italian}{\textit{, }}\foreignlanguage{italian}{tempera on
wood, Galleria Nazionale delle Marche, Urbino.}
\par}
\bigskip
{\selectlanguage{english}
First one notes that there are several human figures in the painting, whose knees all touch the line of the horizon
(figure 22); since, in the early Renaissance and as we have mentioned before when discussing Alberti, the height of the
painted human figure was rigidly fixed to be three ``braccia fiorentine'', one immediately deduces that the height of
the knees, of the horizon, of the vanishing point $V$, and finally of the point of view $O$ of the painter turns out to be
approximately 60 cm. }
{\centering \includegraphics[width=5.6173in,height=4.061in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img022.png}
\par}
{\centering\selectlanguage{english} Figure 22 \par}
\bigskip
{The determination of the distance $D$ is even more challenging. Here is how it was performed in the case of the \textit{Flagellazione}.
As shown in figure 23, there is exactly one straight line $L$ exiting from the point of view $O$ (the eye of the painter),
intersecting the horizon of the painting in a point $PD$ (the distance point, see Section 3) on the right side of the
vanishing point $PV$, and such that the triangle formed by the eye $O$, the vanishing point $PV$, and the point $PD$
is isosceles and right-angled at $PV$.
As we have seen in Section 3, all straight lines of the space that are parallel to
$L$, when represented in the painting, will have the same vanishing point $PD$. Therefore, if we find a line $K$ in the
painting ${\pi}$ that is the projection of a straight line of the space parallel to $L$, then we can solve the problem:
we intersect the extension of $K$ with the horizon and find the point $PD$, and then we try to figure out the ``real''
distance between $PV$ and $PD$, which will be the real distance between the eye $O$ of the painter and the painting ${\pi}$.}
{\selectlanguage{english}
Note that the floor of the \textit{Flagellazione} is made of rectangular tiles, whose real sides are horizontal and,
respectively, parallel and orthogonal to the painting. If the tiles of the floor were square, then one of the diagonals of
a tile would be a possible line $K$ of the kind we are searching for.}
{\selectlanguage{english}
Furthermore, we see a decoration of the floor, inscribed in a rectangular tile near the column with the Christ,
rendered as an ellipse in the painting. Of course this decoration could in reality be either an ellipse or a
circle.\footnote{\textrm{ This is due to the fact that the circle and the ellipse are two possible sections of the visual
cone that has vertex in the eye. In reality, and this goes beyond the purpose of this article, projective geometry
gives us all the tools to describe the way in which circles are transformed when we project them from the ceiling-floor
system to the painting. }} If it were a circle, then we could deduce that the tile is a square, and that its diagonal
is a possible line $K$. Then we could intersect its extension with the horizon and find $PD$, which will in turn suggest
the distance between $PV$ and $PD$, and hence the distance between the eye of the painter $O$ and the painting ${\pi}$. }
{\selectlanguage{english}
But now art history comes to our aid. It appears that in the late 1400's no elliptical decorations were used in floors
of tiles, and hence it can now be demonstrated that the distance between the point of view $O$ (the eye of the painter)
and the painting is approximately 145 cm, [16]. }
{\centering \includegraphics[width=4.750in,height=4.00in]{Figura23.pdf}
\par}
{\centering \selectlanguage{english}
Figure 23
\par}
\bigskip
{\selectlanguage{english}
We conclude this section with a comment on the use of perspective not simply to represent reality, but to attribute
additional meanings to it. }
{\selectlanguage{english}
We believe that the mathematical analysis and reconstruction of the three dimensional scene of Piero's
\textit{Flagellazione} presented above could add a technical contribution to the historical-iconological one, in
connection with the hermeneutical problem that has fueled the main interpretations of this painting proposed in the last
fifty years.\footnote{\textrm{ For a synthesis of the debate, see e.g., [13, p. 54 and ff].}} \ }
{\selectlanguage{english}
The identification of the figures of the painting, and in particular of the three that appear in the foreground,
relies upon such scant documentation that the iconological enigma hidden in the \textit{Flagellazione} seems to
remain unsolved. In particular, the identification of the figure who appears on the right-hand\footnote{\textrm{
As a curiosity we point out that the painting that is reproduced in [14] is actually a specular image of the actual
painting, a minor mistake that does not alter the interest of the article.}} side of the painting in the blue brocade
tunic (likely a member of the noble Montefeltro family and possibly the patron of the painting) remains
uncertain, [13, p. 62 and ff].}
{\selectlanguage{english}
The representation of patrons is not a surprising fact (akin to the naming of buildings that we see on campuses around
the world), but it is often somewhat unrelated to the painting itself. As an example, we can remind the reader of the
Scrovegni Chapel in Padua, where Giotto depicts the patron (Enrico Scrovegni) in the act of donating the chapel to the
Holy Virgin (figure 24). }
\bigskip
{\centering \includegraphics[width=3.0083in,height=3.348in]{geometria20proiettiva20e20pittura20davvero20finalissima20mattino20della20crociera-img024.jpg}
\par}
{\centering \selectlanguage{english}
\foreignlanguage{italian}{Figure 24. Giotto, }\foreignlanguage{italian}{\textit{Last
Judgment}}\foreignlanguage{italian}{, ($\sim$1305), detail: Enrico Scrovegni offering the model of the chapel to the
Madonna, fresco, Cappella degli Scrovegni, Padova. }
\par}
\bigskip
{\selectlanguage{english}
In Piero's case, however, we believe we can read an attempt to place the patron exactly at the scene through the use of
the perspective technique. Indeed, the fact that the level of the knees of the three gentlemen on the right is the same
as the level of the knees of Jesus and his torturers indicates very specifically that the two sides of the picture were
rendered by the painter as if they were taking place in the same place and at the same time. We see, therefore,
perspective used not simply as a geometrical device, but as a narrative instrument: the painter has made, here, a very
specific choice to insert the contemporary figures in a way that places them within the context of the historical
event.}
{\selectlanguage{english}
\section{6. Conclusions}
{\selectlanguage{english}
As we indicated in the introduction, this article is dedicated to the linear aspects of perspective. We have used the
desire of Renaissance painters to faithfully represent tables, ceiling beams, and square floor decorations, to create
the new object that mathematicians call the projective plane. This object (which will represent the floor and the
ceiling) is nothing but the old plane, to which one must add new (improper) points to represent the vanishing points
that the eye sees in a scene, as well as a new line, which is the line to which all improper points belong, and that is
represented as the horizon. But the story of perspective and projective geometry does not end here. The next natural
step, at least for a mathematician, is to study second degree equations, such as circles, ellipses, and other conic
sections. From the point of view of the painter, this is also an urgent matter, as it relates to the representation of
important everyday objects such as plates, carriage wheels, and windows, as well as not so everyday objects (yet very
important in religious paintings) such as halos. Projective geometry, with ideas that go back to the ancient Greek
mathematicians, provides a beautiful and very elegant solution to this problem, but this will be the object of a
subsequent article. }
\bigskip
\bigskip
{\selectlanguage{english}
\foreignlanguage{italian}{\textbf{\Large References}}}
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[1] Leon Battista Alberti, }\foreignlanguage{italian}{\textit{De pictura praestantissima, et
numquam satis laudata arte libri tres absolutissimi, Leonis Baptistae de Albertis viri in omni scientiarum genere, \&
praecipue mathematicarum disciplinarum doctissimi. }}\textit{Iam primum in lucem editi}, Westheimer, Basel, 1540.}
\bigskip
{\selectlanguage{english}
\noindent [2] Leon Battista Alberti, \textit{The Architecture {\dots} \ in ten books. Of Painting in three books. And Of Statuary
in one book. Translated into Italian by Cosimo Bartoli, and }\textit{into English by James Leoni, Architect.
Illustrated with seventy-five copper-plates, engraved by Mr. Picart,} Edward Owen, London, 1755.}
\bigskip
{\selectlanguage{english}
\noindent [3]\textit{ }Ibn al-Haytham,\textit{ The optics. Books 1-3 On direct vision, }translated with introduction and
commentary by A I Sabra, Warburg Institute, University of London, London, 1989.}
\bigskip
{\selectlanguage{english}
\noindent [4] Kirsti Andersen, \textit{The Geometry of an Art. The History of the Mathematical Theory of Perspective from Alberti
to Monge}, Springer, New York, 2007.}
\bigskip
{\selectlanguage{english}
\noindent [5] Hans Belting, \textit{Perspective: Arab Mathematics and Renaissance Western Art}, European Review 16, no. 2 (2008),
pp. 183-190.}
\bigskip
{\selectlanguage{english}
\noindent [6] Hans Belting, \textit{La double perspective. }\foreignlanguage{italian}{\textit{La science arabe et l'art de la
Renaissance}}\foreignlanguage{italian}{, La presse du reel/Presses universitaires de Lyon, Lyon, 2010.}}
\bigskip
{\selectlanguage{english}
\noindent [7] Hans Belting, \textit{The Double Perspective: Arab Mathematics and Renaissance Art}, Third Text 24, no. 5 (2010),
pp. 521-527.}
\bigskip
{\selectlanguage{english}
\noindent [8] Rudolf Bkouche, \textit{La naissance du projectif.} \textit{De la perspective à la géométrie projective}, in
\ Roshdi Rashed,~\textit{Mathématiques et Philosophie.~De l'Antiquité à l'Âge classique, }Paris,\textit{~}C.N.R.S.
Editions, 1991, pp. 239-285.}
\bigskip
{\selectlanguage{english}
\noindent [9] James Elkins, \textit{Piero della Francesca and the Renaissance. Proof of Linear Perspective,}
The Art Bulletin 69, no. 2 (1987), pp. 220-230.}
\bigskip
{\selectlanguage{english}
\noindent [10] J.V. Field, \textit{Alberti, the `Abacus' and Piero della Francesca's proof of perspective}, Renaissance Studies
11, no. 2 (1997), pp. 61-88.}
\bigskip
{\selectlanguage{english}
\noindent [11] J.V. Field, \foreignlanguage{english}{\textit{The Invention of Infinity. Mathematics and Art in the Renaissance}},
Oxford University Press, Oxford, New York, Tokyo, 1997.}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[12] J.V. Field, }\foreignlanguage{italian}{\textit{Piero della Francesca. }}\textit{A
Mathematician's Art}, Yale University Press, New Haven and London, 2005.}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[13] Carlo Ginzburg, }\foreignlanguage{italian}{\textit{Indagini su Piero. Il battesimo, il
ciclo di Arezzo, la Flagellazione di Urbino}}\foreignlanguage{italian}{. Nuova edizione, Einaudi, Torino, 1994.}}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[14] Martin Kemp, }\foreignlanguage{italian}{\textit{Piero's
perspective}}\foreignlanguage{italian}{, Nature}\foreignlanguage{italian}{\textit{ }}\foreignlanguage{italian}{390, no.
13 (1997), p. 128.}}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[15] Piero della Francesca, }\foreignlanguage{italian}{\textit{De prospectiva
pingendi}}\foreignlanguage{italian}{, MSS }\url{http://digilib.netribe.it/bdr01/Sezione.jsp?idSezione=50}}
{\selectlanguage{english}
electronic reproduction: }
{\selectlanguage{english}
\noindent \url{http://digilib.netribe.it/bdr01/visore/index.php?pidCollection=De-prospectiva-pingendi:889&v=-1&pidObject=De-prospectiva-pingendi:889&page=00}}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[16] Placido Longo, }\foreignlanguage{italian}{\textit{La «Flagellazione» di Piero della
Francesca fra Talete e Gauss}}\foreignlanguage{italian}{, Bollettino dell'Unione Matematica Italiana 8, no. 2 (1999),
pp. 121--144. }\url{http://www.bdim.eu/item?id=BUMI_1999_8_2A_2_121_0}}
\bigskip
{\selectlanguage{english}
\noindent [17] Lucretius.~\textit{On the Nature of Things, }Translated by~W. H. D. Rouse.~Revised by~Martin F. Smith, Harvard
University Press, Cambridge, MA, 1924.}
\bigskip
{\selectlanguage{english}
\noindent [18] Erwin Panofsky, \textit{Perspective as symbolic form}, Zone Books, New York - MIT Press, Cambridge Mass., 1991.}
\bigskip
{\selectlanguage{english}
\noindent [19] Herman Schüling, \textit{Geschichte der Linear-Perspektive im Lichte der Forschung von ca 1870-1970},
Universitatsbibliothek, Giessen, 1975.}
\bigskip
{\selectlanguage{english}
\noindent [20] Gerard Simon, \textit{Optique et perspective: Ptolémée, Alhazen, Alberti / Optics and perspective: Ptolemy,
Alhazen, Alberti}, Revue d'histoire des sciences 54, no. 3 (2001), pp. 325-350.}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[21] Luigi Vagnetti, `}\foreignlanguage{italian}{\textit{De naturali et artificiali
perspectiva':~bibliografia ragionata delle fonti teoriche e delle ricerche di storia della prospettiva. Contributo alla
formazione della conoscenza di un'idea razionale, nei suoi sviluppi da Euclide a Gaspard
Monge}}\foreignlanguage{italian}{, Edizione della Cattedra di composizione architettonica IA di Firenze e della L.E.F.,
1979.}}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[22] Giorgio Vasari, }\foreignlanguage{italian}{\textit{Le vite de' più eccellenti pittori,
scultori et architettori}}\foreignlanguage{italian}{, Lorenzo Torrentino, Firenze, 1550, 3 voll.}}
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{\ }\url{http://vasari.sns.it/vasari/consultazione/Vasari/ricerca.html}}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[23] Giorgio Vasari, }\foreignlanguage{italian}{\textit{Life of Leon Battista
Alberti}}\foreignlanguage{italian}{ in: Vasari's }\foreignlanguage{italian}{\textit{Lives of the
Artists}}\foreignlanguage{italian}{. }\url{http://members.efn.org/~acd/vite/VasariAlberti.html}}
\bigskip
{\selectlanguage{english}
\noindent [24]\textit{ }Giorgio Vasari, \textit{Lives of the Artists}, translated with an Introduction and Notes by Julia Conaway
Bondanella and Peter Bondanella, Oxford University Press, Oxford, 1991.}
\bigskip
{\selectlanguage{english}\color{black}
\noindent [25] Kim H. Veltman, \textit{Literature on Perspective. A Select Bibliography (1971-1984)}, Marburger Jahrbuch für
Kunstwissenschaft 21, (1986), pp. 185-207.}
\bigskip
{\selectlanguage{english}
\noindent \foreignlanguage{italian}{[26] Graziella Federici Vescovini, }\foreignlanguage{italian}{\textit{De la métaphysique de la
lumière à la physique de la lumière dans la perspective des XIIIe et XIVe siècles}}\foreignlanguage{italian}{, Revue
d'histoire des sciences 60, no. 1 (2007), pp. 101-118.}}
\bigskip
{\selectlanguage{english}
\noindent [27] R. Wittkower and B. A. R. Carter, \textit{The Perspective of Piero della Francesca's `Flagellation'}, Journal of
the Warburg and Courtauld Institutes 16, no. 3/4 (1953), pp. 292-302.}
\bigskip
\begin{center}
\bigskip
{\large
\noindent Dipartimento di Matematica e Informatica “U. Dini”, Universit\`a di Firenze,\\
Viale Morgagni 67/A, I-50134 Firenze, Italy. [email protected]
\bigskip
\noindent Istituto per la storia del pensiero filosofico e scientifico moderno C.N.R.\\
Area 3 - Bicocca Milano, via Cozzi, 53, 20125 Milano, Italy. [email protected]
\bigskip
\noindent Donald Bren Presidential Chair in Mathematics, Chapman University,\\
One University Drive, Orange, CA 92866, USA. [email protected]
}
\end{center}
\end{document}
\section{Introduction}
In the field of solid state physics, and in particular the physics of defects, the legacy of Ekkehart Kr\"oner, who died ten years ago at
the age of $81$, is invaluable. He actively published for $50$ years, mostly as a single author, on the physical
understanding of defective solids, but also on their mathematical structure. One could make a distinction between a first series of papers
\cite{KR55}-\cite{KR80}, where he constructs an original approach to the understanding of dislocations, and a later series \cite{KR90}-\cite{KR01}, where he raises questions while reporting new knowledge in the field.
Most of the theory can be found in the course \cite{KR80}, but since Kr\"oner also distilled many comments, ideas, and computations throughout his other
publications, the idea of writing the present tribute took shape. It is especially intended to commemorate the $10^{th}$
anniversary of his death, in order
not to reminisce (the author has no privileged relationship with Kr\"oner that would allow him to do so), but to shed light on Kr\"oner's ideas and show how
the author finds them rich enough to be disseminated, revisited and emphasized today.
It should be pointed out that Anthony \cite{ANT70a,ANT70b} is one of Kr\"oner's direct
students who also greatly contributed to the understanding of defect lines (in particular, disclinations). Since then, many contributions to the field
(nonlinear dislocations, dislocation motion, thermodynamics of defective crystals, etc.) have appeared, but surprisingly enough, only few of them cite Kr\"oner.
This is probably due to the lack of a real school following him, but also to scientific reasons: indeed, Kr\"oner's theory is formulated in
physical terms, but appeals to complex mathematical concepts, the combination of which is only rarely seen in the literature.
It should be emphasized that de Le\'{o}n, Epstein, Lazar, Maugin and co-authors \cite{LEONEP2,EPMAUG,BUCAEP,LAZMAUG} (cf. the well-documented survey
\cite{Maugin2003} and the references therein) have produced
significant results not only by following, but especially by completing the ideas of Kr\"oner.
So, the present paper is intended to (i) collect and show Kr\"oner's results in the light of a new presentation, (ii) describe the
non-Riemannian crystal and show how it can help to select appropriate deformation and
internal (thermodynamic) variables,
and (iii) participate in the debate around Kr\"oner's question: ``what are the dynamical variables of our theory?'' \cite{KR95}
It will be especially stressed that the crystal geometry and the physical laws governing defects are inseparable, as is the case in
Einstein's General Theory of Relativity. However, we entirely agree with Noll when he writes \cite{NOLL67} that ``the geometry [must be] the natural
outcome, not the first assumption, of the theory'' (i.e., as in the \textit{Continuous Distribution of Dislocation} (CDD)
theory of Bilby et al. \cite{BILBY}). Many of the geometrical tools and much of the mathematical theory required for a rigorous description of
the dislocated crystal geometry can be found in the landmark papers by Noll \cite{NOLL67} and Wang \cite{WANG67}; we also
point out a recent book on
Continuum Mechanics in that spirit \cite{EP2010}. The approach followed here and detailed in \cite{VGD2009}-\cite{VG2011} is nonetheless distinct
from the CDD theory.
Single crystals growing from the melt are considered where high temperature gradients are unavoidable and hence where point defects are present \cite{VGetal}.
Moreover, since there are no
internal boundaries, the defect lines can take in principle any orientation while forming either loops or lines ending at the crystal
boundary. However, for the purpose of simplicity in the exposition of the theory, we will consider a tridimensional crystal filled with a network of rectilinear parallel
disclinations and/or dislocations.
Particular to the chosen approach is the distinction between scales, where the macroscale is recovered from the mesoscale by a
homogenization process: the singularities (i.e., the defect lines) have been
erased and hence the densities of defects (dislocations and/or disclinations) are recovered by means of
smooth fields, which we will show to be responsible for
the curvature and torsion of the crystal's intrinsic geometry. Also, the density of point defects will be shown to be responsible for
the appearance of non-metric terms. In this
approach, only objective fields are considered
to describe defective matter: they are defined across scales although their physical meaning might differ. Moreover, no
elasto-plastic decomposition and no prescription of any reference configuration are required, and there is no assumption of
static equilibrium (vanishing stress divergence).
\section{Preliminary results at the continuum scale}\label{meso}
\begin{notation}
In this paper, scalars, vectors and tensors of any order are not typographically distinct symbols in the text. The tensor order is specified
when equations are written, since in this case only, the vector $v$ is written as $v_i$ (with one index), and the tensor $U$ as $U_{ij\cdots}$ with a
number of indices corresponding to its order.
\end{notation}
The present section focuses on the mesoscopic scale, where dislocations and disclinations are lines, and whose
characteristic length is some average distance between neighboring defects. The rest of the medium is a
continuum governed by linear elasticity. At time $t$, the body is referred to as ${\mathcal R}^{\star}(t)$ to represent any random
sample corresponding to a given crystal growth experiment. In the crystal domain $\Om$, the meso-scale physics will then be represented by a nowhere dense set of defect lines
which in $2D$ are parallel to each other.
\begin{definition}[$2D$ mesoscopic defect lines]\label{lines2D}
At the meso-scale, a $2D$ set of dislocations and/or disclinations $\Lr\subset\Om$ is a closed set of $\Om$
(this meaning the intersection with $\Om$ of a closed set of $\RR^3$) formed by a countable union of parallel lines
$L^{(i)}, i\in\mathcal{I}\subset\NN$, whose closure is itself a countable union of lines and where the linear
elastic strain is singular. In the sequel, these lines will be assumed as parallel to the $z$-axis.
\end{definition}
Since accumulation points (to be understood as clusters of parallel lines) might appear, the scale of matter
description of this section is called the continuum scale.
\subsection{Objective internal fields for the model description}
The present mesoscopic theory is developed from the sole linear elastic strain, which itself is
defined from the stress field (although the stress-strain relationship is not used in the sequel) and therefore is
an objective internal field.
\begin{assumption}[$2D$ mesoscopic elastic strain]\label{asstrain}
The linear strain $\mathcal{E}^\star$ is a given symmetric $L^1_{loc}(\Om)$ tensor such that $\partial_z\mathcal{E}^\star
=0$. Moreover, $\mathcal{E}^\star$ is assumed to be compatible on $\OmLr:=\Om\setminus\Lr$ in the sense that the incompatibility tensor defined by
\begin{eqnarray}
&&\hspace{-150pt}\mbox{\scriptsize{INCOMPATIBILITY:}}\hspace{45pt}\eta^\star_{kl}:=\epsilon_{kpm}\epsilon_{lqn}\partial_p\partial_q\mathcal{E}^\star_{mn},\label{eta}
\end{eqnarray}
where derivation is intended in the distribution sense, vanishes everywhere on $\OmLr$.
\end{assumption}
In the following definition generalizing the concept of rotation and displacement gradients to dislocated media,
the strain is considered as a distribution on $\Omega$
(i.e. as acting on $\mathcal{C}^1_c$ test-functions with compact support).
\begin{definition}[Frank and Burgers tensors]\label{FBtensors}
\begin{eqnarray}
&&\hspace{-118pt}\mbox{\scriptsize{FRANK TENSOR:}}
\hspace{45pt}\overline\partial_m\omega_k^\star:=\epsilon_{kpq}\partial_p\mathcal{E}_{qm}^\star\label{delta_m_a}\\
&&\hspace{-118pt}\mbox{\scriptsize{BURGERS TENSOR:}}
\hspace{41pt}\overline\partial_l b^\star_k:=\mathcal{E}^\star_{kl}+\epsilon_{kpq}(x_p-x_{0p})\overline\partial_l
\omega^\star_q,\label{delta_lb_i}
\end{eqnarray}
where $x_0$ is a point where displacement and rotation are given.
\end{definition}
Line integration of the Frank and Burgers tensors in $\OmLr$ (i.e., outside the defect set) provides the
multivalued rotation and Burgers vector fields $\omega^\star$ and $b^\star$. These properties are summarized in the following
theorem, whose proof is classical.
\begin{theorem}[Multiple-valued displacement field]\label{MVdispl}
From a symmetric smooth linear strain $\mathcal{E}^{\star}_{ij}$ on $\Om_\Lr$ and a point $x_0$ where displacement
and rotation are given, a multivalued displacement field $u^\star_i$ can be constructed on $\Om_\Lr$ such that the symmetric part of the distortion $\partial_ju^\star_i$ is the single-valued strain tensor
$\mathcal{E}^\star_{ij}$ while its skew-symmetric part is the multivalued rotation tensor
$\omega^\star_{ij}:=-\epsilon_{ijk}\omega^\star_k$. Moreover, inside $\OmLr$ the gradient $\partial_j$ of
the rotation and Burgers fields $\omega^\star_k$ and $b^\star_k=u^\star_k-\epsilon_{klm}\omega^\star_l(x_m-x_{0m})$
coincides with the Frank and Burgers tensors.
\end{theorem}
From this result, the Frank and Burgers vectors can be defined as invariants of any isolated defect line $L^{(i)}$ of
$\Lr$.
\begin{definition}[Frank and Burgers vectors]\label{Burgers}
The Frank vector of the isolated defect line $L^{(i)}$ is the invariant
\begin{eqnarray}
\Om^{\star(i)}_k:=[\omega^\star_k]^{(i)},\label{frank}
\end{eqnarray}
while its Burgers vector is the invariant
\begin{eqnarray}
B^{\star(i)}_k:=[b^\star_k]^{(i)}=[u_k^\star]^{(i)}(x)-\epsilon_{klm}\Om^{\star(i)}_l(x_m-x_{0m}),\label{burgers}
\end{eqnarray}
with $[\omega^\star_k]^{(i)},[b^\star_k]^{(i)}$ and $[u_k^\star]^{(i)}$ denoting the jumps of $\omega^\star_k,b^\star_k$
and $u_k^\star$ around $L^{(i)}$.
\end{definition}
The three types of $2D$ defects are the screw and edge dislocations, and the wedge disclination.
As an example, the distributional strain and Frank tensor of an isolated screw dislocation (see \cite{VGD2009} for the
other two) are given by the following
$L^1_{loc}(\Om)$ symmetric tensor and Radon measure \cite{AFP2000}:
\begin{eqnarray}
[\mathcal{E}^\star_{ij}]&=&\frac{-B^\star_z}{4\pi r^2}\left[ \begin{array}{ccc} 0& 0 & y\\ 0 & 0 & -x \\ y & -x & 0
\end{array} \right]\nonumber\\
{[\overline\partial_m\omega^\star_k]}&=&
\frac{-B^\star_z}{4\pi r^{2}}\left[\begin{array}{ccc} \cos2\theta & \sin2\theta & 0 \\ \sin2\theta &-\cos2\theta & 0 \\ 0 & 0 & 0
\end{array}\right]+
\frac{B^\star_z}{4}\left[\begin{array}{ccc} -\delta_L & 0 & 0 \\ 0 &-\delta_L & 0 \\ 0 & 0 & 2\delta_L
\end{array}\right].\nonumber
\end{eqnarray}
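These expressions can be checked symbolically away from the line. The following short script (a sketch relying on the computer algebra package SymPy; the notation mirrors (\ref{delta_m_a})) verifies that the smooth part of the Frank tensor above is indeed obtained from the given strain; the part concentrated in $\delta_L$ arises only distributionally, and is of course not seen by this pointwise computation.
\begin{verbatim}
import sympy as sp

x, y, z, Bz = sp.symbols('x y z B_z', real=True)
X = [x, y, z]
r2 = x**2 + y**2
eps = sp.LeviCivita

# Smooth part of the strain of a screw dislocation along the z-axis
E = -Bz / (4 * sp.pi * r2) * sp.Matrix([[0, 0,  y],
                                        [0, 0, -x],
                                        [y, -x, 0]])

# Frank tensor: dbar_m omega_k = eps_{kpq} d_p E_{qm}
frank = sp.Matrix(3, 3, lambda m, k:
                  sum(eps(k, p, q) * sp.diff(E[q, m], X[p])
                      for p in range(3) for q in range(3)))

# Expected smooth part, with cos(2 theta) = (x^2-y^2)/r^2
# and sin(2 theta) = 2xy/r^2
c2, s2 = (x**2 - y**2) / r2, 2 * x * y / r2
expected = -Bz / (4 * sp.pi * r2) * sp.Matrix([[c2,  s2, 0],
                                               [s2, -c2, 0],
                                               [0,   0,  0]])

print(sp.simplify(frank - expected) == sp.zeros(3, 3))  # True
\end{verbatim}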
Consider a countable set of lines $\Lr$ and remark that
the present distributional approach is subtle in the sense that the physical condition that $\displaystyle\sum_{L^{(i)}\in\Lr}|B^{\star(i)}_z|$
be bounded is needed
in order for $\displaystyle\sum_{L^{(i)}\in\Lr}\overline\partial_m\omega^{\star(i)}_k$ to still be a Radon measure \cite{VGD2009}.
Besides their relationship with the multivalued rotation, Burgers and displacement fields, the Frank and Burgers
tensors can be directly related to the strain incompatibility by use of (\ref{eta}), (\ref{delta_m_a}) \&
(\ref{delta_lb_i}).
\begin{theorem}
The distributional curls of the Frank and Burgers tensors are
\begin{eqnarray}
\epsilon_{ilj}\partial_l\overline\partial_j\omega^\star_k&=&\eta^\star_{ik}\label{sup1}\\
\epsilon_{ilj}\partial_l\overline\partial_jb^\star_k&=&\epsilon_{kpq}(x_p-x_{0p})\eta^\star_{iq},\label{sup2}
\end{eqnarray}
with $\eta^\star_{ik}$ the incompatibility tensor.
\end{theorem}
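For the reader's convenience, we note that (\ref{sup1}) follows from Definition \ref{FBtensors} by mere substitution: using (\ref{delta_m_a}) and the symmetry of $\mathcal{E}^\star$,
\begin{eqnarray}
\epsilon_{ilj}\partial_l\overline\partial_j\omega^\star_k=\epsilon_{ilj}\epsilon_{kpq}\partial_l\partial_p\mathcal{E}^\star_{qj}=\eta^\star_{ik},\nonumber
\end{eqnarray}
which is (\ref{eta}) up to a renaming of the summation indices; (\ref{sup2}) then follows from (\ref{sup1}), the Leibniz rule applied to (\ref{delta_lb_i}), and the identity
$\epsilon_{ilj}\partial_l\mathcal{E}^\star_{kj}+\epsilon_{kpq}\epsilon_{ipj}\overline\partial_j\omega^\star_q=0$.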
From this theorem it follows that single-valued rotation and Burgers fields $\omega^\star$ and $b^\star$ can
be integrated on $\Om$ if the incompatibility tensor vanishes.
\nl
To complete the model two other objective internal fields are introduced: the dislocation
and disclination densities.
\begin{definition}[Defect densities]
\begin{eqnarray}
&&\hspace{-58pt}\mbox{\scriptsize{DISCLINATION DENSITY:}}\hspace{35pt}\Theta^\star_{ij}:=
\sum_{k\in\mathcal{I}\subset\NN}\Om^{\star (k)}_j\tau_i^{(k)}\delta_{L^{(k)}}\ (i,j=1\cdots 3)\label{disclindens1}\\
&&\hspace{-58pt}\mbox{\scriptsize{DISLOCATION DENSITY:}}\hspace{38pt}\Lambda^\star_{ij}:=
\displaystyle\sum_{k\in\mathcal{I}\subset\NN}B^{\star (k)}_j\tau_i^{(k)}\delta_{L^{(k)}}\ (i,j=1\cdots 3),\label{dislocdens2}
\label{dislocdens4}
\end{eqnarray}
where $\delta_{L^{(k)}}$ is used to represent the one-dimensional Hausdorff measure density \cite{AFP2000}
concentrated on the rectifiable arc $L^{(k)}$ with the tangent vector $\tau_i^{(k)}$ defined almost everywhere
on $L^{(k)}$, while $\Om^{\star (k)}_j$ and $B^{\star (k)}_j$ denote the Frank and Burgers vectors of $L^{(k)}$,
respectively.
\end{definition}
\subsection{Kr\"oner's formula}
In this paper, only a simplified $2D$ mesoscopic distribution of defects in a tridimensional crystal is considered. Accordingly, the vectors $\eta^\star_k,\Theta^\star_k$ and $\Lambda^\star_k$ denote the tensor components $\eta^\star_{zk},\Theta^\star_{zk}$
and $\Lambda^\star_{zk}$.
Greek indices are used to denote the values $1,2$ (instead of the Latin indices used in $3D$ to denote the values
$1,2$ or $3$). Moreover, $\epsilon_{\alpha\beta}$ denotes the permutation symbol $\epsilon_{z\alpha\beta}$.
The contortion as introduced by Kondo \cite{KONDO52}, Nye \cite{NYE53} and Bilby et al. \cite{BILBY} will prove to be a crucial defect density tensor.
Kr\"oner \cite{KR80} understood the importance of this object in terms of modelling.
\begin{definition}[$2D$ mesoscopic contortion]
\begin{eqnarray}
&&\hspace{-72pt}\mbox{\scriptsize{CONTORTION:}}\hspace{87pt}\kappa_{ij}^\star:=\delta_{iz}\alpha^\star_j
-\frac{1}{2}\alpha^\star_z\delta_{ij}\quad(i,j=1\cdots 3),\label{KR2}
\end{eqnarray}
where
\begin{eqnarray}
&&\hspace{68pt}\alpha^\star_j:=\Lambda^\star_j-\delta_{j\alpha}\epsilon_{\alpha\beta}\Theta^\star_z(x_\beta-x_{0\beta}).
\label{KR3}
\end{eqnarray}
\end{definition}
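As an illustration, consider again the isolated screw dislocation of the previous subsection: its Frank vector vanishes, so that $\Theta^\star_{ij}=0$, while $\Lambda^\star_{ij}=B^\star_z\delta_{iz}\delta_{jz}\delta_L$. Then (\ref{KR3}) reduces to $\alpha^\star_j=B^\star_z\delta_{jz}\delta_L$, and (\ref{KR2}) yields the diagonal contortion
\begin{eqnarray}
\kappa^\star_{ij}=\frac{B^\star_z\delta_L}{2}\left[\begin{array}{ccc} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1
\end{array}\right]_{ij}.\nonumber
\end{eqnarray}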
Among several equivalent formulations, the formula relating strain incompatibility to the defect densities has been
proposed in full generality by Kr\"oner \cite{KR80}, and proved for a countable set of $2D$ lines
(meaning that the subscript $i$ is replaced by $z$ in the formula) by Van Goethem \& Dupret \cite{VGD2009}:
\begin{eqnarray}
&&\hspace{-124pt}\mbox{\scriptsize{KR\"ONER'S FORMULA IN $2D$:}}\hspace{50pt}\eta_k^\star=\Theta_k^\star
+\epsilon_{\alpha\beta}\partial_\alpha\kappa^\star_{k\beta}.\label{etak}
\end{eqnarray}
For the expression of incompatibility for a set of skew isolated $3D$ lines, we refer to \cite{VG2010}, while general
$3D$ results can be found in \cite{VG2011}.
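Applied to the screw dislocation example above, for which $\Theta^\star_k=0$ and $\kappa^\star_{k\beta}=-\frac{1}{2}B^\star_z\delta_L\delta_{k\beta}$ ($\beta=1,2$), formula (\ref{etak}) gives
\begin{eqnarray}
\eta^\star_\beta=-\frac{B^\star_z}{2}\epsilon_{\alpha\beta}\partial_\alpha\delta_L\quad(\beta=1,2),\qquad\eta^\star_z=0,\nonumber
\end{eqnarray}
i.e. the incompatibility of a screw dislocation is a first-order distribution concentrated on the line; in particular it vanishes on $\OmLr$, in accordance with Assumption \ref{asstrain}.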
\section{Preliminary results at the macroscopic scale}
Following Kondo \cite{KONDO55}, by calling a crystal ``perfect'', it is meant that the atoms form, in its stress-free
configuration, a regular pattern proper to the prescribed nature of the matter. However, no real crystal is perfect: each is
rather filled with point and line defects which interact mutually. Each defect type is responsible for a
particular geometric property, as will be described in this paper. In order to reach this crystal, we first need
to provide a way of passing from the above scale to a scale where the fields have been smoothed.
\subsection{Homogenization}\label{sec_homo}
Homogenization is obtained from the continuum scale by a limit procedure which will not be detailed here (cf. \cite{VG2011}),
but whose
effect is to erase the singularities (isolated ones or those resulting from accumulation) and hence to
provide a smooth macroscopic crystal. Basically we postulate the following limits:
\begin{equation}
\Theta^\star,\Lambda^\star, \strain\rightarrow \Theta,\Lambda,
\mathcal{E},\label{homo}
\end{equation}
where $\Theta,\Lambda,\mathcal{E}$ belong to $\mathcal{C}^\infty(\Om)$ and where convergence is intended
in the sense of measures \cite{AFP2000}. The tensor $\mathcal{E}$ will be called macroscopic strain without
claiming however that $\mathcal{E}$ is the elastic strain (i.e. linearly related to the macroscopic stress $\sigma$).
As a consequence of law (\ref{homo}) we directly obtain from (\ref{eta}), (\ref{etak}) and straightforward
distribution properties:
\begin{eqnarray}
&&\hspace{-42pt}\mbox{\scriptsize{MACROSCOPIC KR\"ONER'S FORMULA:}}\hspace{35pt}\eta_k=\epsilon_{\alpha\beta}\partial_\alpha
\overline\partial_\beta
\omega_k=\Theta_k
+\epsilon_{\alpha\beta}\partial_\alpha\kappa_{k\beta},\label{etakmacro}
\end{eqnarray}
where by (\ref{delta_m_a}) and (\ref{KR2}), (\ref{KR3}),
\begin{eqnarray}
\mbox{\scriptsize{MACROSCOPIC FRANK TENSOR:}}\hspace{12pt}\overline\partial_m\omega_k:=\epsilon_{kpq}\partial_p\mathcal{E}_{qm}\hspace{115pt}\label{frankmacrotens}\\
\hspace{-25pt}\mbox{\scriptsize{MACROSCOPIC CONTORTION:}}\hspace{25pt}\kappa_{ij}=\delta_{iz}\left(\Lambda_j
-\delta_{j\alpha}\epsilon_{\alpha\beta}\Theta_z(x_\beta-x_{0\beta})\right)
-\frac{1}{2}\Lambda_z\delta_{ij}.
\label{dislocdens2d}
\end{eqnarray}
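Since all macroscopic fields are smooth, the first equality in (\ref{etakmacro}) is a pointwise identity and can be checked symbolically. Here is a minimal sketch (SymPy again; the $z$-independent polynomial strain is an arbitrary illustrative choice):
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
eps = sp.LeviCivita

# An arbitrary z-independent symmetric polynomial strain
E = sp.Matrix([[x**2 * y, x * y, y**3    ],
               [x * y,    x + y, x**2    ],
               [y**3,     x**2,  x * y**2]])

# Frank tensor: dbar_m omega_k = eps_{kpq} d_p E_{qm}
frank = sp.Matrix(3, 3, lambda m, k:
                  sum(eps(k, p, q) * sp.diff(E[q, m], X[p])
                      for p in range(3) for q in range(3)))

for k in range(3):
    # incompatibility eta_{zk} = eps_{zpm} eps_{kqn} d_p d_q E_{mn}
    eta_zk = sum(eps(2, p, m) * eps(k, q, n)
                 * sp.diff(E[m, n], X[p], X[q])
                 for p in range(3) for m in range(3)
                 for q in range(3) for n in range(3))
    # curl of the Frank tensor: eps_{z a b} d_a (dbar_b omega_k)
    curl_k = sum(eps(2, a, b) * sp.diff(frank[b, k], X[a])
                 for a in range(3) for b in range(3))
    print(k, sp.simplify(eta_zk - curl_k) == 0)  # True for k = 0, 1, 2
\end{verbatim}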
\subsection{External and internal observers}
The \textit{external observer} analyzes the crystal's actual configuration $\mathcal{R}(t)$ with the Euclidian metric $g^{ext}_{ij}=\delta_{ij}$.
The \textit{internal observer}, in turn, can only count atom steps while moving in $\mathcal{R}(t)$, and parallel-transport a vector along crystallographic
lines. According to Kr\"{o}ner \cite{KR90}: ``in our universe we are internal observers who do not possess the ability to realize external
actions on the universe, if there are such actions at all. Here we think of the possibility that the universe could be
deformed from outside by higher beings. A crystal, on the other hand, is an object which certainly can deform from
outside. We can also see the amount of deformation just by looking inside it, e.g., by means of an electron microscope.
Imagine some crystal being who has just the ability to recognize crystallographic directions and to count lattice steps
along them.
Such an \textit{internal observer} will not realize deformations from outside, and therefore will be in a situation
analogous to that of the physicist exploring the world. The physicist clearly has the status of an internal observer.''
\section{The macroscopic crystal}
At time $t$, the defective crystal is a tridimensional body denoted by $\mathcal{R}(t)$. The crystal
defectiveness is not countable anymore, as was the case in Section \ref{meso}, since the fields have been smoothed
by homogenization. However, defectiveness is recovered
by the natural embedding of the crystal into a specific geometry which will be described in the two following
sections.
\subsection{Macroscopic strain and contortion as key physical fields}
The macroscopic strain $\mathcal{E}$ and contortion $\kappa$ have been defined by homogenization in Section
\ref{sec_homo}. It turns out that the relevant physical fields are not the Frank and Burgers tensors but their completed counterparts
\cite{VGD2009}:
\begin{definition}\label{defmeso}
\begin{eqnarray}
\hspace{-45pt}\mbox{\scriptsize{COMPLETED FRANK TENSOR }}\hspace{38pt}\eth_j\omega_k
&:=&\overline\partial_j\omega_k-\kappa_{kj}\label{disclindens}\\
\hspace{-45pt}\mbox{\scriptsize{COMPLETED BURGERS TENSOR}}\hspace{33pt}\eth_j b_k
&:=&\mathcal{E}_{kj}
+\epsilon_{kpq}(x_p-x_{0p})\eth_j\omega_q.\label{dislocdens}
\end{eqnarray}
\end{definition}
The following result is a direct consequence of its mesoscopic counterpart:
\begin{theorem}\label{nouveautenseur1}
\begin{eqnarray}
\hspace{-42pt}\mbox{\scriptsize{MACROSCOPIC DISCLINATION DENSITY}}\hspace{65pt}
\Theta_{ik}=\epsilon_{ilj}\partial_l\eth_j\omega_k&&\label{disclindens2}\\
\hspace{-34pt}\mbox{\scriptsize{MACROSCOPIC DISLOCATION DENSITY}}\hspace{68pt}
\Lambda_{ik}=\epsilon_{ilj}\partial_l\eth_j b_k.&&\label{dislocdens2c}
\end{eqnarray}
\end{theorem}
Vectors $\eta_k,\Theta_k$ and $\Lambda_k$ denote the tensor components $\eta_{zk},\Theta_{zk}$
and $\Lambda_{zk}$. With the above definitions and results, the Frank and Burgers vectors are physical measures of defectiveness which are given
in terms of the strain and contortion tensors alone:
\begin{definition}
The Frank and Burgers vectors of a surface $S$ are defined as
\begin{eqnarray}
\hspace{-85pt}\mbox{\scriptsize{MACROSCOPIC FRANK VECTOR}}\hspace{49pt}\Om_k(S)&:=&\int_S\Theta_k dS\label{frankmacro}\\
\hspace{-95pt}\mbox{\scriptsize{MACROSCOPIC BURGERS VECTOR}}\hspace{39pt}B_k(S)&:=&\int_S\Lambda_k dS.\label{burgers1macro}
\end{eqnarray}
\end{definition}
As a consequence of (\ref{disclindens2}) and (\ref{frankmacro}) and Stokes theorem, the relation between the completed Frank tensor
and the rotation gradient appears clear. Moreover, it results from (\ref{dislocdens2c}) and (\ref{burgers1macro}) that
$\left(\eth b\right)_{jk}:=\eth_j b_k$ appears instead of the displacement gradient.
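Explicitly, Stokes' theorem turns the surface integrals (\ref{frankmacro}) \& (\ref{burgers1macro}) into boundary measurements:
\begin{eqnarray*}
\Om_k(S)=\int_S\epsilon_{\alpha\beta}\partial_\alpha\eth_\beta\omega_k\, dS=\oint_{\partial S}\eth_\beta\omega_k\, dx_\beta,
\qquad B_k(S)=\oint_{\partial S}\eth_\beta b_k\, dx_\beta.
\end{eqnarray*}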
In the crystal dislocation-free regions
(i.e. where the contortion vanishes), it results from the classical integral relation of infinitesimal elasticity
that the multiple-valued rotation and displacement fields read
\begin{eqnarray}
\omega_k&=&\omega_{0k}+\int_{x_0}^x\eth_m\omega_kd\xi_m
\label{rot}\\
u_i&=&u_{0i}-\epsilon_{ikl}\omega_k(x_l-x_{0l})
+\int_{x_0}^x\eth_mb_k(\xi)d\xi_m.\label{displ}
\end{eqnarray}
\subsection{Bravais metric and nonsymmetric connection as key geometrical objects}
A Riemannian metric is a smooth symmetric and positive definite tensor field $g_{ij}$.
From its symmetry property, there is a smooth transformation $a_i^j$ such that
$g_{ij}=a_i^ma_j^n\delta_{mn}$. The metric of the ``external observer'' on $\mathcal{R}(t)$ is the Euclidian metric $\delta_{ij}$. However,
as soon as the macroscopic strain $\mathcal{E}_{ij}$ is given, another Riemannian metric can be defined on
$\mathcal{R}(t)$, namely the
\begin{eqnarray}
\hspace{-84pt}\mbox{\scriptsize{BRAVAIS METRIC}}\hspace{115pt}g^B_{ij}=\delta_{ij}-2\mathcal{E}_{ij},\label{elastmetric1}
\end{eqnarray}
where the term ``Bravais'' (from the notion of Bravais crystal \cite{KR90}) is to recall that it does not have a purely elastic meaning.
The use of this metric on defect-free regions of $\mathcal{R}(t)$ implies the
existence of a one-to-one coordinate change between $\mathcal{R}(t)$ and $\mathcal{R}_0$,
whose deformation gradient writes as $a_{mi}=g_{mn}^Ba^{n}_i=\delta_{mi}-\partial_i u_m$ where $u_m$ denotes a
displacement-like field. Let us remark that since small displacements are considered, no distinction is to be made between upper and lower
indices.
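As a consistency check, since no distinction is made between upper and lower indices, inserting $a_{mi}=\delta_{mi}-\partial_i u_m$ into $g^B_{ij}=a^m_ia^m_j$ and discarding the quadratic terms recovers (\ref{elastmetric1}):
\begin{eqnarray*}
g^B_{ij}\simeq\delta_{ij}-\partial_i u_j-\partial_j u_i=\delta_{ij}-2\mathcal{E}_{ij},
\end{eqnarray*}
with $\mathcal{E}_{ij}=\frac{1}{2}\left(\partial_i u_j+\partial_j u_i\right)$ the linearized strain of the defect-free region.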
In the presence of defects, the following object (called the object ``of anholonomity'' \cite{Schouten})
$\Omega_{ijk}:=\partial_ka_{ji}-
\partial_ia_{jk}$ is directly related to the strain incompatibility and hence does not vanish as soon as
defects are present. This exactly signifies that there is no global system of coordinates $\{x^B_j(x_i)\}$ with a smooth
transformation matrix $a_{ji}=\partial_i x^B_j$. In fact, such a smooth $a_{ij}$ -- or,
equivalently, such a smooth displacement field -- exists only in the defect-free regions of the crystal.
Quoting Cartan \cite{CART}, ``the Riemannian space is for us an
ensemble of small pieces of Euclidean space, lying however to a certain degree amorphously'', while Kondo \cite{KONDO55}
suggests that
``the defective crystal is, by contrast [with respect to
the above by him given
definition of perfect crystal], an aggregation of an immense number of small pieces of perfect crystals
(i.e. small pieces of the defective crystal brought to their natural state in which the atoms are arranged on the regular positions of the perfect
crystal) that cannot be connected with one another so as to form a finite lump of perfect crystals as an organic unity.''
\nl
From the Bravais metric, we define the compatible symmetric Christoffel symbols
\begin{eqnarray}
\hspace{-22pt}\mbox{\scriptsize{BRAVAIS CHRISTOFFEL SYMBOLS}}\hspace{35pt}\Gamma_{k;ij}^B=\frac{1}{2}\left(\partial_ig_{kj}^B+\partial_jg_{ki}^B-\partial_kg_{ij}^B\right),\label{compatconn}
\end{eqnarray}
whose torsion $\Gamma_{k;[ij]}^B:=\Gamma_{k;ij}^B-\Gamma_{k;ji}^B$ vanishes, while its curvature
\begin{eqnarray}
\hspace{-25pt}\mbox{\scriptsize{BRAVAIS CURVATURE}}\hspace{50pt}R_{l;kmq}^B:=\left(\partial_q\Gamma_{l;km}^B+\tilde g^B_{np}\Gamma_{n;km}^B\Gamma_{p;lq}^B\right)_{[mq]}, \label{curv}
\end{eqnarray}
with $\tilde g^B_{np}=\delta_{np}+2\mathcal{E}_{np}$ the inverse of $g^B_{np}$ under the small strain assumption (indeed, $(\delta_{np}+2\mathcal{E}_{np})(\delta_{pm}-2\mathcal{E}_{pm})=\delta_{nm}+O(\mathcal{E}^2)$), and where symbol
$[\cdot]$ denotes the skew-symmetric index commutation operator (i.e., $A_{[mn]}=A_{mn}-A_{nm}$). In the terminology of Wang \cite{WANG67}
and Noll
\cite{NOLL67} a connection such that $R_{l;kmq}^B$ vanishes is called \textit{a material connection}, while for
such a connection $\Gamma_{k;[ij]}^B$ is denoted as \textit{the inhomogeneity tensor}.
\nl
Quoting Einstein, ``to take into account gravitation, we assume the existence of Riemannian metrics.
But in nature we also have electromagnetic fields, which cannot be described by Riemannian metrics.
The question arises: How can we add to our Riemannian spaces in a logically natural way an additional structure that
provides all this with a uniform character?''
In the present case, it is sufficient to replace gravitation by
strain, and electromagnetic fields by line defects, to paraphrase Einstein and raise the question of the appropriate
connection inside the defective crystal. To be complete we should add that in order for the theory of dislocations
to be closed, it should be combined with the theory of point defects which play a role at higher temperature (in the same way
as Maxwell theory has to be combined with the theory of weak interactions, see Kr\"oner \cite{KR95}).
The Bravais geodesics are those lines whose tangent vector $\tau_i$ is parallelly
transported, hence solutions to $\tau_j\nabla^B_j\tau_i=0$, where $\nabla^B$ is the covariant derivative
of $\Gamma^B$. It turns out that on these lines, the internal observer is not able to recognize any defect line.
Therefore, the above Bravais connection must be completed by a non-symmetric term.
The following geometric objects are introduced from the sole dislocation density
(or equivalently by (\ref{dislocdens}) \& (\ref{dislocdens2c}) from the sole strain and contortion tensors):
\begin{definition}\label{defgeom}
\begin{eqnarray}
\hspace{-125pt}\mbox{\scriptsize{DISLOCATION TORSION:}}\hspace{100pt} T_{k;ij}&:=&-\frac{1}{2}\epsilon_{ijp}\Lambda_{pk}\label{torsion}\\
\hspace{-105pt}\mbox{\scriptsize{CONNECTION CONTORTION:}}\hspace{74pt}\Delta\Gamma_{k;ij}&:=&T_{j;ik}+T_{i;jk}-T_{k;ji}\label{contortion}\\
\hspace{-28pt}\mbox{\scriptsize{NON SYMMETRIC CHRISTOFFEL SYMBOLS:}}\hspace{23pt}\Gamma_{k;ij}&:=&\Gamma_{k;ij}^B-\Delta\Gamma_{k;ij}.
\label{connexion}
\end{eqnarray}
\end{definition}
According to Noll
\cite{NOLL67} $\Delta\Gamma_{k;[ji]}$ is precisely the crystal \textit{inhomogeneity tensor} which will be shown in the following sections
to be directly related to the density of dislocations and disclinations.
\section{The macroscopic crystal as a non-Riemannian manifold}
By contrast with Kr\"oner's presentation, the present approach shows
geometrical objects as defined from homogenization of mesoscopic
measurable, objective physical fields (\ref{dislocdens2c}), (\ref{torsion}) \& (\ref{contortion})
whose identification with their physical macroscopic counterparts follows
as (proved) results.
\subsection{Physical and geometrical torsions and contortions}
\label{sec:contortion}
The following lemma is easy to prove from the definitions.
\begin{lemma}\label{propgeom}
The tensor $g_{ij}^B$ defines a Riemannian metric. The symmetric Christoffel symbols $\Gamma_{k;ij}^B$ define a
symmetric connection compatible with this metric, while $T_{k;ij}$ and $\Delta\Gamma_{k;ij}$ are skew-symmetric tensors
w.r.t. $i$ and $j$ and $i$ and $k$, respectively.
\end{lemma}
The following result makes the link between the internal motion of the observer by parallel transport and the deformation and defect internal
variables encountered so far.
\begin{theorem}[Physical and geometrical torsions]\label{gamma}
The Christoffel symbols $\Gamma_{k;ij}$ define a nonsymmetric connection compatible with $g_{ij}^B$ whose
torsion writes as $T_{k;ij}$.
\end{theorem}
\textbf{Proof.}
It is easy to verify \cite{Dubrovin} that $\Gamma_{k;ij}$ is a connection since $\Gamma_{k;ij}^B$ is a
connection and $\Delta\Gamma_{k;ij}$ is a tensor. Denoting by $\nabla_k$ (resp. $\nabla^B_k$) the covariant gradient
w.r.t. $\Gamma_{k;ij}$ (resp. $\Gamma_{k;ij}^B$), and recalling that a connection is compatible with the metric
$g_{ij}^B$ if the covariant gradient of $g_{ij}^B$ w.r.t. this connection vanishes, we find by (\ref{connexion}) that
\begin{eqnarray}
\nabla_{k}g_{ij}^B:&=&\partial_k g_{ij}^B-\Gamma_{l;ik}g_{lj}^B-\Gamma_{l;jk}g_{li}^B=\nabla^B_{k}g_{ij}^B+\Delta\Gamma_{l;ik}g_{lj}^B+\Delta\Gamma_{l;jk}g_{li}^B\label{tilde},
\end{eqnarray}
where in the RHS, the $1^{st}$ term vanishes by Lemma \ref{propgeom} while the $2^{nd}$ and $3^{rd}$
terms cancel each other since, at first order, $\Delta\Gamma_{l;jk}g_{li}^B=\Delta\Gamma_{i;jk}=-\Delta\Gamma_{j;ik}$. It results
that the connection torsion, i.e. the skew-symmetric part of $\Delta\Gamma_{j;ik}$ w.r.t. $i$ and $k$, writes as
\begin{eqnarray}
&&\hspace{-32pt}\frac{1}{2}\left(\Delta\Gamma_{j;ik}-\Delta\Gamma_{j;ki}\right)=-\frac{1}{2}\left(\Delta\Gamma_{i;jk}-\Delta\Gamma_{k;ji}\right)=\frac{1}{2}\bigl(\left(\Delta\Gamma_{k;ij}-\Delta\Gamma_{i;kj}\right)+\nonumber\\
&&\hspace{87pt}\left(\Delta\Gamma_{k;ji}-\Delta\Gamma_{k;ij}\right)-\left(\Delta\Gamma_{i;jk}-\Delta\Gamma_{i;kj}\right)
\bigr)\label{calcul}.
\end{eqnarray}
Observing that the $1^{st}$ term in the RHS of (\ref{calcul}) writes as $\Delta\Gamma_{k;ij}$ while, by
Definition \ref{defgeom} (Eq. (\ref{contortion})), the LHS and the two remaining terms of the RHS
of (\ref{calcul}) are equal to $T_{j;ik}, T_{k;ji}$ and $-T_{i;jk}$, respectively, the proof is complete.
{\hfill $\square$}
\begin{theorem}[Physical and geometrical contortions]\label{deltagamma}
The connection contortion tensor $\Delta\Gamma_{k;ij}$ writes in terms of the dislocation contortion $\kappa_{ij}$ as
\begin{eqnarray}
\Delta\Gamma_{k;ij}=\delta_{k\kappa}\left(\delta_{i\alpha}\delta_{j\beta}\epsilon_{\kappa\alpha}\kappa_{z\beta}\right)
+\delta_{i\alpha}\delta_{jz}\epsilon_{\alpha\tau}\kappa_{\tau\kappa}&+&\delta_{iz}\delta_{j\beta}\epsilon_{\beta\tau}
\kappa_{\tau\kappa}-\delta_{kz}\delta_{i\alpha}\delta_{j\beta}\epsilon_{\alpha\beta}\kappa_{zz}.\nonumber
\end{eqnarray}
\end{theorem}
\textbf{Proof.}
For $k=z$, by Definition \ref{defgeom}, the last statement of Lemma \ref{propgeom}, and (\ref{dislocdens2d}),
it is found that $\Delta\Gamma_{z;ij}=\Delta\Gamma_{z;\alpha\beta}\delta_{i\alpha}\delta_{j\beta}$,
with
\begin{eqnarray}
\Delta\Gamma_{z;\alpha\beta}=T_{z;\alpha\beta}=-\frac{1}{2}\epsilon_{\alpha\beta}\Lambda_z
=-\frac{1}{2}\epsilon_{\alpha\tau}\delta_{\tau\beta}\Lambda_z
=\epsilon_{\alpha\tau}\kappa_{\tau\beta}.\nonumber
\end{eqnarray}
For $k=\kappa$, by Definition \ref{defgeom} and the last statement of Lemma \ref{propgeom}, it is found that
\begin{eqnarray}
\Delta\Gamma_{\kappa;ij}=\delta_{i\alpha}\delta_{j\beta}\left(T_{\kappa;\alpha\beta}+T_{\beta;\alpha\kappa}
+T_{\alpha;\beta\kappa}\right)+\delta_{i\alpha}\delta_{jz}T_{z;\alpha\kappa}+\delta_{iz}\delta_{j\beta}T_{z;\beta\kappa},
\nonumber
\end{eqnarray}
with $T_{z;\xi\kappa}=\epsilon_{\xi\tau}\kappa_{\tau\kappa}$
and $\displaystyle T_{\xi;\tau\nu}=-\frac{1}{2}\epsilon_{\tau\nu}\Lambda_\xi$. Since the combination of the terms
in $\Theta_z$ vanishes in $\Delta\Gamma_{\kappa;ij}$, the proof is completed by
observing that $\epsilon_{\alpha\beta}\Lambda_\kappa+\epsilon_{\kappa\alpha}\Lambda_\beta=(\epsilon_{\alpha\kappa}
\epsilon_{\tau\nu})\epsilon_{\tau\beta}\Lambda_\nu=\epsilon_{\alpha\kappa}\Lambda_\beta=\epsilon_{\alpha\kappa}
\kappa_{z\beta}$.
{\hfill $\square$}
In conclusion, the non-Riemannian crystal is described from a physical viewpoint by $\mathcal{E}$
and $\kappa$, that is by $15$ degrees of freedom. From a geometrical viewpoint the $15$
unknowns are the $6$ components of the symmetric Bravais metric and the $9$ (by (\ref{torsion}) \& (\ref{contortion}))
nonvanishing components of the connection contortion.
\subsection{The Bravais crystal}
\label{sec:bravais}
The following definition introduces two differential forms whose path integrations generalize (\ref{rot})
\& (\ref{displ}) to the defective regions of the crystal.
\begin{definition}[Bravais forms]\label{formdiff}
\begin{eqnarray}
d\omega_j&:=&\eth_\beta\omega_j\,dx_\beta,\label{domega}\\
d\beta_{kl}&:=&-\Gamma_{l;k\beta}\,dx_\beta\label{dbeta}.
\end{eqnarray}
\end{definition}
In the literature the existence of an elastic macroscopic distortion field is generally postulated
together with the global distortion decomposition in elastic and plastic parts (for a rigorous justification
of the latter, see \cite{KR96}).
The present approach, however, makes it possible to avoid this a priori decomposition. Nevertheless, the following theorem introduces
rotation and distortion fields (which must not be identified with the rotation and distortion as related to the macroscopic strain)
in the absence of disclinations. As a consequence, and in contrast with the classical literature where it is basically postulated that
the dislocation density is the curl of the distortion, this relationship is here proved.
\begin{theorem}\label{elasticrotdist}
If the macroscopic disclination density vanishes, there exists rotation and distortion fields defined as
\begin{align}
&\hspace{-50pt}\mbox{\scriptsize{BRAVAIS ROTATION}}\ &\omega_j(x)&:=\omega_j^0+\int_{x_0}^x d\omega_j,\label{omega}\\
&\hspace{-50pt}\mbox{\scriptsize{BRAVAIS DISTORTION}}\ &\beta_{kl}(x)&:=\mathcal{E}_{kl}(x^0)-\epsilon_{klj}\omega^0_j
+\int_{x_0}^x d\beta_{kl},\label{beta}\\
&&&=\mathcal{E}_{kl}(x)-\epsilon_{klj}\omega_j(x),
\end{align}
where $\omega_j^0$ is arbitrary and the integration is made on any line with endpoints $x_0$ and $x$. Moreover,
\begin{eqnarray}
\partial_\alpha\beta_{k\beta}=\partial_\alpha\mathcal{E}_{k\beta}+\epsilon_{kp\beta}\eth_\alpha\omega_p\quad\mbox{and}
\quad
\epsilon_{\alpha\beta}\partial_\alpha\beta_{k\beta}=\Lambda_{zk}.\label{decompmacro}
\end{eqnarray}
\end{theorem}
\textbf{Proof.}
By Definition \ref{defgeom}, the symmetric part of the connection writes as
\begin{eqnarray}
-\Gamma_{(l;k)\beta}dx_\beta=-\frac{1}{2}\partial_\beta g_{kl}^B dx_\beta=-\frac{1}{2}\partial_m g_{kl}^B dx_m
=\partial_m \mathcal{E}_{kl}dx_m=d\mathcal{E}_{kl},\nonumber
\end{eqnarray}
while, by Definition \ref{defgeom} and Theorem \ref{deltagamma}, the skew-symmetric part writes as
\begin{eqnarray}
-\Gamma_{[l;k]\beta}&=&-\frac{1}{2}(\partial_k g_{l\beta}^B -\partial_l g_{k\beta}^B )+\Delta\Gamma_{l;k\beta}=
\partial_k\mathcal{E}_{l\beta}-\partial_l\mathcal{E}_{k\beta}+\Delta\Gamma_{l;k\beta}.\nonumber
\end{eqnarray}
Observing, by (\ref{disclindens}) and Definition \ref{formdiff} and Theorem \ref{deltagamma}, that
$\displaystyle d\omega_j=\eth_\beta\omega_j dx_\beta=-\frac{1}{2}\epsilon_{lkj}\Gamma_{[l;k]\beta}dx_\beta$,
it results that $d\beta_{kl}=d\mathcal{E}_{kl}-\epsilon_{klj}d\omega_j$.
Under the assumption of a vanishing macroscopic disclination density,
the existence of single-valued Bravais rotation and distortion fields follows from (\ref{domega}),
(\ref{disclindens2}) \& (\ref{frankmacro}). Moreover, since
$\partial_\alpha\beta_{k\beta}=\partial_\alpha\mathcal{E}_{k\beta}-\epsilon_{k\beta j}\eth_\alpha\omega_j$,
Eq. (\ref{decompmacro}) is satisfied by (\ref{disclindens2}) \& (\ref{dislocdens2c}).
{\hfill $\square$}
\begin{remark}
Eq. (\ref{omega}) indicates that symbol $\eth$ in (\ref{domega}) becomes a true derivation operator in the absence of disclinations.
\end{remark}
\begin{remark}\label{seulconn}
Referring to ``Bravais'' instead of ``elastic'' rotation and distortion fields is intended to highlight that these
quantities do not have a purely elastic meaning. In fact, the Bravais metric is not even needed, since the internal observer
only requires the prescription of the connection, and subsequent path integration of the forms:
\begin{eqnarray}
d\mathcal{E}_{kl}=-\Gamma_{(l;k)\beta}dx_\beta,\quad d\omega_j:=-\frac{1}{2}\epsilon_{lkj}\Gamma_{[l;k]\beta}dx_\beta\quad\mbox{and}\quad
d\beta_{kl}:=-\Gamma_{l;k\beta}dx_\beta\nonumber.
\end{eqnarray}
\end{remark}
\begin{remark}\label{remark5}
The Bravais distortion does not derive from any ``Bravais displacement'' in the presence of dislocations.
In fact, around a closed loop $C$, even if the disclination density vanishes, the differential of the displacement
$du_k:=\beta_{k\alpha}dx_\alpha$ verifies by Theorem \ref{elasticrotdist}:
\begin{eqnarray}
\int_C du_k=\int_S\epsilon_{\alpha\beta}\partial_\alpha\beta_{k\beta}dS
=\int_C \eth_\beta b_k dx_\beta=B_k(S)=\int_S\epsilon_{\alpha\beta}\partial_\alpha\eth_\beta b_k dS.
\end{eqnarray}
\end{remark}
\begin{remark}\label{remark4}
Theorem \ref{gamma} defines an operation of parallel displacement according to the Bravais lattice geometry.
The parallel displacement of any vector $v_i$ along a curve of tangent vector $dx^{(1)}_\alpha$ is such that
$dx^{(1)}_\alpha\nabla_\alpha v_i=0$ and hence that the components of $v_i$ vary according to the
law $d^{(1)}v_i=-\Gamma_{i;j\beta}v_j dx^{(1)}_\beta$ \cite{Dubrovin}. This shows the macroscopic Burgers
vector and dislocation density together with the Bravais rotation and distortion fields as reminiscences of the
defective crystal properties at the atomic, mesoscopic and continuum scales. In fact, if $dx^{(1)}_\nu, dx^{(2)}_\xi$ are two infinitesimal vectors
with the associated area $dS:=\epsilon_{\nu\xi}dx^{(1)}_\nu dx^{(2)}_\xi$, it results from Eq. (\ref{torsion}) and the skew symmetry of $\Gamma_{k;\alpha\beta}$ that, in the absence of disclinations,
\begin{eqnarray}
dB_k=\Lambda_{zk} dS
=-\epsilon_{\alpha\beta}\Gamma_{k;\beta\alpha}dS
=-\Gamma_{k;\beta\alpha}(dx^{(1)}_\alpha dx^{(2)}_\beta-dx^{(1)}_\beta dx^{(2)}_\alpha)\nonumber,
\end{eqnarray}
whose right-hand side appears as a commutator verifying the relation
\begin{eqnarray}
dB_k=\epsilon_{\alpha\beta}\partial_\alpha\eth_\beta b_k dS=-\epsilon_{\alpha\beta}d^{(\alpha)}(dx^{(\beta)})\nonumber.
\end{eqnarray}
\end{remark}
\subsection{Motion of the internal observer}
\label{sec:int}
The internal observer will be represented by the $\textbf{k}^{th}$ geodesic basis element ${\bf{e_k}}(x)$ solution to
\begin{eqnarray}
({\bf{e_k}})_j\nabla_j({\bf{e_k}})_l=0\quad\mbox{(with no summation on $\textbf{k}$)},
\end{eqnarray}
where $\nabla$ is the covariant derivative of $\Gamma$ as given by (\ref{connexion}). We have seen
in the above two sections that it was sufficient to provide him with a connection, i.e. with a law of parallel transport inside
the crystal. In fact at this stage the internal observer is not able to measure distances, while he can measure the disclination
(resp. dislocation) content of a
surface $S$ by boundary measurements of $\eth_\beta\omega_k$ (resp. $\eth_\beta b_k$) on the curve $C$ enclosing $S$
(which depend merely on $\Gamma$ -- cf. Remarks \ref{seulconn}-\ref{remark4}).
The notion of metric connection can be explained as follows. Let the external observer be equipped with the
Bravais metric and Cartesian coordinate system $\{x_i\}$. Since $\Gamma_{l;km}=\nabla_m ({\bf{e_k}})_l$, we have
on a portion $A-B$ of geodesic $\texttt{k}$,
\begin{eqnarray}
({\bf{e_k}})_l(B)-({\bf{e_k}})_l(A)=\int_A^B\Gamma_{l;km}dx_m
=\lim_{N\to\infty}\sum_{1\leq i\leq N}\Gamma_{l;km}(x^i)({\bf{e_k}})_m(x^i)\Delta s^i,\nonumber
\end{eqnarray}
where $x^i$ are discretisation points on the curve with endpoints $x^1=A$ and $x^N=B$, and $\Delta s^i$ an arc-length element of the geodesic tending to zero.
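For concreteness, this discretised transport, together with the law $d^{(1)}v_i=-\Gamma_{i;j\beta}v_jdx^{(1)}_\beta$ of Remark \ref{remark4}, can be sketched numerically. The following minimal Python sketch is purely illustrative; in particular, the ordering of the indices of the supplied connection coefficients is an assumption of the sketch, not a convention of this paper.
\begin{verbatim}
import numpy as np

def parallel_transport(v0, gamma, curve):
    # Transport v0 along a discretised curve using the law
    # d v_i = -Gamma_{i;jm} v_j dx_m  (cf. Remark 4).
    # gamma: callable x -> (3, 3, 3) array of coefficients Gamma_{i;jm}
    #        (hypothetical ordering: transported index first).
    # curve: (N, 3) array of discretisation points x^1, ..., x^N.
    v = np.asarray(v0, dtype=float).copy()
    for x, x_next in zip(curve[:-1], curve[1:]):
        dx = x_next - x
        v -= np.einsum('ijm,j,m->i', gamma(x), v, dx)
    return v
\end{verbatim}
The gap $({\bf{e_k}}^\shortparallel)_l-({\bf{e_k}})_l$ discussed below can then be estimated by applying this routine to a closed discretised loop.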
Moreover, if the connection is compatible with the metric
$g^B$, the angles between these lattice vectors and their (unit) length remain invariant during
parallel transport. So, we understand Kr\"{o}ner \cite{KR92} when he says: ``when a lattice vector is parallelly displaced using
$\Gamma$ along itself, say $1000$ times, then its start [say, $A$] and goal [say, $B$] are separated by 1000 atomic spacings, as
measured by $g^B$. Because the result of the measurement by parallel displacement and by counting lattice steps
is the same, we say that the space is metric with respect to the connection $\Gamma$.''
Moreover, when $({\bf{e_k}})_l(A)$ (the internal observer) is parallelly transported along a closed curve $C$ with start- and endpoint
$A$, the gap created when he comes back to his origin can be measured by the external observer, since by Stokes' theorem
\begin{eqnarray}
&&({\bf{e_k}}^\shortparallel)_l-({\bf{e_k}})_l=\int_C\Gamma_{l;km}dx_m=\int_{S}\epsilon_{pqm}\partial_q
\Gamma_{l;km}dS_p\nonumber\\
&=&\int_{S}\epsilon_{pqm}\left(\nabla_q
\Gamma_{l;km}+\left(\Gamma_{l;pm}\Gamma_{p;kq}+\Gamma_{l;pq}\Gamma_{p;km}\right)+\Gamma_{l;kp}\Gamma_{p;mq}\right)dS_p
\nonumber,
\end{eqnarray}
where $({\bf{e_k}}^\shortparallel)_l$ denotes the basis element $({\bf{e_k}})_l$ after being parallelly transported along $C$. Since the term inside the
parenthesis is symmetric in $m$ and $q$, we have \cite{Dubrovin}:
\begin{eqnarray}
=\int_{S}\epsilon_{pqm}\frac{1}{2}\left(\nabla_{[q}
\nabla_{m]} ({\bf{e_k}})_l+\nabla_p({\bf{e_k}})_l T_{p;mq}\right)dS_p=\int_{S}\epsilon_{pqm}\frac{1}{2}R_{l;nmq}({\bf{e_k}})_ndS_p,\label{internmotion}
\end{eqnarray}
with the definition of the Riemannian curvature tensor
\begin{eqnarray}
R_{l;kmq}:=R_{l;kmq}^B+\Delta R_{l;kmq}, \label{curv2}
\end{eqnarray}
where by (\ref{curv}) \& (\ref{connexion}), $R^B$ and $\Delta R$ denote the Riemann curvature tensors
associated to $\Gamma^B$ and
$\Delta\Gamma$, respectively.
\nl
By (\ref{internmotion}), the internal observer believes he has returned to his starting point,
while the external observer can see the gap created by the crystal curvature, itself resulting from the presence of defects.
\subsection{Geometric and physical curvatures}
\label{sec:curv}
Let us remark that in the absence of dislocations ($T=\Delta R=\Lambda=0$), the gap is merely due to curvature
effects with a curvature tensor directly related by (\ref{etakmacro}) to the disclination density by
$R_{l;kmq}=-\epsilon_{lki}\epsilon_{mqj}\Theta_{ij}$. It should however be noted that in the absence of disclinations, the curvature
is not vanishing but depends on the sole contortion, since from (\ref{etakmacro}) \& (\ref{curv2}),
\begin{eqnarray}
R_{l;kmq}=-\epsilon_{lki}\epsilon_{mqj}\epsilon_{ipn}\partial_p\kappa_{jn}+\Delta R_{l;kmq},\label{curv4}
\end{eqnarray}
where by Theorem \ref{deltagamma}, $\Delta R$ is linearly related to the contortion. It is computed from (\ref{curv4}) that the Ricci and Gauss
curvatures \cite{Dubrovin} read
\begin{eqnarray}
\hspace{-68pt}\mbox{\scriptsize{RICCI CURVATURE}} \hspace{38pt}R_{kq}^B&:=&R_{p;kpq}^B=\eta_{kq}-\delta_{kq}\eta_{pp}\label{Ricci}\\
\hspace{-65pt}\mbox{\scriptsize{GAUSS CURVATURE}} \hspace{36pt}R^B&:=&\frac{1}{2}R^B_{pp}=-\eta_{pp},\label{Gauss}
\end{eqnarray}
while Einstein tensor reads
\begin{eqnarray}
-\frac{1}{4}\epsilon_{lki}\epsilon_{mqj}R_{l;kmq}^B=\eta_{ij}=R_{ij}^B-\delta_{ij}R^B
\end{eqnarray}
in the presence of dislocations and disclinations, thereby contradicting Kr\"oner, who identified the Einstein
tensor with the disclination density in \cite{KR92}.
Moreover, since the macroscopic strain can be decomposed into (symmetric) compatible and (symmetric) solenoidal
parts \cite{VGD2009}, where only the second one, denoted by $\mathcal{E}^s$, is relevant for the incompatibility tensor, it results that
its trace
$\mathcal{E}_{pp}^s$ satisfies by (\ref{Gauss}) $-\Delta \mathcal{E}_{pp}^s=R^B$, thereby
showing how the Gauss curvature is related to the variation of matter density.
\subsection{Summary of the non-Riemannian metric crystal}
\label{sec:sum}
The crystal equipped with
$\{g^B,\Gamma\}$ has the following properties: (i) the geodesics of $\Gamma$ are the crystallographic lines;
(ii) the effect of parallel displacement
of the internal observer (equipped with $\Gamma$) along a crystallographic line is equivalent to counting the lattice steps;
(iii) the defect content, i.e. disclination and/or dislocation densities can be computed from
measures of $\Gamma$ only; (iv) the torsion of $\Gamma$ is merely due to the presence of dislocations, while its curvature is
due to the presence of both disclinations and dislocations;
(v) in the absence of disclinations, there exists a single-valued rotation and distortion field;
(vi) if and only if there are no defect lines, $\Gamma$ is Euclidean and there exists a holonomic
coordinate system. In the latter case only, one can properly speak of a reference configuration,
of single-valued rotation, displacement and distortion fields, with the macroscopic strain compatible with the displacement field.
\nl
Figure \ref{geom} illustrates the inseparable link between physics and geometry. On the one hand, the physical fields can be set apart: the deformation
and defect internal variables are shown in rectangular and hexagonal boxes, respectively. On the other hand, the purely geometrical objects are
in oval boxes. The double boundary line means that the quantity contains differential combinations of other fields (as connected by arrows),
while single lines mean algebraic combinations only.
The main deformation field is the strain, while the distortion and rotation are only obtained in the absence of disclinations (see Theorem
\ref{elasticrotdist}). Because they depend on an arbitrary point where their value is assumed known, they are considered inappropriate
as model variables. Concerning defect
internal variables, one could indifferently choose the dislocation torsion or the contortion. Let us mention that since the strain instead of the distortion is
chosen, the deformation and defect variables should be considered as independent physical fields.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=11cm,height=7cm]{geometry.eps}
\end{center}
\caption{Link between physics and geometry of defects}\label{geom}
\end{figure}
\subsection{Nonmetricity, teleparallelism and the paradox of the flat crystal}
\label{sec:distpar}
Let us first remark that the notions of metric and of connection must be considered as distinct. This has been
emphasized in \cite{BOURG}, where it is recalled that historically this has not been so for a long time (including in Einstein's writings).
Here, we have seen that the metric is attached to the notion of external observer, while the connection is attached to the notion of
parallel displacement of the internal observer inside the crystal. We have seen that parallel displacement with $\Gamma^B$ and
counting-step measurements are the same in a crystal filled with line defects. This was true because $\Gamma^B$ was compatible with
$g^B$ and hence crystallographic basis elements remain crystallographic as transported along the crystallographic lines.
Suppose now that the crystal also contains point defects. According to Kr\"oner \cite{KR96}, ``nonmetricity means that length measurements are disturbed. It is easy to see that this just occurs in the presence of
point defects. In fact, when counting atomic steps along crystallographic lines to measure the distance between two atoms, [the internal observer] feels
disturbed when suddenly a vacancy or an interstitial emerges instead of another atom [of the perfect crystal].''
Let $C_V$ and $C_I$ be the scalar vacancy (resp. interstitial) concentration, that is the number of vacancies (resp. interstitials) per unit
volume of crystal. Then, the following metric as proposed by Kr\"oner \cite{KR90}:
\begin{equation}
g'=(1+C_I-C_V)^2g^B,\label{newmetric}
\end{equation}
verifies $dV=\sqrt{\det g'} dV_0=(1+\Delta C)\sqrt{\det g^B} dV_0$, with $dV_0$ the volume element of the stress-free crystal and $dV$ that of
the actual one, and where $\Delta C=C_I-C_V$ is the excess atomic content of $dV$. An evolution equation for $C_V$ and $C_I$ (and hence for $\Delta C$ and $g'$)
will be given in Section \ref{concl}.
It is clear that the non-metricity defined as $Q_{j;ik}:=\hat \nabla_j g'_{ik}\neq 0$ \cite{Schouten,KR92} must now enter the geometric point- and
line-defect model. Differentiation $\hat\nabla$ is here intended with respect to connection $\hat\Gamma$, as defined by
\begin{eqnarray}
\mbox{\scriptsize{CHRISTOFFEL SYMBOLS WITH POINT DEFECTS:}}\hspace{4pt}\hat\Gamma_{k;ij}:=\Gamma'_{k;ij}-\Delta\Gamma_{k;ij}
-\frac{1}{2}\delta\Gamma_{k;ij}\label{connexion2}
\end{eqnarray}
with $\Delta\Gamma_{k;ij}$ given by (\ref{torsion}) \& (\ref{contortion}) and where $\Gamma'_{k;ij}:=\frac{1}{2}\left(\partial_ig'_{kj}+
\partial_jg'_{ki}-\partial_kg'_{ij}\right)$ and
\begin{eqnarray}
\mbox{\scriptsize{NONMETRIC CONTORTION:}}\hspace{100pt}\delta\Gamma_{k;ij}:=Q_{j;ik}+Q_{i;jk}-Q_{k;ji}.\label{contortion2}
\end{eqnarray}
Since $Q$ is a tensor quantity, it is expected to play a role for the physical description of the crystal (and to obey an evolution equation as related
to the other defects).
\nl
The paradox of the flat
crystal is the fact that a defective solid containing, in addition to defect lines, a certain amount of point defects
can recover a vanishing curvature if the following balance holds:
\begin{eqnarray}
\hspace{-35pt}\mbox{\scriptsize{TELEPARALLELISM ASSUMPTION:}}\hspace{30pt}\delta R_{l;kmq}=-(R'_{l;kmq}+\Delta R_{l;kmq}), \label{curv6}
\end{eqnarray}
where the three terms are the curvatures of $\delta\Gamma,\Gamma'$ and $\Delta\Gamma$, respectively. This is what Kr\"oner
(and Bilby et al. \cite{BILBY}) calls teleparallelism, meaning by this
that the global connection curvature vanishes and hence that the internal observer ends up parallel when travelling along a loop. It
should be
emphasized that teleparallelism is often considered as a working assumption \cite{BILBY,KR96}.
However, we rather follow Kr\"oner \cite{KR90} when he says
that
``curved crystals are possible only if the curvature is, in some sense, compatible with the considered crystal structure'', which means that instead
of a
flat crystal given by (\ref{curv6}), the connection curvature $\hat R$ of the actual crystal should be such that point and line defects accommodate to
satisfy
\begin{eqnarray}
\hat R_{l;kmq}=\delta R_{l;kmq}+(R'_{l;kmq}+\Delta R_{l;kmq}). \label{curv7}
\end{eqnarray}
Particularizing (\ref{curv7}), we learn from the identity $\hat R_{(l;k)mq}+\hat\nabla_{[m}Q_{q];lk}+T_{p;mq}Q_{p;lk}=0$ \cite{Schouten,KR92} that point defects
and dislocations must be geometrically related, a phenomenon well known to solid-state physicists: ``dislocations moving perpendicular to their Burgers vector produce point defects, and similar processes occur when
dislocations cut each other'' \cite{KR90} (concerning nonmetricity, see also \cite{BENABR}).
\section{Concluding remarks: the choice of model variables}\label{concl}
Let us conclude by attempting to answer Kr\"oner's question of the Introduction. To the knowledge of the author, this question has not
been answered yet, or to say the least, there is still no agreement on the answer.
In fact it depends on the physics one wants to capture. If the motion of dislocations is modelled, then not
only the conservative glide but also the non-conservative climb mode must be taken into account. Non conservation is due to the presence,
creation, annihilation and motion of point defects, and these processes require high temperatures and non-negligible temperature gradients
\cite{VGetal,Phil}. Therefore a complete model of dislocation motion must be thermodynamical, away from thermal equilibrium (i.e. irreversible),
and coupled with the motion of point defects. In \cite{VGetal} the following set of
PDEs showed very good results in modelling point defects:
\begin{eqnarray}
\frac{DC_K}{Dt}&=&\nabla\cdot\left(D_K\nabla C_K+\tilde D_KC_K\nabla T\right)-P,\label{eqci}
\end{eqnarray}
with the Lagrangian derivative $D/Dt$, and where $C_K, D_K, \tilde D_K$ and $P$ mean (scalar) concentration, (tensorial) equilibrium diffusion,
thermodiffusion
and (scalar) recombination ($K=I$ for interstitials and $K=V$ for vacancies).
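As an illustration only (not the scheme used in \cite{VGetal}), a one-dimensional explicit finite-difference sketch of (\ref{eqci}) could read as follows, neglecting advection (so that $D/Dt\approx\partial/\partial t$) and assuming, hypothetically, scalar diffusivities and a bimolecular recombination $P=k\,C_IC_V$:
\begin{verbatim}
import numpy as np

def step(C_I, C_V, T, D=1.0, Dtilde=0.1, k=1.0, dx=1.0, dt=1e-3):
    # One explicit Euler step of a 1-D version of Eq. (eqci).
    # Scalar D, Dtilde and P = k*C_I*C_V are illustrative
    # assumptions, not values taken from the cited model.
    grad_T = np.gradient(T, dx)
    def divergence_term(C):
        flux = D * np.gradient(C, dx) + Dtilde * C * grad_T
        return np.gradient(flux, dx)
    P = k * C_I * C_V
    return (C_I + dt * (divergence_term(C_I) - P),
            C_V + dt * (divergence_term(C_V) - P))
\end{verbatim}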
Concerning the motion of dislocations (we assume here that disclinations are negligible), an equation similar to (\ref{eqci}) should be proposed,
with an inter-dislocation recombination term and a term of interaction with point defects, collectively denoted by $\tilde P$ (also appearing in (\ref{eqci})). However,
the
dislocation density cannot be scalar (as it is in most of the current models available in the literature) but must be the tensorial
$\Lambda$
(or equivalently the contortion $\kappa$). The PDE could read
\begin{eqnarray}
\frac{D\kappa}{Dt}&=&\nabla\cdot\left(D\nabla \kappa+\tilde D\kappa\nabla T\right)-\tilde P,\label{eqdisloc}
\end{eqnarray}
with appropriate boundary conditions and where $D$ and $\tilde D$ are tensor diffusivities of order $4$. Moreover the contortion verifies the
conservation law \cite{VGD2009}
$\nabla\cdot\kappa=\nabla\left(\mathrm{tr}\,\kappa\right)$, meaning that the mesoscopic dislocations are loops or end at the crystal boundary, in such a way that Eq. (\ref{eqdisloc}) amounts to a system of $6$ coupled PDEs.
Moreover the expression of $\tilde P$ must somehow satisfy the geometric interaction between defects as given by (\ref{curv7}).
\nl
Concerning the deformation variables, let us first observe Figure \ref{physics}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=11cm,height=6.5cm]{physics.eps}
\end{center}
\caption{The deformation and defect state variables}\label{physics}
\end{figure}
Incompatibility is the final quantity obtained by recursive
differentiation of either the strain (twice) or the contortion (once). It hence shows the ultimate convergence of the initially set apart deformation
and defect variables. The other oval boxes denote defect variables obtained from the two key model variables: strain and contortion.
It should be
observed that all relations between strain $\mathcal{E}$, Frank tensor $\overline\partial\omega$, contortion $\kappa$ and incompatibility are obtained by means of recursive application of the curl
operator (either $\nabla\times$ or $\times\nabla$).
\nl
As a first step, Kr\"oner
proposed an (``athermal'') Gibbs free energy reading $W=\tilde W(F^e,\Lambda)$ with $F^e$ the elastic deformation gradient.
Restricting hence to statics, he thereby attempted to answer his question \cite{KR95}: ``what are the independent (extensive) [explicit state]
variables entering the free energy (at constant temperature)?'' However in \cite{KR96}, he recognizes that the
use of $F^e$ is inevitably ambiguous because the elasto-plastic decomposition is not unique.
According to our theory, the free energy
naturally reads from the diagram of Fig. \ref{physics} as a first strain-gradient model (see \cite{LEONEP,LAZMAUG}) $W=W_1(\mathcal{E},
\overline\partial\omega;\kappa)$, where the strain gradient is however replaced by its curl.
Equivalently it could read $W=W_2(\mathcal{E},\eth\omega,\Lambda)$ by combination of the last two
variables of $W_1$, or even
$W=W_3(\mathcal{E},\eth b,\Lambda)$ by combination of the first two variables of $W_2$. Let us observe that according to
Theorem \ref{elasticrotdist}, $W_3$ can nevertheless be compared with $\tilde W$ as soon as $\eth b$ is identified with a distortion (i.e. a deformation gradient),
although not necessarily the elastic one (cf. Remark \ref{seulconn}). Moreover, a curl differential relation is also observed between the last two
variables of $W_3$ (cf. Remark \ref{remark5}).
A remarkable recent thermodynamic analysis with $W_3$ has been reported by Berdichevsky \cite{BERD08}, where
$\eth b$ is identified with the plastic distortion. Let us remark however that by (\ref{disclindens})
\& (\ref{dislocdens}), $\eth b$ is not, because of the prescription of the arbitrary $x_0$,
an unambiguous state variable, as opposed to $\eth \omega$.
There are however reasons to be tempted by the choice $W=W_1$, because (i) all variables are explicit state variables defined by objective fields
(which do not appeal to reference configurations, arbitrary plastic parts, or points $x_0$), (ii) there is a distinction between (strain-like) deformation
and (internal) defect variables, and (iii) all variables have clear and unambiguous physical and geometrical meanings. If needed, all other variables (such as $\nabla\mathcal{E},
\eth\omega,\eth b,\Lambda,\eta$) can be recovered
as implicit state variables of the model \cite{KR95}. Moreover, when applying the curl operator twice to the strain and once to the contortion,
the only additional model variable naturally appearing is the incompatibility, which is of both deformation and defect nature.
This sounds like a closure of the recursive iteration for (higher-order) models.
Nonetheless, we rather prefer to introduce incompatibility through Kr\"oner's formula (\ref{etakmacro}) as a constraint on the Gibbs energy,
with $6$ degrees of freedom coupling strain, Frank tensor and contortion:
\begin{eqnarray}
\nabla\times\mathcal{E}\times\nabla=\nabla\times\overline\partial\omega=\kappa\times\nabla, \label{KR}
\end{eqnarray}
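In components, the double curl on the left-hand side is the classical incompatibility operator,
\begin{eqnarray*}
\left(\nabla\times\mathcal{E}\times\nabla\right)_{ij}=\epsilon_{ikl}\epsilon_{jmn}\partial_k\partial_m\mathcal{E}_{ln},
\end{eqnarray*}
whose symmetry in $i$ and $j$ accounts for the $6$ degrees of freedom of the constraint.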
The equilibrium law then writes as the following equation with $3$ d.o.f.:
\begin{eqnarray}
-\nabla\cdot\sigma=f\quad\mbox{with}\quad \sigma:=\frac{\partial W}{\partial\mathcal{E}}\quad\mbox{and}\quad W=\mathcal{W}(T,\nabla T; \mathcal{E},\overline\partial\omega;\kappa),\label{equ}
\end{eqnarray}
where $f$ is the sum of external forces and of configurational (internal) pseudo-forces directly related to $\kappa$ and to the
derivative of the so-called dislocation moment stress $\partial W_1/\partial\kappa$ \cite{KR96,BERD08}. Let us also mention that the
additional constraint of incompressibility must be added in order to avoid climb and point defects \cite{KR95}. Also, a remarkable discussion on the
nature of $W=W_4(\eth b)$ (with identification of $\eth b$ with the distortion) in a nonlinear and variational setting can be found
in \cite{PAL2008}.
\nl
Summarizing, an athermal model of dislocations requires to solve equations (\ref{eqdisloc})-(\ref{equ}), which involve a total of $15$ degrees of freedom. This number
is exactly the number of d.o.f. required by the internal observer to parallel displace inside the crystal (through the nonsymmetric connection).
Moreover, to be closed, the theory must involve point and line defects, and hence must consider high temperatures and temperature gradients.
So, as recognized by Kr\"oner \cite{KR96}, the dislocation model must be cast within the general frame of irreversible thermodynamics because the
time variations of the internal variables create thermal dissipation.
This is the main reason why a huge work remains to be done in order to determine, e.g., the stress-strain relation, all other constitutive laws,
the diffusion coefficients (which depend on the crystal internal symmetries, glide planes, etc), and the defect interaction/production terms.
\nl
The author is unable to answer definitively any of these questions but will pursue research on the topic.
This paper aims at recalling Ekkehart Kr\"oner's
legacy, and in particular the fundamental questions he raised which are still open and crucial nowadays. It is also aimed at stressing that solutions
to dislocation modelling will most probably arise from a strong interplay between mathematics and physics, as Kr\"oner remarkably demonstrated throughout his papers
\cite{KR55}-\cite{KR01}.
\subsection{Motivations}
The notion of twisted Alexander polynomials was introduced by Wada [W] and Lin [L] independently in the 1990s.
The definition of Lin is for knots in $S^3$ and the definition of Wada is for finitely presented groups.
The twisted Alexander polynomial is a generalization of the Alexander polynomial, and it is defined for a pair consisting of a group and one of its representations.
Kitano and Morifuji [KM] showed that Wada's twisted Alexander polynomial of a knot group is a genuine polynomial for any nonabelian representation into $SL_2(\mathbb{F})$ over a field $\mathbb{F}$.
As a corollary, they also showed that if $K$ is a fibered knot of genus $g$,
then its twisted Alexander polynomials are monic polynomials of degree $4g-2$ for any nonabelian $SL_2(\mathbb{F})$-representation. The converse does not hold; in other words, there exist nonfibered knots admitting an $SL_2(\mathbb{C})$-representation whose twisted Alexander polynomial is monic (see [GoMo]).
If $K$ is hyperbolic, i.e. the complement $S^3 \setminus K$ admits a complete hyperbolic metric of finite volume,
the most important representation is its holonomy representation into $SL_2(\mathbb{C})$ which is a lift of the representation into the group of orientation-preserving isometries of the hyperbolic 3-space $\mathbb{H}^3$.
Dunfield, Friedl and Jackson [DFJ] conjectured that the twisted Alexander polynomials of hyperbolic knots associated to their holonomy representations determine the genus and fiberedness of the knots.
In fact, they computed the twisted Alexander polynomials of all hyperbolic knots of 15 or fewer crossings associated to their holonomy representations, and the conjecture is verified for these hyperbolic knots.
Recently, the twisted Alexander polynomials of some infinite families of knots, twist knots and genus one two-bridge knots, associated to their holonomy representations were computed by Morifuji [Mo1] and Tran [T1], and those of genus one two-bridge knots associated to the adjoint representations of their holonomy representations were also computed by Tran [T2].
These examples also provide supporting evidence for the conjecture.
In this paper, we compute the twisted Alexander polynomials of the $(-2,3,2n+1)$-pretzel knots $K_n$ depicted in Figure 1, associated to their holonomy representations given in the following section.
As a corollary, we obtain new supporting evidence for Dunfield, Friedl and Jackson's conjecture, i.e. the twisted Alexander polynomials of $K_n$ are monic polynomials of degree $4n+6$.
We can observe that $K_n$ is fibered for any non-negative integer $n$ and that the genus of $K_n$ is $n+2$, because the Seifert surface $S_n$ is a Murasugi sum of a Seifert surface of a torus knot and a Seifert surface of a Hopf link. Since these Seifert surfaces are fibered surfaces, $S_n$ is also a fibered surface (see [HM, M, O] for more details).
On the other hand, the $(-2,3,2n+1)$-pretzel knots form an infinite family of knots which contains the Fintushel-Stern knot, i.e. the $(-2,3,7)$-pretzel knot. This family plays an important role in the study of exceptional surgeries of knots [Ma].
In fact, the A-polynomials of the $(-2,3,2n+1)$-pretzel knots were computed by Tamura-Yokota [TY] and Garoufalidis-Mattman [GaMa].
\begin{figure}[h]
\begin{center}
\includegraphics[clip,width=7cm]{knot.eps}\caption{$(-2,3,2n+1)$-pretzel knot}
\end{center}
\end{figure}
\subsection{Definition of twisted Alexander polynomials}
In this paper, we use the following definition due to Wada.
\begin{dfn}
Let $G(K)=\pi_1(S^3 \setminus K)$ be the knot group of a knot $K$ presented by
\[
G(K) = \langle x_1, \cdots ,x_n \ \vline \ r_1, \cdots , r_{n-1} \rangle.
\]
Let $\Gamma$ denote the free group generated by $x_1, \cdots ,x_n $
and $\phi: \mathbb{Z} \Gamma \to \mathbb{Z} G(K)$ the natural ring homomorphism.
Let $\rho: G(K) \to GL_d(\mathbb{F})$ be a $d$-dimensional linear representation of $G(K)$
and $\Phi : \mathbb{Z} \Gamma \to M_d(\mathbb{F}[t,t^{-1}])$ the ring homomorphism defined by
\[
\Phi =(\tilde{\rho} \otimes \tilde{\alpha}) \circ \phi,
\]
where $\tilde{\alpha}:\mathbb{Z} G(K) \to \mathbb{Z} \langle t,t^{-1} \rangle$ and $\tilde{\rho}$ are respective ring homomorphisms induced by the abelianization $\alpha: G(K) \to \langle t \rangle$ and $\rho$.
We put
\[
A_{i,j} = \Phi \left( \frac{\partial r_i}{\partial x_j}\right),
\]
where $\displaystyle \frac{\partial}{\partial x_j}$ denotes the Fox derivative (or free derivative) with respect to $x_j$, that is, a map $\mathbb{Z} \Gamma \to \mathbb{Z} \Gamma$ satisfying the conditions
\begin{eqnarray*}
\displaystyle \frac{\partial}{\partial x_j} x_i = \delta_{ij} \ \ \ \mbox{and} \ \ \
\displaystyle \frac{\partial}{\partial x_j}(g g')= \displaystyle \frac{\partial g}{\partial x_j}+g\,\displaystyle \frac{\partial g'}{\partial x_j},
\end{eqnarray*}
where $\delta_{ij}$ denotes the Kronecker symbol and $g,g' \in \Gamma$.
Then, the twisted Alexander polynomial of $K$ is defined by
\[
\Delta_{K,\rho} =\displaystyle \frac{\det A_{\rho,k}}{\det \Phi(x_k-1)},
\]
where $A_{\rho,k}$ is the $d(n-1)\times d(n-1)$ matrix obtained from $A_{\rho}=(A_{i,j})$ by removing the $k$-th column of blocks, i.e.
\[
A_{\rho,k}=
\left(
\begin{array}{cccccc}
A_{1,1}& \cdots & A_{1,k-1} & A_{1,k+1} & \cdots & A_{1,n}\\
\vdots & & \vdots & \vdots & &\vdots\\
A_{n-1,1}& \cdots & A_{n-1,k-1} & A_{n-1,k+1} & \cdots & A_{n-1,n}
\end{array}
\right).
\]
\end{dfn}
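As a quick illustration of the Fox calculus, note that $0=\displaystyle \frac{\partial}{\partial x_j}\left(x_jx_j^{-1}\right)=1+x_j\displaystyle \frac{\partial x_j^{-1}}{\partial x_j}$ gives $\displaystyle \frac{\partial x_j^{-1}}{\partial x_j}=-x_j^{-1}$, so that, for instance, for the commutator $r=aba^{-1}b^{-1}$ one computes
\begin{eqnarray*}
\frac{\partial r}{\partial a}=1+a\,\frac{\partial}{\partial a}\left(ba^{-1}b^{-1}\right)=1+ab\,\frac{\partial}{\partial a}\left(a^{-1}b^{-1}\right)=1-aba^{-1}.
\end{eqnarray*}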
\vspace{3mm}
\hspace{-5.3mm}
{\it Acknowledgement}:
The author would like to thank Professor Yoshiyuki Yokota for supervising this work and giving helpful comments. She would also like to thank Professor Teruhiko Soma and Professor Manabu Akaho for their valuable comments.
\section{Holonomy representations}
In this section, we give a presentation of the knot group $G(K_n)$ and its holonomy representation $\rho_m : G(K_n) \to SL_2(\mathbb{C})$, where $m$ represents the eigenvalue of the meridian of $K_n$.
Let $L$ be the link depicted in Figure 2 and $E=S^3 \setminus L$. Then, the Wirtinger presentation (see [CF]) of $\pi_1(E)$ is given by
\[
\langle a,b,x \ \vline \ \{a x b a (x b)^{-1} \}^{-1} x = x b \{a x b a (x b)^{-1}\}^{-1} (a x b)^{-1} x b, \ [x, a x b a (x b)^{-1} ] = 1 \rangle,
\]
where $a,b$ and $x$ are Wirtinger generators assigned to the corresponding arcs depicted in Figure 2.
Note that $E_n:=S^3 \setminus K_n$ is obtained from $L$ by $(-\frac{1}{n})$-surgery along the trivial component, that is, by removing a tubular neighborhood of the trivial component and gluing the solid torus back. Therefore, by the van Kampen theorem, we have
\begin{eqnarray*}
\pi_1 (E_n) \!\!&\!\! = \!\!&\!\! \langle a,b,x \ \vline \ \{ a x b a (x b)^{-1} \}^{-1} x = x b \{a x b a (x b)^{-1}\}^{-1} (a x b)^{-1} x b, \
x = \{a x b a (x b)^{-1} \}^{n} \rangle.
\end{eqnarray*}
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=6.1cm]{link.eps}\caption{Link $L$}
\end{center}
\end{figure}
\begin{prop} \label{representation}
For a non-zero complex number $m$, there exists a representation $\rho_m : \pi_1(E_n) \to SL_2(\mathbb{C})$ such that
\begin{eqnarray*}
\rho_m(a)=
\left(
\begin{array}{cc}
m & \displaystyle -\frac{\left(m^2-s\right) \left(s^{2 n+1}+1\right)}{m (s+1)} \\
0 & \displaystyle m^{-1} \\
\end{array}
\right), \ \
\rho_m(b)=\displaystyle \frac{1}{s \alpha }
\left(
\begin{array}{cc}
\beta & \displaystyle -\frac{(s \alpha -m \beta)(m s \alpha -\beta )}{m \beta } \\
\beta & \displaystyle \frac{m(m s \alpha -\beta ) +s \alpha }{m} \\
\end{array}
\right),
\end{eqnarray*}
and
\begin{eqnarray*}
\rho_m(x)=
\left(
\begin{array}{cc}
s^n & 0 \\
\displaystyle \frac{s^n-s^{-n}}{s^{2 n+1}+1} & s^{-n} \\
\end{array}
\right),
\end{eqnarray*}
where $s$ is a solution to
\begin{eqnarray}
0 = m^8(s - 1\!\!\!&\!\!\!)(\!\!\!&\!\!\! s + 1)^2 (s^{2 n}-s^2) s^{2 n+2}\\
-m^6\{s^{6 n+3} \!\!\!&\!\!\!+ \!\!\!&\!\!\!(2 s^{6} + s^{5} - 4 s^{4} + s^{3} + s^{2} - s -1)s^{4 n+1} \nonumber \\
\!\!\!&\!\!\! - \!\!\!&\!\!\!(s^6 + s^{5} - s^{4} - s^{3} + 4 s^{2} - s -2)s^{2 n+2} + s^{6}\} \nonumber\\
+m^4 \{(s^2 + \!\!\!&\!\!\!1 \!\!\!&\!\!\!) s^{6 n+2} + (s^{6} + 2 s^{5} - 3 s^{4} - 2 s^{3} + 6 s^{2} - 4 s -2)s^{4 n+3}\nonumber\\
\!\!\!&\!\!\! - \!\!\!&\!\!\! (2 s^{6} + 4 s^{5} - 6 s^{4} + 2 s^{3} + 3 s^{2} - 2 s -1)s^{2 n} + (s^2 + 1)s^5\} \nonumber\\
-m^2\{s^{6 n+3} \!\!\!&\!\!\!+ \!\!\!&\!\!\!(2 s^{6} + s^{5} - 4 s^{4} + s^{3} + s^{2} - s -1)s^{4 n+1} \nonumber\\
\!\!\!&\!\!\! - \!\!\!&\!\!\!(s^6 + s^{5} - s^{4} - s^{3} + 4 s^{2} - s -2)s^{2 n+2} + s^{6}\} \nonumber\\
+(s -1) (s \!\!\!&\!\!\!+ \!\!\!&\!\!\! 1 )^2 (s^{2 n}-s^2) s^{2 n+2} \nonumber
\end{eqnarray}
and $\alpha, \beta$ are given by
\begin{eqnarray*}
\alpha \!\!&\!\! = \!\!& \!\! (s^2 - 1) s^{2 n} \{-m^6 (s - 1) s^2 (s^{2 n + 1} + 1) + m^4(s^{2 n+ 2} (s^4 - 2 s^2 + 3 s -1) + s^4 - 3 s^3 + 2 s^2 -1) \\
&& \!\! - m^2 s (s^ {2 n} (2 s^3 - s^2 + 1) - s (s^3 - s + 2)) + s^2 (s^{2 n} - s^2)\},\\
\beta \!\!&\!\! =\!\! & \!\! m^7 s^{2 n + 2} (s^2 - 1)(s^3 + 1) \\
&& \!\! - m^5 s^3 \{s^{4 n} (s^3 - s^2 + 1) + s^{2 n - 2} (s - 1) (s^3 + s + 1) (s^3 + s^2 + 1) - (s^3 - s + 1)\} \\
&& \!\! + m^3 s^2 (s^3 + 1) (s^{2 n} - 1) (s^{2 n}+ s^2) - m s^3 (s^{2 n} - s^2) (s^{2 n} + s) .
\end{eqnarray*}
\end{prop}
In what follows, for simplicity, we denote the right hand side of $(1)$ by $r_0$.
\begin{proof}
For simplicity, put $A = \rho_m(a), \ B = \rho_m(b),\ X = \rho_m(x)$. With the aid of Mathematica, we have
\begin{eqnarray*}
A X B A (X B)^{-1} =
\left(
\begin{array}{cc}
s & 0 \\
\displaystyle \frac{s^2-1}{s (s^{2n+1} + 1)} & \displaystyle \frac{1}{s} \\
\end{array}
\right)
+r_1
\left(
\begin{array}{cc}
\displaystyle \frac{1}{m^3 s (s^{2n+1}+1) \alpha ^2} & \displaystyle -\frac{1}{m^3 s (s+1) \alpha ^2} \\
\displaystyle \frac{s+1}{m^3 s^2 (s^{2n+1}+1)^2 \alpha ^2} & \displaystyle -\frac{1}{m^3 s^2 (s^{2n+1}+1) \alpha ^2} \\
\end{array}
\right),\\
\end{eqnarray*}
where
\begin{eqnarray*}
r_1 \!\!& \!\!= \!\!&\!\! -\alpha ^2 m s (m^2 s^{2n+2}-m^2-s^{2n+1}+s)
+\alpha \beta (m^2-1) (m^2+1) s^{2n+1} (s+1)\\
&&\!\! +\beta ^2 m s^{2n} (m^2 s^{2n+1}-m^2 s-s^{2n+2}+1) \equiv 0 \mod r_0.
\end{eqnarray*}
Therefore, by $(1)$, we have $X=\{A X B A (X B)^{-1}\}^{n}$, that is, $\rho_m(x)=\rho_m \left(\{a x b a (x b)^{-1}\}^{n} \right)$.
On the other hand, we can observe
\begin{eqnarray*}
A X B \{A X B A (X B)^{-1} \} \equiv X B X^{-1}\{A X B A (X B)^{-1}\} X B \mod r_0
\end{eqnarray*}
and so $A X B \{A X B A (X B)^{-1} \} = X B X^{-1}\{A X B A (X B)^{-1}\} X B$ by (1). Furthermore, we obtain
\begin{eqnarray*}
X B \{A X B A (X B)^{-1} \}^{-1} (A X B)^{-1} X B
& = & X B (A X B \{A X B A (X B)^{-1} \})^{-1} X B \\
& = & X B (X B X^{-1}\{A X B A (X B)^{-1}\} X B )^{-1} X B \\
& = & \{A X B A (X B)^{-1}\}^{-1} X
\end{eqnarray*}
that is,
$\rho_m\left(\{a x b a (x b)^{-1}\}^{-1} x \right) = \rho_m\left(x b \{a x b a (x b)^{-1}\}^{-1} (a x b)^{-1} x b\right)$.
This completes the proof.
\end{proof}
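Though the proof above is purely symbolic, elementary sanity checks are easy to automate. The following minimal Python/SymPy sketch (an illustration, not part of the proof) verifies that the three matrices of Proposition \ref{representation} have determinant $1$, i.e. lie in $SL_2$, independently of the relation (1):
\begin{verbatim}
import sympy as sp

m, s, n, a, b = sp.symbols('m s n alpha beta', nonzero=True)

A = sp.Matrix([[m, -(m**2 - s)*(s**(2*n + 1) + 1)/(m*(s + 1))],
               [0, 1/m]])
B = sp.Matrix([[b, -(s*a - m*b)*(m*s*a - b)/(m*b)],
               [b, (m*(m*s*a - b) + s*a)/m]]) / (s*a)
X = sp.Matrix([[s**n, 0],
               [(s**n - s**(-n))/(s**(2*n + 1) + 1), s**(-n)]])

# Each matrix should be unimodular for all nonzero parameters.
assert all(sp.simplify(M.det() - 1) == 0 for M in (A, B, X))
\end{verbatim}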
\begin{rem}
Since the representation $\rho_m$ comes from the holonomy representation obtained from the ideal triangulation of $E$ given in [TY],
the holonomy representation $\rho_m$ of $G(K_n)$ is given by the solution to $(1)$ which maximizes the hyperbolic volume of $S^3 \setminus K_n$.
\end{rem}
\section{Calculation of the twisted Alexander polynomial}
The following is the main result of this paper.
\begin{thm}
The twisted Alexander polynomial of $K_n$ associated to $\rho_m$ is given by
\begin{eqnarray*}
\Delta_{K_n, \rho_m}(t) = 1 + \sum_{i=0}^{2n -1} \lambda_{i} (t^{i + 3} + t^{4n - i + 3} )+ t^{4 n + 6},
\end{eqnarray*}
where
\begin{eqnarray*}
\lambda_i =
\begin{cases}
\displaystyle \frac{(1 + m^2) (H s^{ i/2 +1 }\beta - s (s^{i/2 +1}- s^{-(i/2 +1)}) ( \eta_1 + \eta_2))}{H m \beta}
& {\rm if} \ 0 \le i \le 2n - 2 \,\, {\rm and} \,\, i \,\, {\rm is \ even,}\\
\displaystyle \frac{s^{(i-1)/2} - s^{-(i-1)/2}}{s - s^{-1}} & {\rm if} \ 0 \le i \le 2n - 2 \,\, {\rm and} \,\, i \,\, {\rm is \ odd,}\\
\displaystyle \frac{s^{n-1} - s^{-(n-1)}}{s - s^{-1}} -\frac{(s^2 - 1) \eta_1}{H s^n \beta} & {\rm if} \ i = 2 n - 1
\end{cases}
\end{eqnarray*}
and we put
\begin{eqnarray*}
H \!\! & \!\!= \!\! & \!\! 1 - m^2 s + m^2 s^{2n +1} - s^{2n + 2},\\
\eta_1\!\! & \!\!= \!\! & \!\! m \alpha - m s^{2n +1} \alpha + s^{2 n} \beta + m^2 s^{2 n} \beta,\\
\eta_2\!\! & \!\!= \!\! & \!\! -m s \alpha + m s^{2n +1} \alpha - s^{2 n} \beta - s^{2n +1} \beta.
\end{eqnarray*}
\end{thm}
To prove Theorem 3.1, it suffices to show
\begin{prop} \label{mein prop}
For simplicity, we put $S = s^n$ and $T = t^n$. The twisted Alexander polynomial $\Delta_{K_n, \rho_m}(t)$ is given by
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!\! \frac{S-T^2}{s-t^2}\frac{s}{S} \left( \frac{ m s - m S T^2 + (1 + m^2) (1 - s^2) S t T^2}{m (1 - s^2) t^2} +\frac{(1 + m^2) (1 - s S t^2 T^2) (\eta_1 + \eta_2)}{H m t^3 \beta} \right)\\
&+& \!\!\!\! \frac{1 - S T^2}{1 - s t^2} \frac{s}{S} \left( \frac{(1 + m^2) (1 - s^2) S - m S t + m s t T^2}{m (1 - s^2) t^3} -\frac{(1 + m^2) (s S - t^2 T^2) (\eta_1 + \eta_2)}{H m t^3 \beta} \right)\\
&+& \!\!\!\! \frac{1}{t^6} + T^4 + \frac{(1 - s^2) (1 + t^2) T^2 \eta_1}{H S t^4 \beta} .
\end{eqnarray*}
\end{prop}
By multiplying by $t^6$ and rearranging with respect to $t$, we obtain the formula of Theorem 3.1, using
\begin{eqnarray*}
\frac{S - T^2}{s - t^2} =\frac{S}{s} \sum_{i=0}^{n-1} \left( \frac{t^2}{s} \right)^i , \,\
\frac{S T^2 - 1}{s t^2 - 1} = \sum_{i=0}^{n-1} (s t^2)^i.
\end{eqnarray*}
\section{Proof of Proposition 3.2}
Recall that
\begin{eqnarray*}
\pi_1 (E_n) \!\!&\!\! = \!\!&\!\! \langle a,b,x \ \vline \ \{ a x b a (x b)^{-1} \}^{-1} x = x b \{a x b a (x b)^{-1}\}^{-1} (a x b)^{-1} x b, \
x = \{a x b a (x b)^{-1} \}^{n} \rangle\\
\!\!& \!\!=\!\! &\!\! \langle a,c \ \vline \ ( a c a c^{-1} )^{n-1} = c (a c a c^{-1})^{-1} (a c)^{-1} c\rangle.
\end{eqnarray*}
Then the twisted Alexander polynomial of $K_n$ is given by
\begin{eqnarray*}
\Delta_{K_n,\rho_m}(t) & = & \frac{\displaystyle \det \Phi \left( \frac{\partial}{\partial a}( a c a c^{-1} )^{n-1} -\frac{\partial}{\partial a} c (a c a c^{-1})^{-1} (a c)^{-1} c \right)}{\det \Phi(c-1)},
\end{eqnarray*}
where
\begin{eqnarray}
&& \Phi \left( \frac{\partial}{\partial a}( a c a c^{-1} )^{n-1} -\frac{\partial}{\partial a} c (a c a c^{-1})^{-1} (a c)^{-1} c \right) \nonumber \\
&& = \sum_{i=1}^{n-1} t^{2(i-1)} \rho_m \left( \left\{a x b a (x b)^{-1} \right\}^{i-1} \right) \left\{ \rho_m(1)+t^{2(n+1)} \rho_m(a x b)\right\} + t^{4n+1} \rho_m(x b x b a^{-1}) \\
&& + t^{2n-1} \rho_m \left(x b \left\{a x b a (x b)^{-1} \right\}^{-1} \right) +t^{-3} \rho_m \left( x b \{a x b a (x b)^{-1}\}(a x b)^{-1} \right). \nonumber
\end{eqnarray}
For simplicity, we put
\begin{eqnarray*}
\gamma_1 = s \alpha - m \beta \ ,\
\gamma_2 = m s \alpha - \beta \ ,\
\gamma_3 = m^2 s (s S^2 + 1) \alpha.
\end{eqnarray*}
With the aid of Mathematica, the first term of the right hand side of $(2)$ is given by
\begin{eqnarray*}
&&\sum_{i=1}^{n-1} t^{2(i-1)} (AXBA(XB)^{-1})^{i-1} (E+t^{2(n+1)} AXB)\\
&& =
\left(
\begin{array}{cc}
\displaystyle \frac{(S T^2 - s t^2) (S t^2 \beta T^2+m \alpha)}{m s t^2 (s t^2 - 1) \alpha }
& -\displaystyle \frac{T^2 (S T^2 - s t^2) (\gamma_1 \eta_2 + (m \alpha - \beta) \gamma_3)}{m^2 s (s+1) S \left(s t^2-1\right) \alpha \beta } \\
\displaystyle \frac{m C_1 \alpha -S t^2 T^2 C_2 \beta}{m s S (s S^2 + 1) t^2 (s - t^2)(s t^2 - 1) \alpha }
& \displaystyle \frac{C_3 t^4 T^4 + C_4 t^2 T^4 + C_5 t^6 T^2 + C_6 t^4 T^2 + C_7}{(s + 1) S^2 t^2 (s - t^2) (s t^2 -1)\gamma_3 \beta } \\
\end{array}
\right),
\end{eqnarray*}
where
\begin{eqnarray*}
C_{1} \!\!&\!\!=\!\!&\!\! -t^4 s(s^2 - 1)S - T^2 \{t^2 (S^2 - s^4) - s(S^2 - s^2)\},\\
C_{2} \!\!&\!\!=\!\!&\!\! -t^2(t^2 - 1) s(s + 1)S + T^2 \{t^2 (S^2 + s^3) + s(S^2 - s)\},\\
C_{3} \!\!&\!\!=\!\!&\!\! (s^3 + S^2) \gamma_1 \eta_2 - \{s^3 (m s \alpha + \beta) - S^2 (m \alpha - \beta) \} \gamma_3,\\
C_{4} \!\!&\!\!=\!\!&\!\! - s (s + S^2) \gamma_1 \eta_2 + s \{s (m s \alpha + \beta) - S^2 (m \alpha - \beta)\} \gamma_3,\\
C_{5} \!\!&\!\!=\!\!&\!\! - s (s + 1) S\{\gamma_1 \eta_2 + (\eta_1 + \eta_2 - (1 + m^2 S^2 - s S^2) \beta) \gamma_3 \},\\
C_{6} \!\!&\!\!=\!\!&\!\! s (s + 1) S \{s \alpha \eta_2 - m (s + 1) S^2 \beta \gamma_2\},\\
C_{7} \!\!&\!\!=\!\!&\!\! s (s + 1) S (s t^2 -1) (S t^2 - s T^2) \beta \gamma_3.
\end{eqnarray*}
Similarly, the second term of the right hand side of $(2)$ is given by
\begin{eqnarray*}
&& X B X B A^{-1}=
\left(
\begin{array}{cc}
\displaystyle \frac{S^2 D_{1}}{\gamma_3 \alpha}
& \displaystyle \frac{m s D_1 D_2 - (s S^2 + 1) (s S^2 D_1 + m \gamma_3 \alpha) \beta^2}{ (s+1)\gamma_3 \alpha \beta ^2} \\
\displaystyle \frac{(s+1) D_{2}}{ (s S^2+1)\gamma_3 \alpha}
& \displaystyle \frac{m s S^2 D_1 D_2 + s(s S^2 + 1) (m^2 s \alpha^2 - S^2 \beta^2) D_2}{S^2 (s S^2+1) \gamma_3 \alpha \beta ^2} -m \\
\end{array}
\right),
\end{eqnarray*}
where
\begin{eqnarray*}
D_1 \!\!&\!\!=\!\!&\!\! -(s + 1) \alpha \gamma_2 + m (\eta_1 + \gamma_2 + m S^2 \gamma_1)\beta,\\
D_2 \!\!&\!\!=\!\!&\!\! -\alpha \eta_2 + m S^2 (\eta_1 + m S^2 \gamma_1 + \gamma_2) \beta,
\end{eqnarray*}
the third term of the right hand side of (2) is given by
\begin{eqnarray*}
X B \left\{AXBA(XB)^{-1}\right\}^{-1}=
\left(
\begin{array}{cc}
\displaystyle \frac{S E_{1}}{m s \left(s S^2+1\right) \alpha \beta }
& \displaystyle -\frac{S \gamma_1 \gamma_2}{m \alpha \beta } \\
\displaystyle \frac{(s+1) E_{2}}{m s S \left(s S^2+1\right)^2 \alpha \beta }
& \displaystyle \frac{E_{3}}{m S \left(s S^2+1\right) \alpha \beta } \\
\end{array}
\right),
\end{eqnarray*}
where
\begin{eqnarray*}
E_{1} \!\!&\!\!=\!\!&\!\! (s^2 - 1) \alpha \gamma_2 + m (\eta_1 + m S^2 \gamma_1 - s \gamma_2) \beta,\\
E_{2} \!\!&\!\!=\!\!&\!\! (s - 1) \alpha \eta_2 + m S^2 (\eta_1 + m S^2 \gamma_1 -s \gamma_2) \beta,\\
E_{3} \!\!&\!\!=\!\!&\!\! - s \alpha \eta_2 + m (s + 1) S^2 \beta \gamma_2,
\end{eqnarray*}
and the fourth term of the right hand side of (2) is given by
\begin{eqnarray*}
X B (A X B AXBA(XB)^{-1})^{-1}
=
\left(
\begin{array}{cc}
\displaystyle \frac{m F_{3} }{ \gamma_3 ^2 \beta ^2}
& \displaystyle \frac{F_4}{m (s+1) \gamma_3 \alpha \beta ^2} \\
\displaystyle \frac{m(s^2 - 1) F_{1} F_{2}}{S^2 (s S^2+1) \gamma_3 ^2 \beta ^2}
& \displaystyle \frac{m F_5}{S^2 \gamma_3 ^2 \beta ^2} \\
\end{array}
\right),
\end{eqnarray*}
where
\begin{eqnarray*}
F_{1} \!\!&\!\!=\!\!&\!\! m (s + 1) S^2 (\eta_1 + m S^2 \gamma_1) \beta - \eta_2 \alpha ,\\
F_{2} \!\!&\!\!=\!\!&\!\! m (s + 1) S^2 (s S^2 + 1) \beta^2 - s F_1,\\
F_{3} \!\!&\!\!=\!\!&\!\! - \{m \beta (\eta_1 + m S^2 \gamma_1) + s \gamma_1 \gamma_2 - \gamma_2 \alpha\} F_2 + m s (s + 1) S^2 (s S^2 + 1) \gamma_1 \gamma_2 \beta^2,\\
F_{4} \!\!&\!\!=\!\!&\!\! (s^2 - 1) \{m (\eta_1+m S^2 \gamma_1) \beta - \gamma_2 \alpha\} F_2 \\
&& + \gamma_3 \{m \gamma_2 \alpha - (m^2 \eta_1 + s^2 \eta_2 + m^3 S^2 \gamma_1 - s^2 (S^2 - 1) \gamma_2) \beta -
m s \gamma_1 \gamma_2\}\alpha,\\
F_{5} \!\!&\!\!=\!\!&\!\! (s - 1) (s F_1 - m \gamma_3 \alpha) F_2 - m^2 S^2 (s S^2 + 1) \gamma_3 \alpha \beta^2.
\end{eqnarray*}
Therefore, the determinant of the right hand side of (2) is written as
\[
\frac{\sum_{i,j} U_{i,j} t^i T^j}{m^3 S^2 t^6 (s - t^2) (s t^2 - 1) \beta^2 \iota},
\]
where
\begin{eqnarray*}
U_{0,0} \!\! & \!\!= \!\! & \!\! U_{4,0} = U_{6,0} = U_{2,4} = U_{10,4} = U_{6,8} = U_{8,8} = U_{12,8}
= -m^3 s S^2 \beta^2 \iota,\\
U_{2,0}\!\! & \!\!= \!\! & \!\! U_{10,8} = m^3 (s^2 + 1) S^2 \beta^2 \iota,\\
H U_{3,0}\!\! & \!\! \equiv \!\! & \!\! H U_{9,8} \equiv
-m^2 (m^2 + 1) s S^2 \beta (H s \beta - (s^2 - 1) (\eta_1 + \eta_2)) \iota \mod r_0,\\
U_{5,0}\!\! & \!\! \equiv \!\! & \!\! U_{7,8} \equiv m^2 (m^2 + 1) s S^2 \beta^2 \iota \mod r_0,\\
H U_{1,2}\!\! & \!\! \equiv \!\! & \!\! H U_{11,6} \equiv
m^2 (m^2 + 1) (s - 1) s S \beta \eta_2 \iota \mod r_0,\\
H U_{2,2}\!\! & \!\! = \!\! & \!\! H U_{6,2} = H U_{8,2} = H U_{4,6} \equiv H U_{6,6} = H U_{10,6} \equiv
m^3 (s^2 - 1) s S \beta \eta_1 \iota \mod r_1,\\
H U_{3,2}\!\! & \!\! \equiv \!\! & \!\! H U_{9,6} \equiv
m^2 (m^2 + 1) (s - 1) S \beta \{H s S^2 \beta - s (s S^2 + 1) \eta_1 - (s^2 S^2 + s^2 + 1 ) \eta_2 \} \iota \mod r_0,\\
H^2 U_{4,2}\!\! & \!\! \equiv \!\! & \!\! H^2 U_{8,6} \\
\!\! & \!\! \equiv \!\! & \!\!
m (s - 1) s S \{H^2 m^3 \alpha \beta + H (m^2 + 1) (m^2 s + s + 1) \beta \eta_2 - (m^2 + 1)^2 (s^2 - 1) \eta_2 (\eta_1 + \eta_2)\} \iota\\
&& \mod r_0,\\
H U_{5,2}\!\! & \!\! \equiv \!\! & \!\! H U_{7,6} \equiv -m^2 (m^2 + 1) (s - 1) s S \beta \eta_2 \iota \mod r_0,\\
H U_{7,2}\!\! & \!\! \equiv \!\! & \!\! H U_{5,6} \equiv
m^2 (m^2 + 1) (s - 1) s S \beta (H S^2 \beta - (s S^2 + 1) \eta_1 - (s S^2 - 1) \eta_2) \iota \mod r_1,\\
H^2 U_{3,4}\!\! & \!\! \equiv \!\! & \!\! H^2 U_{9,4} \equiv
-m^2 (m^2 + 1) (s - 1)^2 s (s + 1) \eta_1 \eta_2 \iota \mod r_0,\\
H^2 U_{4,4}\!\! & \!\! = \!\! & \!\! H^2 U_{8,4}\\
\!\! & \!\! \equiv \!\! & \!\!
m \{H^2 m^2 (s^2 - s + 1) S^2 \beta^2 + (m^2 + 1)^2 (s - 1)^2 s \eta_2 (-H S^2 \beta + (s S^2 + 1) \eta_1 + s S^2 \eta_2)\} \iota \\
&& \mod r_1,\\
H^2 U_{5,4}\!\! & \!\! \equiv\!\! & \!\! H^2 U_{7,4}\\
\!\! & \!\! \equiv \!\! & \!\!
-(m^2 + 1) (s - 1) s \{(s - 1) \eta_2 (m^3 H \alpha + (m^2 + 1) \eta_2) + m^2 S^2 H \beta (H \beta - (s + 1) (\eta_1 + \eta_2))\} \iota \\
&& \mod r_0,\\
H^2 U_{6,4}\!\! & \!\! \equiv \!\! & \!\!
-2 m s (H m S \beta - (m^2 + 1) (s - 1) \eta_2) (H m S \beta + (m^2 + 1) (s - 1) \eta_2) \iota \mod r_0,
\end{eqnarray*}
where we put
$\iota = m^2 s^2 (s + 1) S (s S^2 + 1)^3 \alpha^3 \beta$, and the other $U_{i,j}$'s are $0$.
On the other hand, with the aid of Mathematica,
\begin{eqnarray*}
\det \Phi(c-1) \!\!&\!\! = \!\!&\!\!
\det \left( t^{2n+1}\rho_m(x b)-
\left(
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right) \right)\\
\!\!&\!\! = \!\!&\!\!
\frac{m S H \beta + m S H t^2 T^4 \beta - (m^2 + 1) (s - 1) t T^2 \eta_2}{m S H \beta}
-\frac{(S^2-1) t T^2}{m S (s S^2+1) H \alpha \beta}r_1\\
\!\!&\!\! = \!\!&\!\!
\frac{m S H \beta + m S H t^2 T^4 \beta - (m^2 + 1) (s - 1) t T^2 \eta_2}{m S H \beta} .
\end{eqnarray*}
Consequently, we have
\begin{eqnarray}
\Delta_{K_n, \rho_m}(t) = \frac{\sum_{i,j} V_{i,j} t^i T^j}{H m^2 S t^6 (s - t^2) (s t^2 - 1) \beta},
\end{eqnarray}
where
\begin{eqnarray*}
V_{0, 0} \!\! & \!\!= \!\! & \!\! V_{4, 0} = V_{6, 0} = V_{4, 4} = V_ {6, 4} = V_{10, 4} = -H m^2 s S \beta,\\
V_{2, 0} \!\! & \!\!= \!\! & \!\! V_{8, 4} = H m^2 (s^2 + 1) S \beta,\\
V_{3, 0} \!\! & \!\!= \!\! & \!\! V_{7, 4} = m (m^2 + 1) s S \{(s^2 - 1)( \eta_1 + \eta_2) - H s \beta\},\\
V_{5, 0} \!\! & \!\!= \!\! & \!\! V_{5, 4} = H m (m^2 + 1) s S \beta,\\
V_{2, 2} \!\! & \!\!= \!\! & \!\! V_{8, 2} = m^2 s (s^2 - 1) \eta_1,\\
V_{3, 2} \!\! & \!\!= \!\! & \!\! V_{7, 2} = m (m^2 + 1) (s - 1) s \{(s + 1) \eta_1 + \eta_2 \},\\
V_{4, 2} \!\! & \!\!= \!\! & \!\! V_{6, 2} = (s - 1) s \{(m^2 + 1) \eta_2 + H m^3 \alpha\},\\
V_{5, 2} \!\! & \!\!= \!\! & \!\! -2 m (m^2 + 1) (s - 1) s \eta_2,
\end{eqnarray*}
and the other $V_{i,j}$'s are $0$.
With the aid of Mathematica, the difference between the right hand side of (3) and the formula in Proposition 3.2 is equal to
\begin{eqnarray*}
\frac{s \zeta_1 + t \zeta_2 - 2 t^2 \zeta_1 + t^3 \zeta_2 + s t^4 \zeta_1}{H m^2 S t^3 (s + 1) (s - t^2) (s t^2 - 1) \beta} T^2,
\end{eqnarray*}
where
\begin{eqnarray*}
\zeta_1 \!\! & \!\!= \!\! & \!\! m (m^2 + 1) s (s + 1) (H S^2 \beta - s (S^2 - 1) \eta_1 - (s S^2 - 1) \eta_2), \\
\zeta_2 \!\! & \!\!= \!\! & \!\! H m^2 s (m \alpha - m s^2 \alpha + s \beta + S^2 \beta) - (s^2 - 1) (m^2 \eta_1 + m^2 s^3 \eta_1 + s \eta_2 + m^2 s \eta_2).
\end{eqnarray*}
Note that $\zeta_1 = 0$ by the definitions of $H$, $\eta_1$, and $\eta_2$, and that
\begin{eqnarray*}
\zeta_2 =
m \{(m^2 (s^2 - s + 1) - s) (s^3 S^2 + 1) - H s (s - 1)\} r_0 = 0.
\end{eqnarray*}
This completes the proof of Proposition 3.2.
\section*{References\markboth
{References}{References}}\list
{[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}}
\def\hskip .11em plus .33em minus .07em{\hskip .11em plus .33em minus .07em}
\sloppy
\sfcode`\.=1000\relax}
\let\endthebibliography=\endlist
\def\ ^<\llap{$_\sim$}\ {\ ^<\llap{$_\sim$}\ }
\def\ ^>\llap{$_\sim$}\ {\ ^>\llap{$_\sim$}\ }
\def\sqrt 2{\sqrt 2}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\gamma^{\mu}{\gamma^{\mu}}
\def\gamma_{\mu}{\gamma_{\mu}}
\def{1-\gamma_5\over 2}{{1-\gamma_5\over 2}}
\def{1+\gamma_5\over 2}{{1+\gamma_5\over 2}}
\def\sin^2\theta_W{\sin^2\theta_W}
\def\alpha_{EM}{\alpha_{EM}}
\defM_{\tilde{u} L}^2{M_{\tilde{u} L}^2}
\defM_{\tilde{u} R}^2{M_{\tilde{u} R}^2}
\defM_{\tilde{d} L}^2{M_{\tilde{d} L}^2}
\defM_{\tilde{d} R}^2{M_{\tilde{d} R}^2}
\defM_{z}^2{M_{z}^2}
\def\cos 2\beta{\cos 2\beta}
\defA_u{A_u}
\defA_d{A_d}
\def\cot \beta{\cot \beta}
\def\v#1{v_#1}
\def\tan\beta{\tan\beta}
\def$e^+e^-${$e^+e^-$}
\def$K^0$-$\bar{K^0}${$K^0$-$\bar{K^0}$}
\def\omega_i{\omega_i}
\def\chi_j{\chi_j}
\defW_\mu{W_\mu}
\defW_\nu{W_\nu}
\def\m#1{{\tilde m}_#1}
\defm_H{m_H}
\def\mw#1{{\tilde m}_{\omega #1}}
\def\mx#1{{\tilde m}_{\chi^{0}_#1}}
\def\mc#1{{\tilde m}_{\chi^{+}_#1}}
\def{\tilde m}_{\omega i}{{\tilde m}_{\omega i}}
\def{\tilde m}_{\chi^{0}_i}{{\tilde m}_{\chi^{0}_i}}
\def{\tilde m}_{\chi^{+}_i}{{\tilde m}_{\chi^{+}_i}}
\defM_z{M_z}
\def\sin\theta_W{\sin\theta_W}
\def\cos\theta_W{\cos\theta_W}
\def\cos\beta{\cos\beta}
\def\sin\beta{\sin\beta}
\defr_{\omega i}{r_{\omega i}}
\defr_{\chi j}{r_{\chi j}}
\defr_f'{r_f'}
\defK_{ik}{K_{ik}}
\defF_{2}(q^2){F_{2}(q^2)}
\begin{titlepage}
\ \
\vskip 0.5 true cm
\begin{center}
{\large {\bf The Chromoelectric and Purely Gluonic Operator Contributions to the
Neutron Electric Dipole Moment }} \\
{\large {\bf in $N=1$ Supergravity }}
\vskip 0.5 true cm
\vspace{2cm}
\renewcommand{\thefootnote}
{\fnsymbol{footnote}}
Tarek Ibrahim and Pran Nath
\\
\vskip 0.5 true cm
\it Department of Physics, Northeastern University \\
\it Boston, MA 02115, USA \\
\end{center}
\vskip 4.0 true cm
\centerline{\bf Abstract}
\medskip
A complete one loop analysis of the chromoelectric dipole
contribution to the electric dipole moment (edm) of the quarks and of the
neutron in N=1 supergravity including the gluino, chargino and neutralino
exchange contributions and exhibiting the dependence on the two
CP violating phases allowed in the soft SUSY breaking
sector of minimal supergravity
models is given. It is found that in significant parts
of the supergravity parameter space the chromoelectric
dipole contribution to the neutron edm is
comparable to the contribution from the electric dipole term and
can exceed the contribution of the electric dipole term in certain
regions of the parameter space.
An analysis of the contribution
of Weinberg's purely gluonic CP violating dimension six operator within
supergravity
unification is also given. It is found that this contribution can also be
comparable to the contribution from the electric dipole term in certain
regions of the supergravity parameter space.
\end{titlepage}
\newpage
\section{Introduction}
The electric dipole moment (edm) of fermions is one of the important windows to
new physics beyond the Standard Model (SM). In the SM the edm for the fundamental
fermions arising from the
Kobayashi-Maskawa CP violating phase is much smaller\cite{shaba} than the
current experimental limit of $1.1\times 10^{-25}$ecm\cite{exp}
and beyond the reach
of experiment in the foreseeable future. In supersymmetric unified models new
sources of CP violation arise from the complex phases of the soft SUSY
breaking parameters which contribute to the edm of the quarks and of the
leptons[3-9].
For the case of the quarks and the neutron there are also the color dipole
operator
and the CP violating purely gluonic dimension six operator\cite{wein,dai}
which contribute.
However,
with the exception of the work of ref.\cite{aln} virtually all previous
analyses of the neutron edm have been done neglecting the contributions of
these additional operators.
The reason for this neglect is the presumption\cite{kizu,garisto,falk} that
their contribution to the neutron edm is small. In the analysis of this
paper we show that the contributions of the color dipole operator and
of the purely gluonic operator are not necessarily small and
over a significant
part of the supergravity parameter space their
contribution to the neutron edm is comparable
to the contribution from the electric dipole operator and can even
exceed it in certain regions of the parameter space.
In this Letter we first derive the full one loop contribution to the color
dipole operator arising from the gluino, the chargino and the
neutralino exchange
contributions exhibiting the dependence on the two CP violating
phases allowed in the soft SUSY breaking sector of minimal
supergravity models. To our knowledge this is the first complete one loop
analysis of this operator.
We also recompute the purely gluonic dimension
six operator to incorporate the dependence on the two CP violating
phases. We then give a numerical analysis of
the relative strength of the color dipole contribution
to the neutron edm relative to the contribution from the electric
dipole term using the framework of supergravity unification under the
constraint of radiative breaking of the
electro-weak symmetry.
Our analysis shows that the
color dipole contribution relative to the electric dipole contribution
can vary greatly over the parameter space of the model. We find
that there are significant regions of the parameter space where
the color dipole contribution can be comparable to the contribution
from the electric
dipole term and in some regions of the parameter space the color
dipole contribution can even exceed the contribution from the electric
dipole term. A similar analysis is also carried out for the
purely gluonic dimension six operator. Here again one finds that there
exist regions of the parameter space where the contribution of the
purely gluonic dimension six operator to the neutron edm
can be comparable to, and may even exceed,
the contribution from the electric dipole term. Thus one is
not justified in discarding the effects of the color dipole
operator and of the gluonic dimension six operator in the neutron
edm computation.
The outline of the Letter is as follows: In Sec.2 we give
the complete one loop calculation
including the gluino, the chargino and the neutralino exchange
contribution to
the color dipole operator and discuss its contribution to the
electric dipole moment of the neutron including its dependence on the
two CP violating phases in minimal supergravity.
In Sec.3 we recompute
the dimension six gluonic operator to include the effects of the two
CP violating phases.
In Sec.4 we give the renormalization group analysis in
minimal supergravity of the relative numerical strengths of
these contributions and show that they can be comparable to the
electric dipole
contribution in significant regions of the parameter space.
\section{Analysis of Chromoelectric Dipole Contribution}
The parameters that enter in minimal supergravity grand unification with
radiative breaking of the electro-weak symmetry can be taken to be
$m_0, m_{1/2}, A_0, \tan\beta$ and phase($\mu$), where
$m_0$ is the universal scalar mass, $m_{1/2}$ is the universal gaugino mass,
$A_0$ is the universal trilinear coupling,
$\tan\beta=\langle H_2\rangle/\langle H_1\rangle$, where $H_2$ gives mass to the up quark and $H_1$ gives
mass to the down quark, and $\mu$ is the Higgs mixing
parameter\cite{cham,applied}.
As noted in Sec. 1 only two phases in the soft SUSY breaking sector
have a physical meaning in minimal
supergravity, and we choose them
to be the phase $\alpha_{A0}$ of $A_0$
and the phase $\theta_{\mu0}$ of $\mu_0$.
The quark chromoelectric dipole moment is defined to be the factor $\tilde d^c$
in the effective operator:
\begin{equation}
{\cal L}_I=-\frac{i}{2}\tilde d^c \bar{q} \sigma_{\mu\nu} \gamma_5 T^{a} q
G^{\mu\nu a}
\end{equation}
where $T^a$ are the generators of $SU(3)$.
In the following we give an analysis of the one loop contributions
to $\tilde d^c$ from
the chargino, the neutralino and the gluino exchange diagrams
shown in Figs. 1a and 1b.
\subsection{Gluino Contribution}
The quark-squark-gluino vertex can be derived using the
interaction\cite{applied}
\begin{equation}
-{\cal L}_{q-\tilde{q}-\tilde{g}}=\sqrt 2 g_s T_{jk}^a \sum_{i=u,d}
(-\bar{q}_{i}^j {1-\gamma_5\over 2} \tilde{g}_a \tilde{q}_{iR}^k +
\bar{q}_{i}^j {1+\gamma_5\over 2} \tilde{g}_a \tilde{q}_{iL}^k) + H.c. ,
\end{equation}
where $a=1-8$ are the gluino color indices, and $j,k=1-3$ are the quark and
squark color indices. The complex phases enter via the
squark $(mass)^2$ matrix $ M_{\tilde{q}}^2$ which can be diagonalized by the
transformation
\begin{equation}
D_{q}^\dagger M_{\tilde{q}}^2 D_q={\rm diag}(M_{\tilde{q}1}^2,
M_{\tilde{q}2}^2)
\end{equation}
where
\begin{equation}
\tilde{q}_L=D_{q11} \tilde{q}_1 +D_{q12} \tilde{q}_2
\end{equation}
\begin{equation}
\tilde{q}_R=D_{q21} \tilde{q}_1 +D_{q22} \tilde{q}_2.
\end{equation}
and where $\tilde{q}_1$ and $\tilde{q}_2$ are the mass eigenstates.
Writing ${\cal L}$ in terms of $\tilde{q}_1$
and $\tilde{q}_2$ and integrating
out the gluino and squark fields and by using the identities
\begin{equation}
T^a_{ij}T^a_{kl}=\frac{1}{2}[\delta_{il}\delta_{kj}-\frac{1}{3}
\delta_{ij}\delta_{kl}]
\end{equation}
and
\begin{equation}
f^{abc} T^b T^c= \frac{3}{2}i T^a
\end{equation}
one can obtain the gluino exchange contribution to $\tilde d^c$. We find
\begin{equation}
\tilde d_{q-gluino}^c=\frac{g_s\alpha_s}{4\pi} \sum_{k=1}^{2}
{\rm Im}(\Gamma_{q}^{1k}) \frac{m_{\tilde{g}}}{M_{\tilde{q}_k}^2}
{\rm C}(\frac{m_{\tilde{g}}^2}{M_{\tilde{q}_k}^2}),
\end{equation}
where $\Gamma_{q}^{1k}=D_{q2k} D_{q1k}^*$
and $m_{\tilde{g}}$ is the gluino mass and
\begin{equation}
C(r)=\frac{1}{6(r-1)^2}\Big(10r-26+\frac{2r\ln r}{1-r}-\frac{18\ln r}{1-r}\Big).
\end{equation}
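For concreteness, Eqs. (8) and (9) translate directly into code. The
following Python sketch is ours, not part of the original analysis; the
mixing matrix $D_q$ and the masses passed in are placeholder inputs.
\begin{verbatim}
import numpy as np

def C(r):
    # Loop function of Eq. (9); r = (m_gluino / M_squark)^2.
    # Not defined at r = 1 (removable singularity).
    return (10.0*r - 26.0 + 2.0*r*np.log(r)/(1.0 - r)
            - 18.0*np.log(r)/(1.0 - r)) / (6.0*(r - 1.0)**2)

def dc_gluino(g_s, alpha_s, D_q, m_gluino, M_sq):
    # Eq. (8): sum over the squark mass eigenstates k = 1, 2,
    # with Gamma_q^{1k} = D_{q2k} D_{q1k}^*; D_q is the 2x2
    # diagonalizing matrix of Eqs. (3)-(5).
    total = 0.0
    for k in range(2):
        Gamma = D_q[1, k]*np.conj(D_q[0, k])
        r = (m_gluino/M_sq[k])**2
        total += np.imag(Gamma)*m_gluino/M_sq[k]**2*C(r)
    return g_s*alpha_s/(4.0*np.pi)*total
\end{verbatim}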
\subsection{Neutralino Contribution}
Here the CP violating phases enter via
the squark $(mass)^2$ matrix, which contains the phases of $A_q$ and
$\mu$, and via the neutralino mass matrix given by
\begin{equation}
M_{\chi^0}=\left(\matrix{\m1 & 0 & -M_z\sin\theta_W\cos\beta & M_z\sin\theta_W\sin\beta \cr
0 & \m2 & M_z\cos\theta_W\cos\beta & -M_z\cos\theta_W\sin\beta \cr
-M_z\sin\theta_W\cos\beta & M_z\cos\theta_W\cos\beta & 0 & -\mu \cr
M_z\sin\theta_W\sin\beta & -M_z\cos\theta_W\sin\beta & -\mu & 0}
\right),
\end{equation}
which carries the phase of $\mu$.
The matrix $M_{\chi^0}$ is a complex, non-Hermitian, symmetric matrix,
which can be diagonalized by a unitary transformation such that
\begin{equation}
X^T M_{\chi^0} X={\rm diag}(\mx1, \mx2, \mx3, \mx4).
\end{equation}
The quark-squark-neutralino vertex can be derived from the
interaction\cite{applied}
\begin{eqnarray}
-{\cal L}_{q-\tilde{q}-\tilde{\chi}^{0}} & & = {\sum_{j=1}^{4}
\sqrt 2 \bar{u}[(\alpha_{uj}
D_{u11}-\gamma_{uj} D_{u21}){1-\gamma_5\over 2}}\nonumber\\
& &
{+(\beta_{uj} D_{u11}-\delta_{uj} D_{u21}){1+\gamma_5\over 2}]
\tilde{\chi}_j^{0} \tilde{u}_1}
{+\sqrt 2 \bar{u}[(\alpha_{uj}
D_{u12}-\gamma_{uj} D_{u22}){1-\gamma_5\over 2}}\nonumber\\
& &
{+(\beta_{uj} D_{u12}-\delta_{uj} D_{u22}){1+\gamma_5\over 2}]
\tilde{\chi}_j^{0} \tilde{u}_2 +(u\rightarrow d)+H.c.}
\end{eqnarray}
where $\alpha$, $\beta$, $\gamma$ and $\delta$ are given by
\begin{equation}
\alpha_{u(d)j}=\frac{gm_{u(d)}X_{4(3),j}}{2m_W \sin{\beta}(\cos{\beta})}
\end{equation}
\begin{equation}
\beta_{u(d)j}=eQ_{u(d)}X_{1j}^{'*} +\frac{g}{\cos{\theta_W}}
X_{2j}^{'*}(T_{3u(d)}-Q_{u(d)}\sin^{2}{\theta_W})
\end{equation}
\begin{equation}
\gamma_{u(d)j}=eQ_{u(d)}X_{1j}^{'} -\frac{gQ_{u(d)}\sin^{2}{\theta_W}}
{\cos{\theta_W}} X_{2j}^{'}
\end{equation}
\begin{equation}
\delta_{u(d)j}=\frac{gm_{u(d)}X_{4(3),j}^*}{2m_W \sin{\beta}(\cos{\beta})}
\end{equation}
and where
\begin{equation}
X_{1j}^{'}=X_{1j} \cos{\theta_W}+X_{2j} \sin{\theta_W}
\end{equation}
\begin{equation}
X_{2j}^{'}=-X_{1j} \sin{\theta_W}+X_{2j} \cos{\theta_W}.
\end{equation}
The one loop analysis using Eq.(12) gives for $\tilde d^c$
\begin{equation}
\tilde d_{q-neutralino}^c=\frac{g_s g^2}{16\pi^2}\sum_{k=1}^{2}\sum_{i=1}^{4}
{\rm Im}(\eta_{qik})
\frac{{\tilde m}_{\chi^{0}_i}}{M_{\tilde{q}k}^2}
{\rm B}(\frac{{\tilde m}_{\chi^{0}_i}^2}{M_{\tilde{q}k}^2}),
\end{equation}
where
\begin{eqnarray}
\eta_{qik} & &={[-\sqrt 2 \{\tan\theta_W (Q_q-T_{3q}) X_{1i}
+T_{3q} X_{2i}\}D_{q1k}^{*}+
\kappa_{q} X_{bi} D_{q2k}^{*}]}\nonumber\\
& &
\hspace{4cm} {(\sqrt 2 \tan\theta_W Q_q X_{1i} D_{q2k}
-\kappa_{q} X_{bi} D_{q1k})}.
\end{eqnarray}
Here
\begin{eqnarray}
\kappa_u=\frac{m_u}{\sqrt 2 m_W \sin\beta},
~~\kappa_{d}=\frac{m_{d}}{\sqrt 2 m_W \cos\beta}
\end{eqnarray}
where $b=3(4)$ for $T_{3q}=-\frac{1}{2}\,(\frac{1}{2})$ and
\begin{equation}
B(r)=\frac{1}{2(r-1)^2}\Big(1+r+\frac{2r\ln r}{1-r}\Big).
\end{equation}
\subsection{Chargino Contribution}
Here the CP violating phases enter via the squark (mass)$^2$ matrix
and via the chargino mass matrix given by
\begin{equation}
M_C=\left(\matrix{\m2 & \sqrt 2 m_W \sin\beta \cr
\sqrt 2 m_W \cos\beta & \mu }
\right)
\end{equation}
which involves the phase of $\mu$. The chargino matrix
can be diagonalized by the unitary transformation:
\begin{equation}
U^* M_C V^{-1}={\rm diag}(\mc1, \mc2)
\end{equation}
The quark-squark-chargino vertex can be derived from the
interaction\cite{applied}
\begin{eqnarray}
-{\cal L}_{q-\tilde{q}-\tilde{\chi}^{+}} & & =
{g \bar{u}[(U_{11}
D_{d11}-\kappa_{d}U_{12} D_{d21}){1+\gamma_5\over 2}}\nonumber\\
& &
{-(\kappa_{u}V_{12}^{*} D_{d11}){1-\gamma_5\over 2}]
\tilde{\chi}_1^{+} \tilde{d}_1}
{+g \bar{u}[(U_{21}
D_{d11}-\kappa_{d}U_{22} D_{d21}){1+\gamma_5\over 2}}\nonumber\\
& &
{-(\kappa_{u}V_{22}^{*} D_{d11}){1-\gamma_5\over 2}]
\tilde{\chi}_2^{+} \tilde{d}_1}
{+g \bar{u}[(U_{11}
D_{d12}-\kappa_{d}U_{12} D_{d22}){1+\gamma_5\over 2}}\nonumber\\
& &
{-(\kappa_{u}V_{12}^{*} D_{d12}){1-\gamma_5\over 2}]
\tilde{\chi}_1^{+} \tilde{d}_2}
{+g \bar{u}[(U_{21}
D_{d12}-\kappa_{d}U_{22} D_{d22}){1+\gamma_5\over 2}}\nonumber\\
& &
{-(\kappa_{u}V_{22}^{*} D_{d12}){1-\gamma_5\over 2}]
\tilde{\chi}_2^{+} \tilde{d}_2}\nonumber\\
& &
{+(u\longleftrightarrow d,
U\longleftrightarrow V, \tilde{\chi_i}^{+}\rightarrow
\tilde{\chi_i}^{c})+H.c.},
\end{eqnarray}
and for $\tilde d^c$ one gets for the up and down flavors
\begin{equation}
\tilde d_{u-chargino}^c=\frac{-g^2 g_s}{16\pi^2}\sum_{k=1}^{2}\sum_{i=1}^{2}
{\rm Im}(\Gamma_{uik})
\frac{{\tilde m}_{\chi^{+}_i}}{M_{\tilde{d}k}^2}
{\rm B}(\frac{{\tilde m}_{\chi^{+}_i}^2}{M_{\tilde{d}k}^2}),
\end{equation}
\begin{equation}
\tilde d_{d-chargino}^c=\frac{-g^2 g_s}{16\pi^2}\sum_{k=1}^{2}\sum_{i=1}^{2}
{\rm Im}(\Gamma_{dik})
\frac{{\tilde m}_{\chi^{+}_i}}{M_{\tilde{u}k}^2}
{\rm B}(\frac{{\tilde m}_{\chi^{+}_i}^2}{M_{\tilde{u}k}^2}),
\end{equation}
where
\begin{equation}
\Gamma_{uik}=\kappa_u V_{i2}^* D_{d1k} (U_{i1}^* D_{d1k}^{*}-
\kappa_d U_{i2}^* D_{d2k}^{*})
\end{equation}
\begin{equation}
\Gamma_{dik}=\kappa_d U_{i2}^* D_{u1k} (V_{i1}^* D_{u1k}^{*}-
\kappa_u V_{i2}^* D_{u2k}^{*}).
\end{equation}
The contribution to the edm of the quarks can be computed using
naive dimensional analysis\cite{manohar}, which gives
\begin{equation}
d^c_q=\frac{e}{4\pi} \tilde d^c_{q} \eta^c
\end{equation}
where $\eta^c$ is the renormalization group evolution of the operator
of Eq.(1) from the electroweak scale down to hadronic scale\cite{brat,aln}.
For the neutron electric dipole moment $d_n$ we use the naive quark model
$d_n=(4d_d-d_u)/3$.
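Since Eq. (30) and the naive quark model are simple algebraic maps, they
can be coded directly; the sketch below is ours and assumes
$e\simeq 0.303$ in natural units together with the value
$\eta^c\simeq 3.3$ quoted in Sec. 3.
\begin{verbatim}
from math import pi

E_CHARGE = 0.3028   # electromagnetic coupling (assumed value)
ETA_C = 3.3         # QCD evolution factor quoted in Sec. 3

def d_quark(d_tilde_c):
    # Eq. (30): d^c_q = (e / 4 pi) * dtilde^c_q * eta^c.
    return E_CHARGE/(4.0*pi)*d_tilde_c*ETA_C

def d_neutron(d_d, d_u):
    # Naive quark model: d_n = (4 d_d - d_u) / 3.
    return (4.0*d_d - d_u)/3.0
\end{verbatim}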
\section{CP Violating Purely Gluonic Dimension 6 operator}
The gluonic dipole moment $d^G$ is defined to be the factor in the
effective operator
\begin{equation}
{\cal L}_I=-\frac{1}{6}d^G f_{\alpha\beta\gamma}
G_{\alpha\mu\rho}G_{\beta\nu}^{\rho}G_{\gamma\lambda\sigma}
\epsilon^{\mu\nu\lambda\sigma}
\end{equation}
where $G_{\alpha\mu\nu}$ is the
gluon field strength tensor, $f_{\alpha\beta\gamma}$
are the Gell-Mann coefficients, and $\epsilon^{\mu\nu\lambda\sigma}$
is the totally antisymmetric tensor with $\epsilon^{0123}=+1$.
Dai et al \cite{dai}
have calculated $d^G$
considering the top quark-squark loop with a gluino exchange
in terms of the complex phase $\phi$ of the gluino mass while the
squark (mass)$^2$ matrix in their analysis was considered
to be real. In our case the squark (mass)$^2$ matrix is complex and
carries the phases of $A_q$ and of $\mu$. We have recalculated $d^G$
for our case and we find
\begin{equation}
d^G=-3\alpha_s m_t (\frac{g_s}{4\pi})^3
{\rm Im} (\Gamma_{t}^{12})\frac{z_1-z_2}{m_{\tilde{g}}^3}
{\rm H}(z_1,z_2,z_t)
\end{equation}
where
\begin{equation}
z_{\alpha}=(\frac{M_{\tilde{t}\alpha}}{m_{\tilde{g}}})^2,
z_t=(\frac{m_t}{m_{\tilde{g}}})^2
\end{equation}
and
\begin{equation}
\Gamma_{t}^{12}=D_{t22} D_{t12}^*
\end{equation}
where $D_t$ is the diagonalizing matrix for the stop $(mass)^2$ matrix,
and the function H is the same as in \cite{dai}.
The contribution to $d_n$ from $d^G$ can be estimated by the naive dimensional
analysis\cite{manohar} which gives
\begin{equation}
d_{n}^G=\frac{eM}{4\pi} d^G \eta^G
\end{equation}
where $M$ is the chiral symmetry breaking scale with the numerical value
1.19 GeV, and $\eta^G$ is the
renormalization group evolution of the operator of Eq.(31) from the electroweak scale
down to the hadronic scale. $\eta^c$ and $\eta^G$ have
been estimated
to be $\sim 3.3$\cite{brat,aln}.
\section {RG Analysis and Results}
We discuss now the relative size of the three contributions to the
neutron edm, i.e., from the electric dipole operator, from the color
dipole operator and from the purely gluonic
operator. To study their relative contributions we consider
a given point in the supergravity parameter space characterized by
the set of six quantities at the GUT scale:
$m_0, m_{\frac{1}{2}}, A_0, \tan\beta, \theta_{\mu0}$ and
$\alpha_{A0}$. In the numerical
analysis we evolve the gauge coupling constants, the Yukawa couplings,
magnitudes of the soft SUSY breaking parameters, $\mu$ and the
CP violating phases from the GUT scale down to the Z boson scale.
We use one-loop renormalization group equations
(RGEs) for the soft SUSY breaking parameters
and two-loop RGEs for the Yukawa and
gauge couplings. Using the data gotten from the RG analysis at the
scale $M_Z$ we compute the contributions to the quark edm from the
electric dipole part ($d^E_q$), from the color dipole part
($d^C_q$), and from the purely gluonic part ($d^G_n$).
These are further evolved to the hadronic scale
by using renormalization group analysis as discussed in Secs. 2 and 3.
\begin{center} \begin{tabular}{|c|c|c|c|c|c|}
\multicolumn{6}{c}{Table~1: $m_{\tilde{g}}
=500$ GeV, $m_0=2000$ GeV, $|A_0|=1.0$} \\
\hline
case &$\tan\beta$ & phases (rad) & $d_n^E(10^{-26} e cm)$ & $d_n^C(10^{-26} e cm)$
&$d_n^G(10^{-26} e cm)$\\
\hline
(i) &2 & $\theta_{\mu_0}=0.2$, $\alpha_{A_0}=-0.5$ & 0.124 & 3.049 & -1.168 \\
\hline
(ii) & 2 & $\theta_{\mu_0}=0.2$, $\alpha_{A_0}=0.5$ & -8.46 & 12.84 & 17.315 \\
\hline
(iii) & 4 & $\theta_{\mu_0}=0.2$, $\alpha_{A_0}=-0.5$ & -4.33 & 0.764 & -5.13 \\
\hline
(iv) & 4 & $\theta_{\mu_0}=0.2$, $\alpha_{A_0}=0.5$ & -11.74 & -12.65 & 7.28 \\
\hline
\end{tabular}
\end{center}
\noindent
To exhibit the importance of the color dipole operator and
of the purely gluonic operator
we display the relative sizes of $d_n^E, d_n^C$ and of $d_n^G$ for
a few illustrative examples in Table 1.
One can understand the
smallness of $d_n^E$ for case(i) in the following way. The main contributions
to $d_n^E$ arise from the chargino and from the gluino exchange. The chargino
exchange gives a negative contribution while the gluino exchange gives
a positive contribution and the smallness of $d_n^E$ is due to a
cancellation between these two.
For the color dipole term $d_n^C$ there is also a cancellation
but this time the cancellation is only partial and it occurs
between the d-quark and the u-quark contributions.
Because of a large cancellation for
$d_n^E$ and only a partial cancellation for $d_n^C$
one has dominance of $|d_n^C|$ over $|d_n^E|$ in this case.
One may contrast the result of case(i) with that of case(ii) where
the sign of $\alpha_{A_0}$ is switched. Here the gluino contribution
in $d_n^E$ switches sign and this time one has a reinforcement of the
chargino and the gluino contributions making $|d_n^E|$ much larger
than for case(i). Further, in the color dipole part the d-quark
contribution in the gluino exchange switches sign and there is a
reinforcement of the d-quark and the u-quark contributions making
$|d_n^C|$ larger than for case(i). We see then that in this case
$|d_n^E|$ and $|d_n^C|$ are comparable. One may note that a very
large change occurs for the purely gluonic term in going from case(i) to
case(ii). To understand the large shift in the value of $d_n^G$
we display explicitly the imaginary part of Eq. (34)
\begin{equation}
{\rm Im}(\Gamma_{t}^{12})=\frac{-m_t}{(M_{\tilde{t}1}^2-M_{\tilde{t}2}^2)}
(m_0 |A_t| \sin \alpha_{A} + |\mu| \sin \theta_{\mu} \cot\beta),
\end{equation}
where $\theta_{\mu}$ and $\alpha_{A}$ are the values of
$\theta_{\mu_0}$ and of $\alpha_{A_0}$ at the electro-weak scale. From
Eq. (36) we see that
the magnitude of ${\rm Im}(\Gamma_{t}^{12})$
depends on the relative sign and the
magnitude of $\theta_{\mu}$ and of $\alpha_{A}$. Thus one has a cancellation
between the $A_t$ and the $\mu$ terms when
$\theta_{\mu}$ and $\alpha_{A}$ have opposite signs as in case(i) and
a reinforcement between the $A_t$ term and the $\mu$ term when
$\theta_{\mu}$ and $\alpha_{A}$ have the same sign
as in case(ii). Thus one can qualitatively understand the largeness
of $d_n^G$ in case(ii) relative to case(i). The very large reduction
in $d_n^C$ for case(iii) relative to case (ii)
occurs because of a reduction in the down quark
contribution, which is the dominant term in $d_n$,
due to a switch in the $\alpha_{A_0}$ sign and a change in
the value of tan$\beta$. In going from case(iii) to case(iv)
the further switch in the sign of $\alpha_{A_0}$ once again increases the
down quark contribution making $d_n^C$ the largest in magnitude.
It is interesting to note that each of the four cases of Table 1 leads
to a distinct pattern of hierarchy among the three contributions.
Thus one has\\
\noindent
(i) $ |d_n^C|> |d_n^G| >|d_n^E|$,~~
(ii) $|d_n^G|> |d_n^C| >|d_n^E|$ \\
(iii) $|d_n^G|> |d_n^E| >|d_n^C|$,~~
(iv)$ |d_n^C|> |d_n^E| >|d_n^G|$\\
\noindent
The analysis clearly shows that any one of the three contributions
can dominate $d_n$ depending on the part of the parameter space one is in.
We note also that $d_n^E$ for all the four cases in Table 1 is consistent
with experiment while the total edm which includes the
color and the purely gluonic part for cases (ii) and (iv) would be
outside the experimental bound.
In Fig. 2 we display the magnitudes of $d_n^E$, $d_n^C$ and $d_n^G$ as a
function of $m_{\tilde{g}}$ for a specific set of $m_0$, $A_0$, $\tan\beta$,
$\alpha_{A_0}$ and $ \theta_{\mu0}$ values. Here we find that
$|d_n^C|$ is comparable to $|d_n^E|$ over most of the $m_{\tilde{g}}$ region
and in fact exceeds it for values of $m_{\tilde{g}}$ below $\sim 800$ GeV.
Further $|d_n^G|$ also exceeds $|d_n^E|$ for values of $m_{\tilde{g}}$ below
$\sim 400$ GeV in this case. The broad peak in $d_n^E$
arises from a destructive interference between the gluino
and the chargino components of $d_n^E$ which leads to a relatively rapid
fall off of $d_n^E$ for values of $m_{\tilde g}$ below $\sim 500$
GeV.
There are
other regions where similar phenomena occur. To exhibit how commonly
$d_n^C$ is large, we give in Fig.(3a) a scatter plot of the
ratio $|d_n^C/d_n^E|$ for the ranges indicated in the Fig.(3a) caption.
We see that there exist significant regions of the
parameter space where the ratio $|d_n^C/d_n^E|\sim O(1)$
and hence $d_n^C$ is non-negligible in these regions.
A scatter plot of $|d_n^G/d_n^E|$ is given in Fig.(3b) and it
exhibits a similar phenomenon.
In conclusion we have given the first complete one loop analysis of the
chromoelectric contribution to the electric dipole moment of the quarks
and of the neutron exhibiting the dependence on the two arbitrary CP
violating phases allowed in minimal supergravity unification.
We find that the relative strength of the
chromoelectric contribution varies sharply depending on what part of the
supergravity parameter space one is in.
In significant parts of the parameter space the chromoelectric contribution
is comparable to the electric dipole contribution and can even exceed
it in certain regions of the parameter space. A similar conclusion also
holds for the contribution of Weinberg's
CP violating purely gluonic dimension
six operator. The analysis of the neutron edm exploring the full parameter
space of supergravity unified models is outside the scope of
this Letter and will be discussed elsewhere. We only state here
that the inclusion of all three components of the neutron edm, i.e., $d_n^E$,
$d_n^C$, and $d_n^G$, is essential in making reliable predictions of the
neutron edm. In fact, the full analysis shows there exist regions
of the supergravity parameter space where internal cancellations among
the three components lead to an acceptable value of the neutron
edm without either the use of an excessively heavy SUSY spectrum or an
excessive fine-tuning of phases.
These results have important implications for the
mechanisms needed to suppress the neutron edm in SUSY, for the effect of
CP violating phases on dark matter and on the analyses of baryon
asymmetry in the universe.
\section {Acknowledgements}
One of us (TI) wishes to thank Utpal Chattopadhyay for discussions and
help in the RG analysis of this work. This research was supported in part
by NSF grant number PHY-96020274.
\section{Figure Captions}
Fig. 1a: One loop diagram contributing to the color dipole operator
where the external gluon line ends on an exchanged squark line in
the loop. Squarks are represented by $\tilde q_k$ in the internal
lines. \\
Fig. 1b: One loop diagram contributing to the color dipole operator
where the external gluon line ends on an exchanged gluino line labelled
by $\tilde g$ in
the loop. \\
Fig. 2: Plot of $d_n^E$, $d_n^C$, and $d_n^G$ as a function of $m_{\tilde g}$
when $m_0$=2000 GeV, tan$\beta$=3.0, $|A_0|$=1.0, $\theta_{\mu_0}$=0.1
and $\alpha_{A_0}$=0.5.\\
Fig. 3a: Scatter plot of the ratio $|d_n^C/d_n^E|$ as a function of $m_0$
for the case $|A_0|$=1.0, $\tan\beta=2.0$ and the other parameters in the range
200 GeV$<m_{\tilde g}<600$ GeV
and $-\pi/5 <\theta_{\mu_0},\alpha_{A_0} <\pi/5$.\\
Fig. 3b: Scatter plot of the ratio $|d_n^G/d_n^E|$ as a function of $m_0$
for the same range of parameters as in Fig. 3a.\\
\section{Introduction}
The major goals of cancer genomics include identification of risk factors associated with the progression of cancer and prediction of disease outcomes. In conventional cancer studies, clinical factors such as age, gender, and tumor stage are routinely studied and used as prognostic factors. Recent advances in high-throughput technologies facilitate the generation of high-dimensional genomic data, which provide useful insights into the molecular pathways underlying cancer development. For example, in The Cancer Genome Atlas (TCGA), clinical and omics data, including copy number alteration, DNA methylation, mutation, and the expressions of mRNA, microRNA, and protein, were collected from more than 11 000 cancer patients across 33 tumor types. Also, in the Molecular Taxonomy of Breast Cancer International Consortium \citep{curtis2012genomic}, copy number alteration, mutation, and mRNA expression data were collected from about 2 000 breast cancer patients. Such massive omics data enable researchers to gain deeper understanding of the biological mechanisms involved in cancer progression. Many studies have shown that the integrative analysis of clinical and genomic data confers greater prognostic power than the analysis of clinical data alone \citep{li2006survival, shedden2008gene, bovelstad2009survival, fan2011building, zhao2015combining}.
Methods for integrative analysis of clinical and genomic data have been extensively investigated in recent decades. A straightforward integration strategy is to combine clinical and genomic data into a single data set, on which conventional analyses are performed. Some studies demonstrated that direct combination of clinical factors and gene expressions improves risk prediction over the use of either data type alone \citep{bovelstad2009survival, fan2011building, zhao2015combining}. Alternatively, one may take into account the difference in prognostic power of the data types through some weighting approach. \cite{gevaert2006predicting} developed a Bayesian network approach that builds separate models for clinical and microarray data and used a weighted approach to combine the model predictions. \cite{daemen2007integration} proposed a weighted kernel-based method to integrate clinical and microarray data for classification. Both studies demonstrated that models that account for the distinction between clinical and genomic data tend to yield better prediction accuracy than models that treat these data types equally. However, these methods do not consider
interaction effects between genomic
and clinical variables, that is,
modifications of the effects of genomic variables by clinical variables.
Integrative methods for multiple genomic data types have also been studied.
\cite{lanckriet2004statistical}, \cite{daemen2009kernel}, and \cite{seoane2014pathway} proposed weighted kernel-based approaches to integrate multiple heterogeneous data types. \cite{boulesteix2017ipf} and \cite{wong2019boost} proposed penalization regression methods on multiple data types while accounting for their differences in prognostic power. \cite{wang2013ibag} and \cite{zhu2016integrating}
incorporated prior knowledge of regulatory relationship
among different types of genomic variables for the regression of
disease outcomes on genomic variables.
These methods, though accounting for differences among different data types, do not allow for interaction effects. \cite{nevins2003towards} and \cite{pittman2004integrated} developed tree-based classification methods to evaluate the effects of clinical and genomic data on (binary) disease outcomes, allowing for potential interactions among multiple risk factors. However, the estimated model does not have simple interpretations, and the methods may not accommodate a large number of variables. In a recent study, \cite{li2020semiparametric} proposed a regularization method to select for gene-gene interaction effects on disease outcomes,
but the interactions between clinical and genomic data were not considered.
The effects of genomic features on cancer progression are often modified by clinical factors. For example, \cite{landi2008gene} demonstrated that the effects of some gene expressions on the risk of lung cancer mortality vary with tobacco consumption. Also, \cite{chen2017differentiated} and \cite{relli2018distinct} showed that the molecular mechanisms of carcinogenesis exhibit a high level of heterogeneity between two subtypes of non-small-cell lung carcinoma (NSCLC), and the same set of features can have distinct effects on disease outcome across different subtypes. As the effects of genomic features can vary across different clinical characteristics, it is highly desirable to incorporate interaction effects between clinical and genomic variables in regression analyses of disease outcomes on clinical and genomic variables.
A conventional approach to model interaction effects is to include pairwise product terms of predictors into the regression model. However, this approach may not be suitable for analyzing the interactions between clinical and genomic data. First, adding product terms may greatly expand the model complexity and aggravate the high-dimensionality issue. Second, the scales of (quantitative) clinical and genomic variables are generally incomparable, and modeling interaction effects using pairwise product terms may not be appropriate.
To address the aforementioned issues, we propose a single-index varying-coefficient model to accommodate potential interaction effects between clinical and genomic features. The single-index varying-coefficient model combines the varying-coefficient model \citep{hastie1993varying} and the single-index model \citep{hardle1993optimal}. It allows the effects of genomic features to vary flexibly with a single index, which is a linear combination of clinical features. This model avoids the curse of dimensionality by projecting the clinical features to an index, so the number of parameters only increases linearly with the number of features.
Also, to accommodate the difference in scales between clinical and genomic features, effects of genomic features are formulated
as nonparametric functions of the index.
We propose a penalized (sieve) maximum likelihood estimation method for variable selection and estimation. In particular,
we adopt a novel two-part penalty, which allows for separate
selection of genomic features with effects modified by clinical features and of genomic features with non-zero constant effect.
A coordinate-wise algorithm for the computation of the
penalized estimators is developed.
The proposed methods can be applied to common types of outcome variables, including continuous, binary, and censored outcomes.
The rest of this paper is organized as follows. We describe the model and estimation procedures in Section \ref{sec:model}. We assess the estimation performance of the proposed methods through simulation studies, and the results are summarized in Section \ref{sec:sim}. We demonstrate the applications of the proposed methods on two TCGA data sets in Section \ref{sec:rda}. Finally, we make some concluding remarks in Section \ref{sec:diss}. Computation details and additional numerical results are given in the Appendix.
\section{Model and estimation}
\label{sec:model}
\subsection{Model, data, and likelihood}
Let $Y$ be an outcome of interest, $\boldsymbol{U}$ and $\boldsymbol{Z}$ be two sets of low-dimensional predictors that may overlap, and $\boldsymbol{X}\equiv(X_{0},\ldots,X_{p})^{\mathrm{T}}$ be a set of potentially high-dimensional predictors with $X_{0}=1$.
We are interested in the effect of $(\boldsymbol{X},\boldsymbol{Z})$ on $Y$, where the effect of $\boldsymbol{X}$ is allowed to depend on $\boldsymbol{U}$. We assume the following partial linear single-index varying-coefficient model:
\begin{align}
Y\mid(\boldsymbol{U},\boldsymbol{X},\boldsymbol{Z})\sim f\bigg\{\cdot\;;\sum_{j=0}^{p}g_{j}(\boldsymbol{U}^{\mathrm{T}}\boldsymbol{\beta})X_{j}+\boldsymbol{Z}^{\mathrm{\mathrm{T}}}\boldsymbol{\psi}\bigg\},\label{eq:model}
\end{align}
where $f$ is a density function, $\boldsymbol{\beta}$ and $\boldsymbol{\psi}$ are regression parameters, and $g_{0},\ldots,g_{p}$ are unspecified smooth functions. For model identifiability, we set $\Vert\boldsymbol{\beta}\Vert=1$, and if $\boldsymbol{U}$ is a subset of $\boldsymbol{Z}$, then we set the component of $\boldsymbol{\psi}$ that corresponds to the last component of $\boldsymbol{U}$ to be 0. This model assumes that the effect of each component of $\boldsymbol{X}$ is characterized by a nonparametric transformation of an index $\boldsymbol{U}^{\mathrm{T}}\boldsymbol{\beta}$. If each $g_{j}$ ($j=0,\ldots,p$) is constant, then the model contains only linear effects of $(\boldsymbol{X},\boldsymbol{Z})$. If $g_{j}$ is a linear function, then the model contains the linear effect of $X_j$ and the interaction effect of $\boldsymbol{U}^{\mathrm{T}}\boldsymbol{\beta}$ and $X_{j}$. The proposed model (\ref{eq:model}) accommodates many different types of outcomes. For continuous or binary outcomes, we set $f$ to be a density from the exponential family. For right-censored survival outcomes, we set $f$ to be the density under the Cox proportional hazards model.
For a sample of size $n$, the observed data consist of $(Y_{i},\boldsymbol{X}_{i},\boldsymbol{U}_{i},\boldsymbol{Z}_{i})$ for $i=1,\ldots,n$. For right-censored survival outcomes, we set $Y_{i}=(\widetilde{Y}_{i},\Delta_{i})$, where $\widetilde{Y}_{i}$ is the event or censoring time, and $\Delta_{i}$ is the event indicator. For uncensored outcomes, the log-likelihood function is $\ell_{n}(\boldsymbol{\beta},\boldsymbol{\psi},\mathcal{G})=\sum_{i=1}^{n}\log f\{Y_i;\sum_{j=0}^{p}g_{j}(\boldsymbol{U}_{i}^{\mathrm{T}}\boldsymbol{\beta})X_{ij}+\boldsymbol{Z}_{i}^{\mathrm{\mathrm{T}}}\boldsymbol{\psi}\}$, where $\mathcal{G}=(g_{0},\ldots,g_{p})$. For right-censored outcomes under the Cox model, we set $\ell_{n}$ to be the log-partial-likelihood function, such that
\[
\ell_{n}(\boldsymbol{\beta},\boldsymbol{\psi},\mathcal{G})=\sum_{i=1}^{n}\Delta_{i}\Big[\sum_{j=0}^{p}g_{j}(\boldsymbol{U}_{i}^{\mathrm{T}}\boldsymbol{\beta})X_{ij}+\boldsymbol{Z}_{i}^{\mathrm{T}}\boldsymbol{\psi}-\log\Big\{\sum_{h:\widetilde{Y}_{h}\ge \widetilde{Y}_{i}}e^{\sum_{j=0}^{p}g_{j}(\boldsymbol{U}_{h}^{\mathrm{T}}\boldsymbol{\beta})X_{hj}+\boldsymbol{Z}_{h}^{\mathrm{T}}\boldsymbol{\psi}}\Big\}\Big].
\]
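As an illustration (ours, not part of the estimation procedure), the
log-partial-likelihood above can be evaluated directly for given linear
predictors $\eta_i$; the sketch assumes no ties among the observed
times.
\begin{verbatim}
import numpy as np

def cox_log_partial_lik(eta, time, delta):
    # eta:   (n,) linear predictors eta_i
    # time:  (n,) observed event or censoring times
    # delta: (n,) event indicators (1 = event, 0 = censored)
    ll = 0.0
    for i in range(len(time)):
        if delta[i] == 1:
            at_risk = eta[time >= time[i]]  # subjects with Y_h >= Y_i
            ll += eta[i] - np.log(np.exp(at_risk).sum())
    return ll
\end{verbatim}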
\subsection{Penalized sieve estimation}
Because the likelihood involves the nonparametric functions $(g_{0},\ldots,g_{p})$, maximum likelihood estimation is not feasible. We propose to approximate $g_{j}$ by B-spline functions. Let $(B_{1},\ldots,B_{d})$ be a set of B-spline functions on a pre-specified set of grid points, such that each function passes through the origin; the construction of the B-spline functions is discussed in Appendix \ref{sec:bspline}. For $j=0,\ldots,p$, we approximate $g_{j}$ by $\gamma_{j}+\sum_{k=1}^{d}\alpha_{jk}B_{k}$, where $(\gamma_{j},\alpha_{j1},\ldots,\alpha_{jd})$ are regression parameters. For right-censored outcomes, we set $\gamma_{0}=0$ for identifiability.
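The appendix construction is not reproduced here; the sketch below is
one simple construction consistent with the description (an
illustration, not the authors' code): a quadratic B-spline basis is
built with \texttt{scipy} and each function is recentered to vanish at
zero, the subtracted constants being absorbed by the intercepts
$\gamma_j$.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

def basis_through_origin(grid, degree, x):
    # grid: pre-specified grid points; boundary knots repeated.
    t = np.r_[[grid[0]]*degree, grid, [grid[-1]]*degree]
    d = len(t) - degree - 1      # number of basis functions
    B = np.empty((len(x), d))
    for k in range(d):
        coef = np.zeros(d)
        coef[k] = 1.0
        spl = BSpline(t, coef, degree)
        # Shift so B_k(0) = 0; the constant is absorbed by gamma_j.
        B[:, k] = spl(x) - spl(0.0)
    return B
\end{verbatim}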
When $p$ is large, the total number of parameters may be larger than the sample size, and penalization on $\boldsymbol{\gamma}\equiv(\gamma_{0},\ldots,\gamma_{p})^{\mathrm{T}}$ and $\boldsymbol{\alpha}\equiv(\boldsymbol{\alpha}_{jk})_{j=0,\ldots,p;k=1,\ldots,d}$ could be adopted for stable estimation and variable selection. We propose to estimate the parameters by maximizing the following penalized log-likelihood function:
\[
p\ell_{n}(\boldsymbol{\beta},\boldsymbol{\psi},\boldsymbol{\gamma},\boldsymbol{\alpha})=\ell_{n}\Big\{\boldsymbol{\beta},\boldsymbol{\psi},\big(\gamma_{j}+\sum_{k=1}^{d}\alpha_{jk}B_{k}\big)_{j=0,\ldots,p}\Big\}-\sum_{j=1}^{p}\rho_{1}(\gamma_{j};\lambda_{1})-\sum_{j=1}^p\rho_{2}(\boldsymbol{\alpha}_{j};\lambda_{2}),
\]
where $\rho_1$ and $\rho_2$ are penalty functions, $\lambda_{1}$ and $\lambda_{2}$ are tuning parameters, and $\boldsymbol{\alpha}_{j}=(\alpha_{j1},\ldots,\alpha_{jd})^\mathrm{T}$ for $j=1,\ldots,p$. This formulation allows separate selection of constant and non-constant effects of $X_{j}$ by separate penalization on $\gamma_{j}$ and $\boldsymbol{\alpha}_{j}$. Let $\widehat{\boldsymbol{\beta}}$, $\widehat{\gamma}_{j}$, and $\widehat{\boldsymbol{\alpha}}_{j}$ denote the penalized estimator of $\boldsymbol{\beta}$, $\gamma_{j}$, and $\boldsymbol{\alpha}_{j}$, respectively ($j=0,\ldots,p$). For $j=1,\ldots,p$, if $\widehat{\gamma}_{j}=0$ and $\widehat{\boldsymbol{\alpha}}_{j}=\boldsymbol{0}$, then $X_{j}$ does not have an effect on the outcome in the estimated model. If only $\widehat{\boldsymbol{\alpha}}_{j}=\boldsymbol{0}$, then $X_{j}$ has a
constant effect of $\widehat{\gamma}_{j}$. If $\widehat{\boldsymbol{\alpha}}_{j}$ is non-zero, then $X_{j}$ has a non-constant effect indexed by $\boldsymbol{U}^{\mathrm{T}}\widehat{\boldsymbol{\beta}}$.
Many choices of penalty functions, such as the (group) lasso \citep{tibshirani1996regression, yuan2006model}, smoothly clipped absolute deviation (SCAD) \citep{fan2001variable, breheny2009penalized}, and minimax concave penalty (MCP) \citep{zhang2007penalized, breheny2009penalized}, are possible.
Although these conventional choices of penalty functions for $\rho_1$ and $\rho_2$ can produce sparse
estimation of the constant and non-constant effects, they fail to take into account the fact that $\gamma_j$ and $\boldsymbol{\alpha}_j$ ($j=1,\ldots,p$) correspond to the same predictor $X_j$.
In this paper, we propose to set
$\rho_{1}(\gamma_{j};\lambda_{1})=\lambda_{1}w_{j}\vert\gamma_{j}\vert$ and $\rho_{2}(\boldsymbol{\alpha}_{j};\lambda_{2}) =\lambda_{2}w_{j}(\boldsymbol{\alpha}_{j}^\mathrm{T}\boldsymbol{K}_{j}\boldsymbol{\alpha}_{j})^{1/2}$, where $w_j$ is a weight for the $j$th predictor, and $\boldsymbol{K}_{j}$ is some $(d\times d)$-symmetric matrix;
the first penalty is similar to the adaptive lasso penalty \citep{zou2006adaptive}, and the second penalty is a weighted version of the group lasso.
The weight $w_j$ is introduced to capture the overall signal strength
of $g_j$ and unify the degree of shrinkage of $\gamma_{j}$ and $\boldsymbol{\alpha}_{j}$.
In particular, we set $w_{j}=(\widetilde{\gamma}_{j}^2+\Vert\widetilde{\boldsymbol{\alpha}}_{j}\Vert^2)^{-1/2}$, where $\widetilde{\gamma}_{j}$ and $\widetilde{\boldsymbol{\alpha}}_{j}$ are estimates of $\gamma_{j}$ and $\boldsymbol{\alpha}_{j}$ obtained from maximizing the penalized log-likelihood with $w_{j}=1$ for $j=1,\ldots,p$.
then the weighted estimators would yield better variable selection and estimation accuracy than unweighted estimators.
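A minimal sketch of the two-part penalty and of the adaptive weights,
in our notation (Python with \texttt{numpy}; an illustration rather
than the implementation used in the paper):
\begin{verbatim}
import numpy as np

def two_part_penalty(gamma, alpha, K, w, lam1, lam2):
    # gamma: (p,), alpha: (p, d), K: (p, d, d), w: (p,) weights.
    quad = np.einsum('jk,jkl,jl->j', alpha, K, alpha)
    return (lam1*np.sum(w*np.abs(gamma))
            + lam2*np.sum(w*np.sqrt(quad)))

def adaptive_weights(gamma_init, alpha_init):
    # w_j = (gamma_j^2 + ||alpha_j||^2)^(-1/2) from an unweighted
    # fit; a zero initial estimate yields an infinite weight, so
    # that variable stays excluded from the weighted fit.
    return 1.0/np.sqrt(gamma_init**2
                       + np.sum(alpha_init**2, axis=1))
\end{verbatim}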
We propose to compute the estimates using an alternating algorithm. In particular,
we initialize $\boldsymbol{\beta}$ as some unit vector
and update the parameter estimates of $(\boldsymbol{\gamma},\boldsymbol{\alpha},\boldsymbol{\psi})$ and $\boldsymbol{\beta}$ alternatively as follows until convergence.
For fixed $\boldsymbol{\beta}$, the objective function is essentially the penalized log-likelihood function for a conventional regression model under a group lasso penalty, and $(\boldsymbol{\gamma},\boldsymbol{\alpha},\boldsymbol{\psi})$ can be updated using existing algorithms for the group lasso \citep{breheny2009penalized}. For fixed $(\boldsymbol{\gamma},\boldsymbol{\alpha},\boldsymbol{\psi})$, using the Lagrange multiplier method, the (penalized) log-likelihood function is maximized at $\boldsymbol{\beta}$ such that $\partial\ell_{n}\big\{\boldsymbol{\beta},\boldsymbol{\psi},\big(\gamma_{j}+\sum_{k=1}^{d}\alpha_{jk}B_{k}\big)_{j=0,\ldots,p}\big\}/\partial\boldsymbol{\beta}+c\boldsymbol{\beta}=\boldsymbol{0}$ and $\Vert\boldsymbol{\beta}\Vert^{2}-1=0$ for some $c$. We solve for $\boldsymbol{\beta}$ and $c$ simultaneously using the Newton-Raphson algorithm.
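For the $\boldsymbol{\beta}$-step, the stationarity system can be
solved as follows; this is a hedged sketch in which the \texttt{score}
and \texttt{hessian} callables (the gradient and Hessian of the
log-likelihood in $\boldsymbol{\beta}$, all other parameters fixed) are
assumed supplied.
\begin{verbatim}
import numpy as np

def update_beta(beta, score, hessian, n_iter=50, tol=1e-8):
    # Newton-Raphson for  score(beta) + c*beta = 0,
    #                     ||beta||^2 - 1 = 0,  jointly in (beta, c).
    q = len(beta)
    c = -beta @ score(beta)          # initial multiplier
    for _ in range(n_iter):
        g = score(beta)
        F = np.r_[g + c*beta, beta @ beta - 1.0]
        if np.linalg.norm(F) < tol:
            break
        J = np.zeros((q + 1, q + 1))
        J[:q, :q] = hessian(beta) + c*np.eye(q)
        J[:q, q] = beta              # d/dc of the first block
        J[q, :q] = 2.0*beta          # gradient of the constraint
        step = np.linalg.solve(J, -F)
        beta, c = beta + step[:q], c + step[q]
    return beta/np.linalg.norm(beta)
\end{verbatim}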
We propose to select the tuning parameters $\lambda_{1}$ and $\lambda_{2}$ using a version of the Bayesian information criterion (BIC), defined as
\[
-2\ell_{n}(\widehat{\boldsymbol{\beta}},\widehat{\boldsymbol{\psi}},\widehat{\mathcal{G}})+q\log(n^{*}),
\]
where $\widehat{\mathcal{G}}=(\widehat{\gamma}_{j}+\sum_{k=1}^d\widehat{\alpha}_{jk}B_{k})_{j=0,\ldots,p}$, $q$ is the effective degrees of freedom, and $n^{*}$ is the effective sample size. Specifically, $n^{*}=n$ for uncensored outcomes, and $n^{*}$ is the number of uncensored observations for right-censored outcomes. Following \cite{breheny2009penalized}, we define the effective degrees of freedom as
\[
q=\sum_{j=1}^{p}\bigg(\frac{\widehat{\gamma}_{j}}{\widehat{\gamma}_{j}^{*}}+\sum_{k=1}^{d}\frac{\widehat{\alpha}_{jk}}{\widehat{\alpha}_{jk}^{*}}\bigg),
\]
where $(\widehat{\gamma}_{j},\widehat{\alpha}_{jk})$ denote the estimated value of $(\gamma_{j},\alpha_{jk})$, $\widehat{\gamma}_{j}^{*}$ denote the maximizer of the unpenalized log-likelihood function with respect to $\gamma_{j}$ with other parameters fixed at the estimated value, and $\widehat{\alpha}_{jk}^{*}$ denote the maximizer of the unpenalized log-likelihood function with respect to $\alpha_{jk}$ with other parameters fixed at the estimated value. We select $(\lambda_{1},\lambda_{2})$ that yield the minimum modified BIC value.
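A sketch of the effective degrees of freedom and of the modified BIC
(ours; the one-dimensional maximizations defining
$\widehat{\gamma}_{j}^{*}$ and $\widehat{\alpha}_{jk}^{*}$ are
delegated to a generic optimizer, which assumes smooth, well-behaved
coordinate profiles):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def effective_df(estimates, loglik_coord):
    # estimates: penalized estimates of all gamma_j and alpha_jk.
    # loglik_coord(idx, v): unpenalized log-likelihood with
    #   coordinate idx set to v, all others held at the estimates.
    q = 0.0
    for idx, est in enumerate(estimates):
        if est == 0.0:
            continue        # zero coordinates contribute nothing
        star = minimize_scalar(lambda v: -loglik_coord(idx, v)).x
        q += est/star
    return q

def modified_bic(loglik, q, n_eff):
    # n_eff = n (uncensored) or the number of events (censored).
    return -2.0*loglik + q*np.log(n_eff)
\end{verbatim}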
In conventional group lasso problems, the predictor matrix of the $j$th group, denoted by $\boldsymbol{W}_{j}$, is typically transformed such that $\boldsymbol{W}_{j}^\mathrm{T}\boldsymbol{W}_{j}$ is a diagonal matrix with equal diagonal elements. This is equivalent to setting $\boldsymbol{K}_{j}$ to be (a scaled version of) $\boldsymbol{W}_{j}^\mathrm{T}\boldsymbol{W}_{j}$. In the current problem, however, the ``predictor matrix,'' which consists of rows $(X_{ij}, B_{1}(\boldsymbol{U}_{i}^\mathrm{T}\boldsymbol{\beta})X_{ij},\ldots,B_{d}(\boldsymbol{U}_{i}^\mathrm{T}\boldsymbol{\beta})X_{ij})$ ($i=1,\ldots,n$), depends on the unknown parameter $\boldsymbol{\beta}$. One estimation strategy is to set $\boldsymbol{K}_{j}$ based on the predictor matrix evaluated at some initial estimator of $\boldsymbol{\beta}$, such as that obtained under $\boldsymbol{K}_{j}=\boldsymbol{I}$. Another strategy is to update $\boldsymbol{K}_{j}$ with $\boldsymbol{\beta}$ after each iteration; this can be thought of as setting $\boldsymbol{K}_{j}$ based on the converged value of $\boldsymbol{\beta}$. Another difficulty that arises from the unknown $\boldsymbol{\beta}$ is that the converged estimates may vary with the initial value of $\boldsymbol{\beta}$. We propose to consider multiple initial values and select the final estimates that yield the smallest modified BIC.
In the simulation studies, we considered 5 initial values of $\boldsymbol{\beta}$ and updated $\boldsymbol{K}$ along with $\boldsymbol{\beta}$ at each iteration, and the algorithm converged at almost all replicates.
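Schematically, the multiple-initialization strategy is the loop below;
\texttt{fit\_fn} and \texttt{bic\_fn} are placeholders for the full
penalized fit and the criterion above.
\begin{verbatim}
import numpy as np

def fit_with_restarts(beta_inits, fit_fn, bic_fn):
    # Run the alternating fit from several unit-norm initial
    # values of beta; keep the fit with the smallest modified BIC.
    best_fit, best_bic = None, np.inf
    for b0 in beta_inits:
        fit = fit_fn(b0/np.linalg.norm(b0))
        bic = bic_fn(fit)
        if bic < best_bic:
            best_fit, best_bic = fit, bic
    return best_fit
\end{verbatim}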
\section{Simulation studies}
\label{sec:sim}
We set the dimension of $\boldsymbol{U}$ to be 4 and generated components of $\boldsymbol{U}$ as i.i.d. standard normal variables. We set $\boldsymbol{Z}=\boldsymbol{U}$ and generated $\boldsymbol{X}$ from the $p$-variate standard normal distribution. We set $\boldsymbol{\beta}=(0.4,-0.4,0.2,-0.8)^{\mathrm{T}}$, $\boldsymbol{\psi}=(0.2,-0.2,0.5,-0.5)^{\mathrm{T}}$, and $g_{1},\ldots,g_{20}$ to be non-zero constant, linear, or non-linear functions; the functions are plotted in Figure \ref{fig:sim_gaussian_p100}. We set $g_{0}$ and $g_{21},\ldots,g_{p}$ to be constant at 0. We considered a continuous outcome variable and a right-censored outcome variable. For the continuous outcome, we set $f(y;\mu)=(2\pi)^{-1/2}\exp\{-\frac{1}{2}(y-\mu)^2\}$, so that conditional on $(\boldsymbol{X},\boldsymbol{Z},\boldsymbol{U})$, $Y$ follows the normal distribution with unit variance. For the right-censored outcome, we set
$
f(y;\mu)=h(y)e^\mu\exp\big\{-e^\mu\int_0^yh(t)\,\mathrm{d}t\big\}$,
where $h$ is the baseline hazard function with $h(t)=t$. The censoring time was generated from an exponential distribution with the mean chosen to yield a censoring rate of about 30\%. In each setting, we considered a sample size of 500 and $p=20$, 50, and 100.
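A condensed sketch of this data-generating mechanism for the continuous
outcome is given below; the three $g$ functions shown are placeholders
standing in for the twenty functions plotted in
Figure \ref{fig:sim_gaussian_p100}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 100
beta = np.array([0.4, -0.4, 0.2, -0.8])
psi = np.array([0.2, -0.2, 0.5, -0.5])

# Placeholder varying coefficients (constant, linear, nonlinear).
g = [lambda s: np.full_like(s, 0.5),
     lambda s: s,
     lambda s: np.sin(s)]

U = rng.standard_normal((n, 4))
Z = U
X = rng.standard_normal((n, p))
index = U @ beta
mu = Z @ psi
for j, gj in enumerate(g):
    mu = mu + gj(index)*X[:, j]
Y = mu + rng.standard_normal(n)  # continuous outcome, unit variance
\end{verbatim}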
We compare the proposed methods with conventional regression models with or without interaction terms. For the proposed methods, we set the degree of the B-spline functions to be 2 and the knots at $-\max_{i}\Vert\boldsymbol{U}_{i}\Vert_{2}$, 0, and $\max_{i}\Vert\boldsymbol{U}_{i}\Vert_{2}$. We considered the proposed weighted approach and an unweighted approach with $w_{j}=1$ $(j=1,\ldots,p)$. We also considered the lasso regression on the linear predictors $(\boldsymbol{X},\boldsymbol{Z})$ and the lasso regression on $\boldsymbol{X}$, $\boldsymbol{Z}$, and pairwise interactions between components of $\boldsymbol{X}$ and $\boldsymbol{Z}$; in both cases, coefficients of $\boldsymbol{Z}$ were not penalized. In addition, we considered adaptive lasso for the models with or without interactions, where the weights are the inverse of the absolute value of the corresponding lasso estimates. In all methods, the tuning parameters were selected using the modified BIC.
We evaluate the performance of each method in terms of variable selection and prediction. For variable selection, we report the sensitivity and the false discovery rate (FDR). Sensitivity is the proportion of correctly identified signal variables among all true signal variables. FDR is the proportion of noise variables that are incorrectly identified as signal variables among all selected variables. For the proposed methods, a variable $X_{j}$ is selected if either $\gamma_{j}$ or $\boldsymbol{\alpha}_{j}$ is estimated as non-zero $(j=1,\ldots,p)$. For the proposed methods and lasso with interactions, we also report the sensitivity and FDR with respect to the selection of non-constant effects, where for the proposed methods, the non-constant effect of $X_{j}$ is selected if $\widehat{\boldsymbol{\alpha}}_{j}\ne\boldsymbol{0}$, and for lasso
with interactions, the non-constant effect is selected if the coefficient of the product of $X_{j}$ and any component of $\boldsymbol{Z}$ is non-zero.
In addition, we report the total numbers of the selected variables and the number of variables identified to have non-constant effects.
For prediction, we report the mean-squared error (MSE), defined as $\mathrm{E}(\widehat{\eta}-\eta_{0})^2$, where $\eta_{0}=\eta(\boldsymbol{\beta}_{0},\mathcal{G}_{0},\boldsymbol{\psi}_{0})$, $\eta(\boldsymbol{\beta},\mathcal{G},\boldsymbol{\psi})
\equiv\sum_{j=1}^{p}g_{j}(\boldsymbol{U}^{\mathrm{T}}\boldsymbol{\beta})X_{j}+\boldsymbol{Z}^{\mathrm{T}}\boldsymbol{\psi}$, and $(\boldsymbol{\beta}_{0},\mathcal{G}_{0},\boldsymbol{\psi}_{0})$ denote the true parameter values. For the proposed methods, $\widehat{\eta}=\eta(\widehat{\boldsymbol{\beta}},\widehat{\mathcal{G}},\widehat{\boldsymbol{\psi}})$, where $(\widehat{\boldsymbol{\beta}},\widehat{\mathcal{G}},\widehat{\boldsymbol{\psi}})$ denote the estimated parameter values. For lasso with and without interaction effects, $\widehat{\eta}=\sum_{j}\widehat{b}_{j}X_{j}+\sum_{k}\widehat{c}_{k}Z_{k}+\sum_{j,k}\widehat{d}_{jk}X_{j}Z_{k}$ and $\widehat{\eta}=\sum_{j} \widetilde{b}_{j}X_{j}+\sum_{k}\widetilde{c}_{k}Z_{k}$, respectively, where $\widehat{b}_{j}$, $\widehat{c}_{k}$, $\widehat{d}_{jk}$, $\widetilde{b}_{j}$, and $\widetilde{c}_{k}$ are the corresponding estimated regression parameters. For the right-censored outcome, we also compute the concordance index (C-index) \citep{harrell1982evaluating}, defined as $\mathrm{P}(\eta_{i}>\eta_{j}\mid\widetilde{Y}_{i}<\widetilde{Y}_{j})$ for two generic independent subjects indexed by $i$ and $j$. C-index typically takes values between 0.5 and 1, where a value of 0.5 indicates no discrimination and a value of 1 indicates perfect discrimination. For the proposed methods, we also report the absolute inner product $\vert\boldsymbol{\beta}^{\mathrm{T}}\widehat{\boldsymbol{\beta}}\vert$
to assess the estimation accuracy of $\widehat{\boldsymbol{\beta}}$. The simulation results for the continuous and right-censored outcomes based on 100 replicates are summarized in Tables \ref{Tab:sim_gaussian_lasso} and \ref{Tab:sim_cox_lasso}, respectively. Figure \ref{fig:sim_gaussian_p100} shows the average estimated values of $g_{1},\ldots,g_{20}$ for the continuous outcome under $p=100$. The simulation results under other settings are plotted in Figures \ref{fig:sim_gaussian_p50}--\ref{fig:sim_cox_p100}.
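For reference, a direct empirical version of the C-index is sketched
below (ties in the score are counted as one half, a common convention
not specified in the text).
\begin{verbatim}
import numpy as np

def c_index(eta, time, delta):
    # Empirical P(eta_i > eta_j | Y_i < Y_j): a pair is usable
    # when the earlier of the two times is an observed event.
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and delta[i] == 1:
                den += 1.0
                if eta[i] > eta[j]:
                    num += 1.0
                elif eta[i] == eta[j]:
                    num += 0.5
    return num/den
\end{verbatim}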
\begin{table}
\small\centering
\caption{Simulation results for the continuous outcome.}
\begin{threeparttable}
\setlength{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{llccccccccc} \hline
&&\multicolumn{3}{c}{$p=20$}&\multicolumn{3}{c}{$p=50$}&\multicolumn{3}{c}{$p=100$}\\\cline{3-11}
&& Proposed & Main & Interaction & Proposed & Main & Interaction & Proposed & Main & Interaction\\\hline
\multicolumn{11}{l}{Unweighted}\\\hline
SEN & Overall & 0.990 & 0.800 & 0.978 & 0.978 & 0.776 & 0.966 & 0.962 & 0.754 & 0.960\\
& Non-constant & 0.999 & - & 0.949 & 0.987 & - & 0.917 & 0.983 & - & 0.912\\
FDR & Overall & 0 & 0 & 0 & 0.377 & 0.295 & 0.504 & 0.543 & 0.479 & 0.706\\
& Non-constant & 0.253 & - & 0.426 & 0.436 & - & 0.718 & 0.530 & - & 0.833\\
NS & Overall & 19.80 & 16.01 & 19.57 & 31.57 & 22.21 & 39.13 & 42.44 & 29.30 & 65.60\\
& Non-constant & 13.48 & 0 & 16.65 & 17.70 & 0 & 32.84 & 21.17 & 0 & 54.84\\
\multicolumn{2}{l}{$\vert\boldsymbol{\beta}^\mathrm{T}\widehat{\boldsymbol{\beta}}\vert$} & 0.997 & - & - & 0.996 & - & - &0.996 & - & -\\
\multicolumn{2}{l}{MSE} & 0.378 & 1.583 & 0.990 & 0.569 & 1.754 & 1.160 & 0.698 & 1.764 & 1.206\\\hline
\multicolumn{11}{l}{Weighted}\\\hline
SEN & Overall & 0.948 & 0.657 & 0.908 & 0.915 & 0.649 & 0.900 & 0.898 & 0.638 & 0.897\\
& Non-constant & 0.979 & - & 0.848 & 0.951 & - & 0.826 & 0.915 & - & 0.834\\
FDR & Overall & 0 & 0 & 0 & 0.155 & 0.096 & 0.324 & 0.247 & 0.200 & 0.540\\
& Non-constant & 0.069 & - & 0.243 & 0.154 & - & 0.552 & 0.222 & - & 0.709\\
NS & Overall & 18.96 & 13.14 & 18.15 & 21.79 & 14.47 & 26.90 & 24.11 & 16.19 & 39.43\\
& Non-constant & 10.57 & 0 & 11.36 & 11.33 & 0 & 18.83 & 11.93 & 0 & 29.19\\
\multicolumn{2}{l}{$\vert\boldsymbol{\beta}^\mathrm{T}\widehat{\boldsymbol{\beta}}\vert$} & 0.998 & - & - & 0.997 & - & - &0.997 & - & -\\
\multicolumn{2}{l}{MSE} & 0.313 & 1.602 & 1.008 & 0.408 & 1.724 & 1.135 & 0.544 & 1.730 & 1.198\\\hline
\end{tabular}
\begin{tablenotes}
NOTE: ``SEN'' represents sensitivity; ``NS'' represents number of selected variables; ``Main'' represents lasso regression model without interactions; ``Interaction'' represents lasso regression model with interactions; ``Overall'' gives values of corresponding measures concerning all components of $\boldsymbol{X}$; ``Non-constant'' gives values of corresponding measures concerning components of $\boldsymbol{X}$ with non-constant effects on the outcome.
\end{tablenotes}
\end{threeparttable}
\label{Tab:sim_gaussian_lasso}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.85]{sim_gau_p100}
\caption{\label{fig:sim_gaussian_p100}Estimated coefficients for the continuous outcome under $p=100$.}
\end{figure}
\begin{table}
\small\centering
\caption{Simulation results for the right-censored outcome.}
\begin{threeparttable}
\setlength{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{llccccccccc} \hline
& &\multicolumn{3}{c}{$p=20$}&\multicolumn{3}{c}{$p=50$}&\multicolumn{3}{c}{$p=100$}\\\cline{3-11}
& & Proposed & Main & Interaction & Proposed & Main & Interaction & Proposed & Main & Interaction\\\hline
\multicolumn{11}{l}{Unweighted}\\\hline
SEN & Overall & 0.962 & 0.795 & 0.948 & 0.909 & 0.737 & 0.914 & 0.856 & 0.689 & 0.874\\
& Non-constant & 0.916 & - & 0.895 & 0.805 & - & 0.852 & 0.702 & - & 0.794\\
FDR & Overall & 0 & 0 & 0 & 0.319 & 0.297 & 0.446 & 0.477 & 0.480 & 0.629\\
& Non-constant & 0.176 & - & 0.392 & 0.283 & - & 0.661 & 0.386 & - & 0.776\\
NS & Overall & 19.24 & 15.90 & 18.97 & 26.95 & 21.11 & 33.28 & 33.06 & 26.78 & 48.24\\
& Non-constant & 11.28 & 0 & 14.91 & 11.42 & 0 & 25.51 & 11.84 & 0 & 36.66\\
\multicolumn{2}{l}{$\vert\boldsymbol{\beta}^\mathrm{T}\widehat{\boldsymbol{\beta}}\vert$} & 0.993 & - & - & 0.974 & - & - &0.957 & - & -\\
\multicolumn{2}{l}{MSE} & 0.843 & 1.873 & 1.419 & 1.357 & 2.142 & 1.701 & 1.518 & 2.134 & 1.746\\
\multicolumn{2}{l}{C-index} & 0.772 & 0.716 & 0.743 & 0.758 & 0.716 & 0.738 & 0.745 & 0.708 & 0.727\\\hline
\multicolumn{11}{l}{Weighted}\\\hline
SEN & Overall & 0.883 & 0.631 & 0.864 & 0.831 & 0.615 & 0.832 & 0.763 & 0.580 & 0.787\\
& Non-constant & 0.806 & - & 0.780 & 0.708 & - & 0.732 & 0.593 & - & 0.687\\
FDR & Overall & 0 & 0 & 0 & 0.136 & 0.139 & 0.303 & 0.239 & 0.273 & 0.489\\
& Non-constant & 0.073 & - & 0.238 & 0.148 & - & 0.518 & 0.217 & - & 0.658\\
NS & Overall & 17.66 & 12.62 & 17.27 & 19.35 & 14.39 & 24.13 & 20.37 & 16.19 & 31.22\\
& Non-constant & 8.78 & 0 & 10.40 & 8.42 & 0 & 15.60 & 7.64 & 0 & 21.03\\
\multicolumn{2}{l}{$\vert\boldsymbol{\beta}^\mathrm{T}\widehat{\boldsymbol{\beta}}\vert$} & 0.994 & - & - & 0.991 & - & - & 0.982 & - & -\\
\multicolumn{2}{l}{MSE} & 0.691 & 1.822 & 1.209 & 0.929 & 1.983 & 1.414 & 1.166 & 1.947 & 1.528\\
\multicolumn{2}{l}{C-index} & 0.773 & 0.714 & 0.743 & 0.766 & 0.716 & 0.740 & 0.754 & 0.710 & 0.726\\\hline
\end{tabular}
\begin{tablenotes}
NOTE: See NOTE to Table \ref{Tab:sim_gaussian_lasso}.
\end{tablenotes}
\end{threeparttable}
\label{Tab:sim_cox_lasso}
\end{table}
In terms of prediction, both the weighted and unweighted versions of the proposed methods correctly identify the interaction structure between $\boldsymbol{X}$ and $\boldsymbol{Z}$ and yield higher prediction accuracy than the other methods. In particular, they yield lower MSE in all settings and higher C-index for the right-censored outcome. In addition, the estimated value of $\boldsymbol{\beta}$ is close to the true value, indicating that the proposed methods can correctly identify the composition of the index. The weighted estimators are generally accurate, whereas the unweighted estimators tend to be biased towards zero due to the uniform shrinkage imposed on all parameters. Lasso with interaction terms generally yields smaller MSE than lasso with main effects alone, suggesting that a varying-coefficient model can be approximated by a conventional regression model with pairwise interaction terms. Nevertheless, possibly due to the complexity of the interaction model, the performance of lasso with interactions is substantially worse than that of the proposed methods.
In terms of variable selection, both the proposed methods and lasso with interactions have substantially higher sensitivity than lasso with main effects alone. The FDR is lower under the proposed methods than under lasso with interactions, indicating that the proposed methods tend to yield more interpretable models. The FDRs for the proposed methods are higher than those for lasso with main effects alone under some settings, possibly because lasso with main effects alone generally selects far fewer variables. For all methods, the weighted estimators yield substantially lower FDR than the unweighted estimators. By setting a higher penalty for noise variables and a lower penalty for signal variables, the weighted method yields higher variable selection accuracy.
\section{Real data analysis}
\label{sec:rda}
\subsection{TCGA NSCLC data}
We demonstrate the application of the proposed methods using a set of NSCLC patients from TCGA. The data set consists of two subtypes of lung cancer, namely lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). We are interested in the potential risk factors associated with pulmonary function, measured by the percentage of forced expiratory volume in one second (FEV1); a higher FEV1 represents larger lung capacity, and patients with severely impaired lung function have an increased risk of mortality \citep{hole1996impaired}. In particular, we investigated the effects of gene expressions and clinical variables on FEV1, allowing for interactions between the two types of variables. We fit the proposed model with $\boldsymbol{U}$ consisting of age, number of packs of cigarettes smoked per year, or pack-years smoked (PYS), cancer subtype, tumor stage, and gender; tumor stage is dichotomized into stage I versus stage II or above. This formulation allows the effects of genomic factors to be modified by clinical variables. We set $\boldsymbol{Z} = \boldsymbol{U}$ to allow linear effects of clinical variables on FEV1. After discarding genes with zero expressions for 30\% or more subjects, the data set consists of 17 148 gene expressions. We set $\boldsymbol{X}$ to consist of 300 gene expressions that have the most significant marginal association with FEV1 (adjusted for clinical variables). After removing subjects with missing data, the sample size is 353, with 185 and 168 LUAD and LUSC patients, respectively. Following the simulation studies, we set the degree of the B-spline
functions to be 2 and the knots at $-\max_{i}\Vert\boldsymbol{U}_{i}\Vert_{2}$, 0, and $\max_{i}\Vert\boldsymbol{U}_{i}\Vert_{2}$. We adopted the weighted penalty approach. We standardized all variables to have zero mean and unit variance.
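As a rough illustration of this screening step, the Python sketch below (names are ours; a residualization-based partial correlation is used as a proxy for the per-gene marginal association test, which may differ from the exact procedure) drops genes with too many zero expressions and ranks the remainder:
\begin{verbatim}
import numpy as np

def screen_genes(expr, y, Z, zero_frac=0.3, top_k=300):
    # expr: (n, G) gene-expression matrix; y: (n,) outcome (FEV1);
    # Z: (n, q) clinical covariates.
    keep = (expr == 0).mean(axis=0) < zero_frac
    expr = expr[:, keep]
    # Residualize y and each gene on the clinical covariates, so the
    # remaining correlation measures association adjusted for Z.
    Zc = np.column_stack([np.ones(len(y)), Z])
    H = Zc @ np.linalg.pinv(Zc)
    ry = y - H @ y
    rx = expr - H @ expr
    ry = (ry - ry.mean()) / ry.std()
    rx = (rx - rx.mean(axis=0)) / rx.std(axis=0)
    score = np.abs(rx.T @ ry)  # proportional to |partial correlation|
    order = np.argsort(score)[::-1][:top_k]
    return np.flatnonzero(keep)[order]
\end{verbatim}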
We identified 17 gene expressions to be associated with FEV1. The selected gene expressions and their estimated coefficients are shown in Table \ref{Tab:rda_nsclc_sel}. Among the selected gene expressions, EIF4A3 was known to be involved in the development of NSCLC, and KCNK2 and N4BP1 were known as prognostic factors in some cancer types \citep{lin2018systematic, innamaa2013expression, xu2017mir, li2019kcnk}. The effects of CDK11A and LRRC29 were identified to vary with the clinical variables; CDK11A has previously been shown to be associated with many cancer types \citep{zhou2016emerging}. The estimated index parameters $\boldsymbol{\beta}$ for age, PYS, gender, tumor stage, and cancer subtype are 0.199, 0.637, 0.157, $-$0.548, and $-$0.479, respectively. The index is dominated by PYS, tumor stage, and cancer subtype, suggesting that the effects of CDK11A and LRRC29 mainly depend on these three clinical factors. Figure \ref{fig:rda_nsclc_lasso} displays the estimated values of $g_{0}$ and the $g$ functions for CDK11A and LRRC29.
\begin{figure}
\centering
\includegraphics[scale=0.75]{rda_nsclc_weighted.pdf}
\caption{\label{fig:rda_nsclc_lasso}Estimated coefficients for NSCLC analysis.}
\end{figure}
To compare the performance of the proposed methods with existing methods, we performed a cross-validation analysis. Specifically, we repeatedly split the data set into pairs of training and validation sets 100 times, with a ratio of sample sizes of 7$:$3 and balanced clinical characteristics within each pair. In each split, we estimated the parameters using the training set and predicted the response on the validation set using the proposed methods and lasso. For the proposed methods, due to the small sample size, we fixed $\boldsymbol{\beta}$ at the estimate from the full data. For lasso, we considered regression on linear predictors ($\boldsymbol{X}$,$\boldsymbol{Z}$) and regression on $\boldsymbol{X}$, $\boldsymbol{Z}$, and pairwise interactions between components of $\boldsymbol{X}$ and $\boldsymbol{Z}$; in both cases, coefficients of $\boldsymbol{Z}$ were not penalized. The average mean-squared prediction errors over the 100 validation sets for the proposed methods, lasso without interactions, and lasso with interactions are 503.090, 512.011, and 624.602, respectively.
This indicates that the proposed methods achieve better prediction performance over the alternatives. Between the two methods that allow for interaction effects, lasso with interactions yields much larger error than the proposed methods, probably because there are too many pairwise interaction terms to be estimated. Another possible reason is that the interaction effects cannot be adequately captured by pairwise product terms.
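A minimal sketch of this cross-validation loop is given below (Python; \texttt{fit} and \texttt{predict} are hypothetical stand-ins for the model-specific routines, and the balancing of clinical characteristics within each split is omitted for brevity):
\begin{verbatim}
import numpy as np

def repeated_split_mse(X, Z, y, fit, predict,
                       n_splits=100, train_frac=0.7, seed=0):
    # fit(X_tr, Z_tr, y_tr) -> model;
    # predict(model, X_va, Z_va) -> predicted outcomes.
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        n_tr = int(round(train_frac * n))
        tr, va = idx[:n_tr], idx[n_tr:]
        model = fit(X[tr], Z[tr], y[tr])
        yhat = predict(model, X[va], Z[va])
        errs.append(np.mean((y[va] - yhat) ** 2))
    return float(np.mean(errs))
\end{verbatim}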
\subsection{TCGA LGG data}
We also applied the proposed methods to identify potential risk factors associated with the survival of patients diagnosed with lower-grade glioma (LGG) in TCGA. The data set consists of grade II and grade III tumors. Instead of integrating clinical and a single type of genomic variables, we investigated the effects of protein expressions, gene expressions, and clinical variables on time to death since initial diagnosis, allowing for interactions between protein and gene expressions. After discarding genes with zero expressions for 30\% or more subjects, the data set consists of 17 238 gene expressions. We set the overall survival time to be the outcome of interest, which is potentially right-censored. We reduced the dimension of gene expressions using principal component analysis and set $\boldsymbol{U}$ to be the first 7 principal components, which account for over 50\% of the total variability. The set of linear predictors $\boldsymbol{Z}$ consists of $\boldsymbol{U}$, age, histological grade, and gender. The set of predictors $\boldsymbol{X}$ includes the expressions of 209 proteins or phospho-proteins. After removing subjects with missing data, the sample size is 423. The median time to censoring or death is 630 days, and the censoring rate is 76.83\%.
We identified 7 important protein expressions to be associated with the overall survival. The selected protein expressions and their estimated coefficients are shown in Table \ref{Tab:rda_lgg_sel}. Some of the selected proteins, including FoxM1, HSP70, and Cyclin B1, have previously been shown to be associated with survival of glioma patients \citep{zhang2017akt, beaman2014reliability, chen2008overexpression}. The effect of Cyclin B1 was identified to vary with the gene expressions.
Figure \ref{fig:rda_lgg_lasso_pc7} displays the estimated values of $g_0$ and the $g$ function for Cyclin B1.
\begin{figure}
\centering
\includegraphics[scale=0.75]{rda_lgg_weighted.pdf}
\caption{\label{fig:rda_lgg_lasso_pc7}Estimated coefficients for LGG analysis.}
\end{figure}
Similar to the cross-validation analysis for NSCLC, we compared the prediction performance of the proposed method with lasso with and without interactions. We split the data set into pairs of training and validation sets 100 times, with a ratio of sample sizes of 7$:$3 and balanced censoring proportions and clinical characteristics within each pair. Again, for the proposed method, we fixed $\boldsymbol{\beta}$ at the estimate from the full data.
For each data split and each method, we used the estimated model from the training set to obtain survival predictions on the corresponding validation set and computed the C-index.
The average C-index values over 100 splits for the proposed methods, lasso without interactions, and lasso with interactions are 0.718, 0.704, and 0.718, respectively.
The proposed methods and lasso with interactions yield similar average C-index values, and they show a slight improvement over lasso without interactions. This suggests the presence of interaction effects. By allowing for those effects in the model construction, we may gain additional predictive power.
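The C-index on each validation set can be computed with an $O(n^2)$ implementation of Harrell's estimator, as in the following Python sketch (ties in the risk scores are counted as 1/2, a common convention that may differ from the exact implementation used here):
\begin{verbatim}
import numpy as np

def c_index(eta, time, event):
    # eta: risk scores (higher means shorter predicted survival);
    # time: observed times; event: 1 if death observed, 0 if censored.
    # A pair (i, j) is comparable when the earlier time is an event.
    eta, time, event = map(np.asarray, (eta, time, event))
    conc = comp = 0.0
    for i in range(len(eta)):
        if not event[i]:
            continue
        for j in range(len(eta)):
            if time[i] < time[j]:
                comp += 1
                if eta[i] > eta[j]:
                    conc += 1
                elif eta[i] == eta[j]:
                    conc += 0.5
    return conc / comp if comp else np.nan
\end{verbatim}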
\section{Discussion}
\label{sec:diss}
In this paper, we propose a single-index varying-coefficient model for the
integration of clinical and genomic variables, where the effects of genomic variables are allowed to vary with clinical variables.
The effects of genomic variables are set as
nonparametric functions of (a projection of)
the clinical variables to accommodate intrinsically different scales of measurements between clinical and genomic variables.
Unlike the existing estimation methods for varying-coefficient models, our penalized approach separately selects predictors with constant effects and those with varying effects. Numerical studies illustrate that the proposed methods effectively distinguish zero, constant, and non-constant effects and yield accurate prediction.
The proposed methods are general and can be applied with different choices of penalty functions or outcome distributions. For example, different types of penalties, such as SCAD and MCP, can be chosen for the constant or varying effects. Also, different outcome models, such as the accelerated failure time model or additive hazard model, can be adopted.
There are several possible directions for future research. First, we may be interested in the interaction between two types of high-dimensional predictors, in which case the predictor vector $\boldsymbol{U}$ is high-dimensional. One possible approach is to project $\boldsymbol{U}$ to a low-dimensional space prior to fitting the proposed model. For example, as in the analysis of the LGG data, the projection can be performed by principal component analysis. However, the projected features may not have simple interpretations. Another possible approach is to perform variable selection on $\boldsymbol{U}$ by introducing an extra penalty on $\boldsymbol{\beta}$ \citep{peng2011penalized, radchenko2015high}. This approach would involve substantial computational difficulty due to the introduction of an extra penalty term. Second, it is of interest to consider more than two data types.
A possibility is to introduce extra indices that correspond to the extra data types, so that the effect of a variable may be a function of multiple indices.
This approach, however, faces enormous computational challenges because it involves multivariate nonparametric functions.
\bibliographystyle{apa}
\section{Introduction}
Quantum gravity is hoped to
resolve the initial singularity of the universe and to assign a proper
initial condition with a reasonable probability, where the
corresponding wavefunction of the universe is described by the
Wheeler-DeWitt equation \cite{DeWitt:1967yk}.
In order to obtain a unique solution to this equation, one natural
idea to fix the boundary condition is to choose the initial state
such that it most closely resembles the ground state, or the
Euclidean vacuum state. Moreover, it is known that the Euclidean
vacuum state wavefunction in standard quantum theory can be obtained
by the Euclidean path integral. Hence, Hartle and Hawking
\cite{Hartle:1983ai} generalized it to the case of gravity, and
introduced the Euclidean path integral to choose the wavefunction of
the universe,
\begin{eqnarray}
\Psi[h_{ij}, \chi] = \int_{\partial g = h, \partial \phi = \chi}
\mathcal{D}g\mathcal{D}\phi \;\;e^{-S_{\mathrm{E}}[g,\phi]},
\label{Psi}
\end{eqnarray}
where $h_{ij}$ is the metric of a compact 3-geometry $\Sigma$
and $\chi$ is the matter (inflaton) field $\phi$ on $\Sigma$,
which form the boundary of all possible, regular
4-geometries and matter configurations over which the path integral
is to be performed. This is called the \textit{no-boundary proposal}.
However, it is known that the Hartle-Hawking no-boundary
proposal is in severe conflict with the realization of successful inflation \cite{Vilenkin:1987kf}.
It has been proved that an $O(4)$-symmetric solution gives the
lowest action for a wide class of scalar-field theories, hence
dominating the path integral (\ref{Psi}). So it is reasonable to
assume that the metric owns the $O(4)$ symmetry even in the presence
of gravity \cite{Coleman:1977th},
\begin{eqnarray}
ds_{\mathrm{E}}^{2} = d\tau^{2} + a^{2}\left(\tau\right) d\Omega_{3}^{2}.
\end{eqnarray}
On the other hand, under the steepest-descent approximation (or
equivalently, the WKB approximation) where the wavefunction is dominated
by a sum over on-shell histories, the corresponding actions are
complex-valued,
\begin{equation}
\Psi[\tilde{a},\tilde{\phi}]
\simeq A[\tilde{a},\tilde{\phi}] e^{i S[\tilde{a},\tilde{\phi}]},
\label{eq:class}
\end{equation}
where $\tilde{a}$ and $\tilde{\phi}$ are the boundary values of $a$
and $\phi$, respectively, and $A$ and $S$ are real-valued functions.
When the phase $S$ varies much faster than $A$,
\begin{equation} \label{eqn:classicality}
\left|\nabla_I A[\tilde{a},\tilde{\phi}]\right|\ll
\left|\nabla_I S[\tilde{a},\tilde{\phi}]\right|, \qquad I=\tilde{a},\tilde{\phi},
\end{equation}
these histories are classical since they satisfy the semi-classical
Hamilton-Jacobi equation \cite{Hartle:2008ng}.
Thus, provided that the potential is exactly flat, if the values
$\tilde{a}$ and $\tilde{\phi}$ are such that they form a maximal
slice of an $O(4)$ symmetric regular Euclidean solution, that is,
$\tilde{a}$ and $\tilde{\phi}$ are the values at which
$\dot{{a}}=\dot{\phi}=0$, then the probability of the existence of
such a classical universe is evaluated as
\begin{equation}
P \simeq e^{- 2 S_{\mathrm{E}}},
\label{probability}
\end{equation}
where $S_{\mathrm{E}}$ is half the Euclidean action of the solution
with the maximal slice at which $a=\tilde{a}$ and $\phi=\tilde{\phi}$.
At the maximal slice the Euclidean solution can be analytically continued
to a Lorentzian classical solution, and a classical universe is born
with its initial data given by $a=\tilde{a}$ and $\phi=\tilde{\phi}$.
This analysis can be applied to evaluate the probability of creation
of a universe that has witnessed an early inflationary era, at which
the potential satisfies the slow-roll conditions. For simplicity,
let us consider the chaotic inflationary scenario where the
potential $V(\phi)\propto\phi^n$ with $n>0$ is reflection symmetric
around $\phi=0$ and monotonically increases as $|\phi|$ increases.
In Einstein gravity, there is a continuous distribution of
complex-valued instantons for $|\phi|> \phi_{\mathrm{cut}}\sim 1$
(hereafter, we adopt the natural units $8\pi
G=M_{\mathrm{Pl}}^{-2}=1$), where $\phi_{\mathrm{cut}}$ is a cutoff
scale below which the classicality condition is no longer
satisfied~\cite{Hartle:2008ng,Hartle:2007gi,Hwang:2011mp}. When
there are approximately classical Euclidean solutions, the slow-roll
condition is generally satisfied in the Lorentzian regime, and the
Euclidean action is approximately given by
\begin{equation}\label{SEGR}
S_{\mathrm{E}} \simeq -\frac{12\pi^{2}}{V(\phi)}\,.
\end{equation}
Inserting this into Eq.~(\ref{probability}), one immediately obtains
the probability of creation of such a universe:
\begin{equation}\label{PN}
P\propto\exp\left(\frac{24\pi^2}{V(\phi)}\right)\quad\Longrightarrow\quad
\ln P\propto N^{-\frac{n}{2}}\,,
\end{equation}
where $N$ is the corresponding e-folding number defined by $dN=Hdt$.
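For completeness, this scaling follows from the standard slow-roll estimate, neglecting the end-of-inflation contribution: for $V(\phi)\propto\phi^{n}$ one has $N\simeq\phi^{2}/(2n)$ in the units $8\pi G=1$, so that
\begin{eqnarray}
V(\phi)\propto\phi^{n}\propto\left(2nN\right)^{\frac{n}{2}}
\quad\Longrightarrow\quad
\ln P\propto\frac{1}{V(\phi)}\propto N^{-\frac{n}{2}}\,.\nonumber
\end{eqnarray}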
It is obvious from Eq.~(\ref{PN}) that an early universe with a
large vacuum energy or large number of $e$-folds is exponentially
suppressed.
Hence, a successful inflationary scenario
with a sufficient number of $e$-folds (e.g., 50--60) is
disfavored, which is thought to be a defect of the no-boundary
proposal
(for alternative explanations and reviews, see
\cite{Hwang:2012bd}). However, this severe problem can be naturally solved in a modified gravity
theory with a non-vanishing graviton mass, that is, the so-called
massive gravity theory.
\section{dRGT Massive gravity}
Recently, with various motivations, massive gravity models have been
extensively studied. In this paper, we consider nonlinear massive gravity
proposed by de Rham, Gabadadze and Tolley~\cite{deRham:2010kj}, the
so-called dRGT massive gravity,
\begin{eqnarray}
S = \int \sqrt{-g}~d^4x \left[\frac{R}{2} +
\mathcal{L}_{\mathrm{mg}} - \frac{1}{2} \left(\nabla \phi\right)^{2}
- V(\phi) \right],
\end{eqnarray}
where $\phi$ is the inflaton and $\mathcal{L}_{\mathrm{mg}}$ is the massive gravity term
given by
\begin{eqnarray}
\mathcal{L}_{\mathrm{mg}}
&=& m_g^2\Biggl[\frac{1}{2} \left( \left[ \mathcal{K} \right]^{2}
- \left[ \mathcal{K}^{2}\right] \right) \nonumber \\
&+& \alpha_{3}\frac{1}{6} \left( \left[ \mathcal{K} \right]^{3}
- 3\left[ \mathcal{K} \right]\left[ \mathcal{K}^{2}\right]
+ \left[ \mathcal{K}^{3}\right] \right) \nonumber \\
&+& \alpha_{4}\frac{1}{24} \Bigl( \left[ \mathcal{K} \right]^{4}
- 6 \left[ \mathcal{K} \right]^{2} \left[ \mathcal{K}^{2}\right]
+ 3 \left[ \mathcal{K}^{2}\right]^{2} \nonumber \\
&& \qquad+ 8 \left[ \mathcal{K} \right]\left[ \mathcal{K}^{3}\right]
- 6\left[ \mathcal{K}^{4}\right] \Bigr)\Biggr]\,,
\end{eqnarray}
with $m_g^2$ being a graviton mass parameter, $\alpha_3$
and $\alpha_4$ being non-dimensional parameters, and
\begin{eqnarray}
\mathcal{K}^{\mu}_{\nu} = \delta^{\mu}_{\nu}
- \sqrt{g^{\mu\sigma}G_{ab}(\varphi)\partial_{\nu}\varphi^{a}
\partial_{\sigma}\varphi^{b}}.
\end{eqnarray}
The fields $\varphi^a$ ($a=0,1,2,3$) are called the St\"{u}ckelberg
fields whose role is to recover the general covariance.
We set the fiducial, field space metric $G_{ab}(\varphi)$ to be
a de Sitter metric~\cite{Zhang:2012ap,dS:2012},
\begin{eqnarray}
G_{ab}(\varphi)d\varphi^{a}d\varphi^{b} \equiv
- (d\varphi^{0})^{2} + b^{2}(\varphi^{0})d\Omega_{3}^{2},
\end{eqnarray}
where $b(\varphi^0) = F^{-1} \cosh (F \varphi^0)$ with $F^{-1}$
being the curvature radius of the fiducial metric. For simplicity,
we assume $G_{ab}(\varphi)$ to be non-dynamical as in the original
dRGT gravity~\cite{deRham:2010kj}. However, the theory may be more
consistently formulated if $G_{ab}$ is made
dynamical~\cite{Hassan:2011zd}. We will not discuss
this issue further here since it is beyond the scope of this short
article.
Under the assumption of $O(4)$ symmetry, we set $\varphi^0=f(\tau)$.
Then after a straightforward calculation, we obtain $b(\varphi^0)=
X_{\pm} a(\tau)$, where~\cite{Gumrukcuoglu:2011ew}
\begin{eqnarray}
X_{\pm} \equiv
\frac{1 + 2\alpha_{3} + \alpha_{4} \pm \sqrt{1 + \alpha_{3}
+ \alpha_{3}^{2} - \alpha_{4}} }{\alpha_{3} + \alpha_{4}}\,.
\end{eqnarray}
We note that $X_{\pm}$ must be positive. This constrains the
parameter space of the theory.
The equations of motion are given by
\begin{eqnarray}
\dot{a}^{2} - 1 - \frac{a^{2}}{3}
\left( \frac{\dot{\phi}^{2}}{2} - V_{\mathrm{eff}} \right) &=& 0\,,\\
\ddot{\phi} + 3 \frac{\dot{a}}{a} \dot{\phi} - V'_{\mathrm{eff}} &=& 0\,,
\end{eqnarray}
where a dot ($\dot{~}$) denotes $d/d\tau$, a prime (${~}'$) denotes
$d/d\phi$, and the effective potential is given by
\begin{eqnarray}
V_{\mathrm{eff}} (\phi) = V(\phi) + \Lambda_{\pm}\,,
\end{eqnarray}
with $\Lambda_{\pm}$ being the cosmological constant due to
the massive gravity terms,
\begin{eqnarray}
&&\Lambda_{\pm}
=- m_g^{2} \left( 1 - X_{\pm} \right) \Bigl[ 3\left(2 - X_{\pm} \right)
\nonumber \\
&&\quad
+ \alpha_{3} \left(1 - X_{\pm} \right)\left(4 - X_{\pm} \right)
+ \alpha_{4} \left(1 - X_{\pm} \right)^{2} \Bigr]\,.
\end{eqnarray}
Using these equations of motion, we obtain the on-shell action
as~\cite{Zhang:2012ap}
\begin{eqnarray}
S_{\mathrm{E}} = 2 \pi^{2} \int d\tau \left[ 2a^{3} V_{\mathrm{eff}}
- 6 a - m_g^{2} a^{3} Y_{\pm} \sqrt{-\dot{f}^{2}} \right]\,,
\end{eqnarray}
where
\begin{eqnarray}
Y_{\pm} \equiv 3 (1-X_{\pm})
+ 3 \alpha_{3} (1-X_{\pm})^{2} + \alpha_{4} (1-X_{\pm})^{3}.
\end{eqnarray}
Now we assume that the potential is sufficiently flat so that
at the leading-order approximation we can set $V_{\mathrm{eff}}'=0$,
hence $\dot\phi=0$. Thus we obtain
\begin{eqnarray}
\phi &=& \phi_{0}\,,
\\
a &=& \frac{1}{H} \cos H\tau,
\end{eqnarray}
where $H^{2} = V_{\mathrm{eff}}(\phi_0)/3$.
Then the action (over the half hemisphere) is given by~\cite{Zhang:2012ap}
\begin{eqnarray}\label{SEMG}
S_{\mathrm{E}} &\simeq&
-\frac{4\pi^{2}}{H^{2}}
\left( 1 - \frac{m_g^{2}}{F^{2}} \frac{Y_{\pm}}{X_{\pm}}
\alpha^2C\left(\alpha^2\right) \right)\,,
\label{SE}
\end{eqnarray}
where $\alpha = X_{\pm}F/H$ and
\begin{eqnarray}\label{C}
C\left(\alpha^2 \right) \equiv
\frac{2 - \sqrt{1 - \alpha^{2}} \left(2 + \alpha^{2} \right)}{6 \alpha^{4}}\,.
\end{eqnarray}
Comparing Eq.~(\ref{SEMG}) to~(\ref{SEGR}), using the Friedmann
equation $3H^2=V(\phi)$, one finds that a counter term proportional
to $m_g^2$ appears in dRGT massive gravity theory, which drastically
changes the behavior of the wavefunction so that the distribution of
probability may not peak at $H^2\simeq0$. It should be noted that
the above approximation is valid as long as the slow-roll condition
is satisfied and the field value is well outside the cutoff, i.e.,
$|\phi_0|> \phi_{\mathrm{cut}}$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{plotQ1}
\caption{\label{fig:plotQ}The function $Q(u)$ as a function of the
normalized Hubble parameter $h^2=1/u=H^2/(X_{\pm}^2F^2)$ for
$m_g^2Y_{\pm}/(F^2X_{\pm})=0$ (dashed), $3$ (red), $6$ (green), and
$10$ (blue) from top to bottom. As expected, in traditional
Einstein gravity where $m_g=0$, the probability decreases exponentially
as $H^2$ increases, which implies disfavor of inflation
with a large number of $e$-folds. However, in dRGT massive gravity,
counter terms proportional to $m_g^2$ appear so that the probability
peaks at much larger $H^2$. This implies a preference for a much
larger number of $e$-folds for a successful inflationary scenario,
hence offering a way to realize inflation in the context of quantum
gravity.}
\end{center}
\end{figure}
\section{Sufficient $\bm{e}$-folding number for inflation}
As mentioned above, we must have $X_{\pm}>0$. On the other hand,
$\alpha$ must be in the range $0<\alpha<1$~\cite{Zhang:2012ap}.
Moreover, for definiteness, we also assume $Y_{\pm}>0$. From
Eq.~(\ref{C}), $C(\alpha^2)>0$, which implies that the absolute value
of $S_E$ (or $-S_E$) is smaller in the massive gravity case than in
the Einstein case, for the same value of $H$.
Alternatively, if we fix the model parameters and vary $H$,
$\alpha^2C(\alpha^2)\to0$ as $H\to\infty$ while it approaches 1/3
as $H\to H_{\mathrm{min}}$ where $\alpha(H_{\mathrm{min}})=1$
or $H_{\mathrm{min}}=X_{\pm}F$.
Then there arises a hope that the probability of a universe with
small $H$ may be substantially suppressed, or conversely the
probability of a universe with larger $H$ is exponentially enhanced.
To see if this is the case or not, let us introduce the variable
$u\equiv\alpha^2=X_{\pm}^2F^2/H^2$ and rewrite Eq.~(\ref{SEMG}) in
the following form:
\begin{eqnarray}
S_{\rm
E}&=&-\frac{4\pi^2}{X_\pm^2F^2}\left[u-\frac{m_g^2}{F^2}\frac{Y_{\pm}}{X_{\pm}}u^2C(u)\right]\cr
&\equiv&-\frac{4\pi^2}{X_\pm^2F^2}Q(u)\,,
\end{eqnarray}
where $Q(u)\equiv -X_\pm^2F^2S_{\rm E}/(4\pi^2)$. Then one finds
\begin{eqnarray}
\frac{\partial Q}{\partial H^2}
&=&-\frac{\alpha^2}{H^2}\frac{\partial Q}{\partial\alpha^2}
\cr
&=&-\frac{\alpha^2}{H^2}\left(
1-\frac{m_g^2}{F^2}\frac{Y_{\pm}}{X_{\pm}}\frac{\alpha^2}{4\sqrt{1-\alpha^2}}
\right)\,.
\end{eqnarray}
The function $Q(u)$ is plotted in Fig.~\ref{fig:plotQ} as a function
of the normalized Hubble parameter $h^2=1/u$, for
$m_g^2Y_{\pm}/(F^2X_{\pm})=0$, $3$, $6$, and $10$, respectively. It
is readily seen that unlike the case for traditional Einstein
gravity where $m_g=0$ (dashed line), in dRGT massive gravity theory,
the function $Q$, hence $-2S_E$ will be maximized at
$\alpha^2=\alpha_{\mathrm{m}}^2$ where
\begin{eqnarray}
\frac{\alpha_{\mathrm{m}}^2}{\sqrt{1-\alpha_{\mathrm{m}}^2}}
=\frac{4F^2}{m_g^2}\frac{X_{\pm}}{Y_{\pm}}\,.
\end{eqnarray}
Since the left-hand side varies monotonically
from zero to infinity as $\alpha^2$ varies in the range $0<\alpha^2<1$,
there always exists a unique maximum provided that both $X_{\pm}$
and $Y_{\pm}$ are positive.
Thus the probability is maximized at $\alpha^2=\alpha_{\mathrm{m}}^2$
with the exponent given by
\begin{eqnarray}
\ln
P&\approx&-2S_{\mathrm{E}}=\frac{8\pi^2}{X_{\pm}^2F^2}G(\alpha_{\mathrm{m}}^2)\,;
\cr \cr G(u)&\equiv& \frac{u^2-2u+4-4\sqrt{1-u}}{3u}\,, \label{lnP}
\end{eqnarray}
which may be well approximated by $G(u)\approx u/2$ when $u\ll1$.
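The location of the maximum can also be checked numerically. The short Python sketch below (the value of $m_g^2Y_{\pm}/(F^2X_{\pm})$ is illustrative only) scans $Q(u)$ over $0<u<1$ and compares the maximizer with the analytic stationarity condition:
\begin{verbatim}
import numpy as np

kappa = 6.0  # illustrative value of m_g^2 Y / (F^2 X)
u = np.linspace(1e-6, 1.0 - 1e-9, 200000)
C = (2.0 - np.sqrt(1.0 - u) * (2.0 + u)) / (6.0 * u**2)
Q = u - kappa * u**2 * C
u_m = u[np.argmax(Q)]
# Analytic condition: u_m / sqrt(1 - u_m) = 4 / kappa.
print(u_m, u_m / np.sqrt(1.0 - u_m), 4.0 / kappa)
\end{verbatim}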
We see that the problem of the no-boundary proposal may be solved
for a sufficiently wide range of the parameter space. For example,
assuming both $X_{\pm}$ and $Y_{\pm}$ are of order unity, we may
consider the case $m_g^2\gg F^2$ which implies
$\alpha_{\mathrm{m}}^2\approx4 X_{\pm}F^2/(Y_{\pm}m_g^2)\ll1$.
Inserting this into Eq.~(\ref{lnP}), one finds
\begin{eqnarray}
\ln P\approx \frac{16\pi^2}{m_g^{2}X_{\pm}Y_{\pm}}
\approx\frac{4\pi^2}{H_{\mathrm{m}}^2}\,,
\end{eqnarray}
where $H_{\mathrm{m}}^2\approx X_{\pm}Y_{\pm}m_g^2/4$
is the Hubble parameter at which the probability is
maximized. Thus a classical universe emerges most probably
with the Hubble parameter $H^2=O(m_g^2)$. For $m_g^2$
close to the Planck scale this gives sufficient inflation.
\section{Triggering mass parameter}
We have shown that, for a sufficiently large mass parameter $m_g$,
the problem associated with the no-boundary proposal, namely that
the expected number of $e$-folds is too small to realize
successful inflation, may be solved. However, since we know that the
graviton mass should be extremely small today, we need a mechanism
to make it large only in the very early universe.
Here we present a couple of speculations for such a mechanism.
\\
\noindent (1) \textit{Field-dependent mass}: Let us consider the
case when $m_g^2$ is a function of the inflaton
$\phi$~\cite{Huang:2012}:
\begin{eqnarray}
m_g^2=m_g^2(\phi)\,.
\end{eqnarray}
If $m_g^2$ is finite for $|\phi|>\phi_{\mathrm{cut}}$ but
exponentially small
for $|\phi|<\phi_{\mathrm{cut}}$, then our analysis in the case of massive
gravity is still valid and the fact that there are no classical
histories at $|\phi|<\phi_{\mathrm{cut}}$ remains the same as in the
Einstein case~\cite{Hartle:2008ng}. Therefore, there will be a long
enough inflationary stage, and Einstein gravity will be recovered
when the inflation ends at $|\phi|<\phi_{\mathrm{cut}}$. For
example, a simple function like
$m_g^2=m_0^2\exp[-(\phi_{\mathrm{cut}}/\phi)^2]$ seems to satisfy
the requirement. Of course, however, we need a more thorough
analysis before we may conclude that such a model can actually lead
to the scenario described above (some discussions on its
dynamics have been done in some references, e.g.~\cite{LSS:2013}).
\noindent
(2) \textit{Running mass parameter}: Quantum effects may make $m_g^2$
run with the energy scale. If gravitational interactions are not asymptotically free,
then one may have a large graviton mass $m_g \sim M_{\mathrm{Pl}}$ at
the Planck energy scale, while it becomes small, $m_g\sim H_0$,
at the current energy scale, where $H_0$ is the current expansion rate of
the universe. If this scenario works, it may also be possible to explain the
accelerated expansion of the current universe simultaneously.
\section{Conclusion}
Studies of the Hartle-Hawking no-boundary proposal for the wavefunction
of the universe in the context of dRGT massive gravity open a
window to discussing inflationary scenarios in quantum gravity theories.
Traditionally, the no-boundary wavefunction exponentially prefers
a small number of $e$-folds near the minimum of the inflaton
potential, and hence it does not seem to predict the universe we
observe today. However, we found that the contribution from the
massive gravity sector can drastically change this situation. We
showed that, for a fairly wide range of the parameters of the
theory, the no-boundary wavefunction can have a peak at a
sufficiently large value of the Hubble parameter so that one obtains
a sufficient number of $e$-folds of inflation.
To make this model work, however, we need to find a way to trigger
the mass parameter in the very early universe while keeping it
extremely small in the current universe.
We speculated a couple of mechanisms for this purpose.
It is a future issue to see if these mechanisms can be actually
implemented in massive gravity.
In addition, we leave it as a future issue to consider the implications of potential problems of the dRGT model \cite{Deser:2012qx}
and applications to possible generalized massive gravity models that may not suffer from such problems, e.g., \cite{Lin:2013sja}.
\begin{acknowledgments}
This work was supported by the JSPS Grant-in-Aid for Scientific
Research (A) No.~21244033.
\end{acknowledgments}
\section{Introduction}
Continuous Variables Quantum Key Distribution (CV-QKD) tackles the problem of the generation and distribution of symmetric cryptographic keys without assuming any computational limitations while employing standard telecom equipment~\cite{grosshans02b}.
However, the amount of information available to an eavesdropper is highly dependent on the excess noise observed in the channel, which demands a careful and precise estimation of noise sources~\cite{laudenbach18}.
When implemented over standard optical fibres, one such noise source is random polarization drift in the communication channel, which will degrade the efficiency of the coherent detection scheme~\cite{liu20}.
Therefore a polarization drift compensation scheme is strictly necessary for the implementation of efficient and secure CV-QKD systems~\cite{wang19,zhao18,laudenbach18,pereira21}.
\par
Coherent-state CV-QKD typically encodes the information in the phase and amplitude of weak coherent states, thus allowing for implementation with current modulation methods and telecom-based equipment~\cite{grosshans02b,almeida21}.
The first implementations of CV-QKD protocols were carried out by using a transmitted local oscillator (LO) setup~\cite{ralph99}.
Nevertheless, that was found to be a security loophole, because an eavesdropper could manipulate the LO, thus hiding their tampering with the quantum signal itself~\cite{kleis17,laudenbach19}.
In that scenario, local LO (LLO) techniques, usually employing a relatively high power pilot tone aided by digital signal processing (DSP), are today the most common implementations of CV-QKD systems~\cite{kleis17,laudenbach19}.
Lately, LLO CV-QKD implementations using single-sideband modulation with true heterodyne detection have been proposed, avoiding low-frequency noise ~\cite{kleis17,laudenbach19}.
In order to further maximize noise rejection, CV-QKD implementations using root-raised-cosine (RRC) signal modulation have been explored~\cite{kleis17}.
Nevertheless, those implementations do not consider the impact of polarization mismatch between the quantum signal and the LLO.
Random polarization drift occurs naturally in fibres subjected to vibrations and temperature fluctuations, among other factors~\cite{liu19}.
Misalignments between the polarizations of the two laser fields interfering in the coherent detection scheme will severely reduce the efficiency of the detection scheme employed~\cite{liu20,kleis17}.
In CV-QKD communication systems, polarization drift is typically avoided, during a limited time window, by manually aligning the polarization of the signal with that of the LO~\cite{kleis17,laudenbach19}.
This may be appropriate in a laboratory environment, where stability times are typically in the range of hours~\cite{liu20}, but in field deployed fibres, especially aerially deployed ones, this stability will be on the order of minutes~\cite{liu19}.
Conversely, in classical communications, random polarization drift is compensated for by detecting both polarizations of the incoming light field and then compensating for the time-evolving drift in DSP~\cite{zhang12, faruk17}.
A system employing DSP aided polarization mismatch recovery was presented in~\cite{wang19}, using two optical hybrids coupled with four balanced coherent receivers.
\par
In this work, we present a polarization diverse receiver setup employing true heterodyne detection, requiring only two balanced coherent receivers, for use in CV-QKD applications, the first demonstration of such a scheme, to the best of our knowledge.
This experimental setup, coupled with the corresponding DSP, allows for passive polarization drift compensation, i.e. not requiring any manual tuning or feedback loop system.
We present experimental results showing that our system is able to achieve secure transmissions even in very adverse random polarization drift scenarios.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/systemDiagram}
\caption{Block diagram of the experimental system, the polarization diverse receiver system is highlighted.}
\label{fig:blockDiagram}
\end{figure*}
\par
This work is organized as follows.
We begin by fully describing our experimental system and the corresponding DSP utilized.
Secondly, we present and discuss experimental results extracted from the previously described system, showing the evolution of its estimated channel transmission, excess noise and secure key-rate.
We finalize this work with a summary of the major conclusions.
\section{System Description}
A block diagram of our system is presented in Figure~\ref{fig:blockDiagram}.
Alice starts by modulating the optical signal that she extracts from her local coherent source, which consists of a Yenista OSICS Band C/AG TLS laser, tuned to 1550.006~nm.
RRC modulation is chosen because of the possibility of using matched filtering at the receiver without inter-symbol interference~\cite{faruk17}, thus allowing for optimum Gaussian white-noise minimization.
The symbol rate was set at 38.4~MBd, with an 8-phase-shift keying (8-PSK) constellation, the security of which, in the asymptotic regime, was established in~\cite{becir12}.
In order to avoid the high levels of noise present in the low frequency part of the electromagnetic spectrum~\cite{tomer60}, the RRC signal is up-converted in the transmitter to an intermediate frequency, $f_Q = 38.4$~MHz.
Furthermore, this signal is frequency multiplexed with a DC pilot tone, i.e. $f_P = 0$~Hz, which will be used for frequency and phase recovery at the receiver.
This signal is fed into a Texas Instruments DAC39J84EVM digital to analog converter (DAC), which in turn drives a u2t Photonics 32 GHz IQ modulator coupled with a SHF807 RF amplifier.
The modulated signal is first passed through a Thorlabs PL100S State Of Polarization (SOP) Locker/Scrambler, which allows us to scramble the polarization state of the signal, and then attenuated using a Thorlabs EVOA1550F variable optical attenuator until the signal has on average 0.33 photons per symbol.
The signal is then sent through a single-mode fibre spool with length 40~km before arriving at the receiver.
At the receiver side, the signal is first passed through a polarization beam-splitter (PBS), splitting its polarizations and sending each to different 50/50 beam-splitters, where they are mixed with the LLO.
The LLO consists of a Yenista OSICS Band C/AG TLS laser tuned to 1549.999~nm.
In this situation the signals have a frequency shift of $f_S\approx1$~GHz, a value chosen to coincide with the flattest region of the balanced detectors' frequency response.
The LLO is also passed through a PBS, this one with its fast-axis rotated 45$^{\circ}$ in relation to the polarization alignment of the laser, effectively sending half the power to each individual 50/50 beam-splitter.
Both 50/50 beam-splitters are polarization maintaining, ensuring that the polarization of both the signal and LO mixed in each match.
The outputs of each 50/50 beam-splitter are fed into a pair of Thorlabs PDB480C-AC balanced optical receivers, connected to the inputs of a Texas Instruments ADC32RF45EVM ADC board, which is running at a sample rate of 2.4576 GS/s.
The digitized signal is then fed into the DSP stage, which is also presented in Figure~\ref{fig:blockDiagram}.
The bulk of the DSP is performed independently for each polarization, before the recovered constellations from each polarization are combined in a constant modulus algorithm (CMA) step.
\par
The DSP starts by performing frequency recovery, where four copies of the signal obtained from the ADC are taken and a tight digital pass-band filter, centered at $\tilde{f}_P = f_P + f_S$, is applied to one of them.
Extracting the phase from this filtered signal and fitting it against a time-vector will yield an estimation for $\tilde{f}_P$.
One of the other copies from the original signal is then downconverted by multiplying it by the complex oscillator $e^{-i2\pi\tilde{f}_Pt_k}$, where $t_k$ is a time-vector, thus placing the pilot signal at close to base band.
This signal will later be used for phase noise compensation.
The third copy of the original signal is downconverted by another complex oscillator of the form $e^{-i2\pi\left(\tilde{f}_P+\frac{f_Q}{2}\right)t_k}$, which will cause the pilot to be located at roughly $\frac{f_Q}{2}$.
This signal will later be used for clock recovery.
The fourth and final copy of the original signal is downconverted by a third complex oscillator of the form $e^{-i2\pi\left(\tilde{f}_P+\Delta f\right)t_k}$, where $\Delta f=f_Q-f_P$, resulting in the oscillator taking the explicit form $e^{-i2\pi\left(f_Q+f_S\right)t_k}$, this places the quantum signal at close to base band.
Note that the estimation of $\tilde{f}_P$ is assumed to contain errors.
\par
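As an illustration of this step, the following Python sketch (the brick-wall FFT filter, the bandwidth, and all names are ours, chosen for brevity rather than to match the actual DSP implementation) estimates the pilot frequency from a filtered copy of the capture:
\begin{verbatim}
import numpy as np

def estimate_pilot_frequency(r, fs, f_guess, bw=1e5):
    # r: complex capture; fs: sample rate (Hz); a tight pass-band
    # around f_guess isolates the pilot, whose unwrapped phase is
    # then fitted linearly against time.
    n = len(r)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    R = np.fft.fft(r)
    R[np.abs(f - f_guess) > bw / 2] = 0.0
    pilot = np.fft.ifft(R)
    t = np.arange(n) / fs
    phase = np.unwrap(np.angle(pilot))
    slope = np.polyfit(t, phase, 1)[0]  # rad/s
    return slope / (2.0 * np.pi)

# Down-conversion of a copy with the estimate f_tilde:
# base = r * np.exp(-2j * np.pi * f_tilde * np.arange(len(r)) / fs)
\end{verbatim}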
The frequency compensated pilot and clock signals are then passed through a low-pass and a band-pass filter, respectively.
This filtering step will both reduce the noise present in the signals and isolate them from each other.
The phase of the filtered pilot signal, which is equal to the phase mismatch between the two lasers apart from a constant value, which in turn is obtained during an initial calibration stage, is then extracted and used to compensate for the phase noise in both the quantum signal and the clock.
Since the pilot and signal are sampled at the same instant, the phase mismatch estimated from the former will equal that of the latter, thus residual phase noise will arise mainly from amplitude noise degrading the accuracy of the estimation~\cite{kleis17}.
The phase compensated quantum signal is then passed through its own matched filter.
The filtering stage on the quantum signal is postponed until after the phase compensation step, this is done because small errors in the frequency estimate can be corrected by the phase noise compensation and application of the matched filter on the signal while it is not at base band may cause distortion in the final obtained constellation.
\par
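A minimal sketch of the phase compensation itself reads (Python; \texttt{phase\_offset} stands for the constant obtained during the initial calibration stage):
\begin{verbatim}
import numpy as np

def phase_compensate(signal, pilot_filtered, phase_offset=0.0):
    # The filtered pilot tracks the laser phase mismatch up to a
    # constant offset obtained during calibration.
    phi = np.angle(pilot_filtered) - phase_offset
    return signal * np.exp(-1j * phi)
\end{verbatim}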
Finally, the filtered clock is used to re-sample both itself and the filtered quantum signal to one sample per symbol, with one sample being taken of each for every 0 of the imaginary component of the clock signal.
At the end of this clock recovery step we are in possession of four constellations: two corresponding to the clock signal, $x_\text{C}$ and $y_\text{C}$, and two corresponding to the quantum signal, $x_\text{Q}$ and $y_\text{Q}$.
\par
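The clock-driven resampling described above can be sketched as follows (Python; a nearest-sample approximation that takes the sample just before each zero of the imaginary part, without interpolation):
\begin{verbatim}
import numpy as np

def resample_on_clock(clock, *signals):
    # Zero crossings of Im{clock}: sign changes between samples.
    im = np.imag(clock)
    idx = np.flatnonzero(np.signbit(im[:-1]) != np.signbit(im[1:]))
    return [s[idx] for s in (clock,) + signals]
\end{verbatim}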
These four constellations are then fed into the CMA algorithm, which follows a slightly modified form of the method presented in~\cite{faruk17}.
Sliding blocks of N samples of each of the four constellations are isolated, taking the form of the column vectors
\begin{align}
\vec{x}_\text{Ci}(n) &= [x_\text{C}(n)~ x_\text{C}(n-1)~ ...~ x_\text{C}(n-N)]^T, \\
\vec{y}_\text{Ci}(n) &= [y_\text{C}(n)~ y_\text{C}(n-1)~ ...~ y_\text{C}(n-N)]^T, \\
\vec{x}_\text{Qi}(n) &= [x_\text{Q}(n)~ x_\text{Q}(n-1)~ ...~ x_\text{Q}(n-N)]^T, \\
\vec{y}_\text{Qi}(n) &= [y_\text{Q}(n)~ y_\text{Q}(n-1)~ ...~ y_\text{Q}(n-N)]^T.
\end{align}
At the start of the algorithm, the blocks $\vec{x}_\text{Ci,Qi}(0)/\vec{y}_\text{Ci,Qi}(0)$ are composed of all zeros except for the first element, which consists of the first element of the corresponding constellation.
The other elements of the sliding blocks are then progressively filled up.
The blocks for each signal are concatenated, resulting in the input column vectors~\cite{faruk17}
\begin{align}
\vec{u}_\text{Ci}(n) &= [\vec{x}_\text{Ci}(n);~\vec{y}_\text{Ci}(n)],\\
\vec{u}_\text{Qi}(n) &= [\vec{x}_\text{Qi}(n);~\vec{y}_\text{Qi}(n)].
\end{align}
Two N-tap filters, $\vec{h}_\text{x}$ and $\vec{h}_\text{y}$, are created, also consisting of column vectors.
At the start of the algorithm, the first element of each of $\vec{h}_\text{x}$ and $\vec{h}_\text{y}$ is set to 1, with all the others being 0.
These two filters are concatenated,
\begin{align}
\vec{h} &= [\vec{h}_\text{x};~\vec{h}_\text{y}],
\end{align}
with the resulting filter being applied to the input column vectors following
\begin{align}
s_\text{C}(n) &= \vec{h}^\dagger\cdot \vec{u}_\text{Ci}(n),\label{eq:s_C}\\
s_\text{Q}(n) &= \vec{h}^\dagger\cdot \vec{u}_\text{Qi}(n),\label{eq:s_Q}
\end{align}
which correspond to the clock and quantum output constellations, respectively.
Note that both $\vec{h}$ and $\vec{u}_\text{Ci,Qi}(n)$ are $2N\times1$ column vectors, so for each of the inner products in~\eqref{eq:s_C} and~\eqref{eq:s_Q}, one output constellation point will be generated.
After each step $n$, the ``error", $\varepsilon$, of the algorithm is computed through~\cite{faruk17}
\begin{equation}
\varepsilon = \text{E}[|\vec{x}_\text{C}|]+\text{E}[|\vec{y}_\text{C}|] - |s_\text{C}(n)|,
\end{equation}
which measures the distance of the amplitude of the latest output point of the clock constellation from the expected clock constellation amplitude.
This ``error" is then used to update the filter $h$ through~\cite{faruk17}
\begin{equation}
\vec{h} = \vec{h} + \mu \varepsilon s^*_\text{C}(n) \vec{u}_\text{Ci}(n)\,.
\end{equation}
The output clock constellation can then be discarded, while the quantum output constellation is then evaluated for security.
\par
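For reference, a compact Python sketch of the above CMA loop is given below (the tap count $N$, the step size $\mu$, and other defaults are illustrative, not the values used in the experiment; the amplitude error follows the update rule above):
\begin{verbatim}
import numpy as np

def cma_combine(xC, yC, xQ, yQ, N=32, mu=1e-3):
    # One-sample-per-symbol clock (xC, yC) and quantum (xQ, yQ)
    # constellations from the two polarizations; returns the
    # recovered quantum constellation.
    h = np.zeros(2 * N, dtype=complex)
    h[0] = 1.0; h[N] = 1.0  # first element of h_x and h_y set to 1
    R = np.mean(np.abs(xC)) + np.mean(np.abs(yC))
    bxC = np.zeros(N, complex); byC = np.zeros(N, complex)
    bxQ = np.zeros(N, complex); byQ = np.zeros(N, complex)
    out = np.empty(len(xQ), dtype=complex)
    for n in range(len(xQ)):
        bxC = np.roll(bxC, 1); bxC[0] = xC[n]
        byC = np.roll(byC, 1); byC[0] = yC[n]
        bxQ = np.roll(bxQ, 1); bxQ[0] = xQ[n]
        byQ = np.roll(byQ, 1); byQ[0] = yQ[n]
        uC = np.concatenate((bxC, byC))
        uQ = np.concatenate((bxQ, byQ))
        sC = np.vdot(h, uC)      # h^dagger . u_C
        out[n] = np.vdot(h, uQ)  # h^dagger . u_Q
        eps = R - np.abs(sC)     # amplitude error on the clock
        h = h + mu * eps * np.conj(sC) * uC
    return out
\end{verbatim}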
Protocol security is evaluated following the methodology presented in~\cite{becir12}.
The achievable secret key rate is given by
\begin{equation}\label{eq:keyRate}
K=\beta I_\text{BA}-\chi_\text{BE},
\end{equation}
where $\beta$ is the reconciliation efficiency, $I_\text{BA}$ is the mutual information between Bob and Alice, given by~\cite{becir12}
\begin{equation}
I_\text{BA} = \log_2\left(1+\frac{2\tilde{T}\eta\braket{n}}{2+\tilde{T}\eta\tilde{\epsilon}+2\epsilon_\text{thermal}}\right),
\end{equation}
where, in turn, $\tilde{T}$ is the estimate for the channel transmission, $\eta$ is the quantum efficiency of Bob's detection system, $\braket{n}$ is the average number of photons per symbol, $\tilde{\epsilon}$ is the estimate for the excess channel noise and $\epsilon_\text{thermal}$ is the receiver thermal noise, expressed in shot noise units (SNU).
In~\eqref{eq:keyRate}, $\chi_\text{BE}$ describes the Holevo bound that majors the amount of information that Eve can gain on Bob's recovered states, being obtained through equation (17) in~\cite{becir12}.
For the results presented in this work $\braket{n}$ was set at 0.33 photons per symbol, $\eta=0.72$ and $\tilde{\epsilon}$ and $\epsilon_\text{thermal}$ were dynamically estimated for each measurement run.
The shot and thermal noise estimations were made with recourse to a capture of the receiver output with the transmitter laser turned off and with both lasers turned off, respectively.
To obtain precise shot and thermal noise figures, the same DSP that was applied to the quantum signal was applied to the shot and thermal noise captures obtained previously, with the noise captures being down-converted, phase compensated, and filtered before their variance was computed.
This was necessary because both noise sources are highly dependent on their spectral position, as can be seen in their spectra, shown in Figure~\ref{fig:noiseSpectra}.
\begin{figure}[h]
\centering\includegraphics[width=\linewidth, trim=4cm 8.5cm 4.7cm 8.6cm, clip]{figures/noiseSpectra}
\caption{Spectra of the thermal and shot noise snapshots taken from the experimental system.}
\label{fig:noiseSpectra}
\end{figure}
Since we cannot measure the shot noise without also including the thermal noise, the latter was obtained first and its value was subtracted from the variance of the former, yielding an estimate for the true shot noise.
The variance of the shot and thermal noise signals, named here $\sigma^2_\text{shot}$ and $\sigma^2_\text{thermal}$ respectively, are both expressed in ADC counts.
Thermal noise is converted to SNU by dividing it by the shot noise estimate $\sigma^2_\text{shot}$, explicitly
\begin{equation}
\epsilon_\text{thermal}=\frac{\sigma^2_\text{thermal}}{\sigma^2_\text{shot}}.
\end{equation}
The signal output by Bob's DSP is also converted to SNU; this in turn is accomplished by dividing the ADC count output by $\sqrt{\sigma^2_\text{shot}}$.
Bob's and Alice's states, $b$ and $a$ respectively, are related by the normal linear model~\cite{kleis17}:
\begin{equation}
b = ta+z,
\end{equation}
where $a$ is assumed to be normalized such that ${\text{E}\lbrace|a|^2\rbrace = 1}$, $t = \sqrt{\eta T 2\braket{n}}$ and $z$ is the noise contribution, which follows a normal distribution with null mean and variance ${\sigma^2=2+2\epsilon_\text{thermal}+\eta T \epsilon}$.
$t$ and $\sigma^2$ can be estimated through~\cite{kleis17}
\begin{equation}
\tilde{t} = \text{Re}\left\lbrace\frac{\sum_{i=1}^Na_ib_i^*}{N}\right\rbrace,\qquad
\tilde{\sigma^2}=\frac{\sum_{i=1}^N|b_i-\tilde{t}a_i|^2}{N},
\end{equation}
the transmission and excess noise are then estimated through
\begin{equation}
\tilde{T} = \frac{t^2}{\eta2\braket{n}},\qquad\tilde{\epsilon}=\frac{\sigma^2-2-2\epsilon_\text{thermal}}{\eta \tilde{T}}.
\end{equation}
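Putting these estimators together, a minimal Python sketch of the parameter-estimation step reads (variable names are ours; \texttt{a} holds Alice's normalized symbols and \texttt{b} Bob's SNU-scaled measurements):
\begin{verbatim}
import numpy as np

def estimate_channel(a, b, eta, n_mean, eps_thermal):
    a = np.asarray(a); b = np.asarray(b)
    t = np.real(np.sum(a * np.conj(b))) / len(a)
    sigma2 = np.sum(np.abs(b - t * a) ** 2) / len(a)
    T = t ** 2 / (eta * 2.0 * n_mean)
    eps = (sigma2 - 2.0 - 2.0 * eps_thermal) / (eta * T)
    # Mutual information between Bob and Alice, as given above.
    I_BA = np.log2(1.0 + 2.0 * T * eta * n_mean
                   / (2.0 + T * eta * eps + 2.0 * eps_thermal))
    return T, eps, I_BA
\end{verbatim}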
For a truly secure communication, the uncertainty of the channel parameter estimations needs to be taken into account, choosing the confidence bound that gives the most advantage to Eve~\cite{leverrier10}.
As the objective of this letter is to show the capabilities of our polarization diverse receiver, only the central estimate for the parameters is used.
\section{Experimental Results}
The system was run freely for half an hour in scrambled mode and another half hour in unscrambled mode, with 2~ms snapshots taken every 10 seconds.
A total of 200 snapshots were taken in each scenario, each snapshot containing 65536 symbols.
The occurrence frequencies for the channel transmission estimates in the scrambled scenario, for both the recovered and the single-polarization channels, are presented in Figure~\ref{fig:channelTransmission}, where a line indicating the empirically determined channel transmission is also included.
\begin{figure}[h]
\centering\includegraphics[width=\linewidth, trim=3.9cm 8.5cm 4.6cm 9.1cm, clip]{figures/channelTransmissionHistogram}
\caption{Distribution of the estimated channel transmissions for both the single polarization and recovered channels in the scrambled scenario.}
\label{fig:channelTransmission}
\end{figure}
From the results in Figure~\ref{fig:channelTransmission} we can see that the single polarization channels exhibit a much lower transmission on average, with the most likely values being located nowhere near the actual value of the channel transmission.
Meanwhile, the distribution of the estimated channel transmission from the recovered channel exhibits a maximum very close to the transmission determined empirically, with the observed deviation being attributable to losses in the receiver and in the fibre connectors.
In Figure~\ref{fig:excessNoise} we present the values of the estimated excess noise in the recovered channel observed for each of the 200 results taken in the scrambled mode.
\begin{figure}[h]
\centering\includegraphics[width=\linewidth, trim=4.2cm 8.4cm 4.5cm 9cm, clip]{figures/excessNoiseFull}
\caption{Evolution of the estimated excess noise for the 200 results taken. Situations where a secure key could be transmitted are highlighted with green asterisks. Results taken in the scrambled scenario.}
\label{fig:excessNoise}
\end{figure}
Situations where a secure key could be transmitted are highlighted with green asterisks.
We can see that our system was able to recover secure keys for the duration of the experiment, with excess noise hovering quite close to 0, apart from some deviations caused by failures in signal recovery, coinciding with the cases with very low transmission in the recovered channel observed in Figure~\ref{fig:channelTransmission}.
We have every reason to believe that the system could have been run for an indefinite amount of time and still achieve secure transmissions.
Further optimization of the noise calibration step could improve the overall efficiency of the system.
Some situations exhibit negative excess noise; these can be attributed to fluctuations of the thermal and shot noise, causing the variance observed during the snapshot to be lower than at the noise estimation steps.
Time-evolving imbalances of the optical components could also be a contributing factor~\cite{pereira21}.
Finally, we show the experimentally observed secure key rates as a function of channel transmission in Figure~\ref{fig:secureKeyRate}, alongside the corresponding theoretical curve, for which the average values of the observed excess noise and thermal noise were used.
\begin{figure}[h]
\centering\includegraphics[width=\linewidth, trim=3.8cm 8.4cm 4.5cm 9cm, clip]{figures/keyRateUnscrambled}
\caption{Achievable key rate, given by~\eqref{eq:keyRate}, for our polarization diverse receiver, with $\beta=0.95$.}
\label{fig:secureKeyRate}
\end{figure}
Data from both scrambled and unscrambled polarization scenarios is included.
We can see that our experimental results closely adhere to the theoretical curve.
The small separation between the scrambled and unscrambled results can be attributed to the results being taken on different days, thus under slightly different conditions (for example, in temperature).
We see that in both scenarios we were able to achieve secure key rates of roughly 0.01~bits/symbol.
No secure transmissions were observed for the individual polarization channels.
\section{Conclusion}
In summary, we present a polarization diverse receiver architecture that avoids the need for manual calibration or complex feedback loops to recover from random polarization drift.
Our system works by passively monitoring both polarizations continuously and recovering the full channel from the single polarization ones.
Our system was capable of working for an indefinite period of time at a transmission distance compatible with metro network connections.
Furthermore, this stability is achieved with a relatively simple and inexpensive receiver design.
We believe our contribution brings CV-QKD closer to widespread adoption.
\begin{acknowledgments}
This work was supported in part by Fundação para a Ciência e a Tecnologia (FCT) through national funds, by the European Regional Development Fund (FEDER), through the Competitiveness and Internationalization Operational Programme (COMPETE 2020) of the Portugal 2020 framework, under the PhD Grant SFRH/BD/139867/2018, projects Q.DOT (POCI-01-0247-FEDER-039728), UIDB/50008/2020-UIDP/50008/2020 (action QuRUNNER and QUESTS).
\end{acknowledgments}
\section{Introduction}
\label{gidqnn_intro}
Quantum computing has attracted great attention in recent years, especially since the realization of quantum supremacy~\cite{nature_arute2019quantum,science_abe8770} with noisy intermediate-scale quantum (NISQ) devices~\cite{quantum_Preskill2018}.
Due to mild requirements on the gate noise and the circuit connectivity, variational quantum algorithms (VQAs)~\cite{nat_review_phys_cerezo2021} become one of the most promising frameworks for achieving practical quantum advantages on NISQ devices.
Specifically, different VQAs have been proposed for many topics, e.g., quantum chemistry~\cite{rmp_mcardle2020qcc,nc_peruzzo2014var,nature_kandala2017hardware,quantum_Higgott2019vqc,nc_grimsley2019adaptive,prx_hempel2018qcion,science_frank2020hf,prxq_tang2021adaptvqe,pra_delgado2021vqa}, quantum simulations~\cite{rmp_georgescu2014qs,quantum_yuan2019theoryofvariational,npj_mcardle2019variational,prl_endo2020_vqsgp,nature_neill2021qring,nature_mi2021time,science_randall2021mbltime,science_amita2021obs,science_semeghini2021topoliquid,science_satzinger2021topoorder}, machine learning~\cite{pra_schuld2020circuit,nature_havlivcek2019sl,prl_schuld2019qml,prr_yuxuan2020powerpqc,ieee_samuel2020rl,nature_saggio2021rl,pra_heliang2021experimentqgan,prxq_yuxuan2021learnability}, numerical analysis~\cite{pra_lubasch2020vqanonlinear,pra_kubo2021vqasde,prxq_yongxin2021vqd,pra_hailing2021vqapoisson,pra_kyriienko2021nonlinearde}, and linear algebra problems~\cite{bulletin_bravo2020variational,scibull_xiaosi2021vaforla,quantum_wang2021vqsvd}.
Recently, various small-scale VQAs have been implemented on real quantum computers for tasks such as finding the ground state of molecules~\cite{prx_hempel2018qcion,science_frank2020hf,prxq_tang2021adaptvqe} and exploring
applications in supervised learning~\cite{nature_havlivcek2019sl}, generative learning~\cite{pra_heliang2021experimentqgan} and reinforcement learning~\cite{nature_saggio2021rl}.
A typical variational quantum algorithm is a trainable quantum-classical hybrid framework based on parameterized quantum circuits (PQCs)~\cite{qst_Benedetti2019pqc}. Similar to classical counterparts such as neural networks~\cite{jmlr_larochelle2009exploring}, first-order methods including gradient descent~\cite{icml_simon2019_gdglobalminima} and its variants~\cite{compstat_bottou2010sgd} are widely employed to optimize the loss function of VQAs. However, VQAs may face a trainability barrier when scaling up the size of quantum circuits (i.e., the number of involved qubits or the circuit depth), which is known as the barren plateau problem~\cite{nc_mcclean2018barren}.
Roughly speaking, the barren plateau describes the phenomenon that the value of the loss function and its gradients concentrate around their expectation values with exponentially small variances.
We remark that gradient-based methods can hardly handle training under the barren plateau phenomenon~\cite{nc_cerezo2020cost}. Both the machine noise of the quantum channel and the statistical noise induced by measurements could severely degrade the estimation of gradients. Moreover, optimizing a loss with a flat surface takes much more time with inaccurate gradients than in the ideal case. Thus, solving the barren plateau problem is imperative for achieving practical quantum advantages with VQAs.
In this paper, we propose Gaussian initializations for VQAs which have theoretical guarantees on the trainability. We prove that for Gaussian initialized parameters with certain variances, the expectation of the gradient norm is lower bounded by the inverse of a polynomial in the qubit number and the circuit depth.
Technically, we consider various cases regarding VQAs in practice, which include local or global observables, independently or jointly employed parameters, and noisy optimizations induced by finite measurements.
To summarize, our contributions are fourfold:
\begin{itemize}
[topsep=0pt,itemsep=-0.1ex,partopsep=1ex,parsep=1ex, leftmargin=0.8cm]
\item We propose a Gaussian initialization strategy for deep variational quantum circuits. By setting the variance $\gamma^2=\mathcal{O}(\frac{1}{L})$ for $N$-qubit $L$-depth circuits with independent parameters and local observables, we lower bound the expectation of the gradient norm by ${{\rm poly}(N,L)}^{-1}$ as provided in Theorem~\ref{tqnn_cost_gaussian_gradient}, which outperforms previous ${2^{-\mathcal{O}(L)}}$ results.
\item We extend the gradient norm result to the global observable case in Theorem~\ref{gidqnn_gaussian_global}, which was believed to have the barren plateau problem even for very shallow circuits. Moreover, our bound holds for correlated parameterized gates, which are widely employed in practical tasks like quantum chemistry and quantum simulations.
\item
We provide further analysis on the number of measurements necessary for estimating the gradient, where the noisy estimate differs from the ideal one by a Gaussian noise. The result is presented in Corollary~\ref{gidqnn_corollary_noise}, which proves that $\mathcal{O}(\frac{L}{\epsilon})$ measurements are sufficient to guarantee a large gradient.
\item
We conduct various numerical experiments, including finding the ground energy and the ground state of the Heisenberg model and the LiH molecule, which belong to quantum simulation and quantum chemistry, respectively. Experimental results show that Gaussian initializations outperform uniform initializations, which verifies the proposed theorems.
\end{itemize}
\subsection{Related work}
\label{gidqnn_related_work}
The barren plateau phenomenon was first noticed in Ref.~\citep{nc_mcclean2018barren}, which proves that if the circuit distribution forms a unitary $2$-design~\cite{cmp_harrow2009random}, the variance of the gradient of the circuit vanishes to zero at a rate exponential in the qubit number. Subsequently, several positive results were proved for shallow quantum circuits, such as the alternating-layered circuit \cite{nc_cerezo2020cost, iop_2021Uvarovlocality} and the quantum convolutional neural network~\cite{prx_pesah2020absence}, when the observable is constrained to a small number of qubits (local observable). For shallow circuits with $N$ qubits and $\mathcal{O}(\log N)$ depth, the variance of the gradient has the order ${\rm poly}(N)^{-1}$ if gate blocks in the circuit are sampled from local $2$-design distributions. Later, several works proved an inherent relationship between the barren plateau phenomenon and the complexity of states generated by the circuit. Specifically, circuit states that satisfy the volume law could lead to the barren plateau problem~\cite{prxq_ortiz2021eibp}. Expressive quantum circuits, whose expressivity is measured by the distance between the Haar distribution and the distribution of circuit states, could have vanishing gradients~\cite{holmes2021connecting}. Since random circuits form approximate $2$-designs when they reach linear depth~\cite{cmp_harrow2009random}, deep quantum circuits were generally believed to suffer from the barren plateau problem.
The parameterization of quantum circuits is achieved by tuning the time of Hamiltonian simulations, so the gradient of the circuit satisfies the parameter-shift rule~\cite{prl_jun2017parametershift}. Thus, the variance of the loss in VQAs and that of its gradient have similar behaviors for uniform distributions~\cite{nc_mcclean2018barren, arxiv_zhang2020towardtqnn}. One corollary of the parameter-shift rule is that the gradient of depolarized noisy quantum circuits vanishes exponentially with increasing circuit depth~\cite{nc_wang2021noise}, since the loss itself vanishes at the same rate. Another corollary is that neither gradient-free~\cite{quantum_Arrasmith2021effectofbarren} nor higher-order methods~\cite{iop_2021CerezoHigher} can solve the barren plateau problem.
Although most existing theoretical and practical results imply the barren plateau phenomenon in deep circuits, VQAs with deep circuits do have impressive advantages from other aspects. For example, the loss of VQAs is highly non-convex, which makes the global minima hard to find~\cite{prl_bittel2021vqanp} for both shallow and deep circuits. Meanwhile, for VQAs with shallow circuits, local minima and global minima have considerable gaps~\cite{icml_xiaodi2021localminima}, which could severely influence the training performance of gradient-based methods. Contrary to shallow cases, deep VQAs have vanishing gaps between local minima and global minima~\cite{arxiv_anschuetz2021critical}. In practice, experiments show that overparameterized VQAs~\cite{arxiv_larocca2021theory} can be optimized towards the global minima. Moreover, VQAs with deep circuits have more expressive power than shallow circuits~\cite{arxiv_du2021efficient, prxq_tobias2021geometry, arxiv_caro2021generalization}, which implies the potential to handle more complex tasks in quantum machine learning and related fields.
Inspired by various advantages of deep VQAs, some approaches have been proposed recently for solving the related barren plateau problem in practice. For example, the block-identity strategy~\cite{quantum_grant2019initialization} initializes gate blocks in pairs and sets parameters inversely, such that the initial circuit is equivalent to the identity circuit with zero depth. Since shallow circuits have no vanishing gradient problem, the corresponding VQA is trainable with guarantees at the first step. However, we remark that the block-identity condition no longer holds after the first step, and the structure of the circuit needs to be designed properly. The layerwise training method~\cite{qmi_skolik2020layerwise} trains parameters in the circuit layer by layer, such that the depth of the trainable part is limited. However, this method implements circuits with larger depth than the original circuit, and parameters in the first few layers are not optimized. A recent work provides theoretical guarantees on the trainability of deep circuits with certain structures~\cite{arxiv_zhang2021towardtqnn}. However, the proposed theory only suits VQAs with local observables, while many practical applications, such as finding the ground state of molecules and quantum compiling~\cite{quantum_Khatri2019quantum, iop_Sharma2020vqc}, apply global observables.
\section{Notations and quantum computing basics}
\label{gidqnn_pre}
We denote by $[N]$ the set $\{ 1,\cdots,N\}$.
The form $\|\cdot\|_2$ represents the $\ell_2$ norm for the vector and the spectral norm for the matrix, respectively.
We denote by $a_j$ the $j$-th component of the vector $\bm{a}$.
The tensor product operation is denoted as ``$\otimes$". The conjugate transpose of a matrix $A$ is denoted as $A^{\dag}$. The trace of a matrix $A$ is denoted as $\text{Tr}[A]$.
We denote $\nabla_{\bm{\theta}}f$ as the gradient of the function $f$ with respect to the variable $\bm{\theta}$. We employ notations $\mathcal{O}$ to describe complexity notions.
Now we introduce quantum computing knowledge and notations.
The pure state of a qubit could be written as $|\phi\> = a|0\>+b|1\>$, where $a,b \in \mathbb{C}$ satisfy $|a|^2 + |b|^2 =1$, and $|0\> = (1,0)^T, |1\> = (0,1)^T$.
The $N$-qubit space is formed by the tensor product of $N$ single-qubit spaces.
For pure states, the corresponding density matrix is defined as $\rho=|\phi\>\<\phi|$, in which $\<\phi| = (|\phi\>)^{\dag}$. We use the density matrix to represent general mixed quantum states, i.e., $\rho = \sum_{k} c_k |\phi_k\>\<\phi_k|$, where $c_k \in \mathbb{R}$ and $\sum_k c_k =1$.
A single-qubit operation to the state behaves like the matrix-vector multiplication and can be referred to as the gate
$\Qcircuit @C=0.8em @R=1.5em {
\lstick{} & \gate{} & \qw
}$ in the quantum circuit language.
Specifically, commonly used single-qubit operations include $R_X (\theta)=e^{-i\theta X}$, $R_Y (\theta)=e^{-i\theta Y}$, and $R_Z (\theta)=e^{-i\theta Z}$, where
\begin{equation*}
X = \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix},
Y = \begin{pmatrix}
0 & -i \\
i & 0
\end{pmatrix},
Z = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\end{equation*}
Pauli matrices will be referred to as
$\{I, X, Y, Z\} = \{\sigma_0, \sigma_1, \sigma_2, \sigma_3\}$ for convenience. Moreover, two-qubit operations, such as the CZ gate and the $\sqrt{i{\rm SWAP}}$ gate, are employed for generating quantum entanglement:
\begin{align*}
\text{CZ} ={} \left(
\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{matrix}
\right) , \sqrt{i{\rm SWAP}} ={} \left(
\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1/\sqrt{2} & i/\sqrt{2} & 0 \\
0 & i/\sqrt{2} & 1/\sqrt{2} & 0 \\
0 & 0 & 0 & 1
\end{matrix}
\right) .
\end{align*}
We could obtain information from the quantum system by performing measurements, for example, measuring the state $|\phi\> =a|0\>+b|1\>$ generates $0$ and $1$ with probability $p(0)=|a|^2$ and $p(1)=|b|^2$, respectively. Such a measurement operation could be mathematically referred to as calculating the average of the observable $O=\sigma_3$ under the state $|\phi\>$:
\begin{equation*
\<\phi| O |\phi\> \equiv \text{Tr} [\sigma_3 |\phi\>\<\phi| ] = |a|^2 - |b|^2 = p(0) - p(1).
\end{equation*}
Mathematically, quantum observables are Hermitian matrices. Specifically, the average of a unitary observable under an arbitrary state is bounded in $[-1,1]$. We remark that $\mathcal{O}(\frac{1}{\epsilon^2})$ measurements provide an $\epsilon\|O\|_2$-error estimate of the value $\text{Tr}[O\rho]$.
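As a concrete illustration of this sampling cost, the following Python sketch estimates $\<\phi|\sigma_3|\phi\>$ from simulated single-qubit measurement outcomes; the amplitudes and helper code are illustrative choices, not part of any experiment.
\begin{verbatim}
import numpy as np

# Estimate <phi| Z |phi> = |a|^2 - |b|^2 by sampling measurement outcomes.
a, b = 0.6, 0.8                       # illustrative amplitudes, |a|^2+|b|^2 = 1
exact = abs(a)**2 - abs(b)**2         # exact value of Tr[sigma_3 |phi><phi|]

rng = np.random.default_rng(0)
for eps in (0.1, 0.01):
    T = int(1 / eps**2)               # O(1/eps^2) shots for an eps-error estimate
    shots = rng.choice([+1, -1], size=T, p=[abs(a)**2, abs(b)**2])
    print(T, shots.mean(), exact)     # the sample mean concentrates within ~eps
\end{verbatim}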
\section{Framework of general VQAs}
\label{gidqnn_general_vqa}
In this section, we introduce the framework of general VQAs and corresponding notations. A typical variational quantum algorithm can be viewed as the optimization of the function $f$, which is defined as the expectation of observables. The expectation varies for different initial states and different parameters $\bm{\theta}$ used in quantum circuits. Throughout this paper, we define
\begin{equation}\label{gidqnn_general_loss_function}
f(\boldsymbol{\theta}) = \text{Tr} \left[ O V(\boldsymbol{\theta}) \rho_{\text{in}} V(\boldsymbol{\theta})^\dag \right]
\end{equation}
as the loss function of VQAs, where $V(\bm{\theta})$ denotes the parameterized quantum circuit, the hermitian matrix $O$ denotes the observable, and $\rho_{\text{in}}$ denotes the density matrix of the input state.
Next, we explain observables, input states, and parameterized quantum circuits in detail.
Both the observable and the density matrix can be decomposed in the Pauli basis.
We define the \emph{locality} of a quantum observable as the maximum number of non-identity Pauli matrices in the tensor product, such that the corresponding coefficient is not zero. Thus, the observable with the constant locality is said to be \emph{local}, and the observable that acts on all qubits is said to be \emph{global}.
The observable and the input state in VQAs could have various formulations for specific tasks.
For the quantum simulation or the quantum chemistry scenario, observables are constrained to be the system Hamiltonians, while input states are usually prepared as computational basis states. For example, $(|0\>\<0|)^{\otimes N}$ is used frequently in quantum simulations~\cite{prl_endo2020_vqsgp, nature_neill2021qring}. Hartree–Fock (HF) states~\cite{nc_grimsley2019adaptive,prx_hempel2018qcion}, which are prepared by the tensor product of $\{|0\>,|1\>\}$, serve as good initial states in quantum chemistry tasks~\cite{nc_grimsley2019adaptive, science_frank2020hf, prxq_tang2021adaptvqe, pra_delgado2021vqa}.
For quantum machine learning (QML) tasks, initial states encode the information of the training data, which could have a complex form. Many encoding strategies have been introduced in the literature~\cite{pra_schuld2020circuit, scirep_araujo2021divide, arxiv_schatzki2021entangled}. In contrary with the complex initial states, observables employed in QML are quite simple. For example, $\pi_0 = |0\>\<0|$ serves as the observable in most QML tasks related with the classification~\cite{pra_schuld2020circuit,nature_havlivcek2019sl,prl_schuld2019qml} or the dimensional reduction~\cite{mlst_Bravo_Prieto_2021}.
Apart from the input states and the observable choices, parameterized quantum circuits employed in different variational quantum algorithms have various structures, which are also known as \emph{ansatzes}~\cite{farhi2014quantum, mcardle2019variational, a12020034}.
Specifically, the ansatz in the VQA denotes the initial guess on the circuit structure. For example, alternating-layered ansatzes~\cite{farhi2014quantum, fingerhuth2018quantum} are proposed for approximating the Hamiltonian evolution.
Recently, hardware efficient ansatzes~\cite{nature_kandala2017hardware, quantum_Nakaji2021expressibilityof} and tensor-network based ansatzes~\cite{natphy_cong2019quantum, prl_Felser2021tensoransatz}, which could utilize parameters efficiently on noisy quantum computers, have been developed for various tasks, including quantum simulations and quantum machine learning. For quantum chemistry tasks, unitary coupled cluster ansatzes~\cite{scirep_yung2014transistor, pra_shen2017ucc} are preferred since they preserve the number of electrons corresponding to circuit states.
In practice, an ansatz is deployed as a sequence of single-qubit rotations $\{e^{-i\theta \sigma_k}, k \in \{1,2,3\}\}$ and two-qubit gates. We remark that the gradient of the VQA satisfies the parameter-shift rule~\cite{prl_jun2017parametershift, arxiv_crooks2019gradients, arxiv_wierichs2021general}; namely, for independently deployed parameters $\theta_j$, the corresponding partial derivative is
\begin{equation}\label{gidqnn_parameter_shift}
\frac{\partial f}{\partial \theta_j} = f(\bm{\theta}_{+}) - f(\bm{\theta}_{-}),
\end{equation}
where $\boldsymbol{\theta}_+$ and $\boldsymbol{\theta}_-$ are different from $\boldsymbol{\theta}$ only at the $j$-th parameter: $\theta_j \rightarrow \theta_j \pm \frac{\pi}{4}$.
Thus, the gradient of $f$ could be estimated efficiently, which allows optimizing VQAs with gradient-based methods~\cite{sciadv_zhu2019tqc, quantum_Stokes2020quantumnatural, quantum_Sweke2020stochasticgradient}.
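As a minimal numerical check of the parameter-shift rule~(\ref{gidqnn_parameter_shift}), the following Python sketch compares the shifted-difference estimate with the analytic derivative for a single gate $e^{-i\theta\sigma_1}$ acting on $|0\>$ with the observable $\sigma_3$; the code is a self-contained illustration rather than part of our implementation.
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)

def f(theta):
    # f(theta) = <0| V(theta)^dag sigma_3 V(theta) |0> with V = exp(-i theta X)
    V = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * X
    psi = V @ ket0
    return np.real(psi.conj() @ Z @ psi)   # equals cos(2 theta)

theta = 0.37
shift = f(theta + np.pi / 4) - f(theta - np.pi / 4)   # parameter-shift rule
exact = -2 * np.sin(2 * theta)                        # analytic derivative
print(shift, exact)                                   # the two values agree
\end{verbatim}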
\section{Theoretical results about Gaussian initialized VQAs}
\label{gidqnn_gaussian}
In this section, we provide theoretical guarantees on the trainability of deep quantum circuits through a proper design of the initial parameter distribution. In short, we prove that the gradient norm of the $L$-layer $N$-qubit circuit is lower bounded by $1/{\rm poly}(L,N)$ if initial parameters are sampled from a Gaussian distribution with $\mathcal{O}(1/L)$ variance. Our bounds significantly improve existing results on the gradients of VQAs, which have the order $2^{-\mathcal{O}(L)}$ for shallow circuits and the order $2^{-\mathcal{O}(N)}$ for deep circuits. We prove different results for the local and global observable cases in Section~\ref{gidqnn_gaussian_local} and Section~\ref{gidqnn_gaussian_global}, respectively.
\subsection{Independent parameters with local observables}
\label{gidqnn_gaussian_local}
\begin{figure}[t]
\centerline{
\Qcircuit @C=1.0em @R=0.7em {
& & & {L} \text{ times} & & & & & & & \\
\lstick{} & \qw & \gate{R(\theta_{\ell,1})} & \qw & \multigate{3}{{\rm CZ}_{\ell}} & \qw & \gate{R_X(\theta_{L+1,1})} & \qw & \gate{R_Y(\theta_{L+2,1})} & \qw & \meter \\
\lstick{} & \qw & \gate{R(\theta_{\ell,2})} & \qw & \ghost{{\rm CZ}_{\ell}} & \qw & \gate{R_X(\theta_{L+1,2})} & \qw & \gate{R_Y(\theta_{L+2,2})} & \qw & \meter \\
\lstick{} & & {\vdots} & & \nghost{{\rm CZ}_{\ell}} & & {\vdots} & & {\vdots} & & {\vdots} \\
\lstick{} & \qw & \gate{R(\theta_{\ell,N})} & \qw & \ghost{{\rm CZ}_{\ell}} & \qw & \gate{R_X(\theta_{L+1,N})} & \qw & \gate{R_Y(\theta_{L+2,N})} & \qw & \meter
\inputgroupv{2}{5}{.8em}{4.5em}{\rho_{\text{in}}}
\gategroup{2}{3}{5}{5}{0.8em}{(}
\gategroup{2}{3}{5}{5}{0.8em}{)}
}
}
\caption{The quantum circuit framework for the local observable case. The circuit performs $L$ layers of single-qubit rotations and CZ layers on the input state $\rho_{\text{in}}$, followed by an $R_X$ layer and an $R_Y$ layer. In the $\ell$-th single-qubit layer, we employ the gate $e^{-i\theta_{\ell,n} G_{\ell,n}}$ for all qubits $n \in [N]$, where $G_{\ell,n}$ is a Hermitian unitary, which anti-commutes with $\sigma_3$ for $\ell \in [L]$. In each $\text{CZ}_\ell$ layer, CZ gates are employed between arbitrary qubit pairs. {The measurement is performed on the $S$ qubits on which the observable acts nontrivially.}}
\label{gidqnn_local_circuit}
\end{figure}
First, we introduce the Gaussian initialization of parameters for the local observable case. We use the quantum circuit illustrated in Figure~\ref{gidqnn_local_circuit} as the ansatz in this section. The circuit in Figure~\ref{gidqnn_local_circuit} performs $L$ layers of single-qubit rotations and CZ gates on the input state $\rho_{\text{in}}$, followed by an $R_X$ layer and an $R_Y$ layer. We denote the single-qubit gate on the $n$-th qubit of the $\ell$-th layer as $e^{-i\theta_{\ell,n} G_{\ell,n}}$, $\forall \ell \in \{1,\cdots,L+2\}$ and $n \in \{1,\cdots,N\}$, where $\theta_{\ell,n}$ is the corresponding parameter and $G_{\ell,n}$ is a Hermitian unitary. To eliminate degenerate parameters, we require that single-qubit gates in the first $L$ layers do not commute with the CZ gate.
After gates operations, we measure the observable
\begin{equation}\label{gidqnn_local_observable}
\sigma_{\bm{i}}= \sigma_{(i_1, i_2,\cdots,i_N)} = \sigma_{i_1} \otimes \sigma_{i_2} \otimes \cdots \otimes \sigma_{i_N},
\end{equation}
where $i_j \in \{0,1,2,3\},\forall j \in \{1,\cdots,N\}$, and $\bm{i}$ contains $S$ non-zero elements.
Figure~\ref{gidqnn_local_circuit} provides a general framework of VQAs with local observables, which covers various ansatzes proposed in the literature
\cite{arxiv_zhang2021towardtqnn,tqc_chen2021expover,prxq_tobias2021geometry,qmi_skolik2020layerwise}.
The bound on the gradient norm of the Gaussian initialized variational quantum circuit is provided in Theorem~\ref{tqnn_cost_gaussian_gradient}, with the proof in the Appendix.
\begin{theorem}\label{tqnn_cost_gaussian_gradient}
Consider the $L$-layer $N$-qubit variational quantum circuit $V(\bm{\theta})$ defined in Figure~\ref{gidqnn_local_circuit} and the cost function $f(\bm{\theta}) = {\rm Tr} \left[ \sigma_{\bm{i}} V(\bm{\theta}) \rho_{{\rm in}} V(\bm{\theta})^{\dag}\right]$, where the observable $\sigma_{\bm{i}}$ follows the definition~(\ref{gidqnn_local_observable}). Then,
\begin{equation}\label{tqnn_cost_gaussian_main_eq}
\mathop{\mathbb{E}}\limits_{\bm{\theta}} \|\nabla_{\bm{\theta}} f\|^2 \geq \frac{L}{S^{S} (L+2)^{S+1}} {\rm Tr} \left[ \sigma_{\bm{j}} \rho_{\rm in} \right]^2,
\end{equation}
where $S$ is the number of non-zero elements in $\bm{i}$, and the index $\bm{j}=(j_1,j_2,\cdots,j_N)$ such that $j_m = 0, \forall i_m = 0$ and $j_m = 3, \forall i_m \neq 0$. The expectation is taken with the Gaussian distribution $\mathcal{N}\left(0, \frac{1}{4S(L+2)}\right)$ for the parameters $\bm{\theta}$.
\end{theorem}
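A sketch of the corresponding initialization routine is given below; the function name and array layout are illustrative, while the variance follows the prescription $\gamma^2 = \frac{1}{4S(L+2)}$ of Theorem~\ref{tqnn_cost_gaussian_gradient}.
\begin{verbatim}
import numpy as np

def gaussian_init(num_qubits, L, S, seed=0):
    # One angle per single-qubit gate: (L+2) rotation layers of N gates each,
    # each drawn i.i.d. from N(0, gamma^2) with gamma^2 = 1/(4*S*(L+2)).
    gamma2 = 1.0 / (4 * S * (L + 2))
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(gamma2), size=(L + 2, num_qubits))

theta = gaussian_init(num_qubits=15, L=18, S=2)  # the Heisenberg setting used later
print(theta.shape, theta.var())                  # empirical variance close to 1/160
\end{verbatim}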
Compared to existing works~\cite{nc_mcclean2018barren, nc_cerezo2020cost, iop_2021Uvarovlocality, prx_pesah2020absence, arxiv_zhang2021towardtqnn}, Theorem~\ref{tqnn_cost_gaussian_gradient} provides a larger lower bound on the gradient norm, which improves the dependence on the depth of the trainable circuit exponentially.
Different from the unitary $2$-design distributions~\cite{nc_mcclean2018barren, nc_cerezo2020cost, iop_2021Uvarovlocality, prx_pesah2020absence} or the uniform distribution in the parameter space~\cite{arxiv_zhang2020towardtqnn, arxiv_larocca2021diagnosing, arxiv_zhang2021towardtqnn} employed in existing works, we analyze the expectation of the gradient norm under a depth-dependent Gaussian distribution. This change follows the natural idea that trainability is not required over the whole parameter space or the entire circuit space, but only along the parameter trajectory during training.
Moreover, a large gradient norm can only guarantee trainability in the beginning stage, rather than throughout the whole optimization, since a large gradient at trained parameters would correspond to non-convergence. Thus, the barren plateau problem is crucial when initial parameters have vanishing gradients, which has been proved for deep VQAs with uniform initializations. On the contrary, the barren plateau problem can be avoided if parameters are initialized properly with large gradients, as provided in Theorem~\ref{tqnn_cost_gaussian_gradient}.
Finally, Gaussian initialized circuits converge to benign values if optima appear around $\bm{\theta}=\bm{0}$, which holds in many cases. For example, over-parameterized quantum circuits have benign local minima~\cite{arxiv_anschuetz2021critical} if the number of parameters exceeds the over-parameterization threshold. Moreover, over-parameterized circuits have exponential convergence rates~\cite{arxiv_liu2022analytic,arxiv_you2022convergence} on tasks like quantum machine learning and the quantum eigensolver. These works indicate that quantum circuits with sufficient depth can find good optima near the initial point, which is similar to the classical wide neural network case~\cite{NEURIPS2018_5a4be1fa}.
\subsection{Correlated parameters with global observables}
\label{gidqnn_gaussian_global}
Next, we extend the Gaussian initialization framework to general quantum circuits with correlated parameters and global observables.
Quantum circuits with correlated parameters have wide applications in quantum simulations and quantum chemistry~\cite{nc_grimsley2019adaptive, science_frank2020hf, prxq_tang2021adaptvqe, pra_delgado2021vqa}. One example is the Givens rotation
\begin{equation}
{R^{\rm Givens}(\theta)} ={} \left(
\begin{matrix}
1 & 0 & 0 & 0 \\
0 & \cos \theta & -\sin \theta & 0 \\
0 & \sin \theta & \cos \theta & 0 \\
0 & 0 & 0 & 1
\end{matrix}
\right)
\phantom{} ={}
\begin{array}{l}
\Qcircuit @C=0.5em @R=2em {
\lstick{} & \multigate{1}{\rotatebox{90}{$\sqrt{i{\rm SWAP}}$}} & \gate{R_Z(\frac{-\theta}{2})} & \multigate{1}{\rotatebox{90}{$\sqrt{i{\rm SWAP}}$}} & \qw & \qw \\
\lstick{} & \ghost{\rotatebox{90}{$\sqrt{i{\rm SWAP}}$}} & \gate{R_Z(\frac{\theta+\pi}{2})} & \ghost{\rotatebox{90}{$\sqrt{i{\rm SWAP}}$}} & \gate{R_Z(\frac{\pi}{2})} & \qw
}
\end{array}
\label{gidqnn_gaussian_givens_rotation}
\end{equation}
which preserves the number of electrons in parameterized quantum states~\cite{science_frank2020hf}.
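The following sketch constructs the $2$-qubit Givens rotation~(\ref{gidqnn_gaussian_givens_rotation}) as an explicit matrix and checks that it only mixes the single-excitation basis states, which is the electron-number-preserving property used below; the code is purely illustrative.
\begin{verbatim}
import numpy as np

def givens(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]], dtype=float)

G = givens(0.3)
assert np.allclose(G @ G.T, np.eye(4))   # unitary (real orthogonal)
# Basis order |00>,|01>,|10>,|11>: only |01> and |10> mix, so the Hamming
# weight (electron number) of each basis state is preserved.
print(G @ np.array([0, 1, 0, 0]))        # cos(theta)|01> + sin(theta)|10>
\end{verbatim}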
To analyze VQAs with correlated parameterized gates, we consider the ansatz $V(\bm{\theta})=\prod_{j=L}^{1} V_j(\theta_j)$, which consists of parameterized gates $\{V_j(\theta_j)\}_{j=1}^{L}$. {Denote by $h_j$ the number of unitary gates that share the same parameter $\theta_j$. Thus, the parameterized gate $V_j (\theta_j)$ consists of a list of fixed and parameterized unitary operations
\begin{equation}
V_j (\theta_j) = \prod_{k=1}^{h_j} W_{jk} e^{-i \frac{\theta_j}{a_j} G_{jk}}
\end{equation}
with the term $a_j \in \mathbb{R}\setminus\{0\}$, where the Hamiltonian $G_{jk}$ and the fixed gate $W_{jk}$ are unitary $\forall k \in [h_j]$. Moreover, we consider the objective function
\begin{equation}
\label{gidqnn_gaussian_global_f_eq}
f(\bm{\theta}) = {\rm Tr} \left[ O \prod_{j=L}^{1} V_j(\theta_j) \rho_{\rm {in}} \prod_{j=1}^{L} V_j(\theta_j)^\dag \right],
\end{equation}}
where $\rho_{\rm {in}}$ and $O$ denote the input state and the observable, respectively.
In practical tasks of quantum chemistry, the molecule Hamiltonian $H$ serves as the observable $O$. Minimizing the function (\ref{gidqnn_gaussian_global_f_eq}) provides the ground energy and the corresponding ground state of the molecule.
We provide the bound on the gradient norm of the Gaussian initialized variational quantum circuit in Theorem~\ref{gidqnn_theorem_related}, with the proof in the Appendix.
Similar to the local observable case, we could bound the norm of the gradient of Eq.~(\ref{gidqnn_gaussian_global_f_eq}) if parameters are initialized with $\mathcal{O}(\frac{1}{L})$ variance. Theorem~\ref{gidqnn_theorem_related} provides nontrivial bounds when the gradient at the zero point is large. This condition holds when the mean-field theory provides a good initial guess to the corresponding problems, e.g. the ground energy task in quantum chemistry and quantum many-body problems~\cite{prl_Amico1998hubbard}.
\begin{theorem}\label{gidqnn_theorem_related}
Consider the $N$-qubit variational quantum algorithms with the objective function (\ref{gidqnn_gaussian_global_f_eq}). Then the following formula holds for any $\ell \in \{1,\cdots,L\}$,
\begin{equation}
\label{gidqnn_theorem_related_eq}
\mathop{\mathbb{E}}\limits_{\bm{\theta}} \left( \frac{\partial f}{\partial \theta_{\ell} } \right)^2 \geq (1-\epsilon) \left( \frac{\partial f}{\partial \theta_{\ell} } \right)^2 \bigg|_{\bm{\theta}=\bm{0}},
\end{equation}
where $\bm{0}\in \mathbb{R}^{L}$ is the zero vector.
The expectation is taken with Gaussian distributions $\mathcal{N}(0, \gamma_j^2)$ for parameters in $\bm{\theta}=\{{\theta}_j\}_{j=1}^{L}$, where the variance $\gamma_j^2 \leq \frac{a_j^2 \epsilon}{16 h_j^2 (3h_j(h_j-1)+1) L \|O\|_2^2} \left.\left( \frac{\partial f}{\partial \theta_{\ell}} \right)^2 \right|_{\bm{\theta}=\bm{0}}$, $\forall j \in [L]$.
\end{theorem}
We remark that Theorem~\ref{gidqnn_theorem_related} not only provides an initialization strategy, but also guarantees the update direction during the training. Different from the classical neural network, where the gradient could be calculated accurately, the gradient of VQAs, obtained by the parameter-shift rule~(\ref{gidqnn_parameter_shift}), is perturbed by the measurement noise.
A guide on the size of acceptable measurement noise could be useful for the complexity analysis of VQAs.
Specifically, define $\bm{\theta}^{(t-1)}$ as the parameter at the $t-1$-th iteration. {Denote by ${\bm{\theta}}^{(t)}$ and $\tilde{\bm{\theta}}^{(t)}$ the parameter updated from $\bm{\theta}^{(t-1)}$ for noiseless and noisy cases, respectively. Then $\tilde{\bm{\theta}}^{(t)}$ differs from ${\bm{\theta}}^{(t)}$ by a Gaussian error term due to the measurement noise. We expect to derive the gradient norm bound for $\tilde{\bm{\theta}}^{(t)}$, as provided in Corollary~\ref{gidqnn_corollary_noise}.}
Thus, $\frac{1}{\gamma^2}=\mathcal{O}(\frac{L}{\epsilon})$ measurements are sufficient to guarantee a large gradient.
\begin{corollary}\label{gidqnn_corollary_noise}
Consider the $N$-qubit variational quantum algorithms with the objective function (\ref{gidqnn_gaussian_global_f_eq}). Then the following formula holds for any $\ell \in \{1,\cdots,L\}$,
\begin{equation}\label{gidqnn_corollary_noise_eq}
\mathop{\mathbb{E}}\limits_{\bm{\delta}} \left( \frac{\partial f}{\partial \theta_{\ell} } \right)^2 \bigg|_{\bm{\theta}=\bm{\theta}^{(t)}+\bm{\delta}} \geq (1-\epsilon) \left( \frac{\partial f}{\partial \theta_{\ell} } \right)^2 \bigg|_{\bm{\theta}=\bm{\theta}^{(t)}}.
\end{equation}
The expectation is taken with Gaussian distributions $\mathcal{N}(0, \gamma_j^2)$ for parameters $\bm{\delta}=\{\delta_j\}_{j=1}^{L}$, where the variance $\gamma_j^2 \leq \frac{a_j^2 \epsilon}{16 h_j^2 (3h_j(h_j-1)+1) L \|O\|_2^2} \left.\left( \frac{\partial f}{\partial \theta_{\ell}} \right)^2 \right|_{\bm{\theta}=\bm{\theta}^{(t)}}$, $\forall j \in [L]$.
\end{corollary}
Corollary~\ref{gidqnn_corollary_noise} is derived by analyzing the gradient of the function $g(\bm{\delta})=f(\bm{\delta}+\bm{\theta}^{(t)})$ via Theorem~\ref{gidqnn_theorem_related}. For any number of measurements such that the corresponding Gaussian noise $\bm{\delta}$ satisfies the condition in Corollary~\ref{gidqnn_corollary_noise}, the trainability at the updated point is guaranteed.
\section{Experiments}
\label{gidqnn_experiment}
In this section, we analyze the training behavior of two variational quantum algorithms, i.e., finding the ground energy and state of the Heisenberg model and the LiH molecule, respectively. All numerical experiments are performed using the PennyLane package~\cite{bergholm2018pennylane}.
\subsection{Heisenberg model}
\label{gidqnn_exp_heisenberg}
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_loss_gd_15_300_0.pdf}
\label{gidqnn_exp_heisenberg_gd_loss_1}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_loss_gd_15_300_1.pdf}
\label{gidqnn_exp_heisenberg_gd_loss_2}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_grad_gd_15_300_0.pdf}
\label{gidqnn_exp_heisenberg_gd_grad_1}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_grad_gd_15_300_1.pdf}
\label{gidqnn_exp_heisenberg_gd_grad_2}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_loss_adam_15_300_0.pdf}
\label{gidqnn_exp_heisenberg_adam_loss_1}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_loss_adam_15_300_1.pdf}
\label{gidqnn_exp_heisenberg_adam_loss_2}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_grad_adam_15_300_0.pdf}
\label{gidqnn_exp_heisenberg_adam_grad_1}
}
\subfigure[]{
\includegraphics[width=.23\linewidth]{xxx_grad_adam_15_300_1.pdf}
\label{gidqnn_exp_heisenberg_adam_grad_2}
}
\caption{Numerical results of finding the ground energy of the Heisenberg model.
The first row shows training results with the gradient descent optimizer, where
Figures~\ref{gidqnn_exp_heisenberg_gd_loss_1} and \ref{gidqnn_exp_heisenberg_gd_loss_2} illustrate the loss function corresponding to Eq.~(\ref{gidqnn_heisenberg_ham_eq}) during the optimization with accurate and noisy gradients, respectively. Figures~\ref{gidqnn_exp_heisenberg_gd_grad_1} and \ref{gidqnn_exp_heisenberg_gd_grad_2} show the $\ell_2$ norm of the corresponding gradients.
The second row shows training results with the Adam optimizer, where Figures~\ref{gidqnn_exp_heisenberg_adam_loss_1} and \ref{gidqnn_exp_heisenberg_adam_loss_2} illustrate the loss function with accurate and noisy gradients, respectively. Figures~\ref{gidqnn_exp_heisenberg_adam_grad_1} and \ref{gidqnn_exp_heisenberg_adam_grad_2} show the $\ell_2$ norm of the corresponding gradients.
Each line denotes the average of $5$ rounds of optimizations.}
\label{gidqnn_exp_heisenberg_fig_1}
\end{figure}
In the first task, we aim to find the ground state and the ground energy of the Heisenberg model~\cite{bonechi1992heisenberg}. The corresponding Hamiltonian matrix is
\begin{equation} \label{gidqnn_heisenberg_ham_eq}
H = \sum_{i=1}^{N-1} X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1},
\end{equation}
where $N$ is the number of qubits, $X_i=I^{\otimes (i-1)} \otimes X \otimes I ^{\otimes (N-i)}$, $Y_i=I^{\otimes (i-1)} \otimes Y \otimes I ^{\otimes (N-i)}$, and $Z_i=I^{\otimes (i-1)} \otimes Z \otimes I ^{\otimes (N-i)}$.
We employ the loss function defined by Eq.~(\ref{gidqnn_general_loss_function}) with the input state $(|0\>\<0|)^{\otimes N}$ and the observable (\ref{gidqnn_heisenberg_ham_eq}). Thus, by minimizing the function (\ref{gidqnn_general_loss_function}), we can obtain the least eigenvalue of the observable~(\ref{gidqnn_heisenberg_ham_eq}), which is the ground energy.
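For small $N$, the Hamiltonian~(\ref{gidqnn_heisenberg_ham_eq}) can also be built explicitly as a dense matrix, which is useful for benchmarking against the exact ground energy; the following sketch (with illustrative helper names) does so via Kronecker products.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(P, i, N):
    # P acting on qubit i (1-indexed), identity elsewhere
    return reduce(np.kron, [P if k == i else I2 for k in range(1, N + 1)])

def heisenberg(N):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(1, N):
        for P in (X, Y, Z):
            H += op_on(P, i, N) @ op_on(P, i + 1, N)
    return H

print(np.linalg.eigvalsh(heisenberg(4)).min())   # exact ground energy, N = 4
\end{verbatim}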
We adopt the ansatz with $N=15$ qubits, which consists of $L_1=10$ layers of $R_Y R_X CZ$ blocks. In each block, we first employ the CZ gate to neighboring qubits pairs $\{(1,2)\cdots,(N,1)\}$, followed by $R_X$ and $R_Y$ rotations for all qubits.
Overall, the quantum circuit has $300$ parameters.
{We consider three initialization methods for comparison, i.e., initializations with the Gaussian distribution $\mathcal{N}(0,\gamma^2)$ and the uniform distribution in $[0,2\pi]$, respectively, and the zero initialization (all parameters equal to $0$ at the initial point).}
We remark that each term in the observable (\ref{gidqnn_heisenberg_ham_eq}) contains at most $S=2$ non-identity Pauli matrices, which is consistent with the $(S,L)=(2,18)$ case of Theorem~\ref{tqnn_cost_gaussian_gradient}.
Thus, we expect that the Gaussian initialization with the variance $\gamma^2=\frac{1}{4S(L+2)}=\frac{1}{160}$ could provide trainable initial parameters.
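A hedged PennyLane-style sketch of this ansatz and the Gaussian initialization is shown below; the exact implementation used in our experiments may differ in details such as wire ordering and the Hamiltonian construction.
\begin{verbatim}
import pennylane as qml
import numpy as np

N, L1 = 15, 10
coeffs, ops = [], []
for i in range(N - 1):                   # nearest-neighbour Heisenberg Hamiltonian
    for P in (qml.PauliX, qml.PauliY, qml.PauliZ):
        coeffs.append(1.0)
        ops.append(P(i) @ P(i + 1))
H = qml.Hamiltonian(coeffs, ops)

dev = qml.device("default.qubit", wires=N)

@qml.qnode(dev)
def loss(theta):                         # theta has shape (L1, 2, N)
    for l in range(L1):
        for i in range(N):
            qml.CZ(wires=[i, (i + 1) % N])   # CZ ring on pairs (1,2),...,(N,1)
        for i in range(N):
            qml.RX(theta[l, 0, i], wires=i)
        for i in range(N):
            qml.RY(theta[l, 1, i], wires=i)
    return qml.expval(H)

theta0 = np.random.normal(0.0, np.sqrt(1 / 160), size=(L1, 2, N))
print(loss(theta0))                      # 300 parameters in total
\end{verbatim}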
In the experiment, we train VQAs with gradient descent (GD)~\cite{bottou2012stochastic} and Adam optimizers~\cite{kingma2014adam}, respectively.
The learning rate is $0.01$ for both the GD and the Adam case.
Since the estimation of gradients on real quantum computers could be perturbed by statistical measurement noise, we compare optimizations using accurate and noisy gradients. For the latter case, we set the variance of measurement noises to be $0.01$.
The numerical results of the Heisenberg model are shown in the Figure~\ref{gidqnn_exp_heisenberg_fig_1}.
The loss during the training with gradient descents is shown in Figures~\ref{gidqnn_exp_heisenberg_gd_loss_1} and \ref{gidqnn_exp_heisenberg_gd_loss_2} for the accurate and the noisy gradient cases, respectively.
{The Gaussian initialization outperforms the other two initializations with faster convergence rates. Figures~\ref{gidqnn_exp_heisenberg_gd_grad_1} and \ref{gidqnn_exp_heisenberg_gd_grad_2} verify that Gaussian initialized VQAs have larger gradients in the early stage than uniformly initialized VQAs. We notice that zero initialized VQAs cannot be trained with accurate gradient descent, since the initial gradient equals zero. This problem is alleviated in the noisy case, as shown in Figures~\ref{gidqnn_exp_heisenberg_gd_loss_2} and \ref{gidqnn_exp_heisenberg_gd_grad_2}. Since the gradient is close to zero at the initial stage, the update direction mainly depends on the measurement noise, which follows a Gaussian distribution. Thus, the parameters in the noisy zero initialized VQAs are expected to accumulate enough variance, which takes around 10 iterations based on Figure~\ref{gidqnn_exp_heisenberg_adam_grad_2}.
As illustrated in Figure~\ref{gidqnn_exp_heisenberg_gd_loss_2}, the loss function corresponding to the zero initialization decreases quickly after the variance accumulation stage.}
Results in Figures~\ref{gidqnn_exp_heisenberg_adam_loss_1} and \ref{gidqnn_exp_heisenberg_adam_grad_2} show similar training behaviors using the Adam optimizer.
\subsection{Quantum chemistry}
\label{gidqnn_exp_chem}
In the second task, we aim to find the ground state and the ground energy of the LiH molecule. We follow the ansatz settings of Refs.~\cite{prxq_tang2021adaptvqe, pra_delgado2021vqa}. For a molecule with $n_e$ active electrons and $n_o$ free spin orbitals, the corresponding VQA contains $N=n_o$ qubits, which employs the HF state~\cite{nc_grimsley2019adaptive, prx_hempel2018qcion}
\begin{equation*}
|\phi_{\rm HF} \> = \underbrace{|1\> \otimes \cdots |1\>}_{n_e} \otimes \underbrace{|0\> \otimes \cdots |0\>}_{n_o-n_e}
\end{equation*}
as the input state.
We construct the parameterized quantum circuit with Givens rotation gates~\cite{prxq_tang2021adaptvqe}, where each gate is implemented on 2 or 4 qubits with one parameter.
Specifically, for the LiH molecule, the number of electrons $n_e=2$, the number of free spin orbitals $n_o=10$, and the number of different Givens rotations is $L=24$~\cite{pra_delgado2021vqa}. We follow the molecule Hamiltonian $H_{\rm LiH}$ defined in Ref.~\cite{pra_delgado2021vqa}. Thus, the loss function for finding the ground energy of LiH is defined as
\begin{equation}\label{gidqnn_chem_loss_eq}
f(\bm{\theta}) = \text{Tr} \left[ H_{\rm LiH} V_{\rm Givens} (\bm{\theta}) |\phi_{\rm HF} \> \<\phi_{\rm HF} | V_{\rm Givens} (\bm{\theta})^\dag \right],
\end{equation}
where $V_{\rm Givens} (\bm{\theta})=\prod_{i=1}^{24} R_i^{\rm Givens} (\theta_i)$ denotes the product of all parameterized Givens rotations of the LiH molecule.
By minimizing the function (\ref{gidqnn_chem_loss_eq}), we can obtain the least eigenvalue of the Hamiltonian $H_{\rm LiH}$, which is the ground energy of the LiH molecule.
In practice, we initialize parameters in the VQA~(\ref{gidqnn_chem_loss_eq}) with three distributions for comparison, i.e., the Gaussian distribution $\mathcal{N}(0,\gamma^2)$, the zero distribution (all parameters equal to $0$), and the uniform distribution in $[0,2\pi]$.
{For 2-qubit Givens rotations, the term $(h,a)=(2,2)$ as shown in Eq.~(\ref{gidqnn_gaussian_givens_rotation}). For 4-qubit Givens rotations, the term $(h,a)=(8,8)$ \cite{arrazola2021universal}.}
{Thus, we set the variance in the Gaussian distribution
$\gamma^2 = \frac{8^2 \times \frac{1}{2} }{48 \times 8^4 \times 24}$,
which matches the $(L,h,a,\epsilon)=(24,8,8,\frac{1}{2})$ case of Theorem~\ref{gidqnn_theorem_related}. }
Similar to the task of the Heisenberg model, we consider both the accurate and the noisy gradient cases, where the variance of noises in the latter case is the constant $0.001$. Moreover, we consider the noisy case with adaptive noises, where the variance of the noise on each partial derivative $\frac{\partial f}{\partial \theta_\ell}\big|_{\bm{\theta}=\bm{\theta}^{(t)}}$ in the $t$-th iteration is
\begin{equation} \label{gidqnn_chem_noise_gamma}
\gamma^2 = \frac{1}{96\times 24 \times 8^2 \|H_{\rm LiH}\|_2^2} \left( \frac{\partial f}{\partial \theta_\ell} \right)^2 \bigg|_{\bm{\theta}=\bm{\theta}^{(t-1)}}.
\end{equation}
The variance in Eq.~(\ref{gidqnn_chem_noise_gamma}) matches the $(L,h,a,\epsilon)=(24,8,8,\frac{1}{2})$ case of Corollary~\ref{gidqnn_corollary_noise} when the VQA is nearly converged:
\begin{equation*}
\frac{\partial f}{\partial \theta_\ell}\big|_{\bm{\theta}=\bm{\theta}^{(t)}} \approx \frac{\partial f}{\partial \theta_\ell}\big|_{\bm{\theta}=\bm{\theta}^{(t-1)}}.
\end{equation*}
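The following sketch (with illustrative names; {\tt norm\_H} stands in for $\|H_{\rm LiH}\|_2$, which we treat as a given constant) shows how such adaptively scaled noise can be added to a gradient estimate.
\begin{verbatim}
import numpy as np

def noisy_gradient(grad_now, grad_prev, norm_H, rng):
    # Per-component noise variance from the adaptive rule above;
    # grad_prev is the gradient vector of the previous iteration.
    var = grad_prev**2 / (96 * 24 * 8**2 * norm_H**2)
    return grad_now + rng.normal(0.0, np.sqrt(var))
\end{verbatim}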
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=.31\linewidth]{chem_energy_gd_10_24_0_0.pdf}
\label{gidqnn_exp_chem_gd_loss_1}
}
\subfigure[]{
\includegraphics[width=.31\linewidth]{chem_energy_gd_10_24_0_1.pdf}
\label{gidqnn_exp_chem_gd_loss_2}
}
\subfigure[]{
\includegraphics[width=.31\linewidth]{chem_energy_gd_10_24_1_0.pdf}
\label{gidqnn_exp_chem_gd_loss_3}
}
\subfigure[]{
\includegraphics[width=.31\linewidth]{chem_energy_adam_10_24_0_0.pdf}
\label{gidqnn_exp_chem_adam_loss_1}
}
\subfigure[]{
\includegraphics[width=.31\linewidth]{chem_energy_adam_10_24_0_1.pdf}
\label{gidqnn_exp_chem_adam_loss_2}
}
\subfigure[]{
\includegraphics[width=.31\linewidth]{chem_energy_adam_10_24_1_0.pdf}
\label{gidqnn_exp_chem_adam_loss_3}
}
\caption{Numerical results of finding the ground energy of the molecule LiH.
The first and second rows show training results with the gradient descent and the Adam optimizer, respectively. The left, the middle, and the right columns show
results using accurate gradients, noisy gradients with adaptively distributed noises, and noisy gradients with constant-variance noises, respectively.
The variance of the noises in the middle column (Figures~\ref{gidqnn_exp_chem_gd_loss_2} and \ref{gidqnn_exp_chem_adam_loss_2}) follows Eq.~(\ref{gidqnn_chem_noise_gamma}), while the variance of the noises in the right column (Figures~\ref{gidqnn_exp_chem_gd_loss_3} and \ref{gidqnn_exp_chem_adam_loss_3}) is $0.001$.
Each line denotes the average of $5$ rounds of optimizations.}
\label{gidqnn_exp_chem_fig}
\end{figure}
In the experiment, we train VQAs with gradient descent and Adam optimizers.
Learning rates are set to be $0.1$ and $0.01$ for GD and Adam cases, respectively.
The loss (\ref{gidqnn_chem_loss_eq}) during training iterations is shown in Figure~\ref{gidqnn_exp_chem_fig}.
Optimization results with gradient descents are shown in Figures~\ref{gidqnn_exp_chem_gd_loss_1}-\ref{gidqnn_exp_chem_gd_loss_3} for the accurate gradient case, the adaptive noisy gradient case, and the noisy gradient case with the constant noise variance $0.001$, respectively.
The variance of the noise in the adaptive noisy gradient case follows Eq.~(\ref{gidqnn_chem_noise_gamma}).
Figures~\ref{gidqnn_exp_chem_gd_loss_1} and \ref{gidqnn_exp_chem_gd_loss_2} show similar performance, where the loss $f$ with the Gaussian initialization and the zero initialization converges to within $10^{-4}$ of the global minimum $f_*$. The loss with the uniform initialization stays more than $10^{-1}$ above the global minimum.
Figure~\ref{gidqnn_exp_chem_gd_loss_3} shows the training with constantly perturbed gradients. The Gaussian initialization and the zero initialization achieve convergence to within $10^{-3}$, while the loss function with the uniform initialization is still more than $10^{-1}$ above the global minimum.
Figures~\ref{gidqnn_exp_chem_adam_loss_1}-\ref{gidqnn_exp_chem_adam_loss_3} show similar training behaviors using the Adam optimizer.
Based on Figures~\ref{gidqnn_exp_chem_gd_loss_1}-\ref{gidqnn_exp_chem_adam_loss_3}, the Gaussian initialization and the zero initialization outperform the uniform initialization in all cases.
We notice that optimization with accurate gradients and optimization with adaptive noisy gradients have the same convergence rate and final loss value, which is better than that obtained using constantly perturbed gradients. {We remark that the number of measurements scales as $T=\mathcal{O}(\frac{1}{{\rm Var}({\rm noise})})$.}
Thus, a finite number of measurements with the noise~(\ref{gidqnn_chem_noise_gamma}) for gradient estimation is enough to achieve the performance of accurate gradients, which verifies Theorem~\ref{gidqnn_theorem_related} and Corollary~\ref{gidqnn_corollary_noise}.
\section{Conclusions}
\label{gidqnn_conclu}
In this work, we provide a Gaussian initialization strategy for solving the vanishing gradient problem of deep variational quantum algorithms. We prove that the gradient norm of $N$-qubit quantum circuits with $L$ layers can be lower bounded by ${\rm poly}(N,L)^{-1}$ if the parameters are sampled independently from a Gaussian distribution with variance $\mathcal{O}(\frac{1}{L})$. Our results hold for both the local and the global observable cases, and can be generalized to VQAs employing correlated parameterized gates. {Compared to the local case, the bound for the global case depends on the gradient performance at the zero point. Further analysis towards a zero-point-free bound is left as a future direction.} Moreover, we show that the necessary number of measurements, which scales as $\mathcal{O}(\frac{L}{\epsilon})$, suffices for estimating the gradient during training.
We provide numerical experiments on finding the ground energy and state of the Heisenberg model and the LiH molecule, respectively. Experiments show that the proposed Gaussian initialization method outperforms the uniform initialization method with a faster convergence rate, and that training using gradients with adaptive noises shows the same convergence as training using noiseless gradients.
\section{Introduction}
The fuzzy approximation scheme \cite{fuzzy} consists in
approximating the algebra of functions on a manifold with a finite
dimensional matrix algebra instead of discretising
the underlying space as a lattice approximation does.
Here we report our results for a hermitian scalar field
on the fuzzy sphere. We find the collapsed phase diagram
and in particular we calculate the uniform ordered/non-uniform ordered line
that was absent in \cite{xavier}.
The current study could be relatively easily repeated for a hermitian
scalar field on other fuzzy spaces. The simplest extension would be to
fuzzy $\mbox{\openface CP}^{\rm N}$. Some variants of the scheme can
be applied to fuzzy versions of $S^3$ and $S^4$\cite{spheres}. The
study reveals that the non-uniform disordered phase lines should
correspond to a pure matrix model transition.
As an approximation scheme, this ``fuzzification'' is well suited to
numerical simulations of field theories \cite{Nishi}. As a test run,
the first fuzzy approximation to be investigated should be the
simplest one, that of the two dimensional sphere $\mbox{\openface
CP}^1=S^2$. Both the two--dimensional commutative and Moyal planes
can be viewed as the limits of a fuzzy sphere of infinite radius.
\section{The two dimensional $\phi^4$ Model and its fuzzy version}
We are interested in the model:
\begin{equation}\label{eq:accion}
S[\Phi] = Tr \left[ a \, \Phi ^{\dagger} \left[ L_{i} , \left[ L_{i} ,
\Phi \right] \right] + b \Phi^{2} + c
\Phi^{4} \right],
\end{equation}
where $\Phi$ is a Hermitian matrix of size $N$, $b$ and $c$ are mass
and coupling parameters respectively. $L_{i}$ is the angular momentum
generator in the $N$ dimensional unitary irreducible representation of
$SU(2)$. Since a rescaling of $\Phi$ will allow us to set $a=1$, the
entire phase diagram can be explored by ranging through all real
values of $b$ and positive values of $c$. The conventions of
\cite{xavier} are $a=\frac{4\pi}{N}$, $b=a r R^2$, $c=a\lambda R^2$.
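For concreteness, the action~(\ref{eq:accion}) can be evaluated numerically once the spin-$j$ generators with $j=(N-1)/2$ are constructed; the following Python sketch (helper names are ours and purely illustrative) does so and checks the Casimir relation.
\begin{verbatim}
import numpy as np

def su2_generators(N):
    j = (N - 1) / 2.0
    m = -j + np.arange(N)                              # m = -j, ..., j
    lp = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))  # L_+ matrix elements
    Lp = np.diag(lp, k=-1)
    L1 = (Lp + Lp.conj().T) / 2
    L2 = (Lp - Lp.conj().T) / (2 * 1j)
    L3 = np.diag(m).astype(complex)
    return L1, L2, L3

def action(Phi, a, b, c, Ls):
    # Tr[a Phi^dag [L_i,[L_i,Phi]] + b Phi^2 + c Phi^4],
    # using [L,[L,Phi]] = L^2 Phi - 2 L Phi L + Phi L^2
    kin = sum(L @ L @ Phi - 2 * L @ Phi @ L + Phi @ L @ L for L in Ls)
    S = a * Phi.conj().T @ kin + b * Phi @ Phi + c * Phi @ Phi @ Phi @ Phi
    return np.real(np.trace(S))

N = 4
Ls = su2_generators(N)
j = (N - 1) / 2
assert np.allclose(sum(L @ L for L in Ls), j * (j + 1) * np.eye(N))  # Casimir
M = np.random.randn(N, N) + 1j * np.random.randn(N, N)
Phi = (M + M.conj().T) / 2                     # random Hermitian configuration
print(action(Phi, a=1.0, b=-1.0, c=0.5, Ls=Ls))
\end{verbatim}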
The infinite matrix limit of the action can be taken and corresponds to
a real scalar field $\phi$ on a round sphere of radius $R$
and Euclidean action
\begin{equation}\label{eq:theaction}
S[\phi] = \int_{S^{2}} d^{2} {\bf n }\left(
\phi{\cal L}^2 \phi + r R^2 \phi^{2} + \lambda R^2
\phi^{ 4 } \right)
\end{equation}
where ${\cal L}^2 =\sum_{i=1,3}{\cal L}_{i}^2$ and ${\cal L}_i$ are the
usual angular momentum generators.
The eigenvectors of $\left[ L_{i} , \left[ L_{i} , \cdot \right]
\right]$ in (\ref{eq:accion}) are the polarization tensors
$\hat{Y}_{lm}$ (normalised so that $\frac{4\pi}{N}
\mathrm{Tr}(\hat{Y}_{lm}^\dag \hat{Y}_{lm})=1$)
and it has eigenvalues $l(l+1)$ with degeneracy $2l+1$.
This is precisely the spectrum of the Laplacian $\mathcal{L}^2$ on the commutative
sphere truncated at angular momentum $N-1$.
This particular model was chosen because of its simplicity. The
diagrammatic expansion of the model (\ref{eq:theaction}) has only one
divergent diagram, the tadpole; it is Borel resummable and
defines the field theory entirely. In the fuzzy version, the tadpole
splits into planar and non-planar tadpoles, which are also the only
diagrams that diverge in the infinite $N$ limit. Their difference is
finite and nonlocal and is responsible for the UV/IR
mixing phenomena of the disordered phase \cite{uvirmixing}.
Even though the scalar field on either commutative or fuzzy spheres
cannot have a phase transition, since they have finite volume or a
finite number of degrees of freedom, phase transitions may be found
when the matrix dimension or the radius of the
sphere becomes infinite.
The fuzzy sphere can be recognized by introducing coordinates
$\left( X_{ 1 }, X_{ 2 },X_{ 3 } \right)$
proportional to the angular momentum operators
\begin{equation}
X_{i} = \frac{ 2 R }{ \sqrt{ N^{2} - 1 }} L_{i}.
\nonumber
\end{equation}
They must satisfy the algebra
\begin{equation}\label{eq:noncommutative} \
X^{2}_{1} + X^{2}_{2}+ X^{2}_{3}=R^{2} {\bf
1}, \hspace{1cm} \left[ X_{i} , X_{j} \right] = i \frac{\Theta}{R}
\epsilon_{ijk} X_{k} \nonumber
\end{equation}
where $\Theta = \frac{ 2 R^2 }{ \sqrt{ N^{2} - 1 }}$ is
the parameter of non-commutativity, $R$ is the radius of the sphere
and ${\bf 1}$ is the unit operator.
The non-commutativity parameter depends on the matrix size,
$N$, and the radius of the sphere, $R$. By taking
different limits we can access different spaces:
\begin{center}
\begin{tabular}{||c|c|c|l||}
\hline
\hspace{0.5cm}$N$\hspace{0.5cm} &\hspace{0.5cm}$R$\hspace{0.5cm}
& \hspace{0.5cm}$\Theta$\hspace{0.5cm} & \hspace{0.5cm}Limit
\hspace{0.5cm} \\
\hline
$N$ & constant = $R$ & $2R^2/\sqrt{N^{2}-1}$ & Fuzzy Sphere \\
$\infty$ & constant = $R$ & $0$ & Commutative Sphere \\
$\infty$ &$\infty$ & $0$ & Commutative plane \\
$\infty$ & $\infty$ & constant = $\theta$ & Moyal Plane \\
\hline
\end{tabular}
\end{center}
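The relations~(\ref{eq:noncommutative}) can be verified numerically from the same generators; the following sketch (reusing the illustrative {\tt su2\_generators} helper defined above) checks the radius constraint and the commutator.
\begin{verbatim}
import numpy as np

N, R = 4, 1.0
L1, L2, L3 = su2_generators(N)            # helper from the previous sketch
scale = 2 * R / np.sqrt(N**2 - 1)
X1, X2, X3 = (scale * L for L in (L1, L2, L3))
Theta = 2 * R**2 / np.sqrt(N**2 - 1)

assert np.allclose(X1 @ X1 + X2 @ X2 + X3 @ X3, R**2 * np.eye(N))  # radius
assert np.allclose(X1 @ X2 - X2 @ X1, 1j * (Theta / R) * X3)       # algebra
\end{verbatim}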
\subsection{Order parameters}
A suitable set of order parameters can be identified from the
coefficients of a mode decomposition in terms of the polarization tensors
basis {\cite{Varshalovich}}
\begin{equation}\label{eq:expansion}
\Phi = Tr(\Phi) \frac{\bf 1}{N} + \frac{12}{N(N^2-1)}\rho_aL_a +
\sum_{l=2}^{N-1}\sum_{m=-l}^{+l} c_{lm} \hat{Y}_{lm}.
\end{equation}
We have separated explicitly $l=0$ and $l=1$ from the expansion
to identify two observables whose expectation values
we used to identify the respective phases. The observables are
$|Tr(\Phi)|$ and
$\rho^2:=\rho_a\rho_a=\displaystyle\sum_{a=1}^{3}(Tr(L_a\Phi))^2$.
The total power in all coefficients is given by
$Tr(\Phi^2) = \frac{1}{N}(Tr(\Phi))^2+\frac{12}{N(N^2-1)}\rho^2+
\frac{N}{4\pi} \displaystyle\sum_{l=2}^{N-1}\displaystyle\sum_{m=-l}^{l}
|c_{lm}|^{2}$ and can be used to estimate the importance of the
neglected higher modes.
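The following sketch (illustrative helper names) extracts these observables from a given configuration $\Phi$, reusing the generators from the earlier sketch.
\begin{verbatim}
import numpy as np

def order_parameters(Phi, Ls):
    tr_phi = np.trace(Phi).real                          # l = 0 coefficient
    rho2 = sum(np.trace(L @ Phi).real**2 for L in Ls)    # rho^2, l = 1 power
    tr_phi2 = np.trace(Phi @ Phi).real                   # power in all modes
    return abs(tr_phi), rho2, tr_phi2
\end{verbatim}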
\subsection{The phases}
This model (\ref{eq:accion}) presents three phases. As a generic
illustration of their properties, Fig. \ref{fig:thephases}(a) and
Fig.\ref{fig:thephases}(b) show the dependence on the mass parameter
$b$ of the probability distributions of $Tr(\Phi)$ and $\rho$,
respectively, for $\left\{ a=1, \ c=40,\ N=4 \right\}$.
{\bf Disordered:} Found for $|b|$ ``small'', the configurations
fluctuate close to $\Phi=0$. This is confirmed on the figure,
$<|Tr(\Phi)|>\sim <\rho>\sim 0$, but also $<Tr(\Phi^2)>\sim 0$ (not
shown).
{\bf Non-uniform ordered:} As $|b|$ increases, the figure shows
multiple symmetric peaks for the probability distribution of
$Tr(\Phi)$ whose height decreases with increasing $|Tr(\Phi)|$, and
multiple peaks not centered near zero for $\rho$. Furthermore
$Tr(\Phi^2)$ is much larger than both $\frac{<|Tr(\Phi)|>^2}{N}$ and
$<\rho>$ so that higher modes actually dominate. In particular, the
most probable configuration is not rotationally invariant and we have
spontaneous breakdown of rotational invariance. The probability
distribution of $Tr(\Phi)$ has $N+1$ symmetric peaks located
approximately at $(N-2k)\sqrt{-b/2c}$ where $k=0,1,\dots,N$, while the
probability distributions of $\rho$ and $S[\Phi]$ have $(N+1)/2$ peaks
for $N$ odd and $N/2+1$ for $N$ even.
{\bf Uniform ordered:} As $|b|$ becomes large, the figure shows two
symmetric peaks for the probability distribution of $Tr(\Phi)$
corresponding to the outer peaks of the non-uniform ordered phase
and located approximately at $Tr(\Phi)\sim \pm N\sqrt{-b/2 c}$, but
just one peak near zero for $\rho$. Furthermore, $<Tr(\Phi^2)> \sim
<Tr(\Phi)>^2/N$ indicating that the power in higher modes is
negligible. This is generic and indicates that $\Phi\sim
\sqrt{-b/2c}\,{\bf 1}$ and the rotational symmetry is thus
restored.
\begin{figure}[here]
\begin{center}
\mbox{ \subfigure[Probability distribution of
$Tr(\Phi)$]{\scalebox{0.92}{\label{fig:alpha}
\epsfig{file=histogram3d_4x4_trace.epsi,height=8cm,angle=-90}}}
\quad \subfigure[Probability distribution of
$\rho$.]{\scalebox{0.92}{\label{fig:rho}
\epsfig{file=histogram3d_4x4_rho.epsi,height=8cm,angle=-90}}} }
\caption{Figures (a) and (b) show the typical behavior of the
observables $Tr(\Phi)$ and $\rho$ in a region of the phase diagram
where decreasing $b$ passes the system through the three phases.}
\label{fig:thephases}
\end{center}
\end{figure}
\section{Simulation and Results: The specific heat and the phase diagram}
We are interested in the phase diagram of the model
(\ref{eq:accion}). To identify it, we used the coordinates in
parameter space of the peaks of the specific heat,
$C:=<S^{2}> - <S>^{2}$. The relevant
set of parameters is $\left\{ N , b , c \right\}$ where $b$ and $c$
depend implicitly in $R$. It is possible to further reduce by one the
number of parameters by finding a scaling $\left\{ b , c \right\}
\to \left\{ bN^{\theta_{b}}, cN^{\theta_{c}} \right\}$. If we
find such $\theta_{b}$ and $\theta_{c}$, the model becomes independent
of $N$ and automatically yields an infinite matrix limit.
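For illustration, a single Metropolis update for the action~(\ref{eq:accion}) can be sketched as follows; the proposal scheme shown here is a generic choice and not necessarily the one used in our simulations.
\begin{verbatim}
import numpy as np

def metropolis_step(Phi, S_old, a, b, c, Ls, step, rng):
    # Propose a small Hermitian perturbation and accept with prob. exp(-dS).
    M = rng.normal(size=Phi.shape) + 1j * rng.normal(size=Phi.shape)
    Phi_new = Phi + step * (M + M.conj().T) / 2
    S_new = action(Phi_new, a, b, c, Ls)   # action() from the earlier sketch
    if S_new < S_old or rng.random() < np.exp(-(S_new - S_old)):
        return Phi_new, S_new
    return Phi, S_old

# The specific heat is then estimated from the recorded actions:
# C = np.var(np.array(action_samples))
\end{verbatim}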
The simulations show that in the non--uniform ordered phase, the fuzzy
kinetic term (proportional to $a$ in (\ref{eq:accion})) is negligible
compared to the potential term (the remaining terms). There
exists an exact solution for the corresponding limit of $a=0$ in the
large $N$ limit called the pure potential model
\cite{pavelzuberfrancesco}. This model predicts a third order phase
transition between a disordered and non--uniform ordered phase at
$c=b^{2}/4N$. Figure \ref{fig:matrix} confirms numerically the
convergence of the disordered/non--uniform ordered transition towards
this exact critical line of the pure potential model.
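Indeed, one checks directly that, with the scaling exponents
$\theta_b=-3/2$ and $\theta_c=-2$ used for the collapse below, the
critical line becomes $N$-independent:
\begin{equation*}
c=\frac{b^{2}}{4N}\quad\Longrightarrow\quad
cN^{-2}=\frac{\left(bN^{-3/2}\right)^{2}}{4}.
\end{equation*}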
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.53 \textwidth,angle=-90]{calor_N2468.epsi}
\caption{Plot of the specific heat at the disordered/non--uniform
ordered transition for increasing $N$ and its $N\to\infty$
limit, the exact pure potential model.}
\label{fig:matrix}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.53 \textwidth,angle=-90]{phase_diagram_FINAL.epsi}
\caption{Phase diagram obtained from Monte--Carlo simulations of the
model (\protect\ref{eq:accion}).}
\label{fig:diagram}
\end{center}
\end{figure}
Numerically, it is not difficult to find the coexistence curve between
the uniform ordered and disordered phases which exist for low values
of $c$. On the other hand, the coexistence curve between the two
ordered phases is difficult to evaluate because it involves a jump in
the field configuration and tunnelling over a wide potential barrier.
The phase diagram of the model (\ref{eq:accion}), obtained by Monte
Carlo simulations with the Metropolis algorithm, is shown in
Fig.~\ref{fig:diagram}. The data have
been collapsed using the scaling form defined above with
$\theta_{b}=-3/2$ and $\theta_{c}=-2$. It is remarkable that this
scaling also works for the exact solution of the pure potential model.
\section{Conclusions}
The numerical study showed three different phases. In one of those
phases, which does not exist in the commutative planar $\lambda
\phi^{4}$ theory, the rotational symmetry is spontaneously broken. The
other two phases have qualitatively the same character as the phases
of this latter model. The three coexistence curves intersect at a
triple point given by
\begin{equation}
(b_{_T},c_{_T})=(-0.15 N^{3/2},0.8 N^{2}).
\end{equation}
Those three curves and the triple point collapse using the same
scaling function of $N$ and thus give a consistent $N
\to \infty$ limit. Thus, all three phases, and in particular the new
non-uniform ordered phase, and the triple point survive in the limit.
We will discuss these issues more completely in Ref.
\cite{XavierDenjoeFernando}.
\paragraph{Acknowledgements} We wish to thank W. Bietenholz and J. Medina for
helpful discussions.
\section{Introduction}\label{intro}
Surveys are an indispensable tool for learning the characteristics of
a population, from matters of public health, such as drug use
frequency, to gauging public opinion on issues like abortion and
homosexuality, to electing the leaders of a democratic country. Their
reliability is dependent on having ample participation and an unbiased
sample. On occasion, however, they require disclosing sensitive or
otherwise private data compelling some interviewees to give faulty
answers and discouraging others from participating altogether. There
is seldom a big incentive to answer a survey, and when its questions
are potentially stigmatizing, special care should be taken to protect
respondent privacy and promote participation.
In this paper, we propose {\em Negative surveys} as a technique for
conducting surveys that is mindful of participants' privacy. Negative
surveys allow participants to keep the target datum undisclosed by
asking them, instead, to make a series of decisions with the datum in
mind. In this way, the frequency of drug use can be calculated
without respondents admitting to using any drugs, the popular opinion
on abortion can be measured without asking for anyone's specific
position, and an election can be run without any of the voters
explicitly stating their preference. Our objective is that, by
providing transparent privacy guarantees, studies using our scheme
will have greater participation and more accurate responses, and will,
therefore, be more reliable.
In what follows, we review some of the work related to our own,
specify the kind of survey under study and the negative survey
technique, and discuss the privacy of our method by analyzing the
amount of information gained by each questionnaire. We then look at
how a negative survey is applied, how the sought data is extracted
from it, and, finally, conclude by summarizing its characteristics.
\subsection{Related Work}
\cite{mythesis} studies the idea of depicting a data set
$DB$ by alternatively storing every datum {\em not} in $DB$---the
universe of possible data items is assumed to be finite. This is
accomplished by introducing a data compaction scheme and a series of
algorithms that allow the complement set to be created and stored
efficiently. One property of this construction, called {\em negative
database}, is its potential for restricting the kinds of inferences
that can be drawn from the data: given a negative database, it is a
difficult problem to recover the original dataset $DB$; yet, answers
for a certain, limited type of queries can be obtained efficiently.
Other properties of representing data negatively are outlined in
\cite{mythesis}, including the suggestion that this viewpoint may be
useful, not only to protect stored data, but also, as a paradigm to
enhance the privacy of data collection. It is this proposition that
inspired the current work.
A suite of techniques that share the same motivation as our proposal---
to protect privacy, promote participation, and increase survey
accuracy---and which have a similar procedure for conducting surveys
are known as {\em Randomized response techniques}.
Randomized response techniques (RRT) were introduced in
\cite{warner65}. The original model sets out to estimate the
proportion of a population that belongs to a particular, stigmatizing
group $A$. It offers participants two possible questions:
\begin{itemize}
\item[$Q1$]: Do you belong to group $A$?
\item[$Q2$]: Do you belong to group $B$?
\end{itemize}
where $A$ and $B$ are exhaustive and mutually exclusive groups, i.e.,
$B=\bar{A}$; for example, if $A$ represents the group of people that
have had sex with a minor, $B$ would stand for the people that
haven't. Only one of the two questions is to be answered. The
question must be selected privately using a randomizing device
provided by the interviewer and should remain undisclosed at all
times. The only information surrendered by the participant is a Yes or
No answer, not the question being answered, not the outcome of the
randomizing device. In this way, the interviewee avoids disclosing
which group he belongs to, yet provides sufficient information to
estimate the desired proportions---the Yes and No answers of all the
respondents in the sample along with the known characteristics of the
randomizing device, are enough to estimate the proportion of
population members in each group (see
\cite{warner65,fox86,chaud88,mangat94,gjestvang05}).
A further refinement of this approach relaxes the need for groups $A$
and $B$ to be exhaustive and mutually exclusive and substitutes the
second question for something less sensitive. For example, $Q1$ could
read ``Have you had sex with a minor?'' and $Q2$: ``Do you belong to
the YMCA?'' This variant is known as the unrelated question or
paired-alternative method \cite{horvitz67,moors71,chaud88}.
The RRT methods discussed so far are designed for dichotomous
populations. Negative surveys, on the other hand, are relevant only
when there are more than two categories in which the population can be
divided. \cite{Abul67} generalizes Warner's model to polychotomous
populations by employing several independent
samples. \cite{bourke73,bourke76} propose a different scheme that
necessitates only a single sample: categories are numbered 1 through
$t$, participants disclose which category they belong to with
probability $p$ or, with probability $1-p$, choose a number between 1
and $t$; each number is selected with probability $p_1 \ldots p_t$,
where $\sum p_i=1-p$ (see \cite{chaud88} for more detail and
\cite{kimwarde05} for a more recent example).
Finally, it is worth pointing out the existence of mechanisms for
conducting direct response surveys privately. For instance, anonymity
schemes can be used to conceal the identity and input of individual
participants, e.g., \cite{sudman74}'s self-administered questionnaires,
Web and e-mail anonymous surveys, and cryptographically based surveys
as in \cite{feigenbaum04}. Also, legal guarantees can be set that
safeguard respondent privacy. However, these methods may not always
lead to the desired participation level since respondents still need
to answer a sensitive question---the guarantees offered by the study
need to be understood to be trusted, and trust must be put on higher
authorities not to circumvent the promise. Further, some of these
techniques require setups that are not always available for a study,
such as the use of computers, and some, like anonymous surveys, have
additional shortcomings when it comes to verifying their results or
conducting longitudinal studies. \cite{fox86} discuss the drawbacks of
some of these approaches at length.
\section{Negative Surveys}
The type of survey we consider consists of a questionnaire with a
single question (or statement) and $t$ categories
$\{X_1,X_2,\ldots,X_t\}$ from which to choose an answer (or
alternative). The survey is administered to a sample of $n$
individuals drawn uniformly at random with replacement from the
population.
We refer to it as a {\it Positive survey} or as a {\it Direct response
survey } when the subjects are asked to reveal which category they
belong to. We call it a {\it negative survey} when the requirement is
to disclose a category (a single one, for the current work) to which
they {\em do not} belong---a negative questionnaire can be obtained by
simply negating the question of a positive questionnaire. The
categories in the direct response survey are exhaustive and mutually
exclusive---one and only one option is true; in the negative survey,
one and only one category is false for a particular individual. The
object of both versions of the survey is to estimate the proportions
of the population that belong to each category.
Take, for example, a direct response, salary survey:\\
\begin{center}
\begin{minipage}[h]{0.7\linewidth}
I earn:
\begin{itemize}
\item[{\bf[ ]}] Less than 30,000 dollars a year
\item[{\bf[ ]}] Between 30,000 and 60,000 dollars a year
\item[{\bf[ ]}] More than 60,000 dollars a year
\end{itemize}
\end{minipage}
\end{center}
The negative version would read:
\begin{center}
\begin{minipage}[h]{0.7\linewidth}
I {\bf do not} earn:
\begin{itemize}
\item[{\bf[ ]}] Less than 30,000 dollars a year
\item[{\bf[ ]}] Between 30,000 and 60,000 dollars a year
\item[{\bf[ ]}] More than 60,000 dollars a year
\end{itemize}
\end{minipage}
\end{center}
If the positive version of the survey is being answered by an
individual whose income is 20,000 dollars, the first option must be
chosen. Alternatively, if the same person is answering the negative
version of the survey, one of the last two options should be selected.
Next we look more closely at the amount of information that is being
surrendered in both cases.
\subsection{Privacy Preserved}\label{information}
It is intuitively clear that the amount of information required by a
negative questionnaire is inferior to what is asked for in its direct
response version, at least when the query has more than two options.
We formalize this notion and show that, indeed, the information
required for a negative survey is at most that of its positive
counterpart.
Using Shannon's uncertainty measure, \cite{shannon48}, the amount of
information gained from a positive questionnaire in which categories
are exhaustive and mutually exclusive can be written as:
\begin{equation}\label{posinfo}
-\sum_i p_i \log p_i
\end{equation}
where $p_i$ is the probability that option $X_i$ is true and $t$ is
the number of categories in the questionnaire. The maximum amount is
obtained when all options are equally likely.
Now consider the information gained from applying a negative
questionnaire in which only one option, $X_s$, is selected by the
respondent. We compute this quantity as the difference in information
of two positive questionnaires: the information obtained by the
positive version of the questionnaire (given in Eq. \ref{posinfo}),
minus the information gained from the same questionnaire once $X_s$ is
no longer an option.
\begin{equation}\label{neginfo}
-\sum_i p_i \log p_i +\sum_{i\neq s} P(X_i=T|X_s=F) \log P(X_i=T|X_s=F)
\end{equation}
where $P(X_i=T|X_s=F)$ is the probability that category $i$ is true in
a direct response survey after $X_s$ has been removed as an option,
i.e., after finding out it is false.
It is easy to see from the above expressions that the information
gained from a negative questionnaire is at most the quantity obtained
from its positive counterpart.
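For illustration, consider uniform category probabilities, $p_i=1/t$.
Then Eq.~(\ref{posinfo}) gives $\log t$, while, since
$P(X_i=T|X_s=F)=1/(t-1)$ for all $i\neq s$, Eq.~(\ref{neginfo})
reduces to
\begin{equation*}
\log t - \log(t-1).
\end{equation*}
Taking logarithms base 2 and $t=3$, a positive questionnaire yields
$\log_2 3\approx 1.58$ bits whereas its negative version yields only
$\log_2 (3/2)\approx 0.58$ bits; for $t=2$ the two quantities
coincide, as expected.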
\section{Estimating Proportions Using Negative Input}\label{Estimating}
In the previous sections, we explained negative surveys and discussed
how their application increases the interviewees' privacy by requiring
less information than their positive counterpart. Negative surveys
ask respondents to choose one of the $t-1$ options that truthfully
answer the question before them---all choices except one are true for
a specific individual when surveyed ``negatively''. However, we are
after the proportions of the population that ``positively'' belong to
each of the $t$ categories: a particular interviewee positively
belongs to one and only one category.
In this section, we show how to estimate these values along with their
corresponding measures of variation. The analysis follows a similar
reasoning---albeit different in the details---as the one used for
randomized response techniques, particularly as shown in
\cite{chaud88} for vector responses. We therefore adopt the same
notation.
Let $p_{i,j}$ ($1\leq i,j \leq t$) be the probability that option
$X_i$ is chosen given that a respondent positively belongs to $X_j$,
and $\sum_i p_{i,j}=1$. Let $\pi_i$ denote the proportion of the
population that positively belongs to category $i$ with
$\sum_i\pi_i=1$. Then, the probability of selecting $X_i$ is given by:
\begin{equation} \label{prob-lambda}
\lambda_i=\sum_j p_{i,j} \pi_j
\end{equation}
Let $P$ denote the matrix of $p_{i,j}$'s:
\begin{equation*}
P=\left [
\begin{array} {cccc}
p_{1,1} & p_{1,2} & \cdots & p_{1,t}\\
p_{2,1} & p_{2,2} & \cdots & p_{2,t}\\
\vdots & & \ddots &\\
p_{t,1} & p_{t,2} & \cdots & p_{t,t}
\end{array} \right ]
\end{equation*}
where $\sum_i p_{i,j}=1$ and $p_{i,i}=0$; let
$\boldsymbol{\pi}=(\pi_1,\ldots,\pi_t)'$ and
$\boldsymbol{\lambda}=(\lambda_1,\ldots,\lambda_t)'$. The probability
of responses for each category is written in matrix notation as:
\begin{equation}
P \boldsymbol{\pi}=\boldsymbol{\lambda}
\end{equation}
Let $n_i$ be the observed frequency of category $X_i$ ($1\leq i\leq
t$) obtained from the application of a negative survey to $n$
individuals. Observe that $n_i$ is binomially distributed with
parameters $n$ and $\lambda_i$. An unbiased estimator of $\lambda_i$
is $\hat{\lambda_i}=\frac{n_i}{n}$, and, provided $P$ is non-singular,
an unbiased estimator of $\boldsymbol{\pi}$ is given by:
\begin{equation}
\boldsymbol{\hat{\pi}}=P^{-1}\boldsymbol{\hat{\lambda}}
\end{equation}
where
$\boldsymbol{\hat{\pi}}=(\hat{\pi_1},\ldots,\hat{\pi_t})'$ and
$\boldsymbol{\hat{\lambda}}=(\hat{\lambda_1},\ldots,\hat{\lambda_t})'$.
An unbiased estimator for the variance and covariance of
$\boldsymbol{\hat{\pi}}$ is computed as follows:
Let $\boldsymbol{\hat{\lambda}^d}$ be a diagonal matrix where entry
$(i,i)$ is equal to the $i^{th}$ element of
$\boldsymbol{\hat{\lambda}}$; the estimated covariance of
$\boldsymbol{\hat{\pi}}$ is written as:
\begin{equation}
\hat{cov}(\hat{\boldsymbol{\pi}})=\frac{1}{n-1}P^{-1}(\boldsymbol{\hat{\lambda}^d}-\boldsymbol{\hat{\lambda}}\boldsymbol{\hat{\lambda}'})P^{'-1}
\end{equation}
The accuracy of the resulting $\pi_i$'s is dependent on an adequate
sampling of the $n_i$'s, as would also be the case in a positive
survey, and on having a good estimate of the $p_{i,j}{'s}$. Knowing
how individuals choose an option is the extra information needed to
estimate the desired proportions while keeping personal data
concealed. Insight may come from knowledge about the behavior of the
population---factors like name recognition may bias an individual to
select a category---from data gathered in previous surveys, or from
employing a specific design in the administration of the survey. This
last alternative is discussed further in the next section.
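As a minimal illustration, the estimators above can be coded in a few
lines. The following Python sketch (the function name is ours and
merely illustrative) assumes the observed frequencies $n_i$ and the
design matrix $P$ are given as arrays:
\begin{verbatim}
import numpy as np

def estimate_proportions(counts, P):
    # counts: observed frequencies n_i;  P: t x t matrix of p_{i,j}
    n = counts.sum()
    lam = counts / n                      # unbiased estimate of lambda
    Pinv = np.linalg.inv(P)               # P must be non-singular
    pi_hat = Pinv @ lam                   # pi_hat = P^{-1} lambda_hat
    cov = Pinv @ (np.diag(lam) - np.outer(lam, lam)) @ Pinv.T / (n - 1)
    return pi_hat, cov
\end{verbatim}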
\subsection{How to Choose an Option}\label{HowtoChoose}
In this section we propose a scheme intended to reduce the impact of
unknown biases by automating, and hence, predetermining part of the
decision process used to select an answer.
One way to determine how respondents select a category is by
instructing them on how to choose among available options. For
instance, a simple, straightforward design gives each category an
equal chance of being selected:
\begin{equation*}
P=\left [
\begin{array} {llll}
0 & \frac{1}{t-1} & \cdots & \frac{1}{t-1}\\
\frac{1}{t-1} & 0 & \cdots & \frac{1}{t-1}\\
\vdots & & \quad \ddots &\\
\frac{1}{t-1} & \frac{1}{t-1} & \cdots & 0
\end{array} \right ]
\end{equation*}
In this scenario the probability of option $X_i$ being chosen is:
\begin{equation}
\lambda_i=\frac{1}{t-1}(1-\pi_i)
\end{equation}
following \ref{prob-lambda} and noting that $\sum_i\pi_i=1$. An
unbiased estimator of $\pi_i$ is given by:
\begin{equation} \label{eq-samep-for-pi}
\hat{\pi_i}=1-(t-1)\hat{\lambda}_i
\end{equation}
where $\hat{\lambda_i}=n_i/n$ is an unbiased estimator for
$\lambda_i$. Similarly, an unbiased estimator for the variance of $\hat{\pi_i}$:
\begin{equation}\label{eq-samep-for-var}
\hat{var}(\hat{\pi_i})=\frac{(t-1)^2}{n-1}\hat{\lambda}_i (1-\hat{\lambda}_i)
\end{equation}
and the covariance of $\hat{\pi_i}$ and $\hat{\pi_j}$:
\begin{equation}\label{eq-samep-for-cov}
\hat{cov}(\hat{\pi_i},\hat{\pi_j})=-\frac{(t-1)^2}{n-1}\hat{\lambda}_i
\hat{\lambda}_j
\end{equation}
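The design is easy to check numerically. The following minimal Python
simulation (illustrative only: the population proportions below are
invented for the test) draws each negative answer uniformly among the
$t-1$ true options and recovers the proportions via
Eq.~(\ref{eq-samep-for-pi}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
t, n = 3, 100000
pi_true = np.array([0.2, 0.3, 0.5])     # invented test proportions

true_cat = rng.choice(t, size=n, p=pi_true)
shift = rng.integers(1, t, size=n)      # uniform over 1..t-1
answer = (true_cat + shift) % t         # a category the respondent
                                        # does not belong to
lam = np.bincount(answer, minlength=t) / n
pi_hat = 1 - (t - 1) * lam              # estimator 1 - (t-1) lambda_hat
se = np.sqrt((t - 1)**2 / (n - 1) * lam * (1 - lam))
print(pi_hat, se)                       # pi_hat is close to pi_true
\end{verbatim}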
With this scheme respondents provided with a fair, $t-1$ sided, die
can select an answer by privately obtaining a value $m$, and choosing,
for instance, the $m^{th}$ true option from the top, skipping over the
false category if needed. One difficulty of the approach is, in fact,
having a $t-1$ sided die readily available, as during a phone survey.
This is compounded when asking several questions, each with a
different number of categories. In the following section we propose a
design that addresses this point and illustrates other important
properties of gathering information with a negative survey.
\subsubsection{Two-option Survey Scheme}
It was earlier discussed how it is essential to know how respondents
choose, on average, from the available options, and considered using a
$t-1$ sided die to determine their selection. However, this has the
inconvenience of necessitating a customized device for each question
(unless a general-purpose random number generator is at hand). Next,
we present a scheme that resolves this issue and reduces the impact of
unknown biases (arising from an incorrect compliance with the survey
instructions) by automating part of the decision process.
Direct response surveys as well as randomized response techniques
require presenting all categories to the interviewee---for only one
choice is true. Conversely, in a negative survey all options except
one are true; consequently, only a subset needs to be evaluated by the
respondent. With this in mind, our scheme preselects, uniformly at
random, a subset of the categories (two or more) for each of the
individuals questioned.
Consider the case where each subject is presented with a question and
two options; if both options prove true, one should be selected at
random---the interviewee privately tosses a fair coin prior to reading
the question and picks the first category if heads, the second if
tails---otherwise, when only one option is true, the true option must
be selected. Note that the setup only makes sense when the original
survey doesn't include categories of the type ``None of the above''.
The probability of choosing $X_i$ in this setup, is the probability of
it being presented in the questionnaire times the probability that it
is selected by the interviewee. We can analyze this as the sum of two
terms \footnote{The probability of $X_i$ being selected if it does not
answer the question truthfully is considered to be zero and therefore
omitted.}: when it is presented alongside the only false option
($X_j$), in which case it will be selected, and when it is paired with
another true alternative and a choice must be made between them:
\begin{equation}\label{probXi}
p_{i,j}=\frac{2}{t(t-1)}+\sum_k \frac{2}{t(t-1)}P(X_i|X_k=T), \qquad
(k\not =j)
\end{equation}
where $P(X_i|X_k=T)$ denotes the probability of $X_i$ being chosen
given that it is presented together with another true option $X_k$.
\noindent
We obtain a bound for selecting $X_i$, when it is a true alternative,
by setting $P(X_i|X_k=T)$ to zero and one respectively:
\begin{equation}
p_{i,j} \in \left [\frac{2}{t(t-1)},\frac{2}{t} \right ]
\end{equation}
Consider that if all categories are presented to each respondent and
respondents ignore the outcome of the randomizing device, there might
be a particular alternative that never gets chosen (true also if the
subset shown to the interviewee has more than two categories), or an
option that always does (except when it proves false to the
interviewee). Using the two-option survey scheme, this possible error
is greatly reduced.
Finally, if no additional information is available regarding how
subjects choose between true alternatives, we assume
$P(X_i|X_k=T)=\frac{1}{2}$ for all $i \not=k$, and, therefore, that
$p_{i,j}=\frac{1}{t-1}$ for all $i \not=j$.
The sought-after proportions $\pi_i$ are computed according to
Eq. \ref{eq-samep-for-pi} and the variances and covariances according
to Eq. \ref{eq-samep-for-var} and \ref{eq-samep-for-cov}
respectively.
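A simulation sketch of the two-option scheme (again illustrative; the
proportions are invented) confirms that, with a fair coin,
$p_{i,j}=1/(t-1)$ and the estimator of Eq.~(\ref{eq-samep-for-pi})
recovers the true proportions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
t, n = 4, 50000
pi_true = np.array([0.1, 0.2, 0.3, 0.4])
true_cat = rng.choice(t, size=n, p=pi_true)

answer = np.empty(n, dtype=int)
for r in range(n):
    pair = rng.choice(t, size=2, replace=False)   # preselected options
    opts = [x for x in pair if x != true_cat[r]]  # the true alternatives
    answer[r] = opts[0] if len(opts) == 1 else opts[rng.integers(2)]

lam = np.bincount(answer, minlength=t) / n
print(1 - (t - 1) * lam)                          # approaches pi_true
\end{verbatim}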
Three interesting features of this approach are worth noting:
first, the use of a single randomizing device for every question
independent of the number of categories (for the above scenario, the
use of a coin instead of a $t-1$ sided die); second, the ability to
conduct a survey without disclosing all of the options to individual
respondents; and third, that the error in estimating the $p_{i,j}{'s}$
is bounded even if the coin is used improperly.
\section{Conclusion}
Survey accuracy depends on minimizing the incidence of nonrespondents
and on the honest participation of respondents. In studies where
interviewees are required to answer sensitive questions, care must be
taken to avert these difficulties by ensuring respondent privacy.
In this paper we presented a method for administering a questionnaire
that safeguards interviewees' privacy. The survey in question seeks
to estimate the population frequencies of a polychotomous variable; it
consists of a single (potentially sensitive) question and $t$ options
from which to choose an answer. Its privacy preserving properties do
not rely on anonymity, cryptography or on any legal contracts, but
rather on participants not revealing their true answer to the
survey's query---respondents are only required to discard, with a
known probability distribution, some of the categories that do {\em
not} answer the question for them. This information is enough to
estimate the population proportions of the variable under study; yet,
insufficient to ascribe a sensitive datum to a particular individual.
We call the method Negative Surveys.
Negative surveys are closely related to randomized response techniques
(RRTs): both aim at conducting private surveys and both rely on the
(secret) use of a randomizing device to answer questionnaires.
One key distinction, however, is that in RRTs participants use the
device to choose among questions, at least one of which is sensitive;
while in negative surveys, they use it to choose among answers,
avoiding the problem of selecting the proper alternative question
altogether. Also, with RRTs some subjects will be selected, by the
randomizing device, to answer the potentially stigmatizing
question. It might still be problematic for them to participate, as
the question remains sensitive and answering it demands a measure of
trust on the surveying scheme and on the surveyors---\cite{fox86} cite
a study in which the randomizing device was rigged in order to study
respondent behavior; a similar practice could be used to subvert their
privacy. Negative surveys never prompt respondents to answer a
sensitive query directly.
We also presented a special setup for negative surveys that reduces
the complexity of the randomizing device to a simple, fair coin,
revealing an important characteristic of our method: an interviewee
does not need to contemplate all of the question's potential answers
to pick his own, furnishing a level of secrecy to the survey itself
and providing robustness against the non-observance of questionnaire
instructions (\cite{ambainis98} discuss cryptographic techniques to
avoid cheating in RRT).
We expect that the privacy of a negative survey, its comprehensible
guarantees, and robustness will increase the level of cooperation and
accuracy in topic-sensitive studies.
\bibliographystyle{chicago}
\section{Introduction}
The solution of few-body scattering problems, especially above the
three-body breakup threshold, whether in a differential or an integral
formalism, involves a very large amount of computation and
therefore requires extensive use of modern computational facilities
such as powerful supercomputers. As a vivid example, we note that
one of the most active and successful groups in the world in this
area --- the Bochum--Cracow group guided until recently by Prof.
W.~Gl{\"o}ckle (who passed away recently) --- employed for such
few-nucleon calculations the fastest in Europe supercomputer from
JSC in J{\"ulich} with the architecture of Blue Gene
\cite{gloeckle12,gloeckle}. Quite recently, new methods for solving
Faddeev and Faddeev-Yakubovsky few-body scattering equations using
(in one way or another) the bases of square-integrable functions
have been developed \cite{Lazauskas_rep}, which allow one to simplify
significantly the numerical solution schemes. Nevertheless, the
treatment of realistic three- and four-body scattering problems
includes a tremendous numerical labor and, as a result, still can be
done only by a few groups over the world that hinders the
development of these important studies.
However, recently there has appeared a new possibility: to use Graphics
Processing Units (GPU) for such time-consuming calculations. This
can transform an ordinary PC into a supercomputer. There is no need to argue that
such a variant is immeasurably cheaper and more accessible for many
researchers in the world. However, due to the special GPU
architecture, the usage of GPUs is effective only for those problems whose
numerical solution schemes can be realized with a high degree
of parallelism. The high effectiveness of the so-called General
Purpose Graphics Processing Unit (GPGPU) computing has been
demonstrated in many areas of quantum chemistry, molecular
dynamics, seismology, etc. (see the detailed description of
different GPU applications in refs.
\cite{CUDA,CUDA_MC,CUDA_QCD,CUDA_ch}). Nevertheless, according to
present authors' knowledge, GPU computing still has not been used
widely for a solution of few-body scattering problems (we know
only two researches but they are dedicated to the {\em ab
initio} calculation of bound states \cite{Vary} and also
resonances in the Faddeev-type formalism \cite{yarevsky}). Thus,
in this paper we would like to study in detail just the
effectiveness of GPU computing in solving general few-body
scattering problems.
In the case when the colliding particles have inner structures and
can be excited in the scattering process, i.e. should be treated as
composite ones (e.g., nucleon isobars), the numerical complexity of
the problem increases further, so that, without a significant
improvement of the whole numerical scheme, the practical solution
of such multichannel problems becomes highly nontrivial
even for a supercomputer. Thus, the development of new methods in
few-body scattering which can be adapted for massively parallel
realization is of interest nowadays. We propose here a novel
approach in this area which includes two main components:
(i) A complete discretization of the continuous spectrum of the
scattering problem, i.e. the replacement of continuous momenta and
energies with their discrete counterparts, by projecting all the
scattering functions and operators onto a space spanned on the
basis of the stationary wave packets
\cite{KPR_Ann,KPRF_breakup,Kelvitch_Yaf14,KPR_GPU}. As a result,
the integral equations of the scattering theory (like the
Lippmann--Schwinger, Faddeev etc. equations) are replaced with
their matrix analogs. Moreover, due to an ordinary $L_2$
normalization of the stationary wave packets one can solve a
scattering problem almost fully similarly to bound-state
problems, i.e. without explicit account of the boundary conditions
(which are rather nontrivial above few-body breakup thresholds).
The main feature of this discretization procedure is that all the
constituents in the equations are represented with finite
matrices, elements of which are calculated independently. So, this
approach is quite suitable for parallelization and
implementation on GPU.
(ii) The numerical solution of the resulting matrix equations with
wide usage of the multithread computing on GPU.
In the present paper, we adapt the general wave-packet
discretization algorithm for GPU implementation by an example of
calculating the elastic scattering amplitude in three-nucleon
system with realistic interactions. Also different aspects related
to GPU computing are studied and runtimes for CPU and GPU mode
calculations are compared.
The paper is organized as follows. In Section II we briefly
recall the main features of the wave-packet continuum
discretization approach towards solving two- and three-body
scattering problems. The numerical scheme for a practical
solution of the $nd$ elastic scattering problem in a discretized
representation is described in Section III. In the next Section IV
we discuss the properties of GPU computing for the above problem
and test some illustrative examples, while in Section V the
results for the $nd$ elastic scattering with realistic $NN$
interaction are presented. The conclusions are given in the last
Section VI.
\section{Continuum discretization with stationary wave-packets in few-body
scattering problems}
In this section we outline briefly the method of stationary wave
packets that is necessary for understanding the subsequent material
by reader. For detail we refer to our previous original papers
\cite{KPRF_breakup,KPR_GPU} and the recent review \cite{KPR_Ann}.
\subsection{Stationary wave packets for two-body Hamiltonian}
Let us introduce some two-body Hamiltonian $h=h_0+v$ where $h_0$
is a free Hamiltonian (the kinetic energy operator) and $v$ is an
interaction. Stationary wave packets (WPs) are constructed as
integrals of exact scattering wave functions
$|\psi_p\rangle$ (non-normalized) over some
momentum intervals
$\{\De_i\equiv[p_{i-1},p_i]\}_{i=1}^N$:
\begin{equation}
\label{z_k}
|z_k\rangle=\frac{1}{\sqrt{C_k}}\int_{{\De}_k}w(p)|\psi_p\rangle dp,
\quad C_k=\int_{\De_k}|w(p)|^2dp.
\end{equation}
Here $p=\sqrt{2mE}$ is the relative momentum, $m$ is the reduced mass of
the system, $w(p)$ is a weight function and $C_k$ is the
corresponding normalization factor.
The set of WP states (\ref{z_k}) has a number of interesting and
useful properties \cite{KPR_Ann}. First of all, due to the
integration in eq.~(\ref{z_k}), the WP states have a finite
normalization as bound states. The set of WP functions together with
the possible bound-state wave functions $|z^b_n\rangle$ of the
Hamiltonian $h$ form an orthonormal set and can be employed as a
basis similarly to any other $L_2$ basis functions, which are used
to project wave functions and operators \cite{KPR_Ann}. (To simplify
notations, we will omit below the superscript $b$ for bound-states
and will differ them from WP states just by their index $n\leq
N_b$.)
The matrix of Hamiltonian $h$ is diagonal in such a WP basis. The
resolvent $g(E)=[E+{\rm i}0-h]^{-1}$ for Hamiltonian $h$ has also
a diagonal representation in the subspace spanned on the WP
basis:
\begin{equation}
\label{g_tot} g(E)\approx \sum_{n=1}^{N_b}\frac{|z_n\rangle
\langle z_n|}{E-\ep_n^*}+\sum_{k=N_b+1}^N|z_k\rangle g_k(E)
\langle z_k|,
\end{equation}
where $\ep_n^*$ are the bound-state energies and eigenvalues
$g_k(E)$ can be expressed by explicit formulas \cite{KPR_Ann}.
\subsection{Free wave-packet basis}
A useful particular case of stationary wave packets is the free WP
states, which are defined for the free Hamiltonian $h_0$.
As in the general case, the continuum of $h_0$
(in every spin-angular channel $\alpha$) is divided
onto non-overlapping intervals
$\{\mathfrak{D}_i\equiv[\ce_{i-1},\ce_i]\}_{i=1}^N$
and two-body free wave-packets are introduced as integrals of exact
free-motion wave functions $|p\rangle$ (the index $\alpha$ marking the
possible quantum numbers will be omitted where possible):
\begin{equation} \label{ip}
|x_i\rangle=\frac{1}{\sqrt{B_i}}\int_{\mathfrak{D}_i}f(p)|p\rangle
dp,\quad B_i=\int_{\MD_i}|f(p)|^2 dp,
\end{equation}
where $B_i$ and $f(p)$ are the normalization
factor and weight function respectively.
As has been mentioned above, in such a basis the free
Hamiltonian $h_0$ has a diagonal finite-dimensional representation, as does the free resolvent
$g_0=[E+{\rm i}0-h_0]^{-1}$:
\begin{equation}
{g}_0(E)\approx\sum_{i=1}^N|x_i\rangle g_{0i}(E) \langle x_i|,
\end{equation}
where eigenvalues $g_{0i}(E)$ have analytical expressions \cite{KPR_Ann}.
Besides the above useful properties which are valid for any wave
packets, the free WP states have some other important features. In
momentum representation, the states (\ref{ip}) take the form of
step-like functions:
\begin{equation}
\label{theta_p}
\langle p|x_i\rangle=\frac{f(p)\theta(p\in \MD_i)}{\sqrt{B_i}},
\end{equation}
where the Heaviside-type theta-function is defined by the conditions:
\begin{equation}
\theta(p\in\MD_i)=\left\{
\begin{array}{cr}
1,&p\in\MD_i,\\
0,&p\notin\MD_i.\\
\end{array}
\right.
\end{equation}
In practical calculations, we usually use free WP states with unit weights
$f(p)=1$.
The functions of such states are constant inside momentum intervals.
In few-body and multidimensional cases, the WP bases are constructed as direct
products of two-body ones, so that the model space can be considered as
a multidimensional lattice.
Thus, the explicit form of the free WPs makes them very convenient for use as
a basis in the scattering calculations \cite{KPR_Ann}. For
example, the special form of the basis functions in the momentum
representation allows one to find easily the matrix elements of the
interaction potential in the free WP representation using the
original momentum representation $v(p,p')$ for the potential:
\begin{equation}
\label{vpot}
v_{ii'}=\frac1{\sqrt{B_iB_{i'}}}
\int_{\MD_i}\int_{\MD_{i'}}dpdp'f^*(p)v(p,p')f(p').
\end{equation}
Moreover, in some rough approximation the potential matrix elements
can be found simply as
$v_{i,i'}\approx\sqrt{B_iB_{i'}}v(p_i^*,p_{i'}^*)$,
where $p_i^*$ and $p_{i'}^*$ are the middle values of momenta in the intervals
${\MD_i}$ and ${\MD_{i'}}$ respectively.
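For instance, the rough midpoint version of eq.~(\ref{vpot}) with unit
weights takes only a few lines of Python (the Gaussian potential below
is a toy stand-in of our own, not a realistic $NN$ interaction):
\begin{verbatim}
import numpy as np

edges = np.linspace(0.0, 10.0, 51)      # bin boundaries {p_i}, 50 bins
d = np.diff(edges)                      # B_i = bin widths for f(p) = 1
mid = 0.5 * (edges[:-1] + edges[1:])    # midpoints p_i*

def v(p, pp):                           # toy nonlocal potential v(p, p')
    return -np.exp(-(p**2 + pp**2) / 4.0)

# v_{ii'} ~ sqrt(B_i B_i') v(p_i*, p_i'*)
V = np.sqrt(np.outer(d, d)) * v(mid[:, None], mid[None, :])
\end{verbatim}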
Further we will use the above free WP representation for solution of
scattering problems.
It was shown in \cite{KPR_Ann} that the scattering WPs (\ref{z_k})
for some total Hamiltonian $h$ can be also approximated in the
free WP representation. There is no necessity to find the exact
scattering wave functions $|\psi_p\rangle$ in that case. Instead,
it is just sufficient to diagonalize the total Hamiltonian matrix
in the basis of free WPs. As a result of such direct
diagonalization one gets the approximate scattering WPs (and also
the functions of bound states if they exist) for Hamiltonian $h$
in the form of expansion into free WP basis:
\begin{equation}
\label{exp_z}
|z_k\rangle\approx \sum_{i=1}^N O_{ki}|x_i\rangle,
\end{equation}
where $O_{ki}$ are the matrix elements for rotation from one basis
to another. Note that it is not required that the potential $v$ is a
short-range one. Thus, the same procedure allows one to construct
wave packets for a Hamiltonian including the long-range Coulomb
interaction and to get an analytical finite-dimensional
representation for the Coulomb resolvent~\cite{KPR_Ann}.
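A minimal sketch of this diagonalization step, continuing the toy bins
and potential matrix of the previous fragment (units with $2m=1$ are
our assumption):
\begin{verbatim}
import numpy as np

edges = np.linspace(0.0, 10.0, 51)
d = np.diff(edges)
mid = 0.5 * (edges[:-1] + edges[1:])
V = np.sqrt(np.outer(d, d)) * \
    (-np.exp(-(mid[:, None]**2 + mid[None, :]**2) / 4.0))

h0 = np.diag(mid**2)            # free Hamiltonian, E_i* = p_i*^2 (2m = 1)
eps, O = np.linalg.eigh(h0 + V) # column k of O: coefficients O_{ki}
n_bound = int(np.sum(eps < 0))  # negative eigenvalues: bound states
\end{verbatim}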
\subsection{Scheme for a solution of a two-body scattering problem}
Let us briefly discuss how to solve a two-body scattering problem
in a free WP basis. The Lippmann--Schwinger equation for the
transition operator $t(E)$
\begin{equation}
\label{lse_op} t(E)=v+vg_0(E)t(E),
\end{equation}
where $g_0(E)$ is the free resolvent,
has
the following form in momentum representation (e.g. for every
partial wave $l$):
\begin{equation}
\label{lse} t_l(p,p';E)=v_l(p,p')+\frac1{4\pi}\int_0^\infty dp''
\frac{v_l(p,p'')t_l(p'',p';E)}{E+{\rm i}0-\frac{(p'')^2}{2m}}.
\end{equation}
By
projecting the eq.~(\ref{lse_op}) onto the free WP basis, the
integral equation is reduced to a matrix one in which all the
operators are replaced with their matrices in the given basis. In
the resulting equation the momentum variables are discrete but the
energy variable remains continuous. So, in order to get the
completely discrete representation one can employ some additional
energy averaging for a projection of the free resolvent. In WP
representation, this means an averaging of its eigenvalues
${g}_{0i}(E)$:
\begin{equation}
{g}_{0i}(E)\to [{g}_0]_i^k=\frac1{D_k}\int_{\MD_k}dE\,{g}_{0i}(E),\quad
E\in\MD_k
\end{equation}
where $D_k=\ce_k-\ce_{k-1}$ is the width of the on-shell energy interval.
As a result, the WP analog for the transition operator can be
found
from solution of the matrix equation in the free WP
representation:
\begin{equation}
\label{lsk} t^k_{ii'}=v_{ii'}+ \sum_{j=1}^N v_{ij}[{g_0}]^k_j
t^k_{ji'},\quad E\in\MD_k
\end{equation}
where $v_{ij}$ are the matrix elements of the interaction operator
which are defined by the eq.~(\ref{vpot}).
Then the solution of eq.~(\ref{lsk}) takes the form of a histogram
representation for the off-shell $t$-matrix from eq.~(\ref{lse})
\begin{equation}
t_l(p,p';E)\approx \frac{t_{ii'}^k}{\sqrt{D_iD_{i'}}},\quad
\begin{array}{c}
p\in\MD_i,\\
p'\in\MD_{i'},\\
E\in\MD_k,
\end{array}
\end{equation}
where $D_i$ and $D_{i'}$ are the widths of energy intervals.
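A schematic Python solution of eq.~(\ref{lsk}) may look as follows.
The finite width imitating the energy-averaged eigenvalues $[g_0]_i^k$
is our simplifying assumption; the exact analytical expressions of
\cite{KPR_Ann} should be used in actual calculations:
\begin{verbatim}
import numpy as np

def solve_wp_lse(V, E_star, E, eps=0.05):
    # V: potential matrix in the free WP basis; E_star: mean bin
    # energies E_i*; E: scattering energy inside some bin k
    g0 = 1.0 / (E - E_star + 1j * eps)   # schematic [g0]_i^k
    N = len(E_star)
    # (1 - V g0) t = v
    return np.linalg.solve(np.eye(N) - V * g0[None, :], V)
\end{verbatim}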
As is clear, the above WP approach has some similarities to the
methods which somehow employ a discrete momentum representation,
such as a direct solution of the integral equation (\ref{lse}) by
using mesh-points or the lattice method. However, the main
difference from those is that, in addition to introducing
mesh-points for a discretization, we average the kernel functions on
momentum and energy by an integration within energy intervals (or
over the lattice cells in a few-body case). In this way, all
possible singularities in the integral kernels are somehow smoothed
out, and instead of the continuous momentum dependence one has finite
regular matrices for all operators. Moreover, all intermediate
integrations in the integral kernel can be easily performed with
using the WP projection, so that each operator in such a product is
represented as a separate matrix in the WP representation.
All these features render the solution of scattering problems
quite similar to that for a bound-state problem (e.g. with matrix
equations and without an explicit matching with boundary
conditions etc). Besides that, this fully discrete matrix form for
all scattering equations is very suitable for parallelization and
multithread implementation (e.g. on GPU).
\subsection{Three-body wave-packet basis}
The method of continuum discretization described above is directly
generalized to the case of three- and few-body system. For a general
three-body scattering problem it
is necessary to define WP bases for each set of Jacobi momenta $(p_a,q_a)$, ($a=1,2,3$).
Below we show how to define the free and the channel three-body WP
states for one Jacobi set corresponding to the $\{23\}1$
partition of the three-body system.
For the given Jacobi partition $\{23\}1$, it is appropriate
to consider three two-body subHamiltonians: the free subHamiltonian
$h_0$ corresponding to the free motion over the relative momentum $p$
between particles 2 and 3; the subHamiltonian $h_1=h_0+v_1$
which includes an interaction $v_1$ in the subsystem $\{23\}$ and also
the free subHamiltonian $h_0^1$ corresponding to the free motion
of the spectator particle 1 (over momentum $q$). These
subHamiltonians form two basic three-body Hamiltonians
\begin{equation}
\label{chan} H_0=h_0\oplus h_0^1,\quad H_1=h_1\oplus h_0^1,
\end{equation}
where $H_0$ is a three-body free Hamiltonian while the channel
Hamiltonian $H_1$ defines
three-body asymptotic states for the partition $\{23\}1$. The WP
approach allows to construct basis states for both Hamiltonians
$H_0$ and $H_1$.
At first we define the
three-body free WP basis by introducing partitions of the continua
for two free subHamiltonians $h_0$ and $h_0^1$ onto non-overlapping
intervals $\{\mathfrak{D}_i\equiv[\ce_{i-1},\ce_i]\}_{i=1}^N$ and
$\{\bar{\mathfrak{D}}_j\equiv[\bar\ce_{j-1},\bar\ce_j]\}_{j=1}^{\bar
N}$ and two-body free WPs as in eq.~(\ref{ip}) respectively. Here
and below we denote functions and values corresponding to the
momentum $q$ with an additional bar mark to distinguish them from the
functions corresponding to the momentum $p$.
The three-body free
WP states are built as direct products of the respective two-body
WP states. Also one should take
into account spin and angular parts of the basis functions. Thus
the three-body basis functions can be written as:
\begin{equation}
\label{xij} |X_{ij}^{\Ga\al\be}\rangle\equiv |x_i^\al,{\bar
x}_j^\be;\al,\be:\Ga\rangle=|x_i^\al\rangle\otimes|{\bar
x}_j^\be\rangle |\alpha,\beta:\Gamma\rangle,
\end{equation}
where $|\alpha\rangle$ is a spin-angular state for the $\{23\}$
pair, $|\beta\rangle$ is a spin-angular state of the third
particle, and $|\Gamma\rangle$ is a set of three-body quantum
numbers. The state (\ref{xij}) is a WP analog of the exact plane
wave state in three-body continuum $|p,q;\alpha,\beta:\Gamma\rangle$
for the three-body free Hamiltonian $H_0$.
The three-body free WP basis functions (\ref{xij}) are constant
inside the rectangular cells of the momentum lattice built from two
one-dimensional cells $\{\mathfrak{D}_i\}_{i=1}^{N}$ and
$\{\bar{\mathfrak{D}}_j\}_{j=1}^{\bar N}$ in momentum space. We
refer to the free WP basis as {\em a lattice} basis and denote
the respective two-dimensional bins (i.e. the lattice cells) by
$\MD_{ij}=\MD_i\otimes\BMD_j$. Using such a basis one can construct
finite-dimensional (discrete) analogs of the basic scattering operators.
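In code, such a lattice basis reduces to simple composite-index
bookkeeping; a minimal sketch (the index convention is ours):
\begin{verbatim}
import numpy as np

N, Nbar = 100, 100                         # bins in p and q
I = np.arange(N * Nbar).reshape(N, Nbar)   # composite index I = i*Nbar + j
i, j = np.divmod(I, Nbar)                  # back from I to the pair (i, j)
\end{verbatim}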
To construct the WP basis for the channel Hamiltonian
(\ref{chan}), one has to introduce scattering wave-packets
corresponding to the subHamiltonian $h_1$ according to
eq.~(\ref{z_k}).
The states (\ref{z_k}) are orthogonal to bound-state wave functions
and jointly with the latter they form a basis for the
subHamiltonian $h_1$. To construct these states we employ
here a diagonalization procedure for the $h_1$ subHamiltonian matrix in
the free WP basis and then use the expansion
(\ref{exp_z}).
Now the three-body wave-packets for the
channel Hamiltonian $H_1$ are defined
just as products of two types of wave-packet states for $h_1$ and
$h_0^1$ subHamiltonians
whose spin-angular parts are combined to
the respective three-body states having quantum numbers $\Ga$:
\begin{equation}
\label{si} |Z^{\Ga\al\be}_{kj}\rangle
\equiv|z_k^\al,\bar{x}_j^\be,\al,\be:\Ga\rangle,\quad
\begin{array}{c}{k=1,\ldots,N},\\ j=1,\ldots,\bar{N}.
\end{array}
\end{equation}
The properties of such WP states (as well as the properties of free WP states)
have been studied in detail in a series of our previous papers
(see e.g. the review \cite{KPR_Ann} and references therein to the
earlier works). In particular, they form an orthonormal set and
any three-body operator which functionally depends on the channel
Hamiltonian $H_1$ has a diagonal matrix representation in the
subspace spanned on this basis. It allows us to construct an
analytical finite-dimensional approximation for the {\em three-body
channel resolvent} $G_1(E)\equiv [E+{\rm i}0-H_1]^{-1}$ which
enters the Faddeev-equation kernel \cite{KPR_Ann,KPRF_breakup}.
The simple analytical representation for the channel three-body
resolvent $G_1(E)$ is one of the main features for the wave-packet
approach since it allows one to simplify enormously the whole
calculation of integral kernels and thereby to simplify solving general
three- and few-body scattering problems.
\section{Discrete analogue for Faddeev equation for 3N system
in the wave-packet representation}
We will illustrate a general approach to solving few-body scattering
problems by an example of scattering in a system of three identical
particles using the Faddeev framework, namely the
elastic $nd$ scattering (treatment of the three-body breakup in $3N$
system has been discussed in ref.~\cite{KPRF_breakup}). In this
case, the system of Faddeev equations for the transition operators
(or for the total wave function components) is reduced to a single
equation. So that, the WP basis is defined for one Jacobi coordinate
set only.
\subsection{The Faddeev equation for a transition operator}
The elastic scattering observables can be found from the single
Faddeev equation for the transition operator $U$,
e.g. in the following form (the so-called Alt-Grassberger-Sandhas form):
\begin{equation}
\label{pvg} U=Pv_1+Pv_1G_1U.
\end{equation}
Here $v_1$ is the pairwise interaction between particles 2 and 3,
$G_1$ is the resolvent of the channel Hamiltonian $H_1$, and
$P$ is the particle permutation operator defined as
\begin{equation}
P=P_{12}P_{23}+P_{13}P_{23}.
\end{equation}
Note that the operators of this type enter the kernels of the
Faddeev-like equations in the general case of non-identical particles as well.
Thus, the presence of the permutation operator $P$ is a
peculiar feature of the Faddeev-type kernel which causes major
difficulties in a practical solution of such few-body scattering
equations.
After the partial wave expansion in terms of spin-angular
functions, the operator equation (\ref{pvg}) for each value of the
total angular momentum and parity is reduced to a system of
two-dimensional singular integral equations in momentum space. The
practical solution of this system of equations is complicated and
time-consuming task due to special features of the integral kernel
and a large number of coupled spin-angular channels which should be
taken into account \cite{gloeckle}.
In particular, the Faddeev kernel at the real total energy has
singularities of two types: two-particle cuts corresponding to
all bound states in the two-body subsystems and the three-body
logarithmic singularity (at energies above the breakup threshold).
While the regularization of the two-body singularities is
straightforward and does not pose any problems, the regularization of
the three-body singularity requires some special
techniques, which greatly hampers the solution procedure. The
practical tricks which allow one to avoid such complications are, e.g., a
solution of the equation at complex values of the energy followed by
an analytic continuation to the real axis, or a shift of the contour of
integration from the real axis into the plane of complex momenta.
However, the main specific feature of the Faddeev-like kernel is the
presence of the particle permutation operator $P$, which changes the
momentum variables from one Jacobi set to another one. The integral
kernel of this operator $P(p,q;p',q')$ as a function of the momenta
contains the Dirac $\delta$-function and two Heaviside
$\theta$-functions \cite{gloeckle}, so the double integrals in the
integral term have variable limits of integration. Therefore, when
replacing the integrals with the quadrature sums it is necessary to
use very numerous multi-dimensional interpolations of the unknown
solution from a ``rotated'' momentum grid to the initial one. This
cumbersome interpolation procedure takes most of the computational
time and requires using powerful supercomputers.
The WP discretization
method described here allows one to circumvent completely the above
difficulties in solving the Faddeev equations (see \cite{KPR_Ann}
and below).
\subsection{The matrix analog of the Faddeev equation and its features}
As a result of projecting the integral equation (\ref{pvg}) onto
the three-body channel WP basis (\ref{si}), one gets its matrix
analog (for each set of three-body quantum numbers $\Gamma$):
\begin{equation}
\label{m_pvg} {\mathbb U}={\mathbb P}{\mathbb V}_1+{\mathbb
P}{\mathbb V}_1 {\mathbb G}_1 {\mathbb U}.
\end{equation}
Here ${\mathbb P}$, ${\mathbb V}_1$ and ${\mathbb G}_1$ are the
matrices of the permutation operator, pair interaction and channel
resolvent respectively defined in the channel WP basis.
While the matrices of the pairwise interaction and channel
resolvent ${\mathbb V}_1$ and ${\mathbb G}_1$ in WP basis can be
easily evaluated \cite{KPR_Ann,KPR_GPU}, the calculation of
the permutation matrix ${\mathbb P}$ is not a trivial task.
However the permutation operator matrix $\mathbb P$ in the
three-body channel WP basis can be expressed through the
matrix ${\mathbb P}^0$ of the same operator in the lattice basis (\ref{xij}) using the
rotation matrices $\mathbb O$ from the expansion (\ref{exp_z})
(which depend on spin-angular two-particle state $\alpha$):
\begin{eqnarray}
\label{perm_z}
[\mathbb{P}]^{\Ga\al\be,\al'\be'}_{kj,k'j'}
\approx
\sum_{ii'}O_{ki}^\alpha O_{k'i'}^{*\alpha'}
[\mathbb{P}^0]^{\Ga\al\be,\al'\be'}_{ij,i'j'},\qquad\qquad\qquad\qquad\\
\nonumber [\mathbb{P}]^{\Ga\al\be,\al'\be'}_{kj,k'j'}\equiv \langle
Z_{kj}^{\Ga\alpha\beta}|P|Z_{k'j'}^{\Ga\alpha'\beta'}\rangle,\quad
[\mathbb{P}^0]^{\Ga\al\be,\al'\be'}_{ij,i'j'}\equiv \langle
X_{ij}^{\Ga\alpha\beta}|P|X_{i'j'}^{\Ga\alpha'\beta'}\rangle.
\end{eqnarray}
A matrix element of the operator $P$ in the lattice basis is
proportional to the overlap between basis functions defined in
different Jacobi sets \cite{KPRF_breakup}.
Such a matrix element can be
calculated by integration with the weight functions over the momentum
lattice cells:
\begin{eqnarray}
[\mathbb{P}^0]^{\Ga\al\be,\al'\be'}_{ij,i'j'} =
\int_{\MD_{ij}}p^2dpq^2dq\int_{\MD'_{i'j'}}(p')^2dp'(q')^2dq'\times\nonumber\\
\frac{f^*(p)\bar{f}^*(q)f(p')\bar{f}(q')}{\sqrt{B_iB_{i'}\bar{B}_j\bar{B}_{j'}}}\langle
pq,\al\be:\Ga|P|p'q',\al'\be':\Ga\rangle,\label{perm}\qquad \qquad
\end{eqnarray}
where the prime at the lattice cell $\MD'_{i'j'}$ indicates that
the cell belongs to the rotated Jacobi set while $\langle
pq,\al\be:\Ga|P|p'q',\al'\be':\Ga\rangle$ is the kernel of particle
permutation operator in a momentum space which can be written in the
form:
\begin{equation}
\label{gal} \langle pq,\al\be:\Ga|P|p'q',\al'\be':\Ga\rangle=
\sum_{\ga\ga'}g^{\Ga\al\be,\al'\be'}_{\ga\ga'}I_{\ga\ga'}(p,q,p',q'),
\end{equation}
where $\ga$ and $\ga'$ represent the intermediate three-body spin-angular
quantum numbers, $g_{\ga\ga'}$ are algebraic coupling coefficients and the
function $I_{\ga\ga'}(p,q,p',q')$ is proportional to the product of the Dirac
delta and Heaviside theta functions \cite{gloeckle}. However, due to the integration
in eq.~(\ref{perm}), the corresponding energy and momentum singularities get
averaged over the cells of the momentum lattice and, as a result, the elements
of the permutation operator matrix in the WP basis are finite. Finally, the
matrix element (\ref{perm}) is reduced to a double integral with variable
limits and can be calculated numerically \cite{KPR_GPU}.
The on-shell elastic amplitude for the $nd$ scattering in the WP
representation is defined now via
the diagonal matrix element of the $\mathbb
U$-matrix \cite{KPR_Ann}:
\begin{equation}
A_{\rm el}^{\Gamma\al_0\be}(q_0)\approx \frac{2m}{3q_0}
\frac{[{\mathbb{U}}]^{\Ga\al_0\be,\al_0\be}_{1j_0,1j_0}}
{\bar{d}_{j_0}},
\end{equation}
where $m$ is the nucleon mass, $q_0$ is the initial two-body momentum
and the matrix element is taken between the channel WP states $|Z^{\Ga\alpha_0\beta}_{1j_0}\rangle=|z_{1}^{\alpha_0},{\bar
x}_{j_0}^\be;\al_0,\be:\Ga\rangle$
corresponding to the initial and final scattering states. Here
$|z_{1}^{\al_0}\rangle$ is the bound state of the pair, the index
$j_0$ denotes the bin $\BMD_{j_0}$ including the on-shell momentum
$q_0$ and $\bar{d}_{j_0}$ is a momentum width of this bin.
It should be noted here that, in our discrete WP approach, {\em
the three-body breakup is treated as a particular case of
inelastic scattering} \cite{KPRF_breakup} (defined by the
transitions to the specific two-body discretized continuum
states), so that the breakup amplitude can be found in terms of
{\em the same matrix} $\mathbb U$ determined from the
eq.~(\ref{m_pvg}). This feature gives an additional advantage to
the present WP approach.
\subsection{\label{secIIIc} The features of the numerical scheme for solution in WP approach}
So, in the WP approach, we reduced the solution of integral
Faddeev equation (\ref{pvg}) to the solution of the system of
linear algebraic equations (\ref{m_pvg}) and define simple
procedures and formulas for the calculation of the kernel matrix
$\mathbb K = {\mathbb P} {\mathbb V}_1 {\mathbb G}_1 $. In such an
approach, we avoided all the difficulties of solving the integral
equation ($\ref{pvg} $), which are met in the standard approach,
but the prize paid for this is a high dimension of the resulting
system of equations. This high dimension is the only problem in
the practical solution of the matrix analogue for the Faddeev
equation.
In fact, we found \cite{KPR_Ann} that quite satisfactory results
can be obtained with a basis size along one Jacobi momentum $N\sim
\bar{N}\sim 100-150$. It means that in the simplest one-channel
case (e.g. for $s$-wave three-boson or spin-quartet $s$-wave $nd$
scattering) one gets a kernel matrix with dimension $M=N\times
\bar{N}\sim 10000 - 20000$. However, in the case of realistic $3N$
scattering it is necessary to include at least up to 62
spin-angular channels, and the dimension of the matrix increases up to
$5\cdot 10^5-10^6$. The high dimension of the algebraic system
leads to two serious problems: the impossibility to place the
whole kernel matrix into RAM and the impossibility to get the
numerical solution in a reasonable time, even using a
supercomputer.
The second obstacle can be easily circumvented.
Indeed, to find
the elastic and breakup amplitudes one needs only on-shell matrix
elements of the transition operator. Each of these elements can be
found by means of a simple iteration procedure (without complete
solving the matrix equation (\ref{m_pvg})) with subsequent
summation of the iterations via the well-known Pade-approximant
technique.
The first problem means that one has to store the whole kernel
matrix in the external memory. However, when using it, the
iterative process becomes very inefficient, since most of the
processing time is spent on reading data from the external memory, while
the processor is idle. Nevertheless the specific matrix structure
of the kernel in the eq.~(\ref{m_pvg}) makes it possible to
overcome this difficulty and
{\em to eliminate completely the use of an external memory}.
Indeed, the matrix kernel $\mathbb K$ for
equation~(\ref{m_pvg}) can be written as a product of four matrices,
which have the specific structure:
\begin{equation}
\label{kernel} \mathbb K=\mathbb P {\mathbb V}_1 \mathbb G_1 \equiv
\mathbb O \mathbb P^0 \tilde{\mathbb V}_1 \mathbb G_1,
\end{equation}
where $\tilde{\mathbb V}_1 = \mathbb O^{\rm T}\mathbb V_1$. Here
$\mathbb G_1$ is a diagonal matrix, $\mathbb P^0$ is a highly sparse
permutation matrix, while $\tilde{\mathbb V}_1$ and $\mathbb O$ are
block matrices with the block dimension $(N\times N)$.
Thus, if one stores in RAM only the individual factors of the
matrix kernel $\mathbb K$, keeping the highly sparse matrix
$\mathbb P^0$ in a compressed form (i.e.\ storing only its nonzero
elements), all the data required for the iteration process can still
be placed in RAM. And although in this case three extra matrix
multiplications are added at each iteration step, the computing time
spent on iterations is reduced by more than a factor of 10 in comparison
with the procedure employing external memory.
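To make the factorized application concrete, the following host-side
sketch (our illustration in C++, not the authors' Fortran code) applies
the kernel of eq.~(\ref{kernel}) to a vector by multiplying the four
factors in sequence, so that the full matrix $\mathbb K$ is never formed;
the block matrices $\mathbb O$ and $\tilde{\mathbb V}_1$ are abstracted
as callables, since the text specifies only their block structure, and
the arrays $B$, $C$, $W$ follow the compressed-row packing described in
detail in the next section.
\begin{verbatim}
// Illustrative sketch (not the authors' code): one application of the
// kernel K = O * P0 * V1t * G1 to a vector, factor by factor.
#include <complex>
#include <vector>
using cvec = std::vector<std::complex<double>>;

struct SparseP0 {              // compressed P0: only nonzeros are kept
    std::vector<double> B;     // nonzero values
    std::vector<int>    C;     // their column indices
    std::vector<int>    W;     // end-of-row offsets in B, one per row
};

cvec spmv(const SparseP0& P, const cvec& x) {      // y = P0 * x
    int M = (int)P.W.size();
    cvec y(M, {0.0, 0.0});
    for (int i = 0; i < M; ++i)                    // row i occupies
        for (int k = (i ? P.W[i-1] : 0); k < P.W[i]; ++k) // [W[i-1],W[i])
            y[i] += P.B[k] * x[P.C[k]];
    return y;
}

template <class BlockOp>       // O and V1t: generic block-matrix actions
cvec apply_K(const BlockOp& O, const SparseP0& P0, const BlockOp& V1t,
             const cvec& G1, cvec x) {
    for (size_t i = 0; i < x.size(); ++i) x[i] *= G1[i];  // diagonal G1
    return O(spmv(P0, V1t(x)));  // the matrix K itself is never stored
}
\end{verbatim}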
Thus, the overall numerical scheme for solving the
three-body scattering problem in the WP discrete formalism
consists of the following main steps:\\
1. Processing of the input data.\\
2. Calculation of nonzero elements of the permutation matrix ${\mathbb P}^0$.\\
3. Calculation of the channel resolvent matrix $\mathbb{G}_1$. \\
4. Iterations of the matrix equation (\ref{m_pvg}) and finding
its
solution by making use of the Pad\'e-approximant technique. \\
Step 1 includes the following procedures: \\
-- a construction of two-body free WP bases,
and a calculation of matrices of the interaction potential;\\
-- a diagonalization of the pairwise subHamiltonian matrices in the
free WP basis and finding parameters for the three-body channel basis including
matrices of the rotation between free and scattering WPs;\\
-- a calculation of algebraic coefficients
$g^{\Ga\al\be,\al'\be'}_{\ga\ga'}$ from eq.~(\ref{gal}) for
recoupling between different spin-angular
channels.\\
We found that the
runtimes for steps 1 and 3 are practically negligible in comparison with the
total running time, so we shall not discuss these steps here.
The execution of step 4
--- the solution of the matrix system by iterations --- takes about
20\% of the total time needed to solve the whole problem in one-thread
CPU computing.
Therefore, in this work we did not aim to optimize this step using
the GPU.
The main computational effort (in the one-core CPU realization) is
spent on step 2 --- the calculation of the elements of the matrix
${\mathbb P}^0$. Because all of these elements are calculated by
the same code and fully independently from each other, the
algorithm seems very suitable for parallelization and
implementation on multiprocessor systems, in particular on GPU.
However, since the matrix ${\mathbb P}^0$ is highly sparse, it is
necessary to use special tricks in order to reach a high degree of
acceleration in the GPU realization. In particular, we apply an
additional pre-selection of the nonzero elements of the matrix
${\mathbb P}^0$.
It should be stressed here that steps 1 and 2 do not depend on the
incident energy. The current energy is taken into account only at
steps 3 and 4, when one calculates the channel resolvent matrix
elements and solves the matrix equation for the scattering
amplitude. Thus, when one needs scattering observables in some
wide energy region, the whole computing time will not increase
significantly, because the most time-consuming part of the code (step 2) is
carried out only once for many energy points.
\begin{figure}[h!]
\centerline{\epsfig{file=fig1.eps,width=0.7\columnwidth}} \caption{
The $p$-wave partial phase shifts for the elastic $nd$ scattering
obtained within the WP approach (solid curves) and within the
standard Faddeev calculations
(circles)~\cite{gloeckle}.}
\label{phases}
\end{figure}
In Fig.~\ref{phases} the $p$-wave partial phase shifts
$\delta^{J\pi}_{\Sigma\lambda}$ of the elastic $nd$ scattering for
the Nijmegen I $NN$ potential \cite{nijm} both below and above a
three-body breakup threshold are shown. Here $J$, $\pi$ and
$\Sigma$ are the total angular momentum, parity and total channel spin,
respectively, while $\lambda$ is the neutron orbital momentum. The
calculation of the phase shifts {\em at 100 different energy
values} displayed in Fig.~\ref{phases} takes in our approach (in
the CPU realization) only about twice as much time as the
calculation for a single energy, because for all energies we
employ the same permutation matrix $\mathbb P$, which is calculated
only once.
In the next section we consider the specific features related to GPU
adaptation for the above numerical scheme.
\section{GPU acceleration in calculation of kernel matrix elements}
As was noted above, the calculation of the elements of a large matrix
appears to be a task very well suited to GPU
computing if these elements are calculated independently from each
other and by the same code. However, there are a number of aspects
associated with the organization of the data transfer from RAM to
the GPU memory and back, and also with the GPU computation itself. These
aspects impose severe restrictions on the resulting acceleration in
the GPU realization. One can define the GPU acceleration $\eta$ as the
ratio of the runtime for a one-thread CPU computation to the runtime for
the multithread GPU computation:
\begin{equation}
\label{acc} \eta=t_{\rm CPU}/t_{\rm GPU}.
\end{equation}
This acceleration depends on the ratio of the actual time for the
calculation of one matrix element, $t_0$, to the time of
transmitting the result from the GPU memory back to RAM, on the
number of GPU cores and their speed compared to the speed of a CPU
core, and also on the dimension $M$ of the matrix. Note that the
transition itself from one-thread to multithread
computing takes some time, so that parallelization is not
effective for matrices of low dimension. When using
the GPU, one also has to take into account that the speed of GPU cores
is usually much lower than the CPU speed. For multithread
computing to be efficient, it is also necessary that the calculations
in all threads finish at approximately the same time.
Otherwise a part of the threads, each of which occupies a physical
core, will be idle for some time. In the case of independent
matrix elements, this condition means that the numerical code for
one element should not depend on its number; in particular, the
code must not contain conditional statements that can change the
amount of computation.
When calculating the permutation matrix ${\mathbb P}^0$ in our
algorithm, the above condition is not satisfied: only about 1\% of its
matrix elements are non-vanishing and really have to be calculated using a
double numerical integration, while the other 99\% of the elements are equal
to zero and their determination requires only a few arithmetic
operations. Therefore, if one fills the whole matrix ${\mathbb
P}^0$ (including both zero and nonzero elements), 99\% of all threads
are idle, and no real acceleration is reached. Thus we first have
to develop a numerical scheme for filling sparse
matrices efficiently on the GPU.
\subsection{GPU acceleration in calculating elements of a sparse matrix}
In this subsection, in order to check the possible GPU
acceleration in the calculation of the elements of a matrix of
dimension $M$, we consider two simple examples in which the
matrix elements
are determined by the following formulas:\\
(a) as a sum of simple functions:
\begin{equation}
A(i,j)=\sum_{k=1}^K\left (\sin^k(u_{ij})+\cos^k(w_{ij})\right ),
{\rm or} \label{Atrig}
\end{equation}
(b) as a sum of numerical integrals:
\begin{equation}
A(i,j)=\sum_{k=1}^K\int_{u_{ij}}^{w_{ij}}\left
(\sin^k(t)+\cos^k(t)\right )dt. \label{Agau}
\end{equation}
Here $u_{ij}$ and $w_{ij}$ are random numbers from the interval
$[0,1]$, and the parameter $K$ allows one to vary the time $t_0$ for
the calculation of each element over a wide range. The integrals in
eq.~(\ref{Agau}) are calculated numerically by the 48-point Gaussian
quadrature. Therefore example (b), with numerical integration, is
closer to our case of calculating the permutation matrix ${\mathbb
P}^0$ in the Faddeev kernel.
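As an illustration, a minimal CUDA kernel for example (a),
eq.~(\ref{Atrig}), might look as follows (our sketch, assuming single
precision and row-major storage); note that every thread performs exactly
the same amount of work, in line with the load-balance requirement
discussed above.
\begin{verbatim}
// Sketch: one thread per element of the dense M x M test matrix.
__global__ void fill_dense(float *A, const float *u, const float *w,
                           int M, int K)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // element number
    if (idx >= M * M) return;                         // guard last block
    float su = sinf(u[idx]), cw = cosf(w[idx]);
    float ps = su, pc = cw, sum = 0.0f;
    for (int k = 1; k <= K; ++k) {       // sum_k sin^k(u) + cos^k(w)
        sum += ps + pc;
        ps *= su; pc *= cw;
    }
    A[idx] = sum;      // identical work in every thread: no divergence
}
\end{verbatim}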
Figures \ref{accel_N-trig}, \ref{accel_N-gau} and \ref{accel_t0}
show the dependence of the GPU acceleration $\eta$ on the matrix
dimension $M$ and on the calculation time for each element, $t_0$, when
filling up the dense matrices defined by eqs.~(\ref{Atrig}) and
(\ref{Agau}). The GPU calculations were performed using $M^2$
threads, so that each thread evaluates only one matrix element.
\begin{figure}[h!]
\centerline{\epsfig{file=fig2.eps,width=0.7\columnwidth}} \caption{
The dependence of GPU acceleration $\eta$ in calculation of elements
of dense matrix (\ref{Atrig}) on the matrix dimension $M$ for
different values of $t_0$: 0.0009 ms (solid curve), 0.0094 ms
(dashed curve), 0.094 ms (dot-dashed curve), 0.94 ms (dotted
curve).} \label{accel_N-trig}
\end{figure}
\begin{figure}[h!]
\centerline{\epsfig{file=fig3.eps,width=0.7\columnwidth}}
\caption{ The dependence of GPU acceleration $\eta$ in calculation
of elements of a dense matrix (\ref{Agau}) on the matrix dimension
$M$ for different values of $t_0$: 0.0017 ms (solid curve), 0.012 ms
(dashed curve), 0.114 ms (dot-dashed curve), 1.13 ms (dotted
curve).} \label{accel_N-gau}
\end{figure}
\begin{figure}[h!]
\centerline{\epsfig{file=fig4.eps,width=0.7\columnwidth}} \caption{
The dependence of GPU acceleration $\eta$ in
calculation of elements of dense matrix on the computational time of each matrix
element, $t_0$, for different values of matrix dimension $M$: solid
curves
correspond to calculation of matrix elements using simple trigonometric
functions (\ref{Atrig}), dashed curve --- using numerical integrals
(\ref{Agau}).}
\label{accel_t0}
\end{figure}
The calculations are performed on a desktop PC with an
i7-3770K processor (3.50~GHz) and an NVIDIA GTX-670 video card. We use the
Portland Group Fortran compiler 12.10, including CUDA support, and the
CUDA compiler V5.5.
As can be seen
from the figures, the GPU acceleration rises significantly with
increasing dimension $M$ and computational time per
matrix element $t_0$. The maximal acceleration that can be reached
in this model example is 400--450(!). Such a high degree of
acceleration is achieved at a matrix dimension $M\sim 200$ and
$t_0 \gtrsim 0.1$~ms. Upon further increase of the dimension $M$,
the degree of acceleration does not change, because in this case
all the computing resources of the GPU are already exhausted. Note
that, for example (b) with the numerical integration, the GPU
acceleration is somewhat lower than in the case of calculating
simple functions. This is due to the repeated use of some
constants (the values of the quadrature points and weights) which
have to be stored in the global GPU memory.
It should also be noted that the transition to double-precision
calculations of the matrix elements greatly reduces the maximal
possible value of the GPU acceleration $\eta$.
Consider now what efficiency of GPU computing can be reached in the case
of a sparse matrix, when only part of the matrix elements actually
have to be calculated. We introduce the following additional
condition for the matrix elements (\ref{Atrig}) and (\ref{Agau}):
\begin{equation} \tilde A(i,j)=\left
\{\begin{array} {cc}A(i,j), & u_{ij}\le\alpha\\ 0, & u_{ij} >
\alpha
\end{array} \right . . \label{Acond}
\end{equation}
Since $u_{ij}$ is a random number in the interval $[0,1]$,
such a filtration results in a sparse matrix with a degree of
sparseness $\sim\alpha$. Here the degree
of sparseness is the ratio of the number of non-zero matrix elements
to their total number $M^2$.
\begin{figure}[h!]
\centerline{\epsfig{file=fig5.eps,width=0.7\columnwidth}} \caption{
The dependence of the GPU acceleration $\eta$ in the calculation of the
elements of a sparse matrix with elements (\ref{Acond}) on the sparseness parameter $\alpha$:
solid curve --- for $M=64$,
dashed curve --- for $M=128$,
dot-dashed curve --- for $M=256$.}
\label{accel_spar}
\end{figure}
Fig.~\ref{accel_spar} shows the dependence of the GPU acceleration
on the sparseness parameter $\alpha$ when filling matrices of
dimensions $M = 64$, 128 and 256. As can be seen from the figure,
the GPU acceleration is only about 2 (for $M = 64$) at a value of
$\alpha \sim 0.01$, which corresponds to the realistic sparseness
parameter of the permutation matrix ${\mathbb P}^0$ in the
Faddeev kernel.
Thus, to achieve a significant GPU acceleration in calculating the
permutation matrix ${\mathbb P}^0$, it is necessary to add one more
step to the numerical scheme discussed in section~\ref{secIIIc}
and perform {\em a pre-selection} of the nonzero
elements of the permutation matrix.
\subsection{The GPU algorithm for calculating the permutation matrix in the
case of a semi-realistic $s$-wave $NN$ interaction}
Consider now the calculation of the permutation matrix ${\mathbb
P}^0$ entering the Faddeev kernel. In this case there are additional
limitations on the GPU algorithm compared to the
simple examples discussed in the previous subsection.
a) As already mentioned above, the most serious
limitations are the high dimension and the high sparseness of the
permutation matrix, and therefore a special packing of this
matrix is required. A standard packing of a matrix (we use the
row-wise packing
--- the so-called CSR format) implies, instead of storing the
matrix in a single array ${A}$ of dimension $M\times M$,
the use of two linear arrays,
${B}$ and ${C}$, of dimension $\alpha M^2$, which
store the nonzero matrix elements of ${A}$ and the respective
column indices. A third linear array ${W}$ of
dimension $M$ contains the addresses (in the array ${B}$) of the
last nonzero elements corresponding to each row of the
initial matrix ${A}$. With such a matrix packing,
the gain in the memory required for storing the matrix equals
$1/(2\alpha)$, i.e.\ an about 50-fold gain for a
sparseness value of 0.01, which is typical for the permutation matrix
$\mathbb{P}^0$ in the WP representation. Thus, at the
matrix dimension $M\sim 5\cdot 10^5$ which is necessary for an
accurate treatment of the realistic $3N$ scattering problem, the
whole matrix would occupy about 1000~GB of RAM (in single precision),
while the same matrix in compressed form takes only about
20~GB of RAM. This is quite acceptable for a modern desktop
computer.
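As a minimal illustration of this packing (our sketch, not the actual
code), a freshly computed row can be appended to the arrays $B$, $C$ and
$W$ as follows:
\begin{verbatim}
#include <vector>
// Pack row i into the arrays B, C, W: zeros are dropped, and W[i]
// records the position in B just past the last nonzero of row i.
void pack_row(const std::vector<float>& row, int i,
              std::vector<float>& B,   // nonzero values
              std::vector<int>&   C,   // their column indices
              std::vector<int>&   W)   // end-of-row addresses, size M
{
    for (int j = 0; j < (int)row.size(); ++j)
        if (row[j] != 0.0f) { B.push_back(row[j]); C.push_back(j); }
    W[i] = (int)B.size();
}
\end{verbatim}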
b) However, a permutation matrix of such a dimension, even in
packed form, cannot be placed in the GPU memory, which is usually
only 4--8~GB. Therefore one needs to subdivide the whole
calculation of this matrix into blocks using an external CPU
cycle and then employ the multithread GPU computation for each
block.
c) Another distinction of the calculation of the elements of the
matrix ${\mathbb P}^0$ from the simple model example discussed
above is the necessity to use a large number of constants: in
particular, the values of the nodes and weights of the Gaussian quadratures
for the calculation of double integrals and also (in the case of a
realistic $NN$ interaction with tensor components) the algebraic
coefficients $g^{\Ga\al\be,\al'\be'}_{\ga,\ga'}$ from
eq.~(\ref{gal}) for the coupling of different spin-angular channels,
the values of Legendre polynomials at the nodal points, etc. All these
data are stored in the global GPU memory, and because of the
relatively low access rate of each thread to the global GPU memory,
the resulting acceleration is noticeably lower than in the case of
the above simple code, which does not use a large amount of data from
the global GPU memory.
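As a minimal illustration of this point (our sketch, not the authors'
code), the reused tables are uploaded once to the global GPU memory
before the kernel launches; every thread then reads them through the
relatively slow global-memory path described above:
\begin{verbatim}
// Sketch: the reused data (48-point quadrature nodes and weights,
// coupling coefficients, Legendre values, ...) are copied once to
// global GPU memory and their pointers passed to every kernel launch.
double *d_nodes = nullptr, *d_weights = nullptr;

void upload_quadrature(const double *nodes, const double *weights)
{
    cudaMalloc(&d_nodes,   48 * sizeof(double));
    cudaMalloc(&d_weights, 48 * sizeof(double));
    cudaMemcpy(d_nodes, nodes, 48 * sizeof(double),
               cudaMemcpyHostToDevice);
    cudaMemcpy(d_weights, weights, 48 * sizeof(double),
               cudaMemcpyHostToDevice);
}
\end{verbatim}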
d) The necessary pre-selection of the nonzero elements of the matrix
${\mathbb P}^0$ can itself be quite effectively parallelized for
a GPU implementation. Since the runtime for checking the selection
criteria for each element is two orders of magnitude less than the
runtime for calculating a nonzero element itself, the
degree of GPU acceleration for the pre-selection stage turns
out to be smaller than for the basic calculation. Nevertheless, if one does not
employ the GPU at this stage, its computing time turns out to be
even larger than the GPU calculation time for all nonzero elements
(see below).
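A two-pass pre-selection can be sketched as follows (our illustration;
the predicate \texttt{is\_nonzero} stands for the cheap selection
criterion of the text, and its body here is only a placeholder): a fast
GPU pass flags the candidates, and a compaction step collects the
indices of the elements that must actually be integrated.
\begin{verbatim}
#include <vector>
// Placeholder criterion (hypothetical): in the real code a few
// arithmetic operations decide whether element idx can be nonzero.
__device__ unsigned char is_nonzero(long long idx)
{
    return (idx % 100) == 0;     // mimics the ~1% survival rate
}

__global__ void flag_elements(unsigned char *flag, long long M2)
{
    long long idx = blockIdx.x * (long long)blockDim.x + threadIdx.x;
    if (idx < M2) flag[idx] = is_nonzero(idx);
}

// Host side: gather the indices of flagged elements (the array C).
std::vector<long long> compact(const unsigned char *flag, long long M2)
{
    std::vector<long long> Cidx;
    for (long long idx = 0; idx < M2; ++idx)
        if (flag[idx]) Cidx.push_back(idx);
    return Cidx;
}
\end{verbatim}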
After these general observations, we describe the results for the
GPU computing of the most time-consuming step of solving the Faddeev
equation in the WP approach --- the computation of the nonzero
elements of the permutation matrix --- in the case of a
semi-realistic Malfliet--Tjon $NN$ interaction. There is no
spin-angular coupling for this potential, so that the Faddeev
system is reduced to a single $s$-wave equation. The results
obtained for a realistic calculation of multichannel $nd$
scattering are left for the next section.
When the pre-selection of the nonzero matrix elements is already done,
one has the subsidiary arrays ${C}$ and ${W}$ containing the information
about all nonzero elements of $\mathbb{P}^0$ that should be
calculated, the number of these nonzero elements being $M_t$. The
parallelization algorithm adopted here assumes that every matrix
element is computed by a separate thread. The allowable number of
threads $N_{\rm thr}$ is restricted by the capacity of the physical GPU
memory and is usually less than the total number of nonzero elements $M_t$.
In this case, our algorithm consists of the following steps
(a schematic code sketch is given after the list).
1. The data used in the calculation (endpoints of the momentum intervals
in the variables $p$ and $q$, nodes and weights of the Gauss
quadratures, algebraic coupling coefficients, etc.) are copied to
the GPU memory.
2. The whole set of nonzero elements of the permutation matrix is
divided into $N_b$ blocks with $N_{\rm thr}$ elements in each block
(except the last one), and an external CPU loop is organized over the
number of such blocks. Inside the loop the following operations are
performed:
3. A part of the array ${C}$ corresponding to the current
block is copied to the GPU memory.
4. The CUDA-kernel is launched on GPU in $N_{\rm thr}$ parallel
threads each of which calculates only one element (in the case of
the $s$-wave problem) of the permutation matrix.
5. The resulting $N_{\rm thr}$ nonzero elements of the matrix are
copied from the GPU memory to the appropriate place of the total
array ${B}$.
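Steps 2--5 can be summarized by the following schematic host loop (our
sketch; memory allocation, error checking and the integration kernel
\texttt{compute\_elements} are placeholders):
\begin{verbatim}
#include <algorithm>
#include <vector>

__global__ void compute_elements(float *B, const long long *C,
                                 long long n)
{
    long long t = blockIdx.x * (long long)blockDim.x + threadIdx.x;
    if (t < n) B[t] = 0.0f;  // placeholder for the double integration
}                            // of the element with global index C[t]

void fill_P0(const std::vector<long long>& C, std::vector<float>& B,
             long long Mt, long long Nthr, long long *d_C, float *d_B)
{
    long long Nb = (Mt + Nthr - 1) / Nthr;       // number of blocks
    for (long long b = 0; b < Nb; ++b) {         // external CPU loop
        long long off = b * Nthr;
        long long n = std::min(Nthr, Mt - off);  // last block shorter
        cudaMemcpy(d_C, C.data() + off, n * sizeof(long long),
                   cudaMemcpyHostToDevice);                 // step 3
        unsigned int thr = 256;
        unsigned int blk = (unsigned int)((n + thr - 1) / thr);
        compute_elements<<<blk, thr>>>(d_B, d_C, n);        // step 4
        cudaMemcpy(B.data() + off, d_B, n * sizeof(float),
                   cudaMemcpyDeviceToHost);                 // step 5
    }
}
\end{verbatim}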
\begin{figure}[h!]
\centerline{\epsfig{file=fig6.eps,width=0.7\columnwidth}} \caption{
The CPU computing time (solid curves) and GPU computing time (dashed
curves) for the pre-selection of the nonzero elements of the $s$-wave
permutation matrix ${\mathbb P}^0$ (triangles) and for the calculation of
these elements (circles), depending on the matrix dimension $M$.}
\label{time_GPU-sw}
\end{figure}
Fig.~\ref{time_GPU-sw} shows the dependence of the CPU and
GPU computing times for the calculation of the $s$-wave permutation
matrix upon its total dimension $M=N\times\bar{N}$ (for
$N=\bar{N}$). In our case, the GPU code was executed in 65536
threads. For comparison, we also display in this figure the CPU
and GPU times necessary for the pre-selection of the nonzero
matrix elements. It is clear from the figure that, to achieve a high
degree of acceleration, one needs to use GPU computing not only for
the calculation of the nonzero elements (which takes most of the time
in one-thread CPU computing), but also for the pre-selection of the
nonzero matrix elements.
\begin{figure}[h!]
\centerline{ \epsfig{file=fig7.eps,width=0.7\columnwidth}} \caption{
The dependence of the GPU acceleration
$\eta$ on the matrix dimension $M$ for a calculation of
the permutation matrix (dashed curve) and for a complete solution
of the scattering problem (solid curve) in the case of $s$-wave
$NN$ interaction.} \label{accel_GPU-sw}
\end{figure}
In Fig.~\ref{accel_GPU-sw}, we present the GPU acceleration
$\eta$ for the calculation of the $s$-wave permutation matrix and for the
complete solution of the $s$-wave $nd$ elastic scattering problem as
functions of the dimension $M$ of the matrix equation. It is evident that the
runtime for the nonzero elements of the matrix ${\mathbb P}^0$
(which takes the main part of the CPU computing time) is reduced
by more than a factor of 100. The total acceleration in calculating the
$s$-wave partial phase shifts reaches 50.
Finally, the complete three-body calculation takes only 7 seconds on
an ordinary PC with a GPU.
\section{GPU optimization for a realistic $3N$ scattering problem}
\subsection{GPU-acceleration for a realistic $nd$ scattering amplitude}
We now turn to the case of a realistic three-nucleon scattering
problem with the Nijmegen I $NN$ potential \cite{nijm} and the
calculation for the elastic $nd$ scattering cross section.
Unlike the simple $s$-wave case discussed above, we now have many
coupled spin-angular channels (up to 62 channels if the total
angular momentum in the $NN$ pair is restricted to $j\le 3$). In this
case, the calculation of each element of the permutation matrix
${\mathbb P}^0$ comprises the calculation of several tens of
double numerical integrals containing Legendre polynomials.
Each matrix element is equal to a sum of such double integrals,
and the sum includes a large set of algebraic coupling
coefficients $g^{\Ga\al\be,\al'\be'}_{\ga,\ga'}$ for the spin-angular
channels, as in eq.~(\ref{gal}).
The GPU-optimized algorithm for the permutation matrix is now
somewhat different: because each calculated double integral is
used to compute several matrix elements, each thread now
calculates all the matrix elements corresponding to one pair of
momentum cells $\{\MD_{ij},\MD_{i'j'}\}$. These matrix elements
belong to
different rows of the complete permutation matrix. Thus,
after the GPU computation for each block of the permutation matrix,
it is necessary to rearrange and repack (in single-thread CPU
execution) the calculated set of matrix elements into the
arrays ${B}$, ${C}$ and ${W}$ representing
the complete matrix $\mathbb{P}^0$ in the CSR format. All this
leads to the fact that the GPU acceleration in the calculation of the
permutation matrix in the realistic case, when the $NN$ interaction
has a tensor component, turns out to be significantly smaller than in the
$s$-wave case.
Fig.~\ref{time_GPU_Nijm} demonstrates the GPU acceleration
$\eta$ versus the basis dimension $M=N\times\bar{N}$ for the solution
of the 18-channel Faddeev equation for the partial
$nd$ elastic amplitude with total angular momentum $J=\frac12^+$ (solid line).
The dashed and dot-dashed lines show the GPU acceleration for the stage of
pre-selection of the nonzero elements of the permutation matrix ${\mathbb P}^0$ and
for the calculation of these elements, respectively.
\begin{figure}[h!]
\centerline{ \epsfig{file=fig8.eps,width=0.7\columnwidth}}
\caption{ The dependence of the GPU acceleration
$\eta$ on the dimension of the basis $M=N\times \bar{N}$ (for the case
$N=\bar{N}$) for the realistic $nd$ scattering problem at
$J=\frac12^+$: the dashed line shows the acceleration for the
pre-selection of nonzero elements in the permutation matrix
${\mathbb P}^0$,
the dot-dashed line --- for the calculation of these elements,
the solid line --- the acceleration for the complete solution.
\label{time_GPU_Nijm}}
\end{figure}
From these results, it is evident that the acceleration in the calculation of the
coupled-channel permutation matrix is about 15, which is
considerably less than in the above one-channel
$s$-wave case. Nevertheless, passing from the CPU to the
GPU realization {\em on the same PC} allows one to obtain a quite
impressive acceleration of about 10 in the solution of the
18-channel scattering problem.
In a realistic calculation of the observables for elastic $nd$
scattering, it is necessary to include up to 62 spin-angular
channels. For the current numerical scheme, the efficiency of the GPU
optimization decreases with increasing number of channels. As an
example, we present the results of the complete calculation for
elastic $nd$ scattering with the Nijmegen I $NN$ potential at an
energy of 22.7~MeV. In Fig.~\ref{diff_cs}, as an illustration of the
accuracy of our approach, we display the differential cross
section in comparison with the results of the conventional
approach~\cite{gloeckle}.
\begin{figure}[h!]
\centerline{ \epsfig{file=fig9.eps,width=0.7\columnwidth}}
\caption{ The differential cross section of elastic $nd$ scattering
at an energy of 22.7~MeV calculated with the Nijmegen I $NN$ potential in
the wave-packet formalism using GPU computing (solid curve), in
comparison with the results from ref.~\cite{gloeckle} (dashed
curve).}
\label{diff_cs}
\end{figure}
The complete calculation, including 62 spin-angular channels and all
states with total angular momentum up to $J_{\rm max}=17/2$, took
about 30~min on our desktop PC. The runtimes for the separate steps are given in
Table~1.
\begin{table}[h!]
\centering \caption{Runtime (in sec) for separate steps of complete
solutions of $nd$ scattering problem}
\begin{tabular}{llcc}
&Step & CPU time & GPU time\\ \hline
1. & Processing input data & 30 & 30\\
2a.& Pre-selection & 12 & 1.9 \\
2b.& Calculation of nonzero elements & 4558 & 524\\
4. &Iterations and Pad\'e summation & 1253&1250\\ \hline
&Total time & 5852 & 1803 \\ \hline
\end{tabular}
\end{table}
As seen from the Table, the time for the calculation of the
permutation matrix elements (steps 2a and 2b) is shortened by a
factor of about 8.7 as a result of the GPU optimization. However, the major
part of the computational time
is now spent not on calculating
the permutation matrix but on the subsequent iterations of the
resulting matrix equation, i.e.\ on the multiplication of the kernel matrix
by the column of the current solution. The iteration time takes about 69\% of the total solution time,
so that the total acceleration in this multichannel case is only 3.2.
It should be stressed that the current numerical scheme can be optimized further.
Each iteration here includes four
matrix multiplications: one multiplication by the diagonal matrix
$\mathbb G_1$, two multiplications by the block matrices $\mathbb O$ and
$\tilde {\mathbb V}_1$, and one multiplication by the sparse matrix
$\mathbb P^0$; most of the time in the iteration process
is taken by the multiplication of a sparse matrix by a (dense) vector.
It is clear that the iteration algorithm can also be
parallelized and implemented on the GPU. In this paper, we did not
address this task and focused mainly on the GPU optimization for
the calculation of the integral kernel of the Faddeev equation only.
However, for the multiplication of a sparse matrix by a vector there
are standard routines, including ones implemented on GPUs. Thus,
if the GPU optimization is applied to the iteration step as well, the
runtime of the complete solution can be reduced further by a factor of 2--3.
It is also clear that
the employment of more powerful specialized graphics processors would
lead to an even considerably greater acceleration of the
calculations.
\subsection{Further development}
It seems evident that the described GPU approach will also be
effective in the solution of the integral equations describing
the scattering in systems of four and more particles
(the Faddeev--Yakubovsky equations). The main difference of these more
complicated problems from the three-body scattering problem
considered here is the increased number of channels to be included
and also the higher dimension of the integrals that define the
kernel matrix elements. As a result, the matrix dimension $M$
and the computational time per matrix element $t_0$ will
increase. However, the degree of sparseness of the permutation
matrices and the scheme for the calculation of the kernel matrix elements will
remain the same as in the three-body case. Thus these two
factors, i.e.\ the growth of $M$ and $t_0$, will, according to our results,
provide an even greater GPU acceleration than in the three-body
case.
However, when the matrix dimension $M$ reaches a certain limit, no
packing will be able to fit all the nonzero elements into the RAM of a
computer. In such a case, another strategy should be chosen: one
divides the channel space into two parts --- the major and the minor
channels --- according to their influence on the resulting amplitude. The
minor channels would give only a small correction
to the solution obtained in the subspace of the major channels.
Then, using a convenient projection formalism (such as the
well-known Feshbach formalism), one can account for the minor-channel
contribution in a matrix kernel defined in the subspace of the major
channels as some additional effective interaction containing the
total resolvent in the minor-channel subspace. We have shown
previously \cite{KPR_Ann, Moro} that the basis dimension for the
minor channels can be reduced considerably (for a particular
problem, by a factor of 10 \cite{Moro}) without loss of
accuracy in the complete solution.
We hope that such a combined approach, together with
multithread GPU computing, will lead to further progress in the
exact numerical solution of quantum few-body scattering problems
on a desktop PC.
\section{Conclusion}
In the present paper we have checked the applicability of the
GPU-computing technique to few-body scattering calculations. For
this purpose we have used the wave-packet continuum discretization
approach, in which
the continuous spectrum of the
Hamiltonian is approximated by the discrete spectrum of $L_2$-normalizable
wave-packet states. Projecting all the wave
functions and scattering operators onto such a discrete basis, we
arrive at a simple linear matrix equation with non-singular matrix
elements instead of the complicated multi-dimensional singular
equations of the initial formulation of the few-body scattering problem.
Moreover, the matrix elements of all the constituents of this
equation are calculated independently, which makes the numerical
scheme highly parallelizable.
The price for this matrix reduction is the high dimension of the
matrix kernel. In the case of a fully realistic problem, the dimension
of the kernel matrix turns out to be so high that the matrix cannot be
placed into the RAM of a desktop PC. In addition, the calculation of all the
kernel matrix elements requires a huge computing time in sequential
one-thread execution. However, we have developed efficient
parallelization algorithms which allow one to perform the basic
calculations in multithread GPU execution and to reach a noticeable
acceleration of the calculations.
It is shown that the acceleration
obtained due to the GPU realization depends on the dimension of the
basis used and on the complexity of the problem. Thus, in the three-body
problem of elastic $nd$ scattering with a semi-realistic
$s$-wave $NN$ interaction, we obtained a 50-fold acceleration for the
whole solution, while for a separate part of the numerical scheme
(the most time-consuming on the CPU) the acceleration exceeds 100.
In the case of the fully realistic $NN$ interaction for the
$nd$ scattering problem (including up to 62 spin-angular channels), the
acceleration of the permutation matrix calculation is about 8.7,
and the full calculation of the differential cross section is
accelerated by a factor of 3.2. However, the numerical scheme
allows further optimization, which will be done in our subsequent
investigations. Nevertheless, the present study has shown that the
implementation of GPU calculations in few-body scattering problems
is very promising and opens new possibilities for a wide
range of researchers.
It should be stressed that the developed GPU-accelerated discrete
approach to the solution of quantum scattering problems can be
transferred without major changes to other areas of quantum physics,
as well as to a number of important areas of classical physics
involving the solution of multidimensional problems for continuous-media
studies.
{\bf Acknowledgments} This work has been supported
partially by the Russian Foundation for Basic Research,
grant No. 13-02-00399.
\section{Introduction}
This paper investigates the statistical properties of random matrices of the form $\mathbf{W} = \mathbf{H} \mathbf{H}^\dagger$, where $\mathbf{H}$ is $2 \times 2$ with independent entries
\begin{align} \label{eq:Basic}
\mathbf{H}_{ij} \sim \mathcal{C N} (0, \phi_{ij}) , \quad \quad i, j = 1, 2 \; .
\end{align}
The distinguishing feature is that the variance profile, $\{\phi_{ij}\}_{i,j=1,2}$, is allowed to be \emph{arbitrary}.
Despite its apparent simplicity, it is remarkable that little is known about the statistical properties of such matrices, beyond specific examples. Most notable is the case where the variances factorize as $\phi_{i j} = \sigma_i \pi_j$, where the model bears a strong analogy with so-called ``Kronecker correlated'' models that have been studied extensively in communication theory (see, for example, \cite{hanlen2003capacity,shin2006capacity}) as well as in classical statistics (see, for example, \cite{James1964,muirhead2009aspects}). Such Kronecker models, as well as their numerous adaptations or extensions (e.g., \cite{jayaweera2003performance,wang2005capacity,jin2008transmit}), enjoy certain symmetry properties that allow their characterization by leveraging classical tools in multi-variate analysis, such as known matrix-variate integrals, hypergeometric functions of matrix arguments and zonal polynomials \cite{James1964,muirhead2009aspects,mehta2004random}. The model in (\ref{eq:Basic}) is fundamentally different, in that it does not readily lend itself to analysis via these classical techniques.
From a communication engineering perspective, models of the form (\ref{eq:Basic}) are useful since they can suitably characterize channels between multiple transmit and receive antennas that are arbitrarily distributed in space. These may include, among others, the so-called distributed antenna systems (DAS), which have recently attracted interest within the wireless communications community \cite{zhang2004capacity,saleh1987distributed,roh2002outage}. Despite the interest in DAS and the trends towards ever more heterogeneous and distributed network architectures, a precise understanding of such systems is still lacking, due in part to the scarcity of statistical results on the underlying random matrix model. Such results have mainly been established in the asymptotic regime where the dimensions of the random matrix $\mathbf{H}$ grow large (see \cite{zhang2004capacity,hachem2008clt}). These asymptotic results are rather complex and serve primarily as approximations for large-dimensional systems whose behavior may differ from that of finite ones.
In this paper, we present an exact characterization of random matrix models with arbitrary variance profiles, deriving for the first time new exact expressions for the joint distribution of (i) the random matrix $\mathbf{W}$, and (ii) its eigenvalues. While we focus on the $2 \times 2$ case, we demonstrate that the analysis is still rather complicated. A main challenge encountered in the derivations is that they involve the computation of certain integrals with respect to the group of $2 \times 2$ unitary matrices. These integrals are not classical, and we solve them by working with an explicit parametrization of the unitary group. Despite the complexity of the derivations, our results yield an exact and remarkably simple expression for the matrix density, along with a tractable expression for the eigenvalue density which reduces to particularly simple forms for various choices of the variance profile. Building upon these results, we further derive simple expressions for the distribution of the extreme eigenvalues, which are then leveraged to study the outage data rate of a dual-user multi-antenna communication system under different variance profiles. In particular, we show that asymmetry in the variance profile can significantly degrade the outage rate of systems with distributed antennas.
\section{Main results} \label{sec:main}
This section presents our key mathematical results.
\label{sec:gen}
\begin{theorem} \label{th:W}
Consider $\mathbf{W}=\mathbf{H}\mathbf{H}^{\dagger}=\bigl( \begin{smallmatrix}
w_{1} & w_{3}\\
w_{3}^\star & w_{2}
\end{smallmatrix} \bigr) \succeq \mathbf{0}$, with $\mathbf{H}$ as in (\ref{eq:Basic}), with $\phi_{ij}>0$ for $i,j=1,2$. Assume $\phi_{i1} \neq \phi_{i2}$ for some $i$. The probability density function (PDF) of $\mathbf{W}$ admits
\begin{align}
p(\mathbf{W}) &= \frac{K}{\pi} \, e^{-\frac{1}{2}\left(w_{1}s_{1}+w_{2}s_{2}\right)}\nonumber \\
& \times
\frac{\sinh{\left(\frac{1}{2}\sqrt{(w_{1}\epsilon_{1}-w_{2}\epsilon_{2})^2+
4|w_{3}|^{2}\epsilon_{1}\epsilon_{2}}\right)}}{\frac{1}{2}\sqrt{(w_{1}\epsilon_{1}-w_{2}\epsilon_{2})^2+4 |w_{3}|^{2}\epsilon_{1}\epsilon_{2}}} \label{eq:pw1}
\end{align}
where $K=\prod_{1\le i,j \le 2}\frac{1}{\phi_{ij}}$, $
s_{i}=\frac{1}{\phi_{i1}}+\frac{1}{\phi_{i2}}$, and
$\epsilon_{i}=\frac{1}{\phi_{i1}}-\frac{1}{\phi_{i2}}$.
\end{theorem}
\begin{proof}
See Appendix \ref{Ap:Th1Proof} for a complete proof. Briefly: $\mathbf{H}$ is decomposed as $\mathbf{H} = \mathbf{LQ}$, with $\mathbf{L}$ lower triangular and $\mathbf{Q}$ unitary, and after applying the corresponding Jacobian and integrating over the unitary group to eliminate $\mathbf{Q}$, we obtain the PDF of $\mathbf{L}$. Applying the variable transformation $\mathbf{W}=\mathbf{H}\mathbf{H}^{\dagger}=\mathbf{L}\mathbf{L}^{\dagger}$
leads to the result.
\end{proof}
\begin{theorem} \label{th:lambda} Assume $\epsilon_i \neq 0$ for some $i$. The joint PDF of the (ordered) eigenvalues $\lambda_1 \ge \lambda_2 >0$ of $\mathbf{W}$ admits
\begin{align}
p(\lambda_{1},\lambda_{2})&= 2 K \, (\lambda_{1}-\lambda_{2})^{2} \int_{0}^{\frac{\pi}{2}}e^{-\frac12 \nu(\lambda_{1},\lambda_{2},\kappa)} \, \nonumber \\
& \times \frac{\sinh\left(\frac12 \sqrt{\eta(\lambda_{1},\lambda_{2},\kappa)}\right)}{\sqrt{\eta(\lambda_{1},\lambda_{2},\kappa)}}\sin(2\kappa)d\kappa \, ,
\label{eq:plambda_integral}
\end{align}
where
\begin{align*}
\nu(\lambda_{1},\lambda_{2},\kappa) &= ({a_\kappa} \lambda_{1}+ {b_\kappa} \lambda_{2}) s_{1}
+({b_\kappa} \lambda_{1} + {a_\kappa} \lambda_{2} ) s_{2} , \\
\eta(\lambda_{1},\lambda_{2},\kappa) &=
({a_\kappa} \lambda_{1} + {b_\kappa} \lambda_{2})^{2} \epsilon_{1}^{2}
+ ({b_\kappa} \lambda_{1} + {a_\kappa} \lambda_{2})^{2} \epsilon_{2}^{2} \nonumber \\
& +2({a_\kappa} {b_\kappa} (\lambda_{1}-\lambda_{2})^{2}-\lambda_{1}\lambda_{2}) \epsilon_{1}\epsilon_{2} \,\, ,
\end{align*}
with ${a_\kappa} =\cos^{2}(\kappa)$ and ${b_\kappa}=\sin^{2}(\kappa)$ .
\end{theorem}
\begin{proof}
See Appendix \ref{Ap:Th2Proof}.
\end{proof}
\begin{remark}[Equivalence of the variance profile]
\label{rem:sym}
Let $\mathbf{\Phi}=(\phi_{ij})_{i,j=1,2}$ be the matrix defining the variance profile associated with $\mathbf{W}$. Since $\mathbf{H}\mathbf{H}^\dagger$ and $\mathbf{H}^\dagger\mathbf{H}$ share the same eigenvalues, it is equivalent to consider $\mathbf{\Phi}$ or $\mathbf{\Phi}^\mathrm{T}$.
\end{remark}
\subsection{Partially asymmetric variances}
Our main results---the PDF of $\mathbf{W}$ and of its eigenvalues---have been given for a general variance profile. Consider now the special case where asymmetry in the variances is only partially allowed; specifically, consider $\phi_{i1} = \phi_{i2}$ for some $i$.
Assume without loss of generality (by symmetry of the PDF) that $\phi_{21} = \phi_{22}$ and, therefore, $\epsilon_2 = 0$.
\begin{corollary} \label{cor:epsilon}
Consider the case $\phi_{21} = \phi_{22} \triangleq \phi_3$ and define $\phi_1 \triangleq \min(\phi_{11},\phi_{12})$, $\phi_2 \triangleq \max(\phi_{11},\phi_{12})$, with $\phi_1 \neq \phi_2$ (hence $\epsilon_1 \neq 0$). The PDF of $\mathbf{W}=\mathbf{H}\mathbf{H}^{\dagger} \succeq \mathbf{0}$ admits
\begin{align}
\label{eq:w_part}
p(\mathbf{W})= \frac{K}{\pi} e^{-\frac{1}{2}\left(w_{1}s_{1}+w_{2}s_{2}\right)} \,
\frac{\sinh{\left(\frac{1}{2}w_1|\epsilon_1|\right)}}{\frac{1}{2}w_1|\epsilon_1|} ,
\end{align}
and the joint PDF of its (ordered) eigenvalues reduces to
\begin{align}
p(\lambda_{1},\lambda_{2})=& \nonumber \\
& \hspace{-1.3cm} \frac{ \, e^{-\frac{\lambda_1+\lambda_2}{\phi_3}} }{(\phi_2-\phi_1)\phi_3^2} \det \left(\lambda_i^{j-1} \right)_{i,j=1,2} \det \left( g(\lambda_j)^{i-1} \right)_{i,j=1,2},
\label{eq:lambda_part_case}
\end{align}
with $g(x) \triangleq \Ei((1/\phi_3-1/\phi_2)x) - \Ei((1/\phi_3-1/\phi_1)x)$ and $\Ei(x) = - \int_{-x}^{\infty} \frac{e^{-t}}{t}dt$ the exponential integral function.
Furthermore, the cumulative distribution function (CDF) of the minimum eigenvalue of $\mathbf{W}$ and the CDF of the maximum eigenvalue of $\mathbf{W}$ are given in (\ref{eq:Fmin}) and (\ref{eq:Fmax}) (top of the next page) for $\phi_1, \phi_2, \phi_3$ all distinct.
\end{corollary}
\begin{figure*}[!]
\normalsize
\setcounter{equation}{5}
\begin{equation}
\label{eq:Fmin}
F_{\lambda_\mathrm{min}}(x) = \mathbb{P}(\lambda_\mathrm{min}\le x) =1-\frac{e^{-x/\phi_3}}{\phi_2 - \phi_1}\left( \phi_2e^{-x/\phi_2} - \phi_1e^{-x/\phi_1} + x \left( \Ei(-x/\phi_2)- \Ei(-x/\phi_1) \right)\right)
\end{equation}
\begin{align}
\label{eq:Fmax}
F_{\lambda_\mathrm{max}}(x)=\mathbb{P}(\lambda_\mathrm{max}\le x)
&= \frac{1}{\phi_2-\phi_1} \biggl((1-e^{-x/\phi_3})\left(\phi_2(1-e^{-x/\phi_2})-\phi_1(1-e^{-x/\phi_1}) \right) \biggr. \nonumber\\
& \hspace{+1mm} \biggl. + x e^{-x/\phi_3}\left(-g(x)+\Ei(-x/\phi_2)-\Ei(-x/\phi_1)+\log \left| \frac{\phi_3-\phi_2}{\phi_3-\phi_1}\right|\right) \biggr)
\end{align}
\setcounter{equation}{7}
\vspace*{4pt}
\hrulefill
\end{figure*}
\begin{proof}
A sketch of the proof is given in Appendix \ref{app:sec}.
\end{proof}
Note the remarkable simplicity of both the joint eigenvalue PDF and the marginal CDFs of the extreme eigenvalues in this special case, which retain in part the flexibility of the general model, with $3$ arbitrary variances rather than $4$.
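The closed forms above are easy to validate numerically. The following
Monte Carlo sketch (our illustration in plain C++17, where
\texttt{std::expint} provides $\Ei$) samples $\mathbf{H}$ with a
partially asymmetric profile, forms $\mathbf{W}=\mathbf{H}\mathbf{H}^\dagger$,
and compares the empirical CDF of $\lambda_\mathrm{min}$ with (\ref{eq:Fmin}):
\begin{verbatim}
#include <cmath>
#include <complex>
#include <cstdio>
#include <random>

int main() {
    const double p1 = 0.5, p2 = 1.5, p3 = 1.0; // phi_1, phi_2, phi_3
    std::mt19937 gen(42);
    std::normal_distribution<double> g(0.0, 1.0);
    auto cn = [&](double var) {                // CN(0, var) sample
        return std::complex<double>(g(gen), g(gen))
               * std::sqrt(var / 2.0);
    };
    const double x = 0.2;                      // evaluation point
    const int S = 200000;                      // number of samples
    int count = 0;
    for (int s = 0; s < S; ++s) {
        std::complex<double> h11 = cn(p1), h12 = cn(p2),
                             h21 = cn(p3), h22 = cn(p3);
        double w1 = std::norm(h11) + std::norm(h12); // W = H H^dagger
        double w2 = std::norm(h21) + std::norm(h22);
        std::complex<double> w3 = h11 * std::conj(h21)
                                + h12 * std::conj(h22);
        double tr = w1 + w2, det = w1 * w2 - std::norm(w3);
        double lmin = 0.5 * (tr - std::sqrt(tr * tr - 4.0 * det));
        if (lmin <= x) ++count;
    }
    double F = 1.0 - std::exp(-x / p3) / (p2 - p1) *
        (p2 * std::exp(-x / p2) - p1 * std::exp(-x / p1)
         + x * (std::expint(-x / p2) - std::expint(-x / p1)));
    std::printf("empirical %.4f  analytic %.4f\n",
                (double)count / S, F);
}
\end{verbatim}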
\begin{remark}[On the tail of the extreme eigenvalue distribution]
\label{rem:F_exp}
In the setting of \emph{Corollary} \ref{cor:epsilon}, we have the following expansions for $x$ in the neighborhood of $0$:
\begin{align}
\label{eq:fmin_exp}
F_{\lambda_\mathrm{min}}(x) &= \left(\frac{1}{\phi_3}+\frac{\log \phi_2-\log \phi_1}{\phi_2-\phi_1} \right) x + o(x) , \\
\label{eq:fmax_exp}
F_{\lambda_\mathrm{max}}(x) &= \frac{1}{12}\frac{1}{\phi_1 \phi_2 \phi_3^2}x^4 + o(x^4).
\end{align}
These results are obtained by basic algebra, and making use of \cite[eq. 8.214]{gradshteyn1965table}. The expressions are remarkably simple and shed light on how the variance profile affects the tail of the extreme eigenvalue distributions. For example, assuming the total variance is normalized, (\ref{eq:fmax_exp}) suggests that a strong asymmetry in the variance profile---i.e., some variances substantially smaller than others---leads to a more heavy-tailed distribution $F_{\lambda_\mathrm{max}}$, as compared to a more uniform profile. The insights brought by these simple expressions are further illustrated in Section \ref{sec:app}, where we use $F_{\lambda_\mathrm{min}}$ to study the outage data rate of a communication system with distributed antennas.
\end{remark}
\subsection{Connection to Kronecker correlated models}
It is instructive to relate the random matrix $\mathbf{W}$ to Kronecker correlated models, which are commonly considered in multi-antenna communications \cite{hanlen2003capacity,mckay2007performance,shin2006capacity,chiani2003capacity}. For such models, the channel matrix can be described as
\begin{align*}
\mathbf{H}_K = \mathbf{R}^{1/2}\mathbf{H}_w \mathbf{T}^{1/2},
\end{align*}
where $\left(\mathbf{H}_w \right)_{ij}$ are independent $\mathcal{CN}(0,1)$, while $\mathbf{R}$ and $\mathbf{T}$ are non-negative definite. Denoting $\mathbf{U}_\mathbf{R}$ (resp.\@ $\mathbf{U}_\mathbf{T}$) an eigenbasis of $\mathbf{R}$ (resp.\@ $\mathbf{T}$) and $r_i$ (resp.\@ $t_i$) the $i$-th eigenvalue of $\mathbf{R}$ (resp.\@ $\mathbf{T}$), $\mathbf{H}_K$ can be written (in the $2\times 2$ case) as \cite{weichselberger2006stochastic}
\begin{align*}
\mathbf{H}_K = \mathbf{U}_\mathbf{R} \left( {r_1^{1/2} \choose r_2^{1/2}}(t_1^{1/2}, t_2^{1/2}) \odot \mathbf{H}_w \right) \mathbf{U}_\mathbf{T},
\end{align*}
where $\odot$ denotes the Hadamard (entry-wise) matrix product.
Furthermore, it can be shown that the eigenvalue distribution of $\mathbf{W}_K$, defined as $\mathbf{W}_K = \mathbf{H}_K \mathbf{H}_K^\dagger$, depends on $\mathbf{R}$ and $\mathbf{T}$ only through their eigenvalues (see \cite{hanlen2003capacity,mckay2007performance,shin2006capacity}). Hence our result in \emph{Theorem} \ref{th:lambda} subsumes the eigenvalue distribution of Kronecker correlated models as a special case.
\section{Outage performance of a dual-user communication system with distributed antennas}\label{sec:app}
We now demonstrate the usefulness of the mathematical results exposed above, through a concrete communications application example. Consider a communication system in which 2 single-antenna users (transmitters) communicate with a receiver comprising 2 distributed antennas. Rayleigh fading is assumed, with shadowing neglected, so that the communication channel is of the form $\mathbf{H}$ in (\ref{eq:Basic}) with variance profile $\phi_{ij}=D_{ij}^{-\nu}$, where $\nu$ is the path loss exponent and $D_{ij}$ the distance between transmit antenna $j$ and receive antenna $i$. Thus, the placement of the antennas determines the channel variance profile. For instance, if both transmitters (i.e., users) are located at equal distance from receive antenna $i$, then $\phi_{i1}=\phi_{i2}$, which corresponds to the setting of \emph{Corollary} \ref{cor:epsilon}.
We further assume that the receiver has perfect knowledge of $\mathbf{H}$, while the transmitters do not have such knowledge and send independent data with a total transmit power $P$. The noise at each receive antenna is assumed independent $\mathcal{C N}(0, \sigma_n^2)$, and we define the transmit signal-to-noise ratio (SNR) as $\rho \triangleq P /\sigma_n^2$. We further assume that the total power gain of the channel is fixed, with $\mathbb{E} \left[ {\rm tr}\left( \mathbf{H} \mathbf{H}^\dagger \right) \right]= \sum_{1 \le i,j \le 2} \phi_{ij} =4$.
Denoting $\mathbf{x}$ the vector of transmitted signals and $\mathbf{n}$ the additive noise, the received signal $\mathbf{y}$ takes the form
\begin{align*}
\mathbf{y}=\mathbf{H}\mathbf{x} + \mathbf{n}.
\end{align*}
For detection, a linear zero-forcing receiver is considered. Such receivers are popular because of their low complexity \cite{gore2002transmit}, and their performance is known to approach that of minimum mean-squared error receivers at high SNR \cite{kumar2009asymptotic}. The estimate $\hat{\mathbf{x}}$ of the transmitted signal $\mathbf{x}$ then becomes
\begin{align*}
\hat{\mathbf{x}} = \left( \mathbf{H}^\dagger\mathbf{H} \right)^{-1} \mathbf{H}^\dagger \mathbf{ y} = \mathbf{x} + \left(\mathbf{H}^\dagger\mathbf{H} \right)^{-1} \mathbf{H}^\dagger \mathbf{n},
\end{align*}
and the post-processing SNR for the $i$-th user is \cite{heath2005multimode}
\begin{align*}
\mathrm{SNR}_i = \frac{\rho}{\left[\left(\mathbf{H}^\dagger \mathbf{H}\right)^{-1}\right]_{ii}}
= \frac{\rho}{\left[\mathbf{W}^{-1}\right]_{ii}}.
\end{align*}
Here, we are interested in the outage data rate, defined as the largest transmission rate (in bits/s/Hz) that can be reliably guaranteed for both users (simultaneously) at least $(1-\epsilon)\times 100\%$ of the time, i.e.,
\begin{align}
R_\mathrm{out}(\epsilon)=\underset{R\ge0}{ \sup} \left(R:P_\mathrm{out}(R) < \epsilon\right),
\label{eq:def_r}
\end{align}
with $\epsilon$ being the prescribed maximum outage level, and $P_\mathrm{out}(R)$ denoting the outage probability for a given target rate $R$. That is, $P_\mathrm{out}(R)$ reflects the probability that a reliable transmission at rate $R$ cannot be guaranteed to both users, given by
\begin{align*}
P_\mathrm{out}(R) = \mathbb{P}\left(\log_2\left(1+\mathrm{SNR}_\mathrm{min} \right) \le R \right) ,
\end{align*}
where $\mathrm{SNR}_\mathrm{min}=\min(\mathrm{SNR}_1,\mathrm{SNR}_2)$. Since \cite{heath2005multimode} $\frac{1}{\left[\mathbf{W}^{-1}\right]_{ii}} \ge \lambda_\mathrm{min}, i = 1, 2,$
with $\lambda_\mathrm{min}$ the minimum eigenvalue of $\mathbf{W}$, it follows that $\mathrm{SNR}_\mathrm{min} \ge \rho \lambda_\mathrm{min}$, which yields the upper bound
\begin{align*}
P_\mathrm{out}(R) &\le \mathbb{P} \left(\log_2\left(1+\rho\lambda_\mathrm{min} \right) \le R \right) \nonumber \\
&= F_{\lambda_\mathrm{min}}\left(\frac{1}{\rho} \left(2^R-1\right) \right) \; .
\end{align*}
Considering now the setting of \emph{Corollary} \ref{cor:epsilon} ($\phi_{21}=\phi_{22}$) and a small outage level, we can use the expansion of $F_{\lambda_\mathrm{min}}(\cdot)$ (Remark \ref{rem:F_exp}) to write
\begin{align*}
P_\mathrm{out}(R)
\lesssim \frac{1}{\rho}\left(2^{R}-1\right) \underbrace{\left(\frac{1}{\phi_3}+\frac{\log \phi_2-\log \phi_1}{\phi_2-\phi_1} \right)}_{a(\mathbf{\Phi})} \; ,
\end{align*}
where the influence of the variances is isolated through the factor $a(\mathbf{\Phi})$. Also note that if $\phi_2 \to \phi_1$, $a(\mathbf{\Phi}) \to 1/\phi_3 + 1/\phi_1$.
The results above then suggest the following lower bound for the outage data rate $R_\mathrm{out}(\epsilon)$:
\begin{align}
R_\mathrm{out}(\epsilon) \ge \log_2 \left( 1+ \rho F_{\lambda_\mathrm{min}}^{-1}(\epsilon) \right) \triangleq \check {R}_\mathrm{out}(\epsilon),
\label{eq:cout_exp}
\end{align}
where $F_{\lambda_\mathrm{min}}^{-1}$ denotes the inverse function of $F_{\lambda_\mathrm{min}}$, while in the setting of \emph{Corollary} \ref{cor:epsilon},
\begin{align}
R_\mathrm{out}(\epsilon) \gtrsim \log_2 \left(1+ \rho \frac{\epsilon}{a(\mathbf{\Phi})} \right) \triangleq {\tilde R}_\mathrm{out}(\epsilon)
.
\label{eq:cout_exp2}
\end{align}
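For completeness, a short C++17 sketch (our illustration) evaluates both
lower bounds for the asymmetric profile used in Fig.~\ref{fig:r_vs_snr}
below: $F_{\lambda_\mathrm{min}}$ from (\ref{eq:Fmin}) is inverted by
bisection for $\check{R}_\mathrm{out}$ in (\ref{eq:cout_exp}), while the
factor $a(\mathbf{\Phi})$ of Remark~\ref{rem:F_exp} gives
${\tilde R}_\mathrm{out}$ in (\ref{eq:cout_exp2}) directly:
\begin{verbatim}
#include <cmath>
#include <cstdio>

double Fmin(double x, double p1, double p2, double p3) {
    return 1.0 - std::exp(-x / p3) / (p2 - p1) *
        (p2 * std::exp(-x / p2) - p1 * std::exp(-x / p1)
         + x * (std::expint(-x / p2) - std::expint(-x / p1)));
}

int main() {
    const double p1 = 0.01, p2 = 0.99, p3 = 1.5; // profile of Fig. 1
    const double rho = 1000.0, eps = 0.01;       // SNR 30 dB, 1% level
    double lo = 0.0, hi = 10.0;                  // bisect F^{-1}(eps)
    for (int it = 0; it < 80; ++it) {
        double mid = 0.5 * (lo + hi);
        (Fmin(mid, p1, p2, p3) < eps ? lo : hi) = mid;
    }
    double a = 1.0 / p3 + (std::log(p2) - std::log(p1)) / (p2 - p1);
    std::printf("R_check = %.3f  R_tilde = %.3f bits/s/Hz\n",
                std::log2(1.0 + rho * lo),       // bound (9)-style
                std::log2(1.0 + rho * eps / a)); // analytic bound
}
\end{verbatim}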
Fig.\@ \ref{fig:r_vs_snr} shows the outage data rate as a function of the SNR $\rho$, for an asymmetric variance profile $\mathbf{\Phi}=\left( \begin{smallmatrix}
0.01 & 0.99\\
1.5 & 1.5
\end{smallmatrix} \right)$, and three maximum outage levels $\epsilon=1\%, 10\%$ and $50\%$. For each maximum outage level, we plot (i) the true empirical rate $R_\mathrm{out}(\epsilon)$ in (\ref{eq:def_r})---obtained from the average over $10^5$ realizations of the channel, (ii) the lower bound $\check {R}_\mathrm{out}(\epsilon)$ in (\ref{eq:cout_exp})---where we invert $F_{\lambda_\mathrm{min}}$ from (\ref{eq:Fmin}) numerically, and (iii) the approximate analytical lower bound ${\tilde R}_\mathrm{out}(\epsilon)$ in (\ref{eq:cout_exp2}).
\begin{figure}[h]
\centering
\includegraphics[width=0.85\columnwidth]{Figure1}
\caption{\small Outage data rate vs.\@ SNR for different maximum outage levels, $\epsilon=1\%, 10\%$ and $50\%$, with an asymmetric variance profile.}
\label{fig:r_vs_snr}
\end{figure}
Notice the tightness of the lower bound $\check R_\mathrm{out}(\epsilon)$ for all the considered maximum outage levels and all SNRs. Moreover, the analytical (approximate) bound $\tilde R_\mathrm{out}(\epsilon)$, based on the expansion (\ref{eq:fmin_exp}), shows excellent accuracy for small outage levels ($\epsilon=1\%, 10\%$) as expected, since it corresponds to evaluating $F_{\lambda_\mathrm{min}}(x)$ fairly deep in the tail. However, for higher outage levels (e.g., $\epsilon=50\%$), this bound becomes less reliable.
For $\epsilon$ small, the effect of the variance profile on the outage data rate can be analyzed from the approximate bound $\tilde R_\mathrm{out}(\epsilon)$, given its remarkable tightness and simplicity.
Recalling the normalization on the total power gain of the channel, i.e., $\phi_1+\phi_2 = 4-2\phi_3$, it is straightforward to verify that $a(\mathbf{\Phi})$ is continuous in the parameters $\phi_1,\phi_2,\phi_3$,
and that, for $\phi_3$ fixed, the mapping $(\phi_1,\phi_2) \mapsto a(\mathbf{\Phi})$ is minimum when $\phi_1=\phi_2$ and maximum in the limit $\phi_1 \rightarrow 0$ (hence $\phi_2 \rightarrow 4-2\phi_3$). This immediately implies that, for $\phi_3$ fixed, the outage data rate is maximum when $\phi_1=\phi_2=2-\phi_3$, which represents the ``most symmetric'' profile under the total variance normalization, and that any departure from such symmetry entails a performance loss. To quantify the range of such loss, we now consider the two extreme cases, ``symmetric'' $\mathbf{\Phi}^\mathrm{sym} = ( \begin{smallmatrix}
2-\phi_3 & 2-\phi_3\\
\phi_3 & \phi_3
\end{smallmatrix} )$ and ``asymmetric'' $\mathbf{\Phi}^\mathrm{asym} = ( \begin{smallmatrix}
0.01 & 4-2\phi_3-\phi_1\\
\phi_3 & \phi_3
\end{smallmatrix} )$ profiles, and define the fractional loss in the outage data rate due to asymmetry, i.e., ${ \tilde{FL}}(\epsilon) \triangleq \frac{{\tilde R}_\mathrm{out}^\mathrm{sym}(\epsilon)-{\tilde R}_\mathrm{out}^\mathrm{asym}(\epsilon)}{{\tilde R}_\mathrm{out}^\mathrm{sym}(\epsilon)}$.
In Table \ref{tab:r_vs_phi}, for $\epsilon = 1\%$, we report the value of $\tilde{FL}(\epsilon)$ for increasing values of $\phi_3 \in (0,2)$, along with the corresponding true fractional loss $FL (\epsilon)$, computed using the true outage data rates, obtained empirically by averaging over $10^5$ realizations. The numbers reveal a striking degradation in the outage data rate due to asymmetry, e.g., up to $36\%$ loss for $\phi_3 = 1.6$ and SNR$=30$dB.
\renewcommand{\arraystretch}{1.5}
\begin{table}[htb]
\caption{Fractional Loss of the Outage Data Rate Associated with Asymmetric Variance Profiles, at SNR$=30\mathrm{dB}$ and $\epsilon=1\%$.}
\label{tab:r_vs_phi}
\begin{center}
\begin{tabular}{c|cccccccc}
$\phi_3$ & 0.01 & 0.5 & 1 & 1.2 & 1.4 & 1.6 & 1.8 & 1.95 \\
\hline
$FL (\epsilon)$ & 0\% & 18\% & 28\% & 30\% & 32\% &\bf{36\%} & 33\% & 24\%\\
${ \tilde{FL}}(\epsilon)$ & 1\% & 19\% & 27\% & 29\% & 33\% & \bf{33\%} & 33\% & 23\%\\
\end{tabular}
\end{center}
\end{table}
Physically, it implies that, assuming the two transmitting users are equidistant from receive antenna 2, their position relative to receive antenna 1 is crucial: if the distances from the two users to receive antenna 1 are very different (asymmetric case, $\phi_2/\phi_1 \gg 1$), a significantly lower outage data rate is expected as compared to the case where both users are equidistant from receive antenna 1 (symmetric case, $\phi_1=\phi_2$).
\section{Introduction}
In machine learning, pure applications of
MDL are rare, partially because of the difficulties one encounters
when trying to define an adequate model code and data-to-model code,
and partially because the operational aspects are poorly
understood. We
analyze aspects of both the power and the perils of
MDL precisely and formally. Let us first resurrect a familiar
problem from our childhood to illustrate some of the issues involved.
The process of solving a jigsaw puzzle involves an
\emph{incremental reduction of entropy}, and this
serves to illustrate the analogous features of
the learning problems which are the main issues of this work.
Initially, when the pieces come out
of the box they have a completely random ordering. Gradually we
combine pieces, thus reducing the entropy and increasing the order until the
puzzle is solved. In this last stage we have found a maximal
ordering. Suppose that Alice and Bob both start to solve two
versions of the same puzzle, but that they follow different
strategies. Initially, Alice sorts all pieces according to
color, and Bob starts by sorting the pieces according to
shape. (For the sake of argument we assume that the
puzzle has no recognizable edge pieces.) The crucial insight,
shared by experienced puzzle aficionados, is that Alice's
strategy is efficient whereas Bob's strategy is not and is in fact even
worse than a random strategy. Alice's strategy is efficient,
since the probability that pieces with about the same color match is
much greater than the unconditional probability of a match.
On the other hand the information about the shape of the pieces
can only be used in a relatively late stage of the puzzle process.
Bob's effort in the beginning is a waste of time, because he must
reorder the pieces before he can proceed to solve the puzzle. This
example shows that if the solution of a problem depends on finding
a \emph{maximal} reduction of entropy this does not mean that
\emph{every} reduction of entropy brings us closer to the solution.
Consequently reduction of entropy is not in all cases a good strategy.
\subsection{Entropy Versus Kolmogorov Complexity}
Above we use ``entropy''
in the often used, but inaccurate, sense of ``measure of unorderedness
of an individual arrangement.'' However, entropy is a measure of
uncertainty associated with a random variable, here a set of arrangements
each of which has a certain probability of occurring.
The entropy of every individual arrangement is by definition zero.
To circumvent this problem, often the notion of ``empirical entropy''
is used, where certain features like letter frequencies of the individual
object are analyzed, and the entropy is taken with respect to the
set of all objects having the same features. The result obviously
depends on the choice of what features to use: no features gives
maximal entropy and all features (determining the individual object
uniquely) gives entropy zero again. Unless one has knowledge of the
characteristics of a definite random variable producing
the object as a typical outcome, this procedure gives arbitrary and
presumably meaningless, results. This conundrum arises since classical
information theory deals with random variables and the communication
of information. It does not deal with the information (and the complexity
thereof) in an individual object independent of an existing
(or nonexisting) random variable
producing it.
To capture the latter notion precisely one has to use
``Kolmogorov complexity'' instead of ``entropy,'' and we will do so in our
treatment. For now, the ``Kolmogorov complexity'' of a file
is the number of bits in the ultimately compressed version of the file
from which the original can still be losslessly extracted by a fixed general
purpose decompression program.
\subsection{Learning by MDL}
Transferring the jigsaw puzzling insights to the general case
of learning algorithms using the minimum description length
principle (MDL), \cite{Ri83,BRY,Ri07}, we observe that although it
may be true that
the maximal compression yields the best solution,
it may still not be true that every incremental compression brings
us closer to the solution. Moreover, in the case of many MDL problems there
is a complicating issue in the fact that the maximal compression
cannot be computed.
More formally,
in constrained model selection the model is taken from a given
model class. Using two-part MDL codes for the given data,
we assume that the shortest two-part code for the data,
consisting of the model code and the data-to-model code, yields the best
model for the data. To obtain the shortest code, a natural way is to
approximate it by a process of finding ever shorter candidate two-part
codes.
Since we start with a finite two-part code, and
with every new candidate two-part code we decrease the code length,
eventually we must achieve the shortest two-part code (assuming that
we search through all two-part codes for the data). Unfortunately,
there are two problems: (i) the computation to find the next shorter
two-part code may be very long, and we may not know how long; and
(ii) we may not know when we have reached the shortest two-part code:
with each candidate two-part code there is the possibility that further
computation may yield yet a shorter one. But because of item (i)
we cannot a priori bound the length of that computation.
There is also the possibility that the algorithm
will never yield the shortest two-part code because it
considers only part of the search space or gets trapped in
a nonoptimal two-part code.
\subsection{Results}
We show that for some MDL algorithms the sequence
of ever shorter two-part codes for the data converges in a finite number
of steps to the best model. However, for every MDL algorithm
the intermediate
models may not converge
monotonically in goodness. In fact, in the sequence of candidate two-part codes
converging to a (globally or locally)
shortest, it is possible that the models involved
oscillate from being good to bad.
Convergence is only monotone if the model-code parts
in the successive two-part codes are always the shortest (most compressed)
codes for the models involved. But this property cannot be
guaranteed by any effective method.
It is very difficult, if not impossible, to formalize the
goodness of fit of an individual model for individual data in
the classic statistics setting, which is probabilistic.
Therefore, it is impossible to express the practically important
issue above in those terms.
Fortunately, new developments
in the theory of Kolmogorov complexity \cite{Ko74,VV02}
make it possible to rigorously
analyze the questions involved, possibly involving noncomputable
quantities. But it is better to have a definite statement in a theory
than having no definite statement at all.
Moreover, for certain algorithms (like Algorithm Optimal MDL in
Theorem~\ref{alg.mdl}) we can guarantee that they satisfy
the conditions required, even though these are possibly noncomputable.
In Section~\ref{sect.dm}
we review the necessary notions from \cite{VV02}, both
to make the paper self-contained and to extend the definitions and notation
from the previously used singleton data to multiple data samples.
Theorem~\ref{theo.recoding} shows
that the use of MDL will be approximately invariant under recoding of the data.
The next two sections contain the main results:
Definition~\ref{def.MDLalg} defines the notion of an MDL algorithm.
Theorem~\ref{alg.mdl} shows that there exists such an MDL algorithm that
in the (finite) limit results in an optimal model.
The next statements are about MDL algorithms in general, also the ones
that do not necessarily result in an optimal MDL code.
Theorem~\ref{theo.approxim} states a sufficient condition
for improvement of the randomness deficiency (goodness of fit)
of two consecutive length-decreasing MDL codes.
This extends Lemma~V.2 of
\cite{VV02} (which assumes all programs are shortest) and
corrects the proof concerned.
The theory is applied and illustrated in Section~\ref{sect.single}:
Theorem~\ref{theo.fluctuate} shows by example
that a minor violation of the sufficiency
condition in Theorem~\ref{theo.approxim} can result in worsening
the randomness deficiency (goodness of fit)
of two consecutive length-decreasing MDL codes.
The special case of learning DFAs from positive examples is
treated in Section~\ref{sect.multi}. The main result shows,
for a concrete and computable MDL code, that a decrease in the length
of the two-part MDL code
does not imply a better model fit (see Section~\ref{sect.lmdl})
unless the decrease is sufficiently
large, as required in
Theorem~\ref{theo.approxim} (see Remark~\ref{rem.smc}).
\section{Data and Model}\label{sect.dm}
Let $x,y,z \in {\cal N}$, where
${\cal N}$ denotes the natural
numbers and we identify
${\cal N}$ and $\{0,1\}^*$ according to the
correspondence
\[(0, \epsilon ), (1,0), (2,1), (3,00), (4,01), \ldots \]
Here $\epsilon$ denotes the {\em empty word}.
The {\em length} $|x|$ of $x$ is the number of bits
in the binary string $x$, not to be confused with the {\em cardinality}
$|S|$ of a finite set $S$. For example,
$|010|=3$ and $|\epsilon|=0$, while $|\{0,1\}^n|=2^n$ and
$|\emptyset|=0$.
Below we will use the natural numbers and the binary strings
interchangeably. Definitions, notations, and facts we use about prefix codes,
self-delimiting codes, and Kolmogorov complexity, can be found
in \cite{LiVi97} and are briefly reviewed in Appendix~\ref{sect.prel}.
The emphasis is on binary sequences only for convenience;
observations in any alphabet can be encoded in binary in a way
that is theory neutral.
Therefore, we consider only data
$x$ in $\{0,1\}^*$.
In a typical statistical inference situation we are given
a subset of
$\{0,1\}^*$,
the data sample, and are required to infer
a model for the data sample.
Instead of $\{0,1\}^*$
we will consider
$\{0,1\}^{n}$ for some fixed but arbitrarily large $n$.
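For concreteness, the correspondence above is computable in both
directions; the following Python fragment (a sketch of ours, not part of
the formal development) implements it:
\begin{verbatim}
def nat_to_str(i):
    # The i-th binary string: (0, eps), (1, '0'), (2, '1'), (3, '00'), ...
    return bin(i + 1)[3:]   # binary of i+1 with the leading '1' removed

def str_to_nat(x):
    # Inverse correspondence, identifying strings and natural numbers.
    return int('1' + x, 2) - 1

assert [nat_to_str(i) for i in range(5)] == ['', '0', '1', '00', '01']
assert all(str_to_nat(nat_to_str(i)) == i for i in range(1000))
\end{verbatim}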
\begin{definition}
\rm
A {\em data sample} $D$ is a subset of $\{0,1\}^n$.
For technical convenience we want a model $M$ for $D$ to contain
information about the cardinality of $D$.
A {\em model} $M$ has the form $M = M' \bigcup \{\#i\}$,
where $M' \subseteq \{0,1\}^n$
and $i \in \{0,1\}^n$.
We can think of $i$ as the $i$th binary string in $\{0,1\}^{n}$.
Denote the cardinalities by lower case letters:
\[
d = |D|, \; m = |M'|.
\]
If $D$ is a data sample and {\em $M$ is a model for $D$} then
$D \subseteq M' \subseteq M$, $M=M' \bigcup \{\#d\}$,
and we write $M \sqsupset D$ or $D \sqsubset M$.
\end{definition}
Denote the {\em complexity
of a finite set} $S$ by
$K(S)$---the length (number of bits) of the
shortest binary program $p$ from which the reference universal
prefix machine $U$
computes a lexicographic listing of the elements of $S$ and then
halts.
That is, if $S=\{x_1 , \ldots , x_{d} \}$, the elements given
in lexicographic order, then
$U(p)= \langle x_1,\langle x_2, \ldots, \langle x_{d-1},x_d\rangle \ldots\rangle \rangle $.
The shortest program $p$,
or, if there is more than one such shortest program, then
the first one that halts in a standard dovetailed running of all programs,
is denoted by $S^*$.
The {\em conditional complexity} $K(D \mid M)$ of $D \sqsubset M$
is the length (number of bits) of the
shortest binary program $p$ from which the reference universal
prefix machine $U$
from input $M$ (given as a list of elements)
outputs $D$ as a lexicographically ordered
list of elements
and halts.
We have
\begin{equation}\label{eq57}
K(D \mid M)\le\log {m \choose d}+ O(1).
\end{equation}
The upper bound follows by considering a self-delimiting code of $D$
given $M$
(including the number $d$ of elements in $D$), consisting of
a $\lceil\log {m \choose d}\rceil$ bit long index
of $D$ in the lexicographic ordering of the number of ways to choose
$d$ elements from $M'=M-\{\#d\}$.
This code is called the
\emph{data-to-model code}.
Its length quantifies the maximal ``typicality,'' or ``randomness,''
any data sample $D$ of $d$ elements can have with respect
to model $M$ with $M \sqsupset D$.
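This data-to-model code is effective in both directions: given $M$, the
index of $D$ among the $d$-element subsets of $M'$ can be computed
explicitly. A small Python sketch (the function names are ours) uses the
combinatorial number system, which enumerates the $d$-subsets in
colexicographic rather than lexicographic order---any fixed effective
enumeration serves equally well:
\begin{verbatim}
from math import comb, log2

def data_to_model_bits(m, d):
    # Length log C(m, d) of the data-to-model code for a d-element
    # data sample inside an m-element model M'.
    return log2(comb(m, d))

def subset_index(positions):
    # Index of a d-subset of M', given the 0-based positions of its
    # elements in M' (combinatorial number system, colex order).
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(positions)))

# The two-element subsets of a 3-element M' receive indices 0, 1, 2:
assert [subset_index(s) for s in ([0, 1], [0, 2], [1, 2])] == [0, 1, 2]
\end{verbatim}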
\begin{definition}
\rm
The lack of typicality
of $D$ with respect to $M$
is measured by the amount by which $K(D \mid M)$
falls short of the length of the data-to-model code.
The {\em randomness deficiency} of $D \sqsubset M$
is defined by
\begin{equation}\label{eq:randomness-deficiency}
\delta (D \mid M) = \log {m \choose d} - K(D \mid M),
\end{equation}
for $D \sqsubset M$, and $\infty$ otherwise.
\end{definition}
The randomness deficiency can be a little smaller than 0, but not by more
than a constant.
If the randomness deficiency is not much greater than 0,
then there are no simple special properties that
single $D$ out from the majority of data samples of cardinality $d$
to be drawn from $M'=M-\{\#d\}$.
This is not just terminology: If $\delta (D \mid M)$ is small enough,
then $D$ satisfies {\em all} properties of low Kolmogorov complexity
that hold for the majority of subsets of cardinality $d$ of $M'$. To be precise:
A {\em property} $P$ represented by $M$ is a
subset of $M'$, and we say that
$D$ satisfies property $P$ if $D$ is a subset of $P$.
\begin{lemma}
Let $d,m,n$ be natural numbers, and let
$D \subseteq M' \subseteq \{0,1\}^{n}$,
$M=M' \bigcup \{\#d\}$,
$|D|=d, |M'|=m$, and let $\delta$ be a simple
function from the natural numbers to the real numbers,
that is, $K(\delta)$ is a constant;
for example, $\delta$ is $\log$ or $\sqrt{\cdot}$.
(i) If $P$ is a property satisfied by all $D \sqsubset M$ with
$\delta(D \mid M) \le \delta (n)$,
then $P$ holds for a fraction of at
least $1-1/2^{\delta(n)}$ of the subsets of $M' = M-\{\#d\}$.
(ii) Let
$P$ be a
property
that holds for a fraction of at least
$1-1/2^{\delta (n)}$ of the
subsets of $M'=M-\{\#d\}$.
There is a constant $c$, such that $P$ holds
for every $D \sqsubset M$
with $\delta (D \mid M)\le\delta (n)-K(P \mid M) -c$.
\end{lemma}
\begin{proof}
(i) By assumption, all data samples $D \sqsubset M$
with
\begin{equation}\label{eq.fraction}
K(D|M) \geq \log {m \choose d} - \delta (n)
\end{equation}
satisfy $P$.
There are only
\[
\sum_{i=0}^{\log {m \choose d} - \delta (n)-1}2^i
= {m \choose d} 2^{- \delta (n)}-1
\]
programs of length smaller than $\log {m \choose d} - \delta (n)$,
so there are at most that many $D \sqsubset M$
that do not satisfy \eqref{eq.fraction}.
There are ${m \choose d}$ sets $D$ that satisfy $D \sqsubset M$,
and hence a fraction of at least $1-1/2^{\delta(n)}$ of
them satisfy \eqref{eq.fraction}.
(ii)
Suppose $P$ does not hold for a data sample $D \sqsubset M$
and the randomness deficiency \eqref{eq:randomness-deficiency} satisfies
$\delta(D| M) \leq \delta (n) -K(P|M)-c$.
Then we can reconstruct $D$ from a description of $M$,
and $D$'s index $j$ in an effective enumeration of all subsets
of $M$ of cardinality $d$ for
which $P$ doesn't hold. There are at
most ${m \choose d} /2^{ \delta (n)}$ such
data samples by assumption, and therefore there are constants
$c_1,c_2$ such that
\[ K(D \mid M) \leq \log j+ c_1 \leq \log {m \choose d} - \delta ( n) + c_2. \]
Hence, by the assumption on the randomness deficiency of
$D$, we find $K(P|M) \leq c_2 -c$,
which contradicts the necessary nonnegativity
of $K(P|M)$ if we choose $c > c_2$.
\end{proof}
The {\em minimal randomness deficiency} function
of the data sample $D$ is defined by
\begin{equation}
\label{eq1}
\beta_D( \alpha) =
\min_{M} \{ \delta(D \mid M): M \sqsupset D , \; K(M) \leq \alpha \},
\end{equation}
where we set $\min \emptyset = \infty$.
The smaller $\delta(D \mid M)$ is, the more $D$ can be considered
as a {\em typical} data sample
from $M$. This means that a set $M$ for which $D$ incurs minimal
randomness deficiency, in the model class of contemplated sets of given maximal
Kolmogorov complexity, is a ``best fitting'' model
for $D$ in that model class---a most likely explanation, and $\beta_D(\alpha)$
can be viewed as a {\em constrained best fit estimator}.
\subsection{Minimum Description Length Estimator}
The length of the minimal two-part code for $D$
with model $M \sqsupset D$ consists
of the model cost $K(M)$ plus the
length of the index of $D$ in the enumeration of choices of $d$ elements
out of $m$ ($m=|M'|$ and $M'=M-\{\#d\}$).
Consider the model class of $M$'s of given maximal Kolmogorov
complexity $\alpha$.
The {\em MDL} function or {\em constrained MDL estimator} is
\begin{equation}\label{eq.3}
\lambda_{D}(\alpha) =
\min_{M} \{\Lambda(M): M \sqsupset D,\; K(M) \leq \alpha\},
\end{equation}
where $\Lambda(M)=K(M)+\log {m \choose d} \ge K(D)+O(1)$ is
the total length of two-part code of $D$
with help of the model $M$.
This function $\lambda_D (\alpha)$ is the
celebrated optimal two-part MDL code
length as a function of $\alpha$,
with the model class restricted to models
of code length at most $\alpha$. The functions $\beta_D$ and $\lambda_D$
are examples of Kolmogorov's {\em structure functions}, \cite{Ko74,VV02}.
Indeed,
consider the following \emph{two-part code}
for $D \sqsubset M$: the first part is
a shortest self-delimiting program $p$ for $M$ and the second
part is
$\lceil\log {m \choose d}\rceil$ bit long index of $D$
in the lexicographic ordering of all choices of $d$ elements from $M$.
Since $M$ determines $\log {m \choose d}$ this code
is self-delimiting
and we obtain the two-part code,
where the constant $O(1)$ is
the length of an additional program that reconstructs
$D$ from its two-part code.
Trivially, $\lambda_D (\alpha) \geq K(D)+O(1)$.
For those $\alpha$'s that
have
$\lambda_D (\alpha) = K(D)+O(1)$, the associated model $M \sqsupset D$
in at most $\alpha$ bits
(witness for
$\lambda_D(\alpha)$)
is called a {\em sufficient statistic} for $D$.
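Since $K(M)$ is noncomputable, so is $\Lambda(M)$; but replacing $K(M)$ by
the length of any lossless compression of a listing of $M'$ yields a
computable upper bound on the two-part code length. A minimal Python
sketch, with zlib standing in for the unattainable optimal compressor:
\begin{verbatim}
import zlib
from math import comb, log2

def two_part_bits_upper(model_elems, d):
    # Computable upper bound on Lambda(M) = K(M) + log C(m, d): the
    # noncomputable K(M) is replaced by the zlib-compressed size of a
    # canonical listing of M', an upper bound on K(M) up to an
    # additive constant.
    listing = '\n'.join(sorted(model_elems)).encode()
    model_bits = 8 * len(zlib.compress(listing, 9))
    return model_bits + log2(comb(len(model_elems), d))
\end{verbatim}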
\begin{lemma}
If $M$ is a sufficient
statistic for $D$, then
the randomness deficiency of $D$ in $M$ is $O(1)$,
that is, $D$ is a typical
data sample for $M$, and $M$ is a model of best fit for $D$.
\end{lemma}
\begin{proof}
If $M$ is a sufficient
statistic for $D$, then $K(M)+\log {m \choose d} = K(D)+O(1)$. The left-hand
side of the latter equation
is a two-part description of $D$ using the model $M \sqsupset D$ and
as data-to-model code the index of $D$ in the enumeration
of the number of choices of $d$ elements from $M$ in
$\log {m \choose d}$ bits.
This left-hand side equals the right-hand side which
is the shortest one-part
code of $D$ in $K(D)$ bits. Therefore,
\begin{align*}
K(D) &\leq K(D,M) +O(1)
\\&\leq K(M)+K(D \mid M)+O(1)
\\& \leq K(M)+\log {m \choose d}+O(1) = K(D)+O(1).
\end{align*}
The first and second inequalities are straightforward, the third inequality
states that given $M \sqsupset D$
we can describe $D$ in a self-delimiting manner in
$\log {m \choose d}+O(1)$ bits,
and the final equality follows by the sufficiency property.
This sequence of (in)equalities implies
that $K(D \mid M)=\log {m \choose d} +O(1)$.
\end{proof}
\begin{remark}[Sufficient but not Typical]
\rm
Note that the data sample $D$ can have randomness deficiency about 0, and
hence be a typical element
for models $M$, while $M$ is not a sufficient statistic.
A sufficient statistic $M$
for $D$ has the additional property, apart from being a model
of best fit, that $K(D,M)=K(D)+O(1)$
and therefore by \eqref{eq.soi} in Appendix~\ref{sect.prel}
we have $K(M|D^*)=O(1)$:
the sufficient statistic $M$ is a model of best fit
that is almost completely determined by $D^*$, a shortest program
for $D$.
\end{remark}
\begin{remark}[Minimal Sufficient Statistic]
\rm
The sufficient
statistic associated with $\lambda_D(\alpha)$
with the least $\alpha$ is called the
{\em minimal sufficient statistic}.
\end{remark}
\begin{remark}[Probability Models]
\rm
Reference \cite{VV02} and this paper analyze a canonical setting
where the models are finite sets.
We can generalize the treatment to the case
where the models are the computable
probability mass functions. The computability
requirement does not seem very restrictive.
We cover most, if not all,
probability mass functions ever considered,
provided they have computable parameters.
In the case of multiple data we consider probability mass functions $P$
that map subsets $B \subseteq \{0,1\}^n$ into $[0,1]$ such that
$\sum_{B \subseteq \{0,1\}^n} P(B) = 1$. For every $0 \leq d \leq 2^n$,
we define $P_d (B) = P(B \mid |B|=d)$.
For data $D$ with $|D|=d$ we
obtain
$\lambda_D (\alpha) = \min_{P_d} \{K(P_d)+ \log 1/P_d(D):
P_d(D) > 0$ and $P_d$ is a
computable probability mass function with $K(P_d) \leq \alpha\}$.
The general
model class of computable probability mass functions is equivalent to
the finite set model class, up to an additive logarithmic $O( \log dn)$
term. This result for multiple data
generalizes the corresponding result for singleton data in \cite{Sh83,VV02}.
Since the other results in \cite{VV02} such as \eqref{eq.eq}
and those in Appendix~\ref{sect.formal}, generalized to multiple data,
hold only up to
the same additive logarithmic
term anyway, they carry over to the probability models.
\end{remark}
The generality of the results is at the same time a restriction.
In classical statistics one is commonly interested in model classes
that are partially poorer and partially richer than the ones we consider.
For example, the class of Bernoulli processes, or $k$-state Markov
chains, is poorer than the class of computable probability mass functions
of moderate maximal Kolmogorov complexity $\alpha$,
in that the latter class may contain
functions that require far more complex computations than the rigid
syntax of the classical classes allows. Indeed, the class of computable
probability mass functions of even moderate complexity allows
implementation of a function mimicking a universal Turing machine computation.
On the other hand, even the simple Bernoulli process can be equipped
with a noncomputable real bias in $(0,1)$, and hence the generated
probability mass function over $n$ trials is not a computable function.
This incomparability of the algorithmic model classes studied here and
the traditional statistical model classes, means that the
current results cannot be directly transplanted to the traditional setting.
They should be regarded as pristine truths that hold in a
platonic world that can be used as guideline to develop analogues
in model classes that are of more traditional concern, as in
\cite{Ri07}.
\subsection{Essence of Model Selection }
\label{sect.essence}
The first parameter we are interested in is the {\em simplicity}
$K(M)$ of the
model $M$ explaining the data sample $D$ ($D \sqsubset M$).
The second parameter is
{\em how typical} the data is
with respect to $M$, expressed by
the randomness deficiency
$\delta(D \mid M)=\log {m \choose d}-K(D \mid M)$.
The third parameter is
how {\em short
the two part code}
$\Lambda(M)=K(M )+\log {m \choose d}$
of the data sample $D$ using theory $M$ with $D \sqsubset M$ is.
The second part consists of the full-length index,
ignoring saving in code length using possible nontypicality
of $D$ in $M$ (such as being the first $d$ elements in the enumeration of
$M'=M-\{\#d\}$).
These parameters induce a partial order on the contemplated set of models.
We write
$M_1 \le M_2$, if $M_1$ scores equal or less than
$M_2$ in all three
parameters. If this is the case, then we may say that
$M_1$ is at least as good as $M_2$
as an explanation for $D$ (although the converse need not necessarily hold,
in the sense that it is possible that $M_1$ is
at least as good a model for $D$
as $M_2$ without
scoring better than $M_2$ in all three parameters simultaneously).
The algorithmic statistical properties of a data sample $D$ are
fully represented by
the set $A_D$ of all
triples
\[
\pair{ K(M), \delta(D \mid M), \Lambda(M) }
\]
with $M \sqsupset D$, together with a componentwise
order relation on the elements of those triples.
The complete characterization of
this set
follows from
the results in \cite{VV02}, provided we generalize the singleton case treated
there to the multiple data case required here.
In that reference it is shown
that
if we minimize the length of a two-part code for an individual data sample,
the two-part code consisting of
a model description and a data-to-model code
over the {\em class of all computable models} of at most a given complexity,
then the following is the case.
With {\em certainty}
and not only with high probability as in the classical case
this process selects an individual model that
in a rigorous sense is (almost)
the best explanation for the individual data sample
that occurs among the contemplated models.
(In modern versions of MDL, \cite{Gr07,BRY,Ri07}, one
selects the model that
minimizes just the data-to-model code length
(ignoring the model code length), or minimax and mixture MDLs.
These are not treated here.)
These results are exposed in the proof and analysis of the
equality:
\begin{equation}\label{eq.eq}
\beta_D (\alpha ) = \lambda_D (\alpha)
- K(D),
\end{equation}
which holds within negligible additive $O (\log dn)$ terms,
in argument and value. We give the precise statement in
\eqref{eq.multipleeq} in Appendix~\ref{sect.formal}.
\begin{remark}\label{rem.witness}
\rm
Every model (set) $M$ that witnesses the value
$\lambda_D(\alpha)$,
also witnesses the value $\beta_D (\alpha)$ (but not vice versa).
The functions $\lambda_D$ and $\beta_D$ can assume all
possible shapes over their full domain of definition (up to
additive logarithmic precision in both argument and value).
We summarize these matters in Appendix~\ref{sect.formal}.
\end{remark}
\subsection{Computability}
\label{sect.comp}
How difficult is it to compute the functions $ \lambda_D, \beta_D$,
and the minimal sufficient statistic? To express the properties
appropriately we require the notion of functions
that are not computable,
but can be approximated monotonically by a computable
function.
\begin{definition}
\rm
\label{def.semi}
A function $f: {\cal N} \rightarrow {\cal R}$ is
{\em upper semicomputable} if there is a Turing machine $T$ computing a
total function $\phi$
such that $\phi (x,t+1) \leq \phi (x,t)$ and
$\lim_{t \rightarrow \infty} \phi (x,t)=f(x)$. This means
that $f$ can be computably approximated from above.
If $-f$ is upper semicomputable, then $f$ is lower semicomputable.
A function is called {\em semicomputable}
if it is either upper semicomputable or lower semicomputable.
If $f$ is both upper semicomputable and lower semicomputable,
then we call $f$ {\em computable} (or recursive if the domain
is integer or rational).
\end{definition}
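The prototypical example is the Kolmogorov complexity function itself,
which is upper semicomputable by dovetailing. The Python sketch below is
schematic: run(p, t), which executes program p on the reference machine
for t steps, is an assumed interface that cannot actually be supplied here.
\begin{verbatim}
from itertools import product

def programs(length):
    # All binary programs of the given length.
    return (''.join(bits) for bits in product('01', repeat=length))

def K_upper(x, t, run):
    # phi(x, t): length of the shortest program printing x within t
    # steps, where run(p, t) is an ASSUMED universal-machine interface.
    # phi(x, t) is nonincreasing in t and converges to K(x) from
    # above, but at no finite t do we know how close we are to K(x).
    bound = len(x) + 2   # assumes a print-literally program of this length
    for length in range(bound):
        for p in programs(length):
            if run(p, t) == x:
                return length
    return bound
\end{verbatim}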
To put matters in perspective: even if a function is computable,
the most feasible type identified above, this doesn't mean much in
practice. Functions like $f(x)$ of which the computation terminates
in computation time of
$t(x) = x^x$ (say measured in flops), are among the easily computable ones.
But for $x=30$, even a computer performing
an unrealistic Teraflop per second,
requires $30^{30}/ 10^{12} > 10^{28}$ seconds.
This is more than $3 \cdot 10^{20}$ years. It is out of the question
to perform such computations. Thus, the fact that a function
or problem solution is computable gives no insight in how {\em feasible}
it is. But there are worse functions and problems possible: For example,
the ones that are semicomputable but not computable. Or worse yet,
functions that are not even semicomputable.
Semicomputability gives no convergence guarantees: even though
the limit value is monotonically approximated, at no stage
in the process do we know how close we are to the limit value.
In Section~\ref{ex.MDL}, the indirect method of Algorithm Optimal MDL shows
that the function $\lambda_D$ (the MDL-estimator)
can be monotonically approximated
in the upper semicomputable sense.
But in \cite{VV02} it was shown for singleton data samples,
and therefore {\em a fortiori} for multiple data samples $D$,
the fitness function $\beta_D$ (the direct method of Remark~\ref{rem.direct})
cannot be monotonically approximated in that sense, nor in the
lower semicomputable sense, in both cases not even
up to any relevant precision. Let us formulate this a little more
precisely:
The functions $ \lambda_D (\alpha), \beta_D(\alpha)$
have a finite domain
for a given $D$ and hence can be given as a table---so formally speaking
they are computable. But this evades the issue: there is no
algorithm that computes these functions for given $D$ and $\alpha$.
Considering them as two-argument functions it was
shown (and the claimed precision quantified):
\begin{itemize}
\item The function $\lambda_D(\alpha)$
is upper semicomputable but not computable up to any reasonable
precision.
\item There is no algorithm that given $D^*$ and $\alpha$ finds
$\lambda_D(\alpha)$.
\item The function $\beta_D(\alpha)$
is not upper nor lower semicomputable, not even to any reasonable precision.
To put $\beta_D(\alpha)$'s computability properties in perspective,
clearly we can compute it given an oracle for the halting
problem.
\begin{quote}
The {\em halting problem} is the problem
whether an arbitrary Turing machine
started on an initially all-0 tape will eventually terminate or
compute forever. This problem was shown to be undecidable by A.M. Turing
in 1937, see for example
\cite{LiVi97}. An oracle for the halting problem will, when asked, tell
whether a given Turing machine computation will or will not terminate.
Such a device is assumed in order to
determine theoretical degrees of (non)computability, and
is deemed not to exist.
\end{quote}
But using such an oracle gives us power beyond effective (semi)computability
and therefore brings us outside the concerns of this paper.
\item There is no algorithm
that given $D$ and $K(D)$ finds a minimal sufficient statistic for $D$
up to any reasonable precision.
\end{itemize}
\subsection{Invariance under Recoding of Data}
\label{ex.recoding}
In what sense are the functions invariant
under recoding of the data? If the functions $\beta_D$ and $\lambda_D$
give us the stochastic properties of the data $D$, then we would not expect
those properties to change under recoding of the data into another format.
For convenience, let us look at a singleton example.
Suppose we recode $D= \{x\}$
by a shortest program $x^*$ for it.
Since $x^*$ is incompressible
it is a typical element of the set of all strings of length $|x^*|=K(x)$,
and hence $\lambda_{x^*} (\alpha)$ drops to the Kolmogorov complexity $K(x)$
already for some $\alpha \leq K(K(x))$, so almost immediately (and it
stays within logarithmic distance of that line henceforth).
That is,
$\lambda_{x^*} (\alpha) = K(x)$ up to
logarithmic additive terms in argument and value,
irrespective of the (possibly quite different)
shape of $\lambda_x$. Since the Kolmogorov complexity function
$K(x)=|x^*|$ is not recursive, \cite{Ko65},
the recoding function $f(x) = x^*$ is also not recursive.
Moreover, while $f$ is one-to-one and total
it is not onto.
But it is the
partiality of the inverse function (not all strings are shortest
programs) that causes the collapse of the structure function.
If one restricts the finite sets containing $x^*$ to be subsets of
$\{y^*: y \in \{0,1\}^n\}$, then the resulting
function $\lambda_{x^*}$ is within a logarithmic strip around $\lambda_x$.
The coding function $f$ is upper semicomputable and deterministic.
(One can consider
other codes, using more powerful computability assumptions or probabilistic
codes, but that is outside the scope of this paper.)
However, the structure function
is invariant under ``proper'' recoding of the data.
\begin{theorem}\label{theo.recoding}
Let $f$ be a recursive permutation of the set of
finite binary strings in $\{0,1\}^n$
(one-to-one, total, and onto), and extend $f$ to subsets $D \subseteq \{0,1\}^n$.
Then,
$\lambda_{f(D)}$ is ``close'' to $\lambda_D$ in the sense that the graph of
$\lambda_{f(D)}$ is situated within a strip of width $K(f)+O(1)$ around
the graph of $\lambda_D$.
\end{theorem}
\begin{proof}
Let $M \sqsupset D$ be a witness of $\lambda_D(\alpha)$. Then,
$M_f = \{f(y): y \in M\}$ satisfies $K(M_f) \leq \alpha + K(f)+O(1)$
and $|M_f|=|M|$. Hence, $\lambda_{f(D)} (\alpha + K(f)+O(1)) \leq \lambda_D(\alpha)$.
Let $M^f \sqsupset f(D)$ be a witness of $\lambda_{f(D)} (\alpha)$. Then,
$M^f_{f^{-1}} = \{f^{-1} (y): y \in M^f\}$ satisfies
$K(M^f_{f^{-1}}) \leq \alpha + K(f)+O(1)$ and $|M^f_{f^{-1}}|=|M^f|$.
Hence, $\lambda_{D} (\alpha + K(f)+O(1)) \leq \lambda_{f(D)}(\alpha)$ (since
$K(f^{-1}) = K(f)+O(1)$).
\end{proof}
\section{Approximating the MDL Code}
\label{ex.MDL}
We are given $D\subseteq \{0,1\}^n$,
the data to explain, and the
model class consisting of all models $M$ for $D$
that have complexity $K(M)$ at most
$\alpha$. This $\alpha$ is
the maximum complexity of an explanation we allow.
As usual, we denote $m=|M|-1$ (possibly indexed like $m_t = |M_t|-1$)
and $d=|D|$. We search for programs $p$
of length at most $\alpha$ that print a finite set $M\sqsupset D$. Such
pairs $(p,M)$ are possible explanations.
The {\em best explanation} is defined to be
the $(p,M)$ for
which $\delta(D \mid M)$ is minimal, that is,
$\delta(D \mid M)=\beta_D(\alpha)$. Since the function
$\beta_D(\alpha)$ is not computable, there is no algorithm that halts with
the best explanation.
To overcome this problem
we minimize the randomness deficiency by minimizing the MDL code
length, justified by \eqref{eq.eq}, and thus maximize the
fitness of the model for this data sample. Since \eqref{eq.eq} holds only
up to a small error we should more
properly say ``almost minimize the randomness deficiency''
and ``almost maximize the
fitness of the model.''
\begin{definition}\label{def.MDLalg}
\rm
An algorithm $A$ is an {\em MDL algorithm} if the following holds.
Let $D$ be a data sample consisting of $d$ separated words of length $n$
in $dn+O(\log dn)$ bits.
Given inputs $D$ and $\alpha$ ($0 \leq \alpha \leq dn +O(\log dn)$),
algorithm $A$
written as $A(D, \alpha )$ produces a finite sequence of
pairs $(p_1,M_1), (p_2,M_2), \ldots , (p_{f}, M_f)$, such that
every $p_t$ is a binary program
of length at most $\alpha$ that
prints a finite set
$M_t$
with $D \sqsubset M_t$ and
$|p_t|+\log {{m_t} \choose d} < |p_{t-1}|+\log {{m_{t-1}} \choose d}$ for
every $1 < t \leq f$.
\end{definition}
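Operationally, the definition amounts to a simple check on a finite run.
In the Python sketch below (ours), models are represented by the sets $M'$
with the cardinality marker dropped:
\begin{verbatim}
from math import comb, log2

def is_mdl_run(D, alpha, pairs):
    # Check the definition: every p_t has length at most alpha and
    # prints a model containing D, and the two-part code length
    # |p_t| + log C(m_t, d) strictly decreases along the run.
    d, prev = len(D), float('inf')
    for p, M in pairs:                 # pairs = [(p_1, M_1), ...]
        if len(p) > alpha or not D <= M:
            return False
        total = len(p) + log2(comb(len(M), d))
        if total >= prev:
            return False
        prev = total
    return True
\end{verbatim}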
\begin{remark}
\rm
It follows that
$K(M_t) \leq |p_t|$ for all $1 \leq t \leq f$.
Note that an MDL algorithm may consider only
a proper subset of all binary programs of length at most $\alpha$. In particular,
the final $|p_f|+\log {{m_f} \choose d}$ may be greater than
the optimal MDL code of length
$\min \{ K(M)+\log {{m} \choose d}: M \sqsupset D, \; K(M) \leq \alpha \}$.
This happens when a program
$p$ printing $M$ with $ M \sqsupset D$ and $|p|= K(M) \leq \alpha$ is not
in the subset of binary programs considered by the algorithm, or the
algorithm gets trapped in a suboptimal solution.
\end{remark}
The next theorem gives an MDL algorithm that always finds the optimal
MDL code and, moreover, the model concerned
is shown to be an approximately best fitting
model for the data sample $D$.
\begin{theorem}\label{alg.mdl}
There exists an MDL algorithm which given $D$ and $\alpha$
satisfies
$\lim_{t \rightarrow \infty} (p_t,M_t) = (\hat{p},\hat{M})$,
such that $\delta(D|\hat{M}) \leq \beta_D(\alpha-O(\log dn))+O(\log dn)$.
\end{theorem}
\begin{proof}
We exhibit such an MDL algorithm:
{\bf Algorithm Optimal MDL ($D,\alpha$)}
\begin{description}
\item{\bf Step 1.}
Let $D$ be the data sample.
Run all binary
programs $p_1,p_2, \ldots$ of length at most $ \alpha$ in
lexicographic length-increasing
order in a dovetailed style.
The computation proceeds by stages $1,2, \ldots ,$
and in each stage $j$ the overall computation executes step $j-k$
of the particular subcomputation of $p_k$,
for every $k$ such that $j-k >0$.
\item{\bf Step 2.}
At every computation step $t$,
consider all pairs $(p,M)$ such that
program $p$ has printed the set $M \sqsupset D$ by time $t$.
We assume that there is a first elementary computation step
$t_0$ such that there is such a pair.
Let a {\em best explanation} $(p_t,M_t)$ at computation step $t \geq t_0$ be
a pair that minimizes the sum
$|p|+\log {m \choose d}$ among all
the pairs $(p,M)$.
\item{\bf Step 3.}
We only change the best explanation $(p_{t-1},M_{t-1})$ of
computation step $t-1$ to
$(p_{t},M_{t})$ at computation step $t$,
if $|p_t|+\log {{m_t} \choose d} < |p_{t-1}|+\log {{m_{t-1}} \choose d}$.
\end{description}
In this MDL algorithm
the best explanation $(p_t,M_t)$ changes from time to time
due to the appearance of a strictly better explanation.
Since no pair $(p,M)$ can be
elected as best explanation twice, and there are only finitely
many pairs, from some moment onward the explanation
$(p_t,M_t)$ which is declared best does not change anymore.
Therefore the limit $(\hat{p},\hat{M})$ exists.
The model $\hat{M}$ is a witness set of $\lambda_D(\alpha)$. The theorem
follows by (\ref{eq.eq}) and Remark~\ref{rem.witness}.
\end{proof}
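In outline, Algorithm Optimal MDL is a single dovetailing loop that keeps
the incumbent pair of smallest two-part code length. The Python sketch
below is schematic in the same sense as before: run(p, t) is an assumed
universal-machine interface, models are again represented by the sets
$M'$, and the exhaustive enumeration of all programs of length at most
$\alpha$ is only illustrative.
\begin{verbatim}
from itertools import product
from math import comb, log2

def optimal_mdl(D, alpha, run, max_stage):
    # Schematic of Algorithm Optimal MDL.  run(p, t) is an ASSUMED
    # interface returning the finite set printed by program p within
    # t steps, or None.  The incumbent changes only when a strictly
    # smaller |p| + log C(m, d) appears, so in the limit it
    # stabilizes on the optimal pair (p-hat, M-hat).
    d, best, best_len = len(D), None, float('inf')
    for t in range(1, max_stage + 1):              # stages
        for length in range(alpha + 1):            # dovetailed programs
            for bits in product('01', repeat=length):
                M = run(''.join(bits), t)
                if M is not None and D <= M:
                    total = length + log2(comb(len(M), d))
                    if total < best_len:
                        best, best_len = (''.join(bits), M), total
    return best
\end{verbatim}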
Thus, if we continue to approximate the two-part MDL code contemplating
every relevant model, then we will eventually
reach the optimal two-part code whose associated model
is approximately the best explanation. That
is the good news. The bad news is that we do not know
when we have reached
this optimal solution. The functions
$\beta_D$ and $\lambda_D$, and their witness sets, cannot be computed
within any reasonable accuracy, Section~\ref{sect.comp}.
Hence, there does not
exist a criterion
we could use to terminate the approximation somewhere
close to the optimum.
In the practice of the real-world MDL, in the
process of finding the optimal two-part MDL code,
or indeed a suboptimal two-part MDL code,
we often have to be satisfied
with running times $t$ that are much less than the time to
stabilization of the best explanation.
For such small $t$, the model
$M_t$ has a weak guarantee of goodness, since we know that
\[
\delta(D|M_t) + K(D) \le |p_t|+\log {{m_t} \choose d},
\]
because $K(D) \leq K(D,M_t) \leq K(M_t)+K(D|M_t)$
and therefore $K(D)-K(D|M_t) \leq K(M_t)\leq |p_t|$ (ignoring additive
constants).
That is,
the randomness deficiency of $D$ in
$M_t$ plus $K(D)$ is less than the
known value $|p_t|+\log {{m_t} \choose d}$.
Theorem~\ref{alg.mdl} implies that
Algorithm Optimal MDL gives not only {\em some} guarantee of goodness
during the approximation process
(see Section~\ref{sect.comp}),
but also that, in the limit, that guarantee approaches the value
of its lower bound, that is, $\delta(D|\hat{M}) + K(D)$.
Thus, in the limit,
Algorithm Optimal MDL will yield an explanation that is only a little
worse than the best explanation.
\begin{remark}\label{rem.direct}
{\bf (Direct Method)}
\rm
Use the same dovetailing process as in Algorithm Optimal MDL, with the
following addition.
At every elementary computation step $t$,
select a
$(p,M)$ for which $\log {m \choose d}-K^t(D|M)$ is minimal
among all programs $p$ that
up to this time have printed a set $M \sqsupset D$.
Here $K^t(D|M)$ is the approximation of $K(D|M)$
from above defined by
$K^t(D|M)=\min\{|q|:$ the reference universal prefix machine $U$ outputs
$D$ on input $(q,M)$
in at most $t$ steps$\}$. Hence, $\log {m \choose d}-K^t(D|M)$
is an approximation from below to $\delta (D|M)$.
Let $(q_t,M_t)$ denote the best explanation after $t$ steps.
We only change the best explanation at computation step $t$,
if $\log {{m_t} \choose d} - K^t(D|M_t)
<\log {m_{t-1} \choose d} - K^{t-1}(D|M_{t-1})$.
This time the same explanation
can be
chosen as the best one twice. However, from some time $t$ onward, the best explanation
$(q_t,M_t)$ does not change anymore.
In the approximation process, the model $M_t$
has no guarantee of goodness at all:
Since $\beta_D(\alpha)$ is not semicomputable up
to any significant precision, Section~\ref{sect.comp},
we cannot know a significant
upper bound either for $\delta(D|M_t)$ or for
$\delta(D|M_t) + K(D)$.
Hence, we must prefer the indirect method of Algorithm Optimal MDL, approximating
a witness set for $\lambda_D(\alpha)$, instead of the direct one of approximating
a witness set for $\beta_D(\alpha)$.
\end{remark}
\section{Does Shorter MDL Code Imply Better Model?}
In practice we often must terminate an MDL algorithm as
in Definition~\ref{def.MDLalg} prematurely.
A natural assumption is that the longer we
approximate the optimal two-part MDL code
the better the resulting model explains the data. Thus,
it is tempting to simply assume that in the approximation
every next shorter two-part MDL code also yields a better model.
However, this is not true.
To give an example
that shows where things go wrong
it is easiest to first give the conditions under
which premature search termination
is all right.
Suppose we replace
the currently best explanation
$(p_1,M_1)$ in an MDL algorithm with explanation
$(p_{2},M_{2})$ only if $|p_{2}|+\log {{m_{2}} \choose d}$
is not just less than $|p_1| +\log {{m_1} \choose d}$,
but less by more than the excess of $|p_1|$ over
$K(M_1)$.
Then, it turns out that every time we change the explanation we improve
its goodness.
\begin{theorem}\label{theo.approxim}
Let $D$ be a data sample with $|D|=d$ ($0 <d<2^n$). Let
$(p_1, M_1)$ and $(p_{2},M_{2})$ be
sequential {\rm (}not necessarily consecutive{\rm )}
candidate best explanations
produced by an MDL algorithm $A(D, \alpha)$.
If
\begin{eqnarray*}
|p_{2}|+ \log {{m_{2}} \choose d} &\leq & |p_1| +
\log {{m_1} \choose d}
\\&& - (|p_1|-K(M_1))
- 10 \log \log {{2^n} \choose d} ,
\end{eqnarray*}
then
$
\delta (D | M_{2}) \le \delta (D | M_1) - 5 \log \log {{2^n} \choose d}.
$
\end{theorem}
\begin{proof}
For every pair of sets $M_1,M_{2} \sqsupset D$ we have
\[
\delta (D | M_{2} ) - \delta (D | M_1) =
\Lambda + \Delta,
\]
with $\Lambda = \Lambda (M_{2}) - \Lambda (M_1) $ and
\begin{eqnarray*}
\Delta & =& -K(M_{2})-K(D|M_{2}) + K(M_1) + K(D|M_1)
\\& \leq & - K(M_2,D) + K(M_1,D) + K(M_1^*|M_1) +O(1)
\\&\leq&K(M_1,D|M_{2},D)+ K(M_1^*|M_1)+O(1).
\end{eqnarray*}
The first inequality uses the trivial
$-K(M_2,D) \geq -K(M_2)-K(D|M_2)$ and the nontrivial
$ K(M_1,D) + K(M_1^*|M_1) \geq K(M_1) + K(D|M_1)$ which follows by
\eqref{eq.soi}, and the second inequality uses the general property that
$K(a|b) \geq K(a)-K(b)$.
By the assumption in the theorem,
\begin{eqnarray*}
\Lambda & \le &
|p_{2}|+\log {{m_{2}} \choose d}- \Lambda (M_1)
\\ &=&|p_{2}|+\log {{m_{2}} \choose d}- \left( |p_1|
+ \log{{m_1} \choose d} \right)
\\ && + (|p_1|-K(M_1))
\\& \le &
- 10 \log \log { {2^n} \choose d}.
\end{eqnarray*}
Since by assumption the difference in MDL codes
$\Lambda = \Lambda (M_2) - \Lambda (M_1) > 0$,
it suffices to show that
$K(M_{1},D | M_2,D)+K(M_1^*|M_1) \le 5 \log \log {{2^n} \choose d}$
to prove the theorem.
Note that $(p_1,M_1)$ and $(p_{2},M_{2})$ are in this order
sequential candidate
best explanations
in the algorithm, and every candidate best explanation may appear only once.
Hence, to identify $(p_1,M_1)$ we only need to know the MDL algorithm $A$,
the maximal complexity $\alpha$ of the contemplated models, the data sample $D$,
the candidate explanation $(p_{2},M_{2})$,
and the number $j$ of candidate best explanations in between
$(p_1,M_1)$ and $(p_{2},M_{2})$.
To identify $M_1^*$ from $M_1$ we only require $K(M_1)$ bits.
The program $p_{2}$ can be found from $M_{2}$ and
the length $|p_{2}| \leq \alpha$, as the first program computing $M_{2}$
of length $|p_{2}|$ in the process of running
the algorithm $A(D, \alpha)$.
Since $A$ is an MDL algorithm we have $j \leq |p_1| +
\log {{m_1} \choose d} \leq \alpha+ \log {{2^n} \choose d}$,
and $K(M_1) \leq \alpha$. Therefore,
\begin{eqnarray*}
&&K(M_{1},D | M_2,D)+ K(M_1^*|M_1)
\\ &&\le \log |p_{2}|+\log \alpha
+ \log K(M_1) + \log j + b
\\&& \le 3\log \alpha+ \log \left(\alpha+ \log {{2^n} \choose d} \right) + b,
\end{eqnarray*}
where $b$ is the number of bits we need to encode
the description of the
MDL algorithm,
the descriptions of the constituent
parts self-delimitingly,
and the description of a program to reconstruct $M_1^*$ from $M_1$.
Since $\alpha \leq n+O(\log n)$, we find
\begin{eqnarray*}
&&K(M_{1},D | M_2,D) +K(M_1^*|M_1)
\\&&\leq 3 \log n + \log \log {{2^n} \choose d} +
O\left(\log \log \log {{2^n} \choose d}\right)
\\&&\leq 5 \log \log {{2^n} \choose d},
\end{eqnarray*}
where the last inequality follows from $0 < d < 2^n$ and $d$
being an integer.
\end{proof}
\begin{remark}
\rm
We need an MDL algorithm in order to restrict the
sequence of possible candidate models examined
to at most $\alpha + \log {{2^n} \choose d}$ with
$\alpha \leq nd +O(\log nd)$ rather than all of the $2^{2^n-d}$
possible models $M$ satisfying $M \sqsupset D$.
\end{remark}
\begin{remark}
\rm
In the sequence $(p_1,M_1), (p_2,M_2), \ldots ,$
of candidate best explanations produced by an MDL algorithm,
$(p_{t'},M_{t'})$ is actually better than
$(p_{t},M_{t})$ ($t < t'$), if
the improvement in the two-part MDL code length exceeds the unknown,
and in general noncomputable, quantity
$|p_t|-K(M_t)$ by the given logarithmic term.
On the one hand, if
$|p_t|=K(M_t)+O(1)$, and
\[
|p_{t'}|+ \log {{m_{t'}} \choose d}
\leq |p_t| + \log {{m_t} \choose d} - 10 \log \log {{2^n} \choose d},
\]
then $M_{t'}$ is a better explanation for data sample $D$
than $M_t$, in the sense that
\[
\delta (D | M_{t'}) \le \delta (D | M_t) - 5 \log \log {{2^n} \choose d}.
\]
On the other hand, if $|p_t| - K(M_t)$ is large,
then $M_{t'}$ may be a
much worse explanation than $M_t$.
Then, it is possible that
we improve the two-part MDL code-length by giving a worse model
$M_{t'}$ using, however, a $p_{t'}$
such that $|p_{t'}|+ \log {{m_{t'}} \choose d}
< |p_{t}|+ \log {{m_{t}} \choose d}$ while
$\delta (D | M_{t'}) > \delta (D | M_t)$.
\end{remark}
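For reference, the sufficient condition of Theorem~\ref{theo.approxim}
reads as follows in code (a Python sketch of ours; $K(M_1)$ is
noncomputable and is therefore taken as a given parameter, and the
expression is evaluable only for small $n$):
\begin{verbatim}
from math import comb, log2

def guarantees_improvement(p1_len, m1, p2_len, m2, K_M1, n, d):
    # Sufficient condition of the theorem: the two-part code must
    # drop by the slack |p_1| - K(M_1) plus 10 log log C(2^n, d).
    slack = (p1_len - K_M1) + 10 * log2(log2(comb(2 ** n, d)))
    return p2_len + log2(comb(m2, d)) \
           <= p1_len + log2(comb(m1, d)) - slack
\end{verbatim}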
\section{Shorter MDL Code May Not Be Better}
\label{sect.single}
Assume that we want to infer
a language, given a single positive example (element of the language).
The positive example is $D=\{x\}$ with
$x = x_1 x_2 \ldots x_n$, $x_i \in \{0,1\}$ for $1 \leq i \leq n$.
We restrict the question to inferring
a language consisting of a set of elements of the same
length as the positive example, that is,
we infer a subset of $\{0,1\}^n$. We can view this as inferring the slice $L^n$
of the (possibly infinite)
target language $L$ consisting of all words of length $n$ in the target
language. We identify the singleton data
sample $D$ with its constituent
data string $x$. For the models we always have $M=M' \bigcup \{\#1\}$
with $M' \subseteq \{0,1\}^n$.
For simplicity we delete the cardinality indicator $\{\#1\}$ since it is always
1 and write $M = M' \subseteq \{0,1\}^n$.
Every $M \subseteq \{0,1\}^n$ can be represented by its characteristic
sequence $\chi = \chi_1 \ldots \chi_{2^n}$ with $\chi_i =1$
if the $i$th element of $\{0,1\}^n$ is in $M$, and 0 otherwise.
Conversely, every string of $2^n$ bits is the characteristic sequence
of a subset of $\{0,1\}^n$. Most of these subsets are ``random'' in the sense
that they cannot be represented concisely: their characteristic sequence
is incompressible. Now choose some integer $\delta$.
Simple counting tells us that there are
only $2^{2^n - \delta} -1$ binary strings of length $<2^n - \delta$.
Thus, the number of possible binary programs of length $<2^n - \delta$
is at most $2^{2^n - \delta} -1$. This in turn implies (since every
program describes at best one such set) that the number of
subsets $M \subseteq \{0,1\}^n$ with $K(M|n) <2^n - \delta$ is at most
$2^{2^n - \delta} -1$. Therefore, the number of
subsets $M \subseteq \{0,1\}^n$ with
\[
K(M|n) \geq 2^n - \delta
\]
is
greater than
\[
(1- 1/2^{\delta})2^{2^n}.
\]
Now if $K(M)$ is significantly
greater than $K(x)$, then
it is impossible to learn $M$ from $x$. This follows already from
the fact that $K(M|x) \geq K(M|x^*)+O(1)
= K(M)-K(x) + K(x|M^*)+O(1)$ by \eqref{eq.soi}
(note that $K(x|M^*) > 0$). That is, we need more than $K(M)-K(x)$
extra bits of dedicated information to deduce $M$ from $x$.
Almost all
subsets of $\{0,1\}^n$ have such high complexity that no
effective procedure can infer them from a single example.
This holds in particular for every (even moderately) random set.
Thus, to infer such a subset
$M \subseteq \{0,1\}^n$,
given a sample datum $x \in M$,
using the MDL principle is clearly out of the question.
The datum $x$ can be
described literally by the trivial MDL code $M=\{x\}$,
with $x$ given literally, at self-delimiting
model cost at most $n+O(\log n)$ bits and data-to-model cost
$\log |M|=0$.
It can be concluded that the only sets
$M$ that can possibly be inferred from $x$ (using MDL or any other
effective deterministic procedure)
are those that have $K(M) \leq K(x) \leq n + O(\log n)$. Such sets
are extremely rare: only an at most
\[
2^{-2^n+n+ O(\log n)}
\]
fraction of all subsets
of $\{0,1\}^n$ has that small prefix complexity. This negligible fraction of
possibly learnable sets shows that such sets are very nonrandom;
they are simple in the
sense that their characteristic sequences
have great regularity
(otherwise the Kolmogorov complexity could not be this small).
But this is all right: we do not want to learn random, meaningless, languages,
but only languages that have meaning. ``Meaning'' is necessarily expressed in
terms of regularity.
Even if we can learn the target
model by an MDL algorithm in the limit, by selecting a sequence of models
that decrease the MDL code with each next model, it can still
be the case that a later model
in this sequence is a worse model than a preceding one.
Theorem~\ref{theo.approxim} showed conditions that prevent this from happening.
We now show that if those conditions are not satisfied, it can indeed happen.
\begin{theorem}\label{theo.fluctuate}
There is a datum $x$ ($|x|=n$) with
explanations
$(p_t,M_t)$ and
$(p_{t'},M_{t'})$ such that
$|p_{t'}|+\log m_{t'}\le|p_t|+\log m_t- 10 \log n$
but
$\delta (x|M_{t'}) \gg \delta (x|M_t)$.
That is, $M_{t'}$ is much worse fitting
than $M_t$.
There is an MDL algorithm $A(x, n)$ generating $(p_t,M_t)$ and
$(p_{t'},M_{t'})$ as best explanations with $t' > t$.
\end{theorem}
\begin{remark}
\rm
Note that the condition of Theorem~\ref{theo.approxim}
is different from the first inequality in Theorem~\ref{theo.fluctuate} since
the former required an extra $-|p_t|+K(M_t)$ term in
the right-hand side.
\end{remark}
\begin{proof}
Fix
datum $x$ of length $n$ which can be divided in
$uvw$ with $u,v,w$ of equal length
(say $n$ is a multiple of 3)
with
$K(x)=K(u)+K(v)+K(w)= \frac{2}{3}n$,
$K(u)=\frac{1}{9}n$, $K(v)=\frac{4}{9}n$, and $K(w)=\frac{1}{9}n$
(with the last four
equalities holding up to additive $O(\log n)$
terms).
Additionally, take $n$ sufficiently large so that
$0.1n \gg 10 \log n$.
Define $x^i=x_1 x_2 \ldots x_i$ and
an MDL algorithm $A(x,n)$ that
examines the sequence of models
$M_i = \{x^i\} \{0,1\}^{n-i}$, with
$i=0, \frac{1}{3}n, \frac{2}{3}n, n$.
The algorithm starts with candidate model $M_0$ and switches
from the current candidate to candidate $M_i$, $i= \frac{1}{3}n, \frac{2}{3}n, n$,
if that model gives a shorter MDL code
than the current candidate.
Now $K(M_{i})= K(x^i)+O(\log n)$
and
$\log m_{i} = n-i$, so the MDL code length
$\Lambda (M_i) = K(x^i) +n-i+O(\log n)$.
Our MDL algorithm uses a compressor that does not
compress $x^i$ all the way to length
$K(x^i)$, but codes
$x^i$ self-delimitingly at $0.9i$ bits,
that is, it compresses $x^i$
by 10\%.
Thus, the MDL code length is $0.9i+ \log m_{i}
= 0.9i+ n-i = n-0.1i $ for every
contemplated model $M_i$ ($i=0, \frac{1}{3}n, \frac{2}{3}n, n$).
The next equalities hold again up to $O(\log n)$ additive terms.
\begin{itemize}
\item
The MDL code length
of the initial candidate model $M_0$ is $n$.
The randomness deficiency $\delta (x|M_0) = n - K(x|M_0) =
\frac{1}{3}n$. The last equality holds
since clearly $K(x|M_0)=K(x|n)= \frac{2}{3}n$.
\item
For the contemplated model $M_{n/3}$ we obtain the following.
The MDL code length for model $M_{n/3}$
is $n-n/30$.
The randomness deficiency
$\delta (x|M_{n/3})= \log m_{n/3} - K(x| M_{n/3}) = \frac{2}{3}n
- K(v|n)-K(w|n) = \frac{1}{9}n$.
\item
For the contemplated model $M_{2n/3}$ we obtain the following.
The MDL code length is $n-2n/30$.
The randomness deficiency is
$\delta (x|M_{2n/3})=\log m_{2n/3} - K(x| M_{2n/3}) = \frac{1}{3}n - K(w|n) = \frac{2}{9}n$.
\end{itemize}
Thus, our MDL algorithm initializes with candidate model $M_0$, then
switches to candidate $M_{n/3}$ since this model decreases
the MDL code length by $n/30$. Indeed,
$M_{n/3}$ is a much better model than $M_0$, since it
decreases the randomness deficiency by a whopping $\frac{2}{9}n$.
Subsequently, however, the MDL process switches to candidate
model $M_{2n/3}$ since it decreases the MDL code length greatly again,
by $n/30$. But $M_{2n/3}$ is a much worse model
than the previous candidate $M_{n/3}$, since it increases
the randomness deficiency again greatly by $\frac{1}{9}n$.
\end{proof}
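The bookkeeping in this proof is easily checked mechanically. The
following Python fragment tabulates, in units of $n$ and ignoring the
$O(\log n)$ terms, the MDL code length $n-0.1i$ and the randomness
deficiency for the three contemplated models:
\begin{verbatim}
from fractions import Fraction as F

K = {'u': F(1, 9), 'v': F(4, 9), 'w': F(1, 9)}  # complexities, units of n

for i, known in [(F(0), ''), (F(1, 3), 'u'), (F(2, 3), 'uv')]:
    mdl = F(9, 10) * i + (1 - i)                # 0.9 i + log m_i
    deficiency = (1 - i) - sum(K[s] for s in 'uvw' if s not in known)
    print(f"i = {i}: MDL length = {mdl} n, deficiency = {deficiency} n")
\end{verbatim}
The MDL code length decreases monotonically ($n$, $\frac{29}{30}n = n -
n/30$, $\frac{14}{15}n = n - 2n/30$), while the deficiency first drops
from $\frac{1}{3}n$ to $\frac{1}{9}n$ and then rises again to
$\frac{2}{9}n$.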
\begin{remark}
\rm
By Theorem~\ref{theo.approxim} we know that
if in the process of MDL estimation
by a sequence of significantly decreasing MDL codes
a candidate model is represented by its shortest program,
then the following candidate model which improves the MDL code
is actually a model of at least as good fit as the preceding one.
Thus, if in the example used in the proof above we encode the
models at shortest code length, we obtain MDL code lengths
$n$ for $M_0$, $K(u)+\frac{2}{3}n= \frac{7}{9}n$ for $M_{n/3}$, and
$K(u)+K(v)+ \frac{1}{3}n= \frac{8}{9}n$ for $M_{2n/3}$. Hence the MDL estimator
using shortest model code length changes candidate model $M_0$
for $M_{n/3}$, improving the MDL code length by $\frac{2}{9}n$ and the
randomness deficiency by $\frac{2}{9}n$. However, and correctly,
it does not change candidate model $M_{n/3}$ for $M_{2n/3}$,
since that would increase the MDL code length by $\frac{1}{9}n$. It so
prevents, correctly, to increase the randomness deficiency by $\frac{1}{9}n$.
Thus, by the cited theorem, the oscillating randomness deficiency
in the MDL estimation process in the proof above can only
arise in cases where the consecutive candidate models are not
coded at minimum cost while the corresponding two-part MDL code
lengths are decreasing.
\end{remark}
\section{Inferring a Grammar (DFA) From Positive Examples}
\label{sect.multi}
Assume that we want to infer
a language, given a set of positive examples (elements of the language)
$D$.
For convenience we restrict the question to inferring
a language
$M = M' \bigcup \{\#d\}$ with $M' \subseteq \{0,1\}^n$.
We can view this as inferring the slice $L^n$ (corresponding to $M'$)
of the target language $L$ consisting of all words of length $n$ in the target
language. Since $D$ consists of a subset of positive examples of $M'$
we have $D \sqsubset M$.
To infer a language $M$ from a set of positive examples $D \sqsubset M$
is, of course, a much more natural situation
than to infer a language from a singleton $x$
as in the previous section. Note that the complexity $K(x)$ of a singleton
$x$ of length $n$ cannot exceed $n + O(\log n)$, while the
complexity of a language of which $x$ is an element can rise to $2^n + O( \log n)$.
In the multiple data sample
setting $K(D)$ can rise to $2^n + O( \log n)$, just as $K(M)$ can.
That is, the description of $n$ takes $O(\log n)$ bits and the description of
the characteristic sequence of a subset of $\{0,1\}^n$ may take $2^n$ bits,
everything self-delimitingly.
So contrary to the singleton datum case, in principle
models $M$ of every possible model complexity can be inferred
depending on the data $D$ at hand. An obvious example is $D=M-\{\#d\}$.
Note that the cardinality of $D$ plays a role here, since the complexity
$K(D|n) \leq \log {{2^n} \choose d} + O(\log d)$ with equality for
certain $D$.
A traditional and well-studied problem in this setting is
to infer a grammar from a language example.
The field of grammar induction studies among other things
a class of algorithms
that aims at constructing a grammar by means of incremental
compression of the data set represented by the digraph
of a deterministic finite automaton (DFA) accepting the data set. This digraph
can be seen as a model for the data set.
Every word in the data set is represented as a path in the digraph
with the symbols either on the edges or on the nodes. The learning
process takes the form of a guided incremental compression of the
data set by means of merging or clustering of the nodes in the
graph. None of these algorithms explicitly makes an estimate of
the data-to-model code. Instead they use heuristics to guide
the model reduction. After a certain number of computational steps
a proposal for a grammar
can be constructed from the current state of the compressed graph.
Examples of such algorithms are SP \cite{Wolff:03-19-193,
DBLP:journals/ngc/Wolff95}, EMILE \cite{ICGI:AdrVer2002},
ADIOS
\cite{solan05languageLearningPNAS}, and a number of DFA induction
algorithms, such as ``Evidence Driven State Merging'' (EDSM),
\cite{ICGI:LanPeaPri98,ECMLPKDD/CFG03}. Related compression-based theories
and applications appear in \cite{LCLMV04,CV07}.
Our results (above and below) do not imply that compression
algorithms improving the MDL code of DFAs
can never work on real life data sets. There is considerable
empirical evidence that there are situations in which they
do work. In those cases specific properties of a restricted class of
languages or data sets must be involved.
Our results are
applicable to the common digraph simplification
techniques used in grammar inference.
The results hold equally for
algorithms that use just positive examples, just negative examples, or both,
using any technique (not just digraph simplification).
\begin{definition}
\rm
A DFA is a tuple $A=(S,Q,q_0,t,F)$, where $S$ is a finite set of {\em input symbols},
$Q$ is a finite set of {\em states}, $t: Q \times S \rightarrow Q$
is the {\em transition function}, $q_0 \in Q$ is the
{\em initial state}, and $F \subseteq Q$ is a set
of {\em final states}.
\end{definition}
The DFA $A$ is started in the initial state $q_0$.
If it is in state $q \in Q$ and receives input symbol $s \in S$
it changes its state to $q' = t(q,s)$. If the machine after zero or more
input symbols, say $s_1, \ldots , s_n$, is driven to a state $q \in F$
then it is said to {\em accept} the word $w=s_1 \ldots s_n$, otherwise
it {\em rejects} the word $w$. The {\em language accepted} by $A$
is $L(A)= \{w: w$ is accepted by $A\}$. We denote $L^n(A)= L(A) \bigcap \{0,1\}^n$.
We can effectively enumerate the DFAs as $A_1, A_2 , \ldots$ in
lexicographic length-increasing order. This enumeration we call
the {\em standard enumeration}.
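For concreteness, a minimal Python implementation of a DFA and of the
slice $L^n(A)$ (the class and the example automaton are ours; the
brute-force computation of the slice is exponential in $n$ and only
illustrative):
\begin{verbatim}
from itertools import product

class DFA:
    # A deterministic finite automaton A = (S, Q, q0, t, F).
    def __init__(self, S, Q, q0, t, F):
        self.S, self.Q = set(S), set(Q)
        self.q0, self.t, self.F = q0, dict(t), set(F)

    def accepts(self, word):
        q = self.q0
        for s in word:
            q = self.t[(q, s)]
        return q in self.F

    def slice(self, n):
        # L^n(A): the accepted words of length n (brute force).
        return {''.join(w) for w in product(sorted(self.S), repeat=n)
                if self.accepts(''.join(w))}

# Example: binary strings containing an even number of 1s.
even = DFA({'0', '1'}, {0, 1}, 0,
            {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}, {0})
assert even.accepts('1010') and not even.accepts('10')
assert even.slice(2) == {'00', '11'}
\end{verbatim}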
The first thing we need to do is to show that all laws that hold
for finite-set models also hold for DFA models, so all theorems, lemmas,
and remarks above, both positive and negative, apply.
To do so, we show that for every data sample $D \subseteq \{0,1\}^n$
and a contemplated finite set model for it, there
is an almost equivalent DFA.
\begin{lemma}\label{prop.1}
Let $d=|D|$, $M'=M-\{\#d\}$ and $m=|M'|$.
For every $D \subseteq M' \subseteq \{0,1\}^n$ there is
a DFA $A$ with $L^n(A)=M'$ such that
$K(A,n) \leq K(M')+ O(1)$ (which implies $K(A,d,n) \leq K(M)+ O(1)$), and
$\delta(D \mid M) \leq \delta(D \mid A,d,n) +O(1)$.
\end{lemma}
\begin{proof}
Since $M'$ is a finite set of binary strings, there is a DFA
that accepts it, by elementary formal language theory.
Define DFA $A$ such that $A$ is the first DFA in the standard
enumeration for which $L^n(A)=M'$. (Note that we can infer $n$ from both
$M$ and $M'$.)
Hence, $K(A,n) \leq K(M')+O(1)$ and $K(A,d,n) \leq K(M)+O(1)$.
Trivially, $\log {m \choose d} = \log {{|L^n(A)|} \choose d}$
and $K(D \mid A,n) \leq K(D \mid M')+O(1)$,
since $M'$ can be computed from $A$ and $n$.
This implies $K(D \mid A,d,n) \leq K(D \mid M)+O(1)$, so that
$\delta(D \mid M) \leq \delta(D \mid A,d,n)+O(1)$.
\end{proof}
Lemma~\ref{prop.2} is the converse of Lemma~\ref{prop.1}:
for every data sample $D$ and a contemplated
DFA model for it,
there is a finite set model for $D$ that has no worse complexity,
randomness deficiency, and worst-case data-to-model code for $D$,
up to additive logarithmic precision.
\begin{lemma}\label{prop.2}
Use the terminology of Lemma~\ref{prop.1}.
For every $D \subseteq L^n(A) \subseteq \{0,1\}^n$,
there is a model $M \sqsupset D$
such that $\log {m \choose d} = \log {{|L^n(A)| } \choose d}$,
$K(M') \leq K(A,n)+O(1)$ (which implies
$K(M) \leq K(A,d,n)+O(1)$), and
$\delta(D \mid M) \leq \delta(D \mid A,d,n) -O(1)$.
\end{lemma}
\begin{proof}
Choose $M' =L^n(A)$. Then,
$\log {m \choose d} = \log {{| L^n(A)|} \choose d}$
and both $K(M') \leq K(A,n)+O(1)$ and $K(M) \leq K(A,d,n)+O(1)$.
Since also $K(D \mid A,d,n) \leq K(D \mid M)+O(1)$,
since $A$ may have information about $D$ beyond $M$, we have
$\delta(D \mid A,d,n) \geq \delta(D \mid M)+O(1)$.
\end{proof}
\subsection{MDL Estimation}
To analyze the MDL estimation for DFAs, given a data sample, we first
fix details of the code. For the model code, the coding of the DFA,
we encode as follows. Let $A=(Q,S,t,q_0,F)$ with $q=|Q|$, $s=|S|$,
and $f=|F|$.
By renaming the states we can always arrange that $F$ consists of the
last $f$ states of $Q$. There
are $q^{sq}$ different possibilities for $t$,
$q$ possibilities for $q_0$, and $q$ possibilities for $f$.
Altogether, for every choice of $q,s$ there are
$\leq q^{qs+2}$ distinct DFAs, some of which may accept the same languages.
{\bf Small Model Cost but Difficult to Decode:}
We can enumerate the DFAs by setting $i := 2, 3, \ldots,$ and for every
$i$ considering all partitions $i=q + s$ into two positive
integer summands, and for every particular choice of $q,s$ considering every
choice of final states, transition function, and initial state.
This way we obtain a standard enumeration $A_1, A_2, \ldots$ of all DFAs,
and, given the index $j$ of a DFA $A_j$ we can retrieve the particular
DFA concerned, and for every $n$ we can find $L^n(A_j)$.
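The counting can be checked mechanically for tiny parameters. The sketch
below (our illustration; the nesting order is arbitrary and is not the
lexicographic standard enumeration) generates every combination of
transition table, initial state and final-state count, and confirms the
bound of $q^{qs+2}$ DFAs for each choice of $q,s$.
\begin{verbatim}
from itertools import product

def count_dfas(q, s):
    count = 0
    for t in product(range(q), repeat=q * s):   # q^(qs) transition tables
        for q0 in range(q):                     # q initial states
            for f in range(1, q + 1):           # q final-state suffix sizes
                count += 1
    return count

for q, s in [(2, 2), (3, 2), (2, 3)]:
    assert count_dfas(q, s) == q**(q * s + 2)
\end{verbatim}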
{\bf Larger Model Cost but Easy to Decode:}
We encode a DFA $A$ with $q$ states and $s$ symbols self-delimitingly by
\begin{itemize}
\item
The encoding of the number of symbols $s$ in self-delimiting format
in $\lceil \log s \rceil + 2 \lceil \log \log s \rceil +1$ bits;
\item
The encoding of the number of states $q$ in self-delimiting format
in $\lceil \log q \rceil + 2\lceil \log \log q \rceil +1$ bits;
\item
The encoding of the set of final states $F$ by indicating
that all states numbered $q-f+1, q-f+2, \ldots , q$ are final states,
by just giving $q-f$ in $\lceil \log q \rceil$ bits;
\item
The encoding of the initial state $q_0$ by giving its index
in the states $1, \ldots , q$, in $\lceil \log q \rceil$ bits; and
\item
The encoding of the transition function $t$ in lexicographic
order of $Q \times S$ in $\lceil \log q \rceil$ bits per transition,
which takes
$qs \lceil \log q \rceil$ bits altogether.
\end{itemize}
Altogether, this encodes $A$ in a self-delimiting format in
$(qs+3) \lceil \log q \rceil + 2 \lceil \log \log q \rceil
+\lceil \log s \rceil + 2 \lceil \log \log s \rceil +O(1) \approx
(qs+4) \log q + 2 \log s$ bits. Thus, we reckon the model cost of
a $(q,s)$-DFA as $m(q,s)=(qs+4) \log q + 2 \log s$ bits.
This cost has the advantage that it is easy to decode and
that $m(q,s)$ is an easy function of $q,s$. We will assume this model cost.
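To make the bookkeeping concrete, the exact bit length of this
easy-to-decode format and the approximation $m(q,s)$ can be tabulated as
follows (a small sketch of ours, assuming $q,s \geq 2$ so that the
iterated logarithms are defined):
\begin{verbatim}
from math import ceil, log2

def exact_bits(q, s):
    """Exact length of the self-delimiting DFA format (q, s >= 2)."""
    lq, ls = ceil(log2(q)), ceil(log2(s))
    llq, lls = ceil(log2(log2(q))), ceil(log2(log2(s)))
    return (ls + 2*lls + 1) + (lq + 2*llq + 1) + lq + lq + q*s*lq

def m(q, s):
    """Approximate model cost m(q, s) = (qs+4) log q + 2 log s."""
    return (q*s + 4) * log2(q) + 2 * log2(s)

for q in (2, 8, 64):          # e.g. q = 64, s = 2: 795 exact vs 794.0
    print(q, exact_bits(q, 2), round(m(q, 2), 1))
\end{verbatim}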
{\bf Data-to-model cost:}
Given a DFA model $A$, the word length $n$ (encoded
in $\log n + 2 \log \log n$ bits, which we simplify to $2 \log n$ bits),
and the size $d$ of the data sample $D \subseteq \{0,1\}^n$,
we can describe $D$ by its index $j$ in the set of $d$ choices out of $l=|L^n(A)|$
items, that is, up to rounding upwards, $\log {l \choose d}$ bits.
For $0 < d \leq l/2$ this can be estimated by $l H(d/l) - \log l/2 +O(1)
\leq \log {l \choose d} \leq l H(d/l)$, where $H(p)= p \log 1/p +
(1-p) \log 1/(1-p)$
($0 < p < 1$) is Shannon's entropy function.
For $d=1$ or $d=l$ we set the data-to-model cost to $1 + 2 \log n$,
for $1 < d \leq l/2$
we set it to $2 \log n + l H(d/l)$ (ignoring the possible
saving of a $\log l/2$ term), and for $l/2 <d <l$
we set it to the cost of $d'=l-d$. This reasoning brings us to the following
MDL cost of a data sample $D$ for DFA model $A$:
\begin{definition}
\rm
The {\em MDL code length} of a data sample $D$ of $d$
strings of length $n$, given $d$, for a DFA model $A$ such that
$D \subseteq L^n(A)$ denoting $l=|L^n(A)|$,
is given by
\[
MDL (D,A|d)= (qs+4) \log q + 2 \log s + 2 \log n + l H(d/l).
\]
If $d$ is not given we write $MDL (D,A)$.
\end{definition}
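The definition is straightforward to evaluate numerically; the sketch
below (ours) also compares the exact data-to-model cost
$\log {l \choose d}$ with the entropy estimates of the previous
paragraph.
\begin{verbatim}
from math import lgamma, log, log2

def H(p):
    """Shannon's entropy function; H(0) = H(1) = 0 by convention."""
    return 0.0 if p in (0.0, 1.0) else -p*log2(p) - (1-p)*log2(1-p)

def log2_binom(l, d):
    """Exact log_2 of (l choose d), computed via log-gamma."""
    return (lgamma(l+1) - lgamma(d+1) - lgamma(l-d+1)) / log(2)

def mdl(q, s, n, l, d):
    """MDL(D, A | d) for a (q,s)-DFA A with |L^n(A)| = l and |D| = d."""
    return (q*s + 4)*log2(q) + 2*log2(s) + 2*log2(n) + l*H(d/l)

l, d = 512, 128    # lower estimate, exact value, upper estimate l H(d/l)
print(l*H(d/l) - log2(l)/2, log2_binom(l, d), l*H(d/l))
\end{verbatim}
For $l=512$ and $d=128$ the three printed values agree to within a few
bits, as the $O(1)$ terms allow.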
\subsection{Randomness Deficiency Estimation}
Given data sample $D$ and DFA $A$ with
$D \subseteq L^n(A) \subseteq \{0,1\}^n$,
we can estimate the randomness deficiency.
Again, use $l= |L^n(A)|$ and $d=|D|$.
By \eqref{eq:randomness-deficiency}, the randomness deficiency is
\[
\delta (D \mid A,d,n) = \log {l \choose d} - K(D \mid A,d,n).
\]
Then, substituting the estimate for $\log {l \choose d}$ from the previous
section, up to logarithmic additive terms,
\[
\delta (D \mid A,d,n) = l H(d/l) - K(D \mid A,d,n).
\]
Thus, by finding a computable upper bound for $K(D \mid A,d,n)$,
we can obtain a computable lower bound on the randomness
deficiency $\delta (D \mid A,d,n)$ that expresses the fitness
of a DFA model $A$ with respect to data sample $D$.
\subsection{Less MDL Code Length Doesn't Mean Better Model}
\label{sect.lmdl}
The task of finding the smallest {DFA} consistent with a set of
positive examples is trivial. This is the universal DFA accepting
every example (all of $\{0,1\}^n$). Clearly, such a
universal DFA will in many cases have a poor generalization error
and randomness deficiency. As we have seen, optimal randomness deficiency
implies an optimal fitting model to the data sample. It is to be expected
that the best fitting model gives the best generalization error in the case
that the future data are as typical to this model as the data sample is.
We show that the
randomness deficiency behaves independently of the MDL code, in the sense
that the randomness deficiency can either grow or shrink with a
reduction of the length of the MDL code.
We show this by example.
Let the set $D$ be a sample set consisting of 50\% of all binary
strings of length $n$ with an even number of 1's. Note that the
number of strings with an even number of 1's equals the number of strings
with an odd number of 1's, so $d=|D|= 2^{n}/4$.
Initialize with a DFA $A$ such that $L^n(A)=D$. We can obtain
$D$ directly from $ A,n$, so we have $K(D \mid A,n)=O(1)$, and since $d=l$
($l=|L^n(A)|$) we have $\log {l \choose d} =0$, so that
altogether $\delta (D \mid A,d,n)= -O(1)$,
while $MDL(D,A) = MDL(D,A|d)+O(1)
= (qs+4) \log q + 2 \log s + 2 \log n +O(1)=
(2q+4) \log q + 2 \log n +O(1)$, since $s=2$.
(The first equality follows since we can obtain $d$ from $n$.
We obtain a negative constant randomness deficiency which we take
to be as good as 0 randomness deficiency. All arguments hold up to
an $O(1)$ additive term anyway.)
Without loss of generality we can assume that the MDL algorithm
involved works by splitting or merging nodes of the digraphs
of the produced sequence of candidate DFAs. But the argument
works for every MDL algorithm, whatever technique it uses.
{\em Initialize:} Assume that we start our MDL estimation
with the trivial DFA $A_0$ that literally encodes
all $d$ elements of $D$ as a binary directed tree with $q$ nodes.
Then, $2^{n-1} - 1 \leq q \leq 2^{n+1}-1$, which yields
\begin{align*}
&MDL (D,A_0) \geq 2^n n
\\& \delta (D \mid A_0,d,n) \approx 0.
\end{align*}
The last approximate equality holds since $d=l$, and hence $\log {l \choose d} =0$
and $K(D \mid A_0,d,n)=O(1)$.
Since the randomness deficiency
$ \delta (D \mid A_0,d,n) \approx 0$, it follows that $A_0$ is
a best fitting model for $D$. Indeed, it represents all conceivable
properties of $D$ since it literally encodes $D$. However, $A_0$
does not achieve the optimal MDL code.
{\em Better MDL estimation:}
In a later MDL estimation we improve the MDL code by inferring
the parity DFA $A_1$ with two states ($q=2$) that checks the
parity of 1's in a sequence. Then,
\begin{align*}
&MDL (D,A_1) \leq 8 + 2\log n+ \log {{2^{n-1}} \choose {2^{n-2}}} \approx
2^{n-1} - \frac{1}{4}n
\\& \delta (D \mid A_1,d,n) = \log {{2^{n-1}} \choose {2^{n-2}}}
- K(D \mid A_1,d,n)
\\&\approx 2^{n-1} - \frac{1}{4}n- K(D \mid A_1,d,n)
\end{align*}
We now consider two different instantiations of $D$, denoted
as $D_0$ and $D_1$. The first one is regular data, and the
second one is random data.
{\bf Case 1, regular data:}
Suppose $D=D_0$ consisting of the lexicographic first 50\% of all $n$-bit
strings with an even number of occurrences of 1's.
Then $K(D_0 \mid A_1,d,n)=O(1)$ and
\[
\delta (D_0 \mid A_1,d,n) = 2^{n-1} - O(n).
\]
In this case, even though DFA $A_1$ has a much better MDL code than DFA $A_0$
it has nonetheless a much worse fit since its randomness deficiency
is far greater.
{\bf Case 2, random data:}
Suppose $D$ is equal to $D_1$, where $D_1$ is a random subset consisting of 50\%
of the $n$-bit strings with even number of occurrences of 1's.
Then,
$K(D_1 \mid A_1,d,n) = \log {{2^{n-1}} \choose {2^{n-2}}}+O(1)
\approx 2^{n-1} -\frac{1}{4} n$, and
\[
\delta (D_1 \mid A_1,d,n) \approx 0.
\]
In this case, DFA $A_1$ has a much better MDL code than DFA $A_0$,
and it has equally good fit since both randomness deficiencies
are about 0.
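The MDL side of this comparison is easy to reproduce numerically; the
randomness deficiencies, involving $K$, are of course not computable,
which is precisely the point of the two cases above. A sketch of ours,
taking $q=2^{n+1}$ for the tree-shaped DFA $A_0$ and repeating the MDL
cost function of the earlier sketch so that the snippet runs on its own:
\begin{verbatim}
from math import log2

def H(p):
    return 0.0 if p in (0.0, 1.0) else -p*log2(p) - (1-p)*log2(1-p)

def mdl(q, s, n, l, d):
    # (the Definition's special case d = l adds one bit, ignored here)
    return (q*s + 4)*log2(q) + 2*log2(s) + 2*log2(n) + l*H(d/l)

n = 12
d = 2**n // 4                  # |D|: half of the even-parity strings
# A_0 stores D literally as a binary tree; L^n(A_0) = D, so l = d
mdl_A0 = mdl(q=2**(n + 1), s=2, n=n, l=d, d=d)
# A_1 is the two-state parity DFA; l = |L^n(A_1)| = 2^(n-1)
mdl_A1 = mdl(q=2, s=2, n=n, l=2**(n - 1), d=d)
print(round(mdl_A0), round(mdl_A1))   # about 2.1e5 bits vs 2.1e3 bits
\end{verbatim}
The code exhibits the large gap $MDL(D,A_0) \gg MDL(D,A_1)$; whether the
shorter code also fits better depends on which instantiation $D_0$ or
$D_1$ is at hand, and that is invisible to any computation of this kind.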
\begin{remark}
We conclude that improved MDL estimation of DFAs for multiple data
samples doesn't necessarily result in better models, but can do so
nonetheless.
\end{remark}
\begin{remark}[Shortest Model Cost]\label{rem.smc}
\rm
By Theorem~\ref{theo.approxim} we know that if, in the process of MDL estimation
by a sequence of significantly decreasing MDL codes,
a candidate DFA is represented by its shortest program,
then the following candidate DFA which improves the MDL estimation
is actually a model of at least as good fit as the preceding one.
Let us look at an example:
Suppose we start with DFA $A_2$ that accepts all strings
in $\{0,1\}^*$. In this case we have $q=1$ and
\begin{align*}
&MDL (D_0,A_2) = \log {{2^n} \choose {2^{n-2}}} +O(\log n)
\\& \delta (D_0 \mid A_2,d,n) = \log {{2^n} \choose {2^{n-2}}}-O(1).
\end{align*}
Here $ \log {{2^n} \choose {2^{n-2}}} = 2^n H(\frac{1}{4})-O(n)
\approx \frac{4}{5} \cdot 2^{n}
-O(n)$, since $H(\frac{1}{4}) \approx \frac{4}{5}$.
Suppose the subsequent candidate DFA is the parity machine $A_1$.
Then,
\begin{align*}
&MDL (D_0,A_1)= \log {{2^{n-1}} \choose {2^{n-2}}} +O(\log n)
\\&
\delta (D_0 \mid A_1,d,n) \approx
\log {{2^{n-1}} \choose {2^{n-2}}}
- O(1),
\end{align*}
since $K(D_0 \mid A_1,d,n)=O(1)$. Since
$\log {{2^{n-1}} \choose {2^{n-2}}}
=2^{n-1}-O(n)$, we have
$MDL (D_0,A_1 ) \approx \frac{5}{8} MDL (D_0,A_2 )$,
and
$\delta (D_0 \mid A_1,d,n) \approx
\frac{5}{8} \delta (D_0 \mid A_2,d,n)$.
Therefore, the improved MDL cost from model $A_2$ to
model $A_1$ is accompanied by an improved model fitness since
the randomness deficiency decreases as well. This
is forced by Theorem~\ref{theo.approxim}, since
both DFA $A_1$ and DFA $A_2$ have $K(A_1),K(A_2)= O(1)$.
That is, the DFAs are represented and penalized according
to their shortest programs (a fortiori of length $O(1)$) and therefore
improved MDL estimation increases the fitness of the
successive DFA models significantly.
\end{remark}
\section{Introduction}
In a helicity formalism the simplest Yang-Mills amplitudes are the
MHV amplitudes where precisely two external gluons have negative
helicity and the remaining legs all have positive helicity.
If legs $j$ and $k$ have negative helicity, the colour-ordered~\cite{TreeColour} partial
amplitude takes the form~\cite{ParkeTaylor},
$$
\eqalign{
A^{\rm tree}_n(1^+,\ldots,j^-,\ldots,k^-,
\ldots,n^+)\,
&\;=\;\ i\, { {\spa{j}.{k}}^4 \over \spa1.2\spa2.3\cdots\spa{n}.1 }\,.
\cr}
\refstepcounter{eqnumber\label{ParkeTaylor}
$$
We use the notation $\spa{j}.{l}\equiv \langle
j^- | l^+ \rangle $, $\spb{j}.{l} \equiv \langle j^+ |l^- \rangle
$, with $| i^{\pm}\rangle $ being massless Weyl spinors with momentum
$k_i$ and chirality $\pm$~\cite{SpinorHelicity,ManganoReview}. The
spinor products are related to momentum invariants by
$\spa{i}.j\spb{j}.i=2k_i \cdot k_j\equiv s_{ij}$ with
$\spa{i}.j^*=\spb{j}.i$. As in twistor-space studies we define,
$$
\lambda_i \;=\; | i^+\rangle
\; , \;\;\;
\bar\lambda_i \;= \; | i^-\rangle
\,.
\refstepcounter{eqnumber
$$
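These conventions are easy to realise numerically, and we will use such
checks for illustration at several points below. The following Python
sketch is ours and not part of the original construction; it uses
light-cone components $k^\pm = k^0 \pm k^3$ and $k_\perp = k^1 + ik^2$,
with $\spb{i}.{j}$ defined so that $\spa{i}.j\spb{j}.i = s_{ij}$ holds
for real positive-energy null momenta.
\begin{verbatim}
import numpy as np

def spinor(k):
    """Holomorphic spinor for a real null k = (E, kx, ky, kz), k^+ > 0."""
    kp = k[0] + k[3]
    return np.array([np.sqrt(kp), (k[1] + 1j*k[2]) / np.sqrt(kp)])

def ang(li, lj):                      # <i j>
    return li[0]*lj[1] - li[1]*lj[0]

def sqb(li, lj):                      # [i j] = conj(<j i>) here
    return np.conj(ang(lj, li))

def null(E, th, ph):                  # massless momentum from angles
    return np.array([E, E*np.sin(th)*np.cos(ph),
                        E*np.sin(th)*np.sin(ph), E*np.cos(th)])

ki, kj = null(3.0, 0.4, 1.1), null(5.0, 2.0, -0.7)
li, lj = spinor(ki), spinor(kj)
sij = 2*(ki[0]*kj[0] - ki[1:] @ kj[1:])    # 2 k_i . k_j
assert abs(ang(li, lj)*sqb(lj, li) - sij) < 1e-10
\end{verbatim}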
Inspired by the duality between twistor string theory and
Yang-Mills~\cite{WittenTopologicalString} (and generalising a
previous description of the simplest gauge theory amplitudes by
Nair~\cite{Nair1988bq}), Cachazo, Svr\v{c}ek and Witten proposed a
reformulation of perturbation theory in terms of off-shell
MHV-vertices~\cite{Cachazo:2004kj}, which can be depicted,
\vspace{0.5cm}
\begin{center}
\begin{picture}(100,100)(55,-40)
\SetWidth{0.7} \Line(30,30)(0,30) \Line(30,30)(10,50)
\Line(30,30)(40,50) \Line(30,30)(10,10) \Line(30,30)(50,30)
\Line(120,30)(100,50) \Line(120,30)(140,50) \Line(120,30)(100,30)
\Line(120,30)(140,30) \Line(120,30)(120,10)
\Line(240,30)(220,30) \Line(240,30)(260,50) \Line(240,30)(260,10)
\Line(240,30)(270,30)
\Vertex(30,30){2} \Vertex(120,30){2} \Vertex(240,30){2}
\Text(-35,32)[c]{{\huge $\sum$}} \Text(7,0)[c]{$k_{i_1}^-$}
\Text(-9,32)[c]{$k_{i_2}^+$} \Text(7,61)[c]{$k_{i_3}^+$}
\Text(39,55)[c]{$\ldots$} \Text(58,29)[c]{$\times$}
\Text(93,29)[c]{$\times$}
\Text(75,29)[c]{$\displaystyle\frac{1}{p_{j_1}^2}$}
\Text(160,29)[c]{$\times\ \ \ldots$} \Text(212,29)[c]{$\times$}
\end{picture}
\end{center}
\vspace{-1.2cm}
The off-shell continuation for a leg of momentum $p$
was achieved by replacing
$\lambda(p)$ by,
$$
\lambda_a(p )\;=\; p_{a\dot a } \bar \eta^{\dot a}\,, \refstepcounter{eqnumber
$$
where $\bar \eta^{\dot a}$ is an arbitrary reference spinor.
While individual CSW diagrams depend on $\bar\eta$, the full amplitude is
$\bar\eta$-independent. This reformulation has led to or inspired a variety of
calculational advances, both for tree level scattering~\cite{trees}
and at loop level~\cite{oneloop} in Yang-Mills theory.
This reformulation has been
demonstrated to reproduce all known results for gluon scattering at tree level
and, often,
gives relatively simple expressions for these amplitudes. Although originally
given for gluon scattering only, these rules have been shown to extend to
other types of massless particle~\cite{CSW:matter} and indeed to
massive particles~\cite{CSWmassive}.
It has been shown~\cite{Brandhuber:2004yw,Quigley:2004pw,Bedford:2004py}
that, with the correct off-shell prescription,
these vertices can be used to reproduce known one-loop results~\cite{BDDKa,BDDKb}
in supersymmetric theories.
In an alternate approach to computing tree level amplitudes, Britto,
Cachazo, Feng and Witten~\cite{Britto:new} obtained a recursion relation
based on analytically shifting a pair of external legs,
$$
\eqalign{
\lambda_i \;\longrightarrow\; \lambda_i \;+\;z \lambda_j\,,
\;\;\;
\bar \lambda_j \;\longrightarrow\; \bar\lambda_j \;-\;z \bar\lambda_i\,,
\cr}
\refstepcounter{eqnumber
$$
and determining the physical amplitude, $A_n(0)$, from the poles in the
shifted amplitude, $A_n(z)$. This leads to a recursion relation in the number of external legs, $n$, of
the form,
$$
A_n(0) \;=\; \sum_\alpha \hat A_{n-k_\alpha+2}(z_\alpha)\times {
i \over P_\alpha^2}
\times \hat A_{k_\alpha}(z_\alpha)\,,
\refstepcounter{eqnumber
$$
where the factorisation is only on these poles, $z_\alpha$,
where legs $i$ and $j$ are connected to
different sub-amplitudes. This is depicted below:
\begin{center}
\begin{picture}(100,100)(15,-40)
\SetWidth{0.7}
\Line(10,30)(30,30)
\Line(30,50)(30,30)
\Line(45,45)(30,30)
\Line(15,45)(30,30)
\SetWidth{0.7}
\Line(55,30)(30,30)
\SetWidth{0.7}
\Vertex(26,40){2}
\SetWidth{1}
\Line(30,10)(30,30)
\SetWidth{0.7}
\GCirc(30,30){5}{0.8}
\SetWidth{0.7}
\Line(75,30)(100,30)
\SetWidth{0.7}
\Line(100,50)(100,30)
\Line(85,45)(100,30)
\Line(115,45)(100,30)
\Line(100,30)(120,30)
\Vertex(104,40){2}
\SetWidth{1}
\Line(100,10)(100,30)
\SetWidth{0.7}
\GCirc(100,30){5}{0.8}
\Text(65,29)[c]{$\displaystyle\frac{i}{P_\alpha^2}$}
\Text(31,0)[c]{{\rm $\hat k_i$}}
\Text(101,0)[c]{{\rm $\hat k_j$}}
\Text(-20,30)[c]{\Huge$\sum$}
\Text(-20,10)[c]{$\alpha$}
\end{picture}
\end{center}
\vspace{-0.8cm}
These recursion relations also give relatively compact formulae for
tree amplitudes~\cite{Luo:2005rx,SplitHelicity}.
Recursion relations based on analyticity can also be used at loop level both to
calculate rational terms~\cite{BDKrecursionA} and the coefficients of integral
functions~\cite{BBDI2005}.
The factorisation properties of the
amplitudes seem to lie at the heart of both approaches. In both
cases the amplitude is expressed as a sum of its factorisations in a
well specified manner. As such, one might hope to derive the
MHV-vertex formulation by applying an
analytic shift and obtaining a recursion relation.
In ref.~\cite{Kasper} it was demonstrated that such shifts exist and can be
used to derive the MHV vertex approach in gauge theory.
The shift
affects all of the negative helicity legs, $k_{m_i}$,
$$
\bar\lambda_{m_i}\; \to \;\hat{\bar\lambda}_{m_i} \;=\;\bar\lambda_{m_i} \;+\;z r_i \bar \eta\,,
\refstepcounter{eqnumber\label{Kshift}
$$
with the $r_i$ chosen to ensure momentum conservation.
Most of the above developments have been made for gauge
theory amplitudes.
The existence of a BCFW recursion relation for gravity amplitudes
was strongly supported in~\cite{BBSTgravity,CSgravity},
and in this article we construct a
CSW approach using the newly established shift (\ref{Kshift})
under the assumption that gravity amplitudes are sufficiently
well behaved for large values of $z$ in (\ref{Kshift}).
The key ingredient in obtaining the MHV rules is the analytic
structure of the amplitude which also underlies the derivation of the
recursion relations. In this context it becomes clear that these two
formalisms have their roots in the same physical behaviour of on-shell
amplitudes.
\section{Graviton Scattering Amplitudes}
Graviton scattering amplitudes are generally considerably
more complicated than those for gauge theory. To date,
explicit expressions have only been given for the MHV
amplitudes~\cite{BerGiKu,BBSTgravity} and for the six-point NMHV
amplitude~\cite{CSgravity,BDIgravity}. (As for gauge theories,
amplitudes with all helicities identical vanish, as do those with
one different, $M(1^{\pm},2^+,3^+,\cdots, n^+)=0$.)
In principle, gravity amplitudes can be constructed through the
Kawai, Lewellen and Tye (KLT)-relations~\cite{KLT}.
The explicit forms of these, up to six points, are,
$$
\eqalign{\hspace{1.2cm}
M_3^{\rm tree}(1,2,3) \;=\;
&
-iA_3^{\rm tree}(1,2,3)A_3^{\rm tree}(1,2,3)\,,
\cr
M_4^{\rm tree}(1,2,3,4) \;=\;
&
-is_{12}A_4^{\rm tree}(1,2,3,4)A_4^{\rm tree}(1,2,4,3)\,, \label{KLTFour} \cr
M_5^{\rm tree}(1,2,3,4,5) \;=\;&
\; is_{12}s_{34}\ A_5^{\rm tree}(1,2,3,4,5)A_5^{\rm tree}(2,1,4,3,5) \cr
& \hskip 1.9 cm \null
\;+\;i s_{13}s_{24}\ A_5^{\rm tree}(1,3,2,4,5)A_5^{\rm tree}(3,1,4,2,5)\,,
\label{KLTFive} \cr
M_6^{\rm tree}(1,2,3,4,5,6) \;=\; &
-is_{12}s_{45}\ A_6^{\rm tree}(1,2,3,4,5,6)(s_{35}A_6^{\rm tree}(2,1,5,3,4,6)
\cr
& \hskip 1.9 cm \null
\;+\;(s_{34}+s_{35})\ A_6^{\rm tree}(2,1,5,4,3,6)) \;+\; {\cal P}(2,3,4)\,,
\cr}
\refstepcounter{eqnumber\label{KLTSix}
$$
where
${\cal P}(2,3,4)$ represents the sum over
permutations of legs $2,\ 3,\ 4$ and the $A^{\rm tree}_n$ are the tree-level
colour-ordered gauge theory partial amplitudes. We have suppressed
factors of $g^{n-2}$ in the $A^{\rm tree}_n$, as well as a factor
of $(\kappa/2)^{n-2}$ in the gravity amplitude.
This formulation allows results from Yang-Mills theory to be recycled
in theories of gravity and
supergravity~\cite{StringBased,Bern:1998xc,BDWGravity,GravityReview,EffKLT}.
While these relations are directly applicable to tree amplitudes, this
formulation also has implications for loop amplitude calculations,
particularly in unitarity based methods where the tree amplitudes are
used to compute the loop amplitudes~\cite{BDDPR,Bern:1998sv,DunNor}.
Consequently, similar relationships can hold for the coefficients of
integral functions~\cite{BeBbDu}.
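Before discussing the drawbacks of this representation we note that the
four-point relation is simple enough to evaluate directly. The sketch
below is our own check, not taken from the literature: it uses
complexified spinors, imposes momentum conservation by solving for two
of the anti-holomorphic spinors, builds $M_4$ from Parke-Taylor
amplitudes and tests its Bose symmetry under the relabelling
$1\leftrightarrow 2$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

lam  = [rng.normal(size=2) + 1j*rng.normal(size=2) for _ in range(4)]
lamb = [None, None] + [rng.normal(size=2) + 1j*rng.normal(size=2)
                       for _ in range(2)]
K = sum(np.outer(lam[i], lamb[i]) for i in (2, 3))
R = np.linalg.solve(np.column_stack([lam[0], lam[1]]), -K)
lamb[0], lamb[1] = R[0], R[1]         # now sum_i lam_i lamb_i^T = 0

def ang(i, j): return lam[i][0]*lam[j][1] - lam[i][1]*lam[j][0]
def sqb(i, j): return lamb[i][0]*lamb[j][1] - lamb[i][1]*lamb[j][0]

def A4(o):    # Parke-Taylor; legs 1 and 2 negative (indices 0 and 1)
    den = np.prod([ang(o[k], o[(k+1) % 4]) for k in range(4)])
    return 1j * ang(0, 1)**4 / den

M4      = -1j*ang(0, 1)*sqb(1, 0) * A4((0, 1, 2, 3)) * A4((0, 1, 3, 2))
M4_swap = -1j*ang(1, 0)*sqb(0, 1) * A4((1, 0, 2, 3)) * A4((1, 0, 3, 2))
assert np.isclose(M4, M4_swap)        # graviton Bose symmetry
\end{verbatim}
The $3\leftrightarrow 4$ symmetry is manifest in eq.~(\ref{KLTFour});
the $1\leftrightarrow 2$ check above is mainly a consistency test of
signs and orderings.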
Although, in principle, the KLT relations can be used to calculate gravity
tree amplitudes, they have several undesirable features. Firstly, the
factorisation structure is rather obtuse. The Yang-Mills tree
amplitudes contain single poles, so we might expect unphysical double
poles to appear in the sum. These are actually canceled by the
multiplying momentum factors, but often in a non-trivial manner.
Secondly, the expressions do not tend to be compact as the permutation sums
grow rather quickly with the number of points. In fact,
the Berends, Giele and Kuijf (BGK)
form of the MHV gravity amplitude~\cite{BerGiKu},
$$
\eqalign{
M_n^{\rm tree}
&(1^-,2^-,3^+,\cdots, n^+)
\cr
&\;=\;-i\spa1.2^8\times
\biggl[{ \spb1.2\spb{n-2}.{n-1} \over \spa1.{n-1} N(n) }
\Bigl( \prod_{i=1}^{n-3} \prod_{j=i+2}^{n-1} \spa{i}.j \Bigr)
\prod_{l=3}^{n-3} (-[n|K_{l+1,n-1}|l\rangle)
\cr&\hspace{8.7cm}+{\cal P}(2,3,\cdots,n-2)
\biggr]\,,
\cr}
\refstepcounter{eqnumber\label{BGKform}
$$
is rather more compact than that of the KLT sum (as is the expression
in~\cite{BBSTgravity}.)
In the above we use the definitions,
$$
\BBRQ k {{K}_{i,j}} l \;\equiv\; \BRQ {k^+} {\Slash{K}_{i,j}} {l^+} \;\equiv\; \BRQ
{l^-} {\Slash{K}_{i,j}} {k^-} \;\equiv\; \langle l |{{K}_{i,j}}| k] \;\equiv\; \sum_{a=i}^j\spb k.a\spa a.l\,, \refstepcounter{eqnumber
$$
and $N(n)=\prod_{1\leq i<j \leq n} \spa{i}.{j}$.
Both the KLT form of the MHV amplitude~(\ref{KLTSix}) and the above
form~(\ref{BGKform}) display a feature not shared by the Yang-Mills
expressions: they not only depend on the holomorphic variables
$\lambda$, but also on the $\bar\lambda$ - within the $s_{ij}$ for the
KLT expression and explicitly in the BGK expression. In both cases
this dependence is polynomial in the numerator. This feature
complicates the twistor space structure of any potential form of a MHV
vertex for gravity. For Yang-Mills, the holomorphic vertex
corresponds simply to points lying on a line in twistor space. For
gravity the picture will be of points lying on the ``derivative of a
$\delta$-function''~\cite{WittenTopologicalString}. The practical
difference is that both $\lambda(q)$ and $\bar\lambda(q)$ must be
correctly continued off-shell. (The exception to this is the
three-point vertex for which the gravity MHV expression is
holomorphic.) Various attempts have been made to find the off-shell
continuation~\cite{GBgravity,ZhuGrav}. Despite the failure to find a
MHV vertex formulation, gravity amplitudes are amenable to recursive
techniques~\cite{BBSTgravity,CSgravity}. In~\cite{NairGravity} a
current algebra formulation was demonstrated for the MHV gravity
amplitudes which also suggests that a MHV vertex might exist.
\section{NMHV Graviton Scattering Amplitudes}
We shall demonstrate the off-shell MHV vertex for gravity using the
analytic structure of the amplitudes with three negative helicity legs
(known as ``next-to-MHV'' or NMHV amplitudes). The shift
of~\cite{Kasper} allows us to rewrite the NMHV amplitudes as products
of MHV-amplitudes and thus gives a CSW type expansion for these
amplitudes directly, from which we can identify the off-shell gravity
MHV-vertices.
Let us start by considering a generic $n$-point NMHV graviton amplitude
$M_n(m_1^-,\break m_2^-, m_3^-, \cdots ,n^+)$, where
we label the three negative helicity legs $1$, $2$ and $3$ by $m_i$.
We can make the same continuation as in the Yang-Mills case,
$$
\eqalign{
\hat{\bar\lambda}_{m_1} \;=\;\bar\lambda_{m_1} \;+\;z \spa{m_2}.{m_3} \bar
\eta\,,
\cr
\hat{\bar\lambda}_{m_2} \;=\;\bar\lambda_{m_2} \;+\;z \spa{m_3}.{m_1} \bar
\eta\,,
\cr
\hat{\bar\lambda}_{m_3} \;=\;\bar\lambda_{m_3} \;+\;z \spa{m_1}.{m_2} \bar
\eta\, ,
\cr}
\refstepcounter{eqnumber\label{ourshift}
$$
which shifts the momentum of the negative helicity legs,
$$
\hat{k}_{m_i}(z) \;=\; \lambda_{m_i}\left(\bar \lambda_{m_i}
\;+\;z\spa{m_{i-1}}.{m_{i+1}} \bar \eta\right)\,,
\refstepcounter{eqnumber
$$
but leaves them on-shell, $k_{m_i}(z)^2=0$, while the combination
$k_{m_1}(z)+k_{m_2}(z)+k_{m_3}(z)$ is independent of $z$.
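That the shifted momenta sum to the unshifted total is just the Schouten
identity, which a short numerical check (our sketch) makes explicit:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def ang(a, b): return a[0]*b[1] - a[1]*b[0]

l1, l2, l3 = (rng.normal(size=2) + 1j*rng.normal(size=2)
              for _ in range(3))
# the z-dependent part of k_1 + k_2 + k_3 is proportional to
# l1 <23> + l2 <31> + l3 <12>, which vanishes identically (Schouten)
combo = l1*ang(l2, l3) + l2*ang(l3, l1) + l3*ang(l1, l2)
assert np.allclose(combo, 0)
\end{verbatim}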
Under the shift we obtain the analytic continuation of the amplitude
$M_n(z)=\hat M_n$
into the complex plane. We use a ``hat'' to distinguish the
unshifted objects, $a$, from the shifted ones, $\hat a$.
For a shifted amplitude we can evaluate the following
contour integral at infinity,
$$
\frac{1}{2\pi i} \oint {dz\over z}M_n(z) \;=\; C_\infty
\;=\; M_n(0) + \sum_\alpha {\rm Res}_{z=z_\alpha}{M_n(z) \over z}\,.
\refstepcounter{eqnumber
$$
If $M_n(z)$ is rational with simple poles at points $z_\alpha$
and $C_\infty$ vanishes,
$M_n(0)$
can be expressed in terms of residues,
$$
M_n(0) \;=\; -\sum_{\alpha} \, {\rm Res}_{z=z_\alpha} {M_n(z)\over z }\,.
\refstepcounter{eqnumber
$$
The first condition is satisfied as a result of the general
factorisation properties of amplitudes; the second, however,
is difficult to prove in general
for gravity amplitudes.
The shifted amplitude has poles in $z$ whenever a momentum invariant
$\hat P^2(z)$ vanishes. Given the form of the shift, all momentum
invariants apart from those containing all or none of the negative
helicities are $z$-dependent. Thus the NMHV amplitudes have
factorisations where two of the negative helicity legs lie on one
side and one on the other. For the above shift it can be checked that
all factorisations involving the MHV googly 3-point amplitude $M(-++)$
vanish. All poles of the amplitude must therefore factorise as,
$$
M^{\rm MHV}(m_{i_1}^-,\cdots , P^-) \times { i \over P^2 }\times M^{\rm
MHV}((-P)^+, m_{i_2}^-,m_{i_3}^-,\cdots )\,,
\refstepcounter{eqnumber
$$
for $i_k=1,\ 2$ or $3$, as we expect for a CSW-type expansion.
$\hat P^2(z)$ vanishes linearly in $z$, so $M_n(z)$ has simple
poles when $z_\alpha$ satisfies,
$$
\hat P^2 \;=\; P^2 +z_\alpha\spa{m_{i_2}}.{m_{i_3}} [\eta | P | m_{i_1}
\rangle=0\,.
\refstepcounter{eqnumber
$$
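Numerically, the pole position can be located without committing to
bracket conventions: writing momenta as $2\times 2$ bispinors, $P^2$ is
proportional to $\det P$, and $\det \hat P(z)$ is exactly linear in $z$
because the shift adds a rank-one matrix. A sketch of ours:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

def spin(): return rng.normal(size=2) + 1j*rng.normal(size=2)
def ang(a, b): return a[0]*b[1] - a[1]*b[0]

l_m1, lb_m1, eta = spin(), spin(), spin()
l_m2, l_m3 = spin(), spin()
# channel momentum containing m_1 and two positive-helicity legs
P = np.outer(l_m1, lb_m1) + sum(np.outer(spin(), spin())
                                for _ in range(2))
X = ang(l_m2, l_m3) * np.outer(l_m1, eta)   # <m2 m3> lambda_m1 etabar
# det(P + z X) = det P + z (...) because det X = 0; solve linearly
d0, d1 = np.linalg.det(P), np.linalg.det(P + X)
z_pole = -d0 / (d1 - d0)
assert abs(np.linalg.det(P + z_pole*X)) < 1e-10
\end{verbatim}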
The residue at each pole is just the product of the two MHV tree
amplitudes evaluated at $z=z_\alpha$.
Spinor products $\spa{i}.{j}$ which are not $z_\alpha$ dependent take
their normal values, while
terms like $\langle{i}{\hat P}\rangle$ are evaluated by noting,
$$
\langle{i}\,{\hat P}\rangle \;=\;
{\langle i\,{\hat P}\rangle[{\hat P}\,{\eta}] \over [{\hat P}\,{\eta}] }
\;=\; { \langle i | P | \eta ] \over \omega}\,,
\refstepcounter{eqnumber\label{NMHV2}
$$ where $P$ is the unshifted form.
The objects $\omega$ will cancel between the two tree amplitudes since the
product has zero spinor weight in $P$. This substitution is precisely the
CSW prescription,
$\lambda(P) \longrightarrow P|\eta]$.
For Yang-Mills this would be all we need, but for gravity we must also
consider
substitutions for $\spb{i}.j$ where $i$ and/or $j$ are one of the negative
helicities or $\hat P$. These substitutions are
of the form,
$$
\eqalign{
[{l^+}\,{\hat P}]
\;=\;& {[l^+\,{\hat P}] \langle{\hat P}\,\alpha\rangle
\over \langle{\hat P}\,\alpha\rangle}
\;=\; { \omega [ l^+ | \hat P | \alpha \rangle \over [ \eta | P | \alpha \rangle }
\;=\; { \omega [ l^+ | P | m_{i_1} \rangle
\over [ \eta | P | m_{i_1} \rangle }\,,
\cr
\spb{\hat m_{i_2}}.{\hat m_{i_3}}
\;=\;& \spb{m_{i_2}}.{m_{i_3}} + z_\alpha [ \eta | P_{m_{i_2}m_{i_3}} |
m_{i_1} \rangle\,,
\cr
\spb{\hat m_{i_1}}.{l^+}\,
\;=\;& \spb{m_{i_1}}.{l^+} +z_\alpha \spb{\eta}.{l^+}
\spa{m_{i_2}}.{m_{i_3}}\,,
\cr}
\refstepcounter{eqnumber\label{NMHV1}
$$
where $l^+$ denotes a positive helicity leg. We choose the arbitrary spinor
$\alpha$
to be $m_{i_1}$ in order to replace $\hat P$ by $P$.
Equations (\ref{NMHV2}) and (\ref{NMHV1}) are the specific
substitutions that determine the value
of the MHV amplitudes on the pole and thus the MHV vertices.
Note that the form of the off-shell continuation,
$$
\hat{\bar\lambda}_{m_1} = \bar\lambda_{m_1}+ z\spa{m_2}.{m_3} \bar \eta
= \bar\lambda_{m_1}- { P^2 \bar \eta
\over [ \eta | P | {m_1} \rangle}\, ,
\refstepcounter{eqnumber
$$
can be interpreted as yielding contact terms
since the $P^2$ factor may cancel the pole.
We conclude that the NMHV graviton scattering amplitude can be expressed
in terms of MHV vertices as,
$$\hspace{-0.3cm}
\eqalign{
&M_n(1^-,2^-,3^-,4^+,\ldots, n^+)
\;=\;
\sum_{r=0}^{n-4}
\sum_{{\cal P}(i_1,i_2,i_3) }\sum_{{\cal P}(d_i) }
M_{r+3}^{{\rm MHV}}(\hat m_{i_2}^-,\hat
m_{i_3}^-,d_1^+,\cdots, d_r^+,\hat P^+ )(z_r)
\cr&\hspace{5cm}\times { i \over P_{m_{i_1} d_{r+1}\cdots, n}^2 }
\times
M_{n-r-1}^{{\rm MHV}}((-\hat P)^-,\hat m_{i_1}^-,d_{r+1}^+,\cdots,n
)(z_r)\,.
\cr}
\refstepcounter{eqnumber
$$
Here the sums over ${\cal P}(i_1,i_2,i_3)$ and ${\cal P}(d_i)$ are respectively
sums over those permutations of the negative and positive helicity legs that swap legs between the
two MHV vertices.
We now turn to the discussion of the behaviour of $M_n(z)$ for large $z$.
By naive power counting one might expect shifted gravity amplitudes to
diverge at large $z$. However in both ref.~\cite{BBSTgravity}
and ref.~\cite{CSgravity} it was established by various techniques,
including numerical studies, that NMHV gravity amplitudes do vanish asymptotically
under the BCFW shift. This behaviour is difficult to prove either by
analysing Feynman diagrams or using the KLT relations, since large
cancellations are inherent in both formalisms.
Under the shift~(\ref{ourshift}) the amplitudes we have examined
are very well behaved at large $z$, with
$$
M_{6,7}(z) \;\sim\; {1 \over z^5}\,,
\refstepcounter{eqnumber
$$
for both the six- and seven-point NMHV amplitudes. This is a much faster
falloff than under the BCFW shift, where,
$
M_{6,7}(z)\;\sim\; {1 \over z}\,.
$
If we choose a specific value for the reference spinor,
$$
\bar\eta = \bar \lambda_a\,,
\refstepcounter{eqnumber
$$
where
$a$ is one of the positive helicity legs, then the shift we
use is a combination of three BCF shifts involving the three negative
helicity legs and a positive helicity leg $a$,
$$
\eqalign{
\lambda_a \longrightarrow \lambda_a +z_1 \spa{2}.3 \lambda_{1}
,\;\;\;
\bar \lambda_1 \longrightarrow \bar \lambda_1-z_1\spa{2}.3\bar\lambda_a \, ,
\cr
\lambda_a \longrightarrow \lambda_a +z_2 \spa{3}.1 \lambda_{2}
,\;\;\;
\bar \lambda_2 \longrightarrow \bar \lambda_2-z_2\spa{3}.1\bar\lambda_a \, ,
\cr
\lambda_a \longrightarrow \lambda_a +z_3 \spa{1}.2 \lambda_{3}
,\;\;\;
\bar \lambda_3 \longrightarrow \bar \lambda_3-z_3\spa{1}.2\bar\lambda_a \, ,
\cr}
\refstepcounter{eqnumber
$$
with $z_1=z_2=z_3$. The shift on $\lambda_a$ vanishes due to the
Schouten identity. In ref.~\cite{CSgravity} it was proven that the
amplitude vanishes at infinity under a single shift of this form,
providing further evidence that the NMHV amplitude vanishes
asymptotically under the shift~(\ref{ourshift}).
\subsection{Five-Point Example $M(1^-,2^-,3^-,4^+,5^+)$}
In this section we show, using an explicit example, how MHV vertices
can be assembled into graviton scattering amplitudes. The first
non-trivial example is the five-point amplitude,
$M(1^-,2^-,3^-,4^+,5^+)$. This is a ``googly'' amplitude as it can be
obtained by conjugating the five-point MHV amplitude. As above we
shift the negative helicity legs, here $k_1,k_2$ and $k_3$, and compute
the residues of the amplitude $\, M(\hat 1^-,\hat 2^-,\hat
3^-,4^+,5^+)(z)$. The expansion in terms of MHV vertices is
non-trivial and reveals the structure of the MHV vertices.
Up to relabeling we have two types of residue,
$$
\hspace{1.5cm}
\eqalign{ D_1(1^-,2^-,3^-,4^+,5^+)\;=\; & M(\hat 2^-,\hat 3^-,\hat
p^+)\times { i\over s_{23} } \times M((-\hat p)^-,4^+,5^+,\hat 1^-)\,,
\cr D_2(1^-,2^-,3^-,4^+,5^+)\;=\; &
M(\hat 2^-,\hat 3^-,4^+,\hat p^-)\times { i\over
s_{15} } \times M((-\hat p)^+,5^+,\hat1^-)\,, \cr} \refstepcounter{eqnumber
$$
which can be associated to the CSW diagrams,
\begin{center}
\begin{picture}(100,100)(55,-40)
\Vertex(30,30){2}
\Vertex(105,30){2}
\Line(30,30)(50,30)
\Line(85,30)(105,30)
\Line(30,30)(10,30)
\Line(30,30)(20,50)
\Line(30,30)(20,10)
\Line(105,30)(115,50)
\Line(105,30)(115,10)
\Text(125,55)[c]{$\hat 2^-$}
\Text(125,5)[c]{$\hat 3^-$}
\Text(4,30)[c]{$5^+$}
\Text(15,55)[c]{$\hat 1^-$}
\Text(15,5)[c]{$4^+$}
\Text(67.5,30)[c]{$\displaystyle\times\frac{i}{s_{23}}\times$}
\end{picture}
\begin{picture}(100,100)(-25,-40)
\Vertex(30,30){2}
\Vertex(105,30){2}
\Line(105,30)(85,30)
\Line(50,30)(30,30)
\Line(105,30)(125,30)
\Line(105,30)(115,50)
\Line(105,30)(115,10)
\Line(105,30)(115,50)
\Line(105,30)(115,10)
\Line(30,30)(15,10)
\Line(30,30)(15,50)
\Text(10,55)[c]{$\hat 1^-$}
\Text(10,5)[c]{$5^+$}
\Text(135,30)[c]{$\hat 3^-$}
\Text(125,55)[c]{$\hat 2^-$}
\Text(125,5)[c]{$4^+$}
\Text(67.5,30)[c]{$\displaystyle\times\frac{i}{s_{15}}\times$}
\end{picture}
\end{center}
\vspace{-1.2cm}
Explicitly we find for the three-point function,
$$
M(\hat 2^-,\hat 3^-,\hat p^+) \;=\; i
{\spa{2}.{3}^6 \over \spa{2}.{\hat p}^2\spa{\hat p}.{3}^2 }
= i {\omega^4 \spa{2}.{3}^6 \over [\eta | P_{23} | 2 \rangle^2
[\eta | P_{23} | 3 \rangle^2 }\,.
\refstepcounter{eqnumber
$$
The four point amplitude can be expressed in several ways, including,
$$
\eqalign{ M_4((-\hat p)^-,4^+,5^+,\hat 1^-) &\;=\; -is_{45}
A_4((-\hat p)^-,4^+,5^+,\hat 1^-)A((-\hat p)^-,5^+,4^+,\hat 1^-)
\cr &={ i\,s_{45} \spa{ \hat p}.1^8 \over \spa{\hat p}.{4}
\spa{4}.5\spa{5}.1 \spa{1}.{\hat p} \spa{\hat p}.{5}
\spa{5}.4\spa{4}.1 \spa{1}.{\hat p}
}
\cr
&\;=\; { i\, \spb4.5 \over \spa{1}.4\spa{1}.5 \spa{4}.5 }
{\omega^{-4}
[\eta | P_{23} | 1 \rangle^6 \over
[\eta | P_{23} | 4 \rangle [\eta | P_{23} | 5 \rangle}\,,
\cr}
\refstepcounter{eqnumber
$$
giving the tree diagram as,
$$
D_1(1^-,2^-,3^-,4^+,5^+)={ i\, \spb4.5
[\eta | P_{23} | 1 \rangle^6 \over\spa{1}.4\spa{1}.5 \spa{4}.5
[\eta | P_{23} | 4 \rangle [\eta | P_{23} | 5 \rangle} \,{i\over s_{23}}
\, { i\spa{2}.{3}^6 \over [\eta | P_{23} | 2 \rangle^2 [\eta | P_{23}
| 3 \rangle^2 }\,. \refstepcounter{eqnumber
$$
For this particular diagram the prescription implied by the shift
is equivalent to the CSW rules for gauge theory as there is no need to find a
continuation for $\bar\lambda$.
For $D_2$ we find,
$$
D_2(1^-,2^-,3^-,4^+,5^+)\;=\; { i\,\spa{2}.{3}^7 \spb{\hat 2}.{\hat
3} \over \spa{2}.4 \spa{3}.4 [\eta | P_{15} | 4 \rangle^2 [\eta |
P_{15} | 2 \rangle [\eta | P_{15} | 3 \rangle
}
\,{i\over s_{15}}\, {i\,[\eta | P_{15} |5\rangle^6 \over [\eta |
P_{15} | 1 \rangle^2 \spa{5}.1^2 }\,. \refstepcounter{eqnumber
$$
This differs from the simple CSW prescription in the
definition of $\spb{\hat 2}.{\hat 3}$.
Here,
$$
\eqalign{ \spb{\hat 2 }.{\hat 3} \;=\; &\spb{2}.{3} +z \left(
\spa{3}.{1} \spb{\eta}.3 +\spa{1}.{2} \spb{2}.{\eta} \right)
\;=\;\spb{2}.{3} + z [\eta | P_{23} |1 \rangle \cr
&\;=\;\spb{2}.{3} -{
P_{15}^2 \over \spa{2}.3 [\eta | P_{15} | 1 \rangle }
[\eta | P_{23} |1 \rangle\,.
\cr}
\refstepcounter{eqnumber
$$
With this substitution we can verify that the sum of diagrams is
independent of $\bar\eta$ and equal to the conjugate
of the five-point MHV tree amplitude.
For the six-point amplitude there are three diagrams. We have
explicitly checked that the sum over permutations of these
diagrams is equal to the known form of the six-point NMHV
amplitude~\cite{CSgravity,BDIgravity}. Seven point
NMHV amplitudes can be obtained explicitly using the
KLT relationships - at least using computer algebra. We have
checked numerically that the seven-point amplitudes obtained from
the MHV vertices match those obtained from the KLT relation.
\subsection{Remarks on the Twistor Space Structure of MHV Gravity Amplitudes}
The twistor space structure of an amplitude refers to the support of the amplitude
after it has been Penrose transformed into twistor space variables.
As was shown in ref.~\cite{WittenTopologicalString}, the twistor space
support of an amplitude can be tested by simply acting with certain differential operators,
without having to resort to Penrose or Fourier transformations. The operator of particular
interest is the ``collinearity operator'',
$$
[F_{ijk} , \eta ] \;=\;
\spa{i}.j \left[{ \partial\over \partial\bar\lambda_k},\eta\right]
\;+\;\spa{j}.k \left[{ \partial\over
\partial\bar\lambda_i},\eta\right] \;+\;\spa{k}.i \left[{
\partial\over
\partial\bar\lambda_j},\eta\right]\,.
\refstepcounter{eqnumber
$$
The expressions for the NMHV gravity amplitudes can be used to test the
twistor structure of gravity amplitudes. In this case,
MHV amplitudes are annihilated by multiple applications of the collinearity operator,
$$
F^h M_n^{\rm MHV}\;=\;0\,,
\refstepcounter{eqnumber
$$
for some $h$. This is interpreted as the support being non-zero only if the
points are ``infinitesimally'' close to a line in twistor
space~\cite{WittenTopologicalString}.
In ref.~\cite{BeBbDu} it was explicitly shown that,
$$
[F_{ijk},\eta]^{n-2} M_n^{\rm MHV} \;=\;0\,,
\refstepcounter{eqnumber
\label{lotsofF}
$$
for $n$-point amplitudes with $n \leq 8$. If we compare the action of the collinearity operator
on the amplitude with that of the shift,
$$
\eqalign{
\bar\lambda_{i} \;\to\;\bar\lambda_{i} \;+\;z \spa{j}.{k} \bar \eta\,,
\cr
\bar\lambda_{j} \;\to\;\bar\lambda_{j} \;+\;z \spa{k}.{i} \bar \eta\,,
\cr
\bar\lambda_{k} \;\to\;\bar\lambda_{k} \;+\;z \spa{i}.{j} \bar \eta\,,
\cr}
\refstepcounter{eqnumber
$$
it can be seen that,
$$
[F_{ijk},\eta] M_n(0) \;=\; \frac{\partial}{\partial z}\hat M_n(z)|_{z=0}\,.
\refstepcounter{eqnumber
$$
Equation (\ref{lotsofF}) can thus be understood in terms of the
number of $s_{ij}$ factors in the KLT form of the amplitude: each
factor of $s_{ij}$ can introduce at most one power of $z$ and
in the KLT form there are $n-3$ factors of $s_{ij}$ in the $n$-point amplitude, so
$n-2$ applications of $[F_{ijk},\eta]$ are sufficient to annihilate the
amplitude.
\section{Beyond NMHV}
In this section we present generic CSW rules for the expansion of
gravity amplitudes, illustrate and verify them with an explicit
example, and finally discuss their proof via the BCFW approach.
\subsection{General CSW rules}
We will now extend the CSW rules for NMHV amplitudes into
more generic rules for the expansion of N$^n$MHV amplitudes,
that is amplitudes with $n+2$ negative helicity legs and the rest positive.
Consider the N$^n$MHV amplitude with $N$ external legs. One would, as
in the Yang-Mills case, begin by drawing all diagrams which may be
constructed using MHV vertices. For the off-shell continuation, three-point
MHV vertices are non-vanishing. The contribution from each diagram
will be a product of $(n+1)$ MHV vertices and $n$
propagators as indicated below.
\begin{center}
\begin{picture}(100,100)(55,-40)
\SetWidth{0.7}
\Line(30,30)(0,30)
\Line(30,30)(10,50)
\Line(30,30)(40,50)
\Line(30,30)(10,10)
\Line(30,30)(50,30)
\Line(120,30)(100,50)
\Line(120,30)(140,50)
\Line(120,30)(100,30)
\Line(120,30)(140,30)
\Line(120,30)(120,10)
\Line(240,30)(220,30)
\Line(240,30)(260,50)
\Line(240,30)(260,10)
\Line(240,30)(270,30)
\Line(120,-53)(120,-78)
\Line(120,-53)(100,-68)
\Line(120,-53)(140,-68)
\Line(120,-53)(120,-33)
\Vertex(30,30){2}
\Vertex(120,30){2}
\Vertex(240,30){2}
\Text(7,1)[c]{$k_{i_1}^-$}
\Text(120.5,1)[c]{$\times$}
\Text(120.5,-9)[c]{$\vdots$}
\Text(120.5,-25)[c]{$\times$}
\Text(-10,32)[c]{$k_{i_2}^+$}
\Text(7,59)[c]{$k_{i_3}^+$}
\Text(39,55)[c]{$\ldots$}
\Text(58,30)[c]{$\times$}
\Text(93,30)[c]{$\times$}
\Text(75,30)[c]{$\displaystyle\frac{1}{p_{j_1}^2}$}
\Text(160,30)[c]{$\times\ \ \ldots$}
\Text(212,30)[c]{$\times$}
\end{picture}
\end{center}
\vspace{1.2cm}
In contrast to gauge theory, the CSW diagrams for gravity have no cyclic ordering of
the external legs.
We denote internal momenta by $p_j$ for $j=1,...,n$ and external momenta by $k_i$ for $i=1,...,N$.
We label the vertices by $l$ for $l=1,...,(n+1)$. The momenta leaving
MHV vertex $l$ are collected into the set $K_l$ and the number of external legs
of MHV vertex $l$ will be denoted by $N_l$.
The contribution of a given diagram to the total amplitude can be calculated by
evaluating the product of MHV amplitudes and propagators,
$$
M^n_N{\big |}_{\mbox{ CSW-diagram}}\;=\;\left(\prod_{l=1,n+1} M^{{\rm MHV}}_{N_l}(\hat K_l)\right)
\prod_{j=1,n} \frac{i}{ p_{j}^2}\, ,
\refstepcounter{eqnumber\label{CSWrule}
$$
where the propagators are computed on the set of momenta $k_i$ and $p_j$, and the MHV vertices are evaluated
for the momenta $\hat k_i$ and $\hat p_j$. The definitions of these momenta are given below.
The momenta $k_i$ are the given external momenta and the internal momenta, $p_{j}$, are given by
momentum conservation on each MHV-vertex.
The momenta $\hat k_i$ and $\hat p_{j}$ are uniquely specified so that they are massless
and obey momentum conservation constraints at each vertex.
Explicitly they are given by shifting the negative helicity
legs $k_{i^-}$,
$$
\hat{k}_{i^-} \;=\; k_{i^-} \;+\; a_{i^-} \lambda_{(i^-)} \bar\eta \,,
\refstepcounter{eqnumber
$$
and leaving the positive helicity legs $k_{i^+}$ untouched. This introduces
$n+2$ parameters, $a_{i^-}$.
Overall momentum conservation is used to fix two of these
parameters. Momentum conservation at each vertex
then gives the momenta $\hat p_{j}$ as functions of $k_i$ and $a_i$.
Finally the remaining parameters are fixed such that all
internal momenta, $\hat p_{j}$, are massless,
$$
\hat p_{j}^2=0\,.
\refstepcounter{eqnumber
$$
This gives $n$ further linear constraints which are
sufficient to fix the remaining $a_i$ uniquely
for a given spinor $|\eta]$.
The MHV vertices in (\ref{CSWrule}) can then be
evaluated on the on-shell momenta $\hat k$ and $\hat p$.
\subsection{Example}
As an example we will discuss an explicit CSW diagram that contributes to the 8-point amplitude
$M(1^-,2^-,3^+,4^+,5^+,6^+,7^-,8^-)$. The diagram is given by,
\begin{center}
\begin{picture}(100,100)(75,-30)
\SetWidth{0.7}
\Vertex(30,30){2}
\Vertex(140,30){2}
\Vertex(245,30){2}
\Line(30,30)(75,30)
\Line(115,30)(165,30)
\Line(205,30)(245,30)
\Text(95,29)[c]{$\times\displaystyle\frac{i}{q^2}\times$}
\Text(185,29)[c]{$\times\displaystyle\frac{i}{p^2}\times$}
\Line(140,30)(130,55)
\Line(140,30)(150,55)
\Line(30,30)(0,30)
\Line(30,30)(10,50)
\Line(30,30)(10,10)
\Line(245,30)(275,30)
\Line(245,30)(265,50)
\Line(245,30)(265,10)
\Text(5,5)[c]{$1^-$}
\Text(-8,31)[c]{$2^-$}
\Text(5,55)[c]{$3^+$}
\Text(132,63)[c]{$4^+$}
\Text(155,63)[c]{$5^+$}
\Text(275,55)[c]{$6^+$}
\Text(285,31)[c]{$7^-$}
\Text(275,5)[c]{$8^-$}
\end{picture}
\end{center}
\vspace{-1.1cm}
This specific diagram is interesting as none of its
vertices can be written purely in terms of angles, $\spa{\,}.{\,}$.
Following the algorithm from above the diagram
contributes,
$$
\eqalign{
& M(\hat 1^-,\hat 2^-,3^+,\hat q^+)\frac{i}{q^2}M((-\hat
q)^-,4^+,5^+,(-\hat p)^-)\frac{i}{p^2}M(\hat p^+,6^+,\hat
7^-,\hat 8^-)\;=\;
\cr
&\hspace{-0.4cm}\frac{i\,\spa{\hat 1}.{\hat 2}^6\spb 3.{\hat q}}{\spa{\hat
2}.3\spa3.{\hat q}\spa {\hat q}.{\hat 1}\spa{\hat 2}.{\hat q}\spa
3.{\hat 1}}\,\frac{i}{q^2}\,\frac{i\,\spa {\hat p}.{\hat q}^6\spb
4.5}{\spa {\hat q}.4\spa4.5\spa5.{\hat p}\spa {\hat
q}.5\spa4.{\hat p}}\frac{i}{p^2}\frac{i\,\spa{\hat 7}.{\hat
8}^6\spb {\hat p}.6}{\spa{\hat 8}.{\hat p}\spa {\hat
p}.6\spa6.{\hat 7}\spa{\hat 8}.6\spa {\hat p}.{\hat
7}}\,.
\cr}
\refstepcounter{eqnumber
\label{example}
$$
The internal momenta, $q$ and $p$, are given by momentum
conservation: $q+k_1+k_2+k_3=0$ and $p+k_6+k_7+k_8=0$.
For the momenta $\hat k_i,\,i=1,2,7,8$, the shift is,
$$
\eqalign{
&\hat k_i\;=\;k_i\;+\;a_i\,\lambda_i\bar\eta,\quad i\;=\;1,2,7,8\,,
\cr}
\refstepcounter{eqnumber
$$
while the momenta $k_i$ with $i\;=\;3,4,5,6$ are untouched.
The momenta $\hat p$ and $\hat q$ and the parameters $a_i$ have to be
fixed such that the momentum flowing through each of the vertices is
preserved,
$$
\eqalign{
\sum_{i=1,8}\hat k_i\;=\;0\,,&\cr
\hat q\;+\;\hat k_1\;+\;\hat k_2\;+\;k_3\;=\;0\,,&\cr
\hat p\;+\;k_6\;+\;\hat k_7\;+\;\hat k_8\;=\;0\,.&
\cr}
\refstepcounter{eqnumber
$$
This leaves two free parameters which are
fixed such that,
$$
\hat q^2\;=\;0,\quad \hat p^2\;=\;0\,.
\refstepcounter{eqnumber
$$
There is a specific shift for each CSW diagram. In
general, different diagrams that contribute to the same amplitude
yield different values of $a_i$.
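Since every shift points along the same $\bar\eta$, the quadratic pieces
drop out of $\hat q^2$ and $\hat p^2$, which are therefore linear in the
$a_i$, and the constraints form a $4\times 4$ complex linear system.
The following sketch is our own construction (with $P^2$ represented, up
to a convention-dependent constant, by the determinant of the $2\times2$
bispinor) and solves the system for random kinematics:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)

def spin(): return rng.normal(size=2) + 1j*rng.normal(size=2)

lam  = [spin() for _ in range(8)]             # eight external legs
lamb = [None, None] + [spin() for _ in range(6)]
K = sum(np.outer(lam[i], lamb[i]) for i in range(2, 8))
R = np.linalg.solve(np.column_stack([lam[0], lam[1]]), -K)
lamb[0], lamb[1] = R[0], R[1]                 # total momentum vanishes
k = [np.outer(lam[i], lamb[i]) for i in range(8)]

eta = spin()                                  # reference spinor
X = {i: np.outer(lam[i], eta) for i in (0, 1, 6, 7)}   # legs 1,2,7,8

def detq(a1, a2):   # qhat^2 up to sign: det(-M) = det(M) for 2x2 M
    return np.linalg.det(k[0] + a1*X[0] + k[1] + a2*X[1] + k[2])
def detp(a7, a8):
    return np.linalg.det(k[5] + k[6] + a7*X[6] + k[7] + a8*X[7])

rows, rhs = [], []
for c in (0, 1):    # sum_i a_i lam_i = 0: two complex equations
    rows.append([lam[0][c], lam[1][c], lam[6][c], lam[7][c]])
    rhs.append(0.0)
q0, p0 = detq(0, 0), detp(0, 0)               # on-shell conditions
rows.append([detq(1, 0) - q0, detq(0, 1) - q0, 0.0, 0.0]); rhs.append(-q0)
rows.append([0.0, 0.0, detp(1, 0) - p0, detp(0, 1) - p0]); rhs.append(-p0)

a1, a2, a7, a8 = np.linalg.solve(np.array(rows), np.array(rhs))
assert abs(detq(a1, a2)) < 1e-8 and abs(detp(a7, a8)) < 1e-8
assert np.allclose(a1*lam[0] + a2*lam[1] + a7*lam[6] + a8*lam[7], 0)
\end{verbatim}
With the $a_i$ in hand, $\hat q$ and $\hat p$ follow from momentum
conservation at the vertices, and the spinor products entering
(\ref{example}) can be evaluated directly.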
The various spinor products in (\ref{example}) can be computed
using the above conditions for the $a_i$ and one finds expressions
very reminiscent of those derived by CSW for gauge theory,
$$
\eqalign{
&\spa{k_i}.{\hat q}\;=\;\spab k_i.(-P_{123}).\eta/\omega_q\,,\quad\omega_q\;=\;\spb{\hat q}.\eta\,,\cr
&\spa{k_i}.{\hat p}\;=\;\spab k_i.(-P_{678}).\eta/\omega_p\,,\quad\omega_p\;=\;\spb{\hat p}.\eta\,,\cr
&\spb3.{\hat q}\;=\;-\frac{\omega_q \spba3.{\hat P_{123}}.{\hat
p}}{\spba\eta.{\hat P_{123}}.{\hat p}}
\;=\;-\frac{\omega_q \spbb3.{\hat P_{123}\hat P_{678}}.\eta}{\spbb\eta.{\hat P_{123}\hat P_{678}}.\eta}\;=\;
\frac{\omega_q \spbb3.{P_{45}P_{678}}.\eta}{\spbb\eta.P_{123}P_{678}.\eta}\,,\cr
&\spb6.{\hat p}\;=\;-\frac{\omega_p \spba6.{\hat P_{678}}.{\hat
q}}{\spba\eta.{\hat P_{678}}.{\hat q}} \;=\;-\frac{\omega_p
\spba6.{\hat P_{678}\hat P_{123}}.\eta}{\spba\eta.{\hat P_{678}\hat P_{123}}.\eta} \;=\;
\frac{\omega_p
\spbb6.{P_{45}P_{123}}.\eta}{\spbb\eta.P_{678}P_{123}.\eta}\,.
\cr}
\refstepcounter{eqnumber
$$
Overall the $\omega_q$ and $\omega_p$ factors cancel and we find that (\ref{example}) is given by,
$$
\eqalign{
&\hspace{-0.4cm}\frac{i\,\spa1.2^6\frac{\spbb 3.{
P_{45}P_{678}}.\eta}{\spbb\eta.P_{123}P_{678}.\eta}}{\spa2.3\spab3.P_{123}.\eta\spba\eta.P_{123}.1\spab2.P_{123}.\eta\spa3.1}
\,\frac{i}{t_{123}}\,\frac{i\,\spbb\eta. P_{678}P_{123}.\eta
^6\spb4.5}{\spba
\eta.P_{123}.4\spa4.5\spab5.P_{678}.\eta\spba\eta.P_{123}.5\spab4.P_{678}.\eta}\,
\cr
&\times\frac{i}{t_{678}}\,\frac{i\,\spa7.8^6\frac{\spbb\eta.P_{123}
P_{45}.6}{\spbb\eta.P_{123}P_{678}.\eta}}{\spab8.P_{678}.\eta\spba\eta.P_{678}.6\spa6.7\spa8.6\spba\eta.P_{678}.7}\,.
\cr}
\refstepcounter{eqnumber
$$
The rules used to compute the above N$^2$MHV diagram are a natural
generalisation of the NMHV-case. As we will discuss below, they
follow from BCFW recursions, provided the shifted amplitudes
vanish for large $z$.
\subsection{Proof of MHV-vertex rules}
In this section we prove the validity of the CSW-like expansion
of the graviton scattering amplitudes in terms of MHV vertices with
the substitution rules of the previous section. We will employ
a recursive proof analogous to that for Yang-Mills~\cite{Kasper} where
recursion was employed upon the number of minus legs in the tree
amplitudes. As a first step we will prove the N${}^2$MHV case where
four legs have negative helicity and later we will generalise the proof for
generic N$^n$MHV amplitudes.
\subsection{MHV-vertex Expansion for N${}^2$MHV Amplitudes}
We shall derive the CSW-like expansion for this amplitude by
factorising the amplitude in two steps. First we shall factorise the
amplitude into a product of MHV and NMHV amplitudes and then
factorise the NMHV amplitudes to complete the expansion.
We first apply a holomorphic shift
similar to the one discussed in \cite{Kasper} for gauge theory,
$$
\bar\lambda_i \longrightarrow
\bar\lambda_i+z_1 r_i^{(1)} \bar\eta
\; , \;\; i=1,\cdots,4\,,
\refstepcounter{eqnumber
$$
where the $r_i^{(1)}$ are restricted by momentum conservation and are all non-zero.
This shift of all negative helicity legs in
$M^{ {\rm N}^2{\rm MHV}}$ allows us to factorise the full amplitude as,
$$
M^{ {\rm N}^2{\rm MHV}}=\sum_{\alpha}
M^{ {\rm MHV}}(z_{1,\alpha})\frac{i}{P_{\alpha}^2}
M^{ {\rm NMHV}}(z_{1,\alpha})\,.
\refstepcounter{eqnumber\label{alphasum}
$$
The summation is over all the physical factorisations of the amplitude.
In the above the individual tree amplitudes
are evaluated at the shifted momentum values.
In particular the trees depend upon the shifted, on-shell,
momenta $\hat P_{\alpha,(1)}$.
We consider a single term in the summation corresponding to a specific pole,
$$
D_{\alpha}= M^{ {\rm MHV}}(z_{1,\alpha})\frac{i}{P_{\alpha}^2}
M^{ {\rm NMHV}}(z_{1,\alpha})\,,
\refstepcounter{eqnumber
$$
and evaluate this by determining the poles in $D_{\alpha}(z_2)$
under the shift,
$$
\bar\lambda_i \longrightarrow
\bar\lambda_i+z_2 r^{(2)}_i\bar\eta , \;\;\; i=1,\cdots,4 \, .
\refstepcounter{eqnumber
$$
The $r^{(2)}_i$ are restricted to maintain momentum conservation
and to leave the pole unshifted,
$$
P^2_\alpha \longrightarrow P^2_\alpha \, ,
\refstepcounter{eqnumber
$$
which corresponds to the constraint,
$$
0= \sum_{i}z_2 r^{(2)}_i [ \eta | P_{\alpha} | i \rangle \, ,
\refstepcounter{eqnumber
$$
where $i$ runs over the indices $(1,\ldots,4)$ which lie in the set $\alpha$.
This condition also implies that the internal legs remain on-shell.
This gives three linear constraints on the four $z_2 r^{(2)}_i$. The function
$D_{\alpha}(z_2)$ is rational and, since the two tree amplitudes do not have
simultaneous poles, has simple poles. The poles occur where
$M^{\rm NMHV}$ factorises into pairs of MHV amplitudes and thus, assuming
$D_{\alpha}(z_2)$ vanishes at infinity, we have,
$$
D_{\alpha}=\sum_{\beta}D_{\alpha,\beta}
=\sum_{\beta} M^{{\rm MHV}}(z)\frac{i}{P_{\alpha}^2}
\times
\left(
M^{ {\rm MHV}} (z)\;
\frac{i}{P_{\beta,(1)}^2 }
M^{{\rm MHV}} (z)\,
\right) \, .
\refstepcounter{eqnumber\label{lateadditiuon}
$$
where $z$ indicates the functional dependence upon the two shifts
(4.10) and (4.13).
Explicitly,
all three MHV amplitudes are evaluated at the shifted points,
$$
\bar\lambda_i \to \bar\lambda_i+(z_{1,\alpha}r_i^{(1)}+z_{2,\beta}r_i^{(2)})\bar\eta
.
\refstepcounter{eqnumber
$$ In eq.~(\ref{lateadditiuon}) we have a shifted propagator $P_{\beta,(1)}^2$ rather than $P_{\beta}^2$
since we are factorising the shifted tree amplitude $M^{\rm
NMHV}$. Hence this is not immediately an MHV diagram term, but is one
contribution to the MHV diagram with unshifted propagators
$i/P_{\alpha}^2$ and $i/P_{\beta}^2$. There is a second contribution
to the same MHV diagram which arises from the term with a
$P^2_{\beta}$ pole in the sum in eq.~(\ref{alphasum}). Expansion of
this yields,
$$
D_{\beta,\alpha} =\left(
M^{ \rm MHV}(z')\frac{i}{P_{\alpha,(1)}^2}
M^{\rm MHV} (z')\;
\right)
\times \frac{i}{P_{\beta}^2 }
M^{\rm MHV} (z')\, ,
\refstepcounter{eqnumber
$$
where the MHV amplitudes are now evaluated at the points
$$
\bar\lambda_i \to \bar\lambda_i+(z_{1,\beta}r_i^{(1)}+z_{2,\alpha}'r'{}_i^{(2)})\bar\eta
\; .
\refstepcounter{eqnumber
$$
To prove the MHV-diagram expansion we need to show that the sum of the
two terms $D_{\alpha,\beta}+D_{\beta,\alpha}$ gives the correct diagram, {\it i.e.},
$$
\eqalign{
M^{\rm MHV}(z)\frac{i}{P_{\alpha}^2}
&
\left(
M^{\rm MHV} (z)\;
\times \frac{i}{P_{\beta,(1)}^2 }
M^{\rm MHV} (z)\,
\right)
\cr
& \hskip 2.0 truecm + \left(
M^{\rm MHV}(z')\frac{i}{P_{\alpha,(1)}^2}
M^{\rm MHV} (z')\;
\right)
\times \frac{i}{P_{\beta}^2 }
M^{\rm MHV} (z')\,
\cr
& =M^{\rm MHV}(z_{a})\frac{i}{P^2_{\alpha}} M^{\rm MHV}(z_{a})\frac{i}
{P^2_{\beta}} M^{\rm MHV}(z_{a})\,,
\cr}
\refstepcounter{eqnumber
$$
with the $M^{\rm MHV}(z_{a})$ evaluated at the point $z_{a}$ specified by the
rules of the previous section.
\noindent
We need two facts to show this:

\noindent
$\bullet$ The product of the three tree amplitudes is the same in both cases
and equal to the desired value. This is equivalent to showing that
$z\equiv z' \equiv z_a$.

\noindent
$\bullet$ There is an identity involving the product of propagators,
$$
\frac{i}{P^2_{\alpha}}
\frac{i}{P^2_{\beta,(1)}}
+\frac{i}{P^2_{\alpha,(1)}}\frac{i}{P^2_{\beta}}
=\frac{i}{P^2_{\alpha}}\frac{i}{ P^2_{\beta}}\, .
\refstepcounter{eqnumber\label{SecondIdentity}
$$
Taking the first fact: in the final expression for the
$\bar\lambda_{i}$ the net effect of the two shifts is to give a total shift
of the form,
$$
\hat {\bar\lambda}_{i}=
\bar\lambda_{i} +a_i \bar\eta \, .
\refstepcounter{eqnumber
$$
The $a_i$ are such that momentum conservation is satisfied and
$\hat P^2_{\alpha,(2)} =\hat P^2_{\beta,(2)} =0$.
As discussed in the previous section,
these constraints have a unique solution and so the
$\bar\lambda_{i}$ take the same values irrespective of the
order in which we factorise. The values of the intermediate momenta,
$\hat P$, are determined by momentum conservation which
are precisely the substitutions specified in the substitution rules.
The second fact can be shown in the following way.
Consider the contour integral of two shifted propagators,
$$
\oint {dz\over z} {1 \over P_\alpha^2(z) P_\beta^2(z)}
\refstepcounter{eqnumber
$$
taken around a contour at infinity.
Since the propagators $1/P_\alpha^2(z)$ and $1/P_\beta^2(z)$ vanish at
large $z$, this integral vanishes and is also equal to the sum of
its residues.
Examining the residues we obtain,
$$
{1 \over P_\alpha^2}
{1 \over P_\beta^2}
-{1 \over P_\alpha^2(z_\beta)}
{1 \over P_\beta^2}
-{1 \over P_\alpha^2}
{1 \over P_\beta^2(z_\alpha)} =0 \,,
\refstepcounter{eqnumber$$
which provides a proof of eq.~(\ref{SecondIdentity}).
Thus the two terms combine to give a single term
which is the MHV-vertex diagram.
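The residue argument can be checked in a single line of algebra or
numerics; a minimal sketch (ours), parametrising the shifted invariants
as generic linear functions of $z$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(4)

# P_alpha^2(z) = A + a z and P_beta^2(z) = B + b z vanish at
# z_alpha = -A/a and z_beta = -B/b respectively
A, B, a, b = rng.normal(size=4) + 1j*rng.normal(size=4)
z_alpha, z_beta = -A/a, -B/b
lhs = 1/(A*(B + b*z_alpha)) + 1/((A + a*z_beta)*B)
assert np.isclose(lhs, 1/(A*B))
\end{verbatim}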
\subsection{General Case}
The general case can be deduced by a repeated application of the process used in
the previous section. We give an outline of this here.
Consider a general ${\rm N}^{n}{\rm MHV}$ amplitude and shift all
the negative helicity legs,
$$
\bar\lambda_i \to \bar\lambda_i+z_1 r_i^{(1)} \bar\eta \, ,
\refstepcounter{eqnumber
$$
for a generic set of $r_i^{(1)}$. The amplitude can then be written as,
$$
M^n(0)=\sum_{\alpha}M^{n-k_\alpha+1}(...,\hat p)(z_{1,\alpha})
\frac{i}{P_{\alpha}^2} M^{k_\alpha+1}((-\hat p),...)(z_{1,\alpha})\,.
\label{BCFWexpansion}
\refstepcounter{eqnumber
$$
We evaluate an individual term in this by imposing a shift with parameter
$z_2$ that does not shift $P_{\alpha}^2$. We continue in this way until
we have an amplitude which is a product of MHV amplitudes with propagators,
$$
D_{\alpha_1,\alpha_2,\cdots \alpha_{n}}
=\prod ( M^{\rm MHV}) \times \prod_i{i \over P_{\alpha_i,(i-1)}^2 } \, ,
\refstepcounter{eqnumber
$$
where $i/P^2_{\alpha_i,(i-1)}$ denotes the propagator we factorised on in
the $i$-th step.
As before we gather together all terms with the same pole structure and
combine them into a single diagram. This again requires two things: firstly that the
MHV amplitudes are evaluated at the same point irrespective of the order
and secondly that the pole terms sum to yield the product of the
unshifted poles.
For the first step we note that the net effect of the shifts is to apply an
overall shift to the $n+2$ negative helicity legs of the form,
$$
\hat{\bar\lambda}_i =\bar\lambda_i +a_i\bar\eta\, .
\refstepcounter{eqnumber
$$
Since momentum conservation is preserved at each step, overall momentum
conservation is guaranteed at the final stage. This is equivalent to
two linear constraints on the $a_i$. Secondly, the net effect at
their final stage is that all the $\hat P_{\alpha_i}$ are on-shell, $\hat
P_{\alpha_i}^2=0$. This imposes $n$ further linear constraints and we are
left with a unique shift.
Summing over the different orderings now gives an expression of the form,
$$
M^n(0){\big |}_{\mbox{ CSW-diagram}}=
\left(\prod M^{\rm MHV}\right)\left(\sum_\sigma\prod_{i}
\frac{i}{\hat P_{\alpha_{\sigma(i)},(i-1)}^2}\right)\,,
\refstepcounter{eqnumber\label{diagramcontr}
$$
where $\sigma$ denotes the permutations of the labels $i=1,...,n$. The rather complicated sum in
(\ref{diagramcontr}) simply yields the product of propagators, as can be seen by
comparing with the Yang-Mills case.
As the total amplitude can be expressed as a sum of terms, each with
a specific pole structure, the ${\rm N}^n{\rm MHV}$ amplitude $M^n(0)$ can be
written in a CSW form,
$$
M^n(0)=\sum_{\mbox{ CSW-diagram}}
M^n(0){\big |}_{\mbox{ CSW-diagram}}\,,
\refstepcounter{eqnumber$$
with each CSW diagram contributing as
$$
M^n(0){\big |}_{\mbox{ CSW-diagram}}
=\left(\prod M^{\rm MHV}\right)\prod_i \frac{i}{P_{\alpha_i}^2}\, ,
\refstepcounter{eqnumber$$
as given in the rules of the previous section.
\section{Conclusions and Comments}
In this paper we have shown a new way of obtaining amplitudes for
graviton scattering, using a gravity MHV-vertex formalism that
resembles the CSW formalism for calculating tree amplitudes in
Yang-Mills theory. Given the assumption that gravity amplitudes are
sufficiently well behaved under a BCFW-style analytic continuation to
complex momenta, we have presented a direct proof of the formalism and
have illustrated its usefulness through concrete examples such as NMHV
amplitudes.
Although we have presented MHV-vertices for external gravitons only we
expect the procedure to extend to other matter types using
supersymmetry to obtain the relevant MHV-vertex~\cite{Nair1988bq,NairGravity}.
Although the existence of the CSW formalism can be motivated by
the duality with a twistor string theory, such a motivation is not so clear for gravity.
The natural candidate string theories contain conformal
supergravity~\cite{BerkovitsWitten} rather than conventional gravity.
Despite this, conventional gravity does seem to share features with
Yang-Mills theory such as the existence of a MHV-vertex construction
and the coplanarity~\cite{BeBbDu} of NMHV amplitudes which hint at the existence of
a twistor string dual theory.
\vspace{1.0cm}
\noindent{\bf Acknowledgments}
We thank Zvi Bern for many useful discussions. This research was supported in part by
the PPARC and the EPSRC of the UK.
\vfill\eject
\section{Introduction}
Although one would expect the solar corona to have the same elemental abundances as the solar photosphere, this is not always the case \citep{Pottasch1963,Meyer1985a,Meyer1985b,Widing&Feldman1989,Widing&Feldman1995,Sheeley1995,Sheeley1996}.
The abundance variation observed in the corona depends on the first ionization potential (FIP) of an element.
Elements with FIP less than approximately 10 eV are enhanced in the corona by a factor of 3--4 compared to the photosphere, whereas those elements with FIP greater than 10 eV tend to maintain their photospheric abundances.
This FIP effect is measured using the FIP bias, which is the ratio of an element's abundance in the solar atmosphere to its abundance in the photosphere. Interestingly, the FIP effect is also observed in the solar wind, where it was suggested as a means to link components back to their source regions in the solar atmosphere \citep[e.g.][]{Brooks2011,Brooks2015, Hinodereview2019}.
It is argued that the FIP effect can be due to the ponderomotive force linked to the magnetic oscillations associated with magnetohydrodynamic (MHD) waves \citep{Laming2015}.
The ponderomotive force arises from the reflection/refraction of the magnetic-like waves in the chromosphere and acts only on the low FIP ions while leaving the mainly neutral high FIP elements unaffected.
Ions are separated from neutral elements in the chromosphere and then only the ions are transported to the corona where they may be observed with enhanced abundances compared to those of the photosphere.
However, no observational evidence of this scenario was available until very recently when, by exploiting a unique combination of high-resolution observations in the chromosphere and corona with magnetic modelling, it was possible to detect magnetic perturbations in a sunspot chromosphere and find a link with the high FIP bias locations in the corona above the same sunspot \citep[][hereafter referred to as papers A and B, respectively]{Stangalini20,Deb20}. These results were also in agreement with previous studies of the same magnetic structure, where the presence of intermediate (Alfv{\'{e}}n) shocks was reported at the same locations \citep{Houston2020}. However, although providing observational support to link the FIP effect to magnetic-like waves \citep{Laming2015,Laming2017}, in papers A and B only a few possibilities were put forward to explain the surprising localised presence of magnetic perturbations only at particular locations within the sunspot umbra.
Paper A reported that the magnetic perturbations were only detected on one side of the sunspot, thus suggesting a possible role of the magnetic field geometry or the connectivity with surrounding diffuse magnetic fields. The authors suggested MHD mode conversion at the Alfv{\'{e}}n-acoustic equipartition layer (i.e., where the Alfv{\'{e}}n and acoustic speeds nearly coincide; $v_{A}=c_{s}$) as a possible cause, in agreement with \citet{Houston2020}, who detected intermediate shocks in the equipartition layer that was estimated to reside between the upper photosphere and lower chromosphere.\\
In general, waves entering the region where the Alfv{\'{e}}n and acoustic speeds nearly coincide undergo a {\it{mode conversion}} or {\it{mode transmission}} process from one form (e.g. acoustic-like to magnetic-like wave) to another \citep{Crouch2005,Suzuki2005,Cally2015}. The term `mode conversion' refers to the situation in which a wave retains its original character (i.e., fast-to-fast or slow-to-slow), yet {\it{converts}} its general nature in the form of acoustic-to-magnetic or magnetic-to-acoustic. On the other hand, with `mode transmission' one generally refers to the situation in which the wave maintains its general nature (i.e.,`magnetic-like' wave or `acoustic-like' mode), but changes character from fast-to-slow or slow-to-fast. In all cases, the attack angle, that is the angle between the wavevector and the field lines, is the dominant factor in determining both the conversion ($C$) and transmission ($T$) coefficients \citep{Cally2001,Cally2008}, with $T + |C|=1$. In particular, the fraction of incident wave energy flux transmitted from fast to slow acoustic waves is:
\begin{equation}
T = e^{-\pi k h_{s} \sin^{2}(\alpha)} \,,
\label{eq:eqT}
\end{equation}
where $k$ is the wavenumber, $h_{s}$ the thickness of the conversion layer, and $\alpha$ the attack angle. The coefficient $C$ is a complex energy fraction to take into account possible phase changes during the process of mode conversion \citep{Hansen2009}. It was estimated that the thickness of the conversion layer can be of the order of $200-250$ km \citep{Stangalini2011}.
From the above equation, it is clear that the conversion fraction $|C|$ is larger when the attack angle is larger. This implies that the field geometry plays a significant role in the mode conversion and should therefore be carefully taken into account, as postulated by paper B.\\
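For orientation, the following minimal Python sketch evaluates the transmission coefficient of Eq.~\ref{eq:eqT} for illustrative parameter values; the sound speed, wave frequency, and layer thickness below are representative assumptions rather than values inferred from our data.
\begin{verbatim}
import numpy as np

# Illustrative (assumed) parameters.
c_s = 7.0    # photospheric sound speed [km/s]
f = 5e-3     # wave frequency [Hz]
h_s = 200.0  # conversion-layer thickness [km]
k = 2.0 * np.pi * f / c_s  # acoustic wavenumber [1/km]

for alpha_deg in (0.0, 25.0, 45.0, 60.0):
    alpha = np.radians(alpha_deg)
    T = np.exp(-np.pi * k * h_s * np.sin(alpha) ** 2)
    print(f"alpha = {alpha_deg:4.0f} deg:  T = {T:.2f},  |C| = {1.0 - T:.2f}")
\end{verbatim}
With these numbers, $T$ drops from unity at normal incidence to about 0.6 at an attack angle of 25$^{\circ}$ and to about 0.1 at 60$^{\circ}$, illustrating how quickly the conversion becomes efficient as the attack angle increases.\\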
In this work, in an attempt to shed light on the different mechanisms generating the FIP effect, we investigate the wave propagation across different heights above the sunspot as a function of the plasma and magnetic field parameters, as inferred from multi-height spectropolarimetric inversions.
For this purpose, we make use of a combination of high-resolution spectropolarimetric observations acquired by IBIS in the photosphere and chromosphere, SDO/HMI line-of-sight (LOS) Dopplergrams, and SDO/AIA data to determine the wave flux across different layers of the solar atmosphere and analyze its relation to the global parameters such as inclination angles, vertical gradients of the magnetic field, and density ratios of the magnetic region.\\
This study can be preparatory for the scientific exploitation of future space missions such as Solar-C EUVST and Solar Orbiter.
\section{Observational Data}
The dataset used in this work was acquired with the Interferometric BIdimensional Spectrometer \citep[IBIS;][]{Cav2006} instrument at the Dunn Solar Telescope (DST) on 2016 May 20 under excellent seeing conditions for more than two hours, between 13:40 -- 15:30~UT. This dataset has been the main focus of other studies \citep[see, e.g.,][]{Stangalini18,Murabito19,Houston2020,Murabito20,Stangalini20,Deb20}, due to the quality of the data and the large-scale nature of the observed sunspot, which was the leading spot of AR 12546.
The observations were carried out using the Fe~{\sc{i}} 617.3~nm and Ca~{\sc{ii}} 854.2~nm lines with a sampling of 20~m\AA~and 60~m\AA, respectively. Both lines were acquired in spectropolarimetric mode with 21 spectral points and a cadence of 48~s. A standard calibration procedure (flat field, dark subtraction, polarimetric calibration) was first applied. In order to remove the residuals of atmospheric aberrations, the dataset was processed with the Multi-Object Multi-Frame Blind Deconvolution \citep[MOMFBD;][]{vanNoort2005} technique. From the final IBIS cubes, the circular polarisation (CP) signals (for both the photospheric and chromospheric lines) were calculated pixel-by-pixel following the definition given in \citet{Stangalini20}, using the maximum amplitude of the Stokes-V spectral profile.
To complement the IBIS data and better study the wave power, we use full-disk Dopplergrams acquired by the Helioseismic and Magnetic Imager \citep[HMI;][]{Schou2012} on board the Solar Dynamics Observatory \citep[SDO;][]{Pesnell2012} satellite in the interval between 13:00 -- 16:00~UT, with a cadence of 45~s. The pixel scale of these data is 0.5\arcsec. We also analyzed simultaneous Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} EUV filtergrams taken in the 304{\,}{\AA}, 171{\,}{\AA}, and 335{\,}{\AA} passbands. The pixel scale of the SDO/AIA data is 0.6\arcsec and the cadence is 12~s.
The combined IBIS, SDO/HMI and SDO/AIA data are used to investigate the spatial distribution of the wave power penetrating the higher layers of the sunspot atmosphere, in order to obtain a tomographic view of the embedded MHD processes. Figure \ref{fig:fig1_mappe} shows an overview of AR~12546 as observed by the SDO/HMI and SDO/AIA EUV (304{\,}{\AA}, 171{\,}{\AA}, and 335{\,}{\AA}) instruments (panels a, b, c and d) and by the IBIS instrument (panels e and f) on 2016 May 20. The IBIS field-of-view (FOV) captured one of the biggest sunspots of cycle~24, manifesting as a strong coherent leading positive-polarity sunspot, as displayed in the SDO/HMI magnetogram and the photospheric IBIS continuum intensity maps in Fig.~\ref{fig:fig1_mappe} (panels a and e). The AR at the time of the IBIS observations was located near the disc center, at X=35\arcsec~and Y=-90\arcsec. The magnetogram also shows an asymmetric flux distribution between the trailing and leading side of the moat region around the biggest sunspot. Indeed, moving magnetic feature (MMF) activity is observed, which is asymmetric, being more extended and vigorous on the left (east) side of the umbra, coinciding with the segment where the blue dots are observed in Fig.~\ref{fig:fig1_mappe}f.
\begin{figure*}[!htp]
\includegraphics[scale=0.3, clip, trim=10 150 10 0]{figure/static_FIP.png}
\centering
\caption{Three-dimensional view of the AR 12546. From bottom to top: SDO/HMI magnetogram, IBIS Fe~{\sc{i}} core with blue dots overplotted, and \textit{Hinode}/EIS FIP bias map. The purple surface represents the equipartition layer ($0.8< c_{s}/v_{A} < 1.2$) as inferred from the spectropolarimetric inversions. Selected field lines from a PFSS extrapolation of the coronal field link the blue dots with regions of high FIP bias on the eastern and southern edges of the sunspot, i.e., in the penumbra. See \cite{Deb20} for more details.
\label{fig:fig2static_view}}
\end{figure*}
\section{Methods and Results}
\subsection{Magnetic perturbations and local properties of the sunspot}
\begin{figure}[h]
\includegraphics[scale=0.48, clip,trim=170 40 50 50]{figure/inversions_2.png}
\caption{Inclination angle of the magnetic field at $\log_{10}(\tau)=-1.0$ (photosphere; panel a). Expansion factor of the magnetic field between photosphere and chromosphere (i.e. at $\log_{10}(\tau)=-4.6$) (panel b). Density ratio between chromosphere and photosphere (panel c). Total magnetic field gradient (panel d). The white contour in all maps represents the umbra-penumbra boundary as seen in the continuum intensity. The hatched area indicates the central region of the umbra, where saturation effects and low
photon flux are detected in the photosphere \citep{Stangalini18}. The magnetic field, the inclination angles, and the density ratios were derived from the NICOLE inversions. }
\label{fig:fig_inversion}
\end{figure}
In paper B, the authors used observations obtained with the EUV Imaging Spectrometer \citep[EIS;][]{Culhane2007} on board the \emph{Hinode} \citep{Kosugi2007} satellite to make a spatially resolved map of coronal composition, or FIP bias, in the region of the sunspot (see Fig.~\ref{fig:fig2static_view}).
Highly fractionated plasma with FIP bias of 3$^{+}$ is observed in loops rooted in the penumbra on the eastern and southern edges of the sunspot, whereas the coronal field above the umbra contains unfractionated plasma (FIP bias of 1--1.5).
On the west side, the FIP bias is approximately 2--2.5.
In order to investigate the role of the wave dynamics on the FIP effect observed at higher layers, we studied the spatial distribution of the wave power and compared it to the magnetic field and plasma parameters (namely, the field inclination, density ratio, and vertical gradient of the magnetic field) as inferred from spectropolarimetric inversions. The locations of the magnetic oscillations detected in papers A and B are shown in Fig.~\ref{fig:fig1_mappe} (panel f) and Fig.~\ref{fig:fig2static_view} as blue dots. These blue dots are not uniformly distributed within the umbra of the sunspot, but are only located towards the left side of it, close to the umbra-penumbra (UP) boundary, therefore on the same side as the trailing negative polarity of the AR (as shown by the magnetogram at the bottom of the three-dimensional view of Fig.~\ref{fig:fig2static_view}).
Using a Potential Field Source Surface (PFSS) extrapolation to model the magnetic field of the corona, the locations of the blue dots were magnetically linked to regions of high FIP bias at coronal heights as shown in Fig.~\ref{fig:fig2static_view} (see paper B for more details).
The magnetic field geometry is examined by using the non-local thermodynamic equilibrium (NLTE) inversions already presented in \citet{Murabito19}, which were carried out using the NICOLE code \citep{Nicole2015} on the same data (i.e. the best spectral scan of the data series, in terms of contrast). Both the photospheric and chromospheric lines are inverted simultaneously, thus providing a three-dimensional stratification of the most relevant atmospheric parameters in the range of explored heights. More details on the inversion procedure can be found in \citet{Murabito19}, although here we summarize the key points for completeness. As a first step, we investigated the atmospheric parameters obtained from the spectropolarimetric inversions at the location of the blue dots (Fig.~\ref{fig:fig1_mappe}f and Fig.~\ref{fig:fig2static_view}), focusing our attention on the two atmospheric heights corresponding to the maxima of the response functions of the two spectral lines (to magnetic field perturbations), i.e., $\log_{10}(\tau)\approx-1.0$ for the photospheric Fe~{\sc{i}} line and $\log_{10}(\tau)\approx-4.6$ for the chromospheric Ca~{\sc{ii}} line as reported in \citet{Murabito19}, and in agreement with the previous study by \citet{Quintero2016}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.35, clip,trim=0 0 0 50]{figure/PDF_beta1_bluedots_boxA_referee_version.png}
\includegraphics[scale=0.35, clip,trim=0 0 0 50]{figure/PDF_inclination_bluedots_version_referee_v2_boxbigger.png}
\includegraphics[scale=0.35, clip,trim=0 0 0 40]{figure/PDF_total_gradient_bluedots_version_referee_v2_boxbigger.png}
\includegraphics[scale=0.35, clip,trim=0 0 0 40]{figure/PDF_density_Ratio_bluedots_version_referee_v2_boxbigger.png}
\caption{(Panel a) PDF of the optical depths, $\log_{10}(\tau)$, corresponding to the equipartition layer ($v_{A}=c_{s}$) associated with the locations of the blue dots and the area inside box A shown in Fig.~{\ref{fig:fig1_mappe}f}. (Panel b) PDFs of the magnetic field inclination angles at photospheric ($\log_{10}(\tau)=-1.0$, red and blue histograms) and chromospheric ($\log_{10}(\tau)=-4.6$; dashed histograms) heights. (Panels c and d) PDFs of the magnetic field gradient and of the density ratio for the selected regions (i.e., the blue dots and box A).
}
\label{fig:histo}
\end{figure}
In Fig.~\ref{fig:fig_inversion} we show the photospheric (panel a)
magnetic field inclination map. Given the location of the AR during the observations, i.e., at solar disc center, we neglect any projection effects. Hence, the displayed maps are consistent with the LOS reference frame. Here, we see that at photospheric heights there is no significant difference in the inclination angle (on average) between the left and right sides of the umbra. However, at chromospheric heights we note a slightly larger magnetic field inclination corresponding to the location of the blue dots (see Fig.~{\ref{fig:fig1_mappe}}f and Fig.~\ref{fig:fig2static_view}), and thus to the locations where the magnetic perturbations linked to the coronal FIP effect are detected (see the FIP bias map in Fig.~\ref{fig:fig2static_view}). In order to examine the potential role of the mode conversion process, which occurs at the equipartition layer, we calculated the probability density function (PDF) of the optical depths (in $\log_{10}\tau$) corresponding to this layer (i.e., $c_{s}=v_{A}$), which is displayed in Fig.~{\ref{fig:histo}}a. This layer can play a significant role in the wave energy conversion \citep{Grant2018} and, as can be seen in Fig.~{\ref{fig:histo}}a, it is predominantly located very close to the low photosphere (i.e., $-1\leq\log_{10}(\tau)\leq0$). In particular, we can note that the right side of the umbra (i.e., the blue PDF in Fig.~{\ref{fig:histo}}a) has the equipartition layer at much lower geometric heights, while on the opposite side (i.e., at the location of the blue dots; green PDF in Fig.~{\ref{fig:histo}}a) this region extends more into the mid-photosphere.
To better investigate whether the magnetic field geometry could be responsible for a possible mode conversion process, we plot the distribution of the magnetic field inclinations at photospheric heights, close to the equipartition layer, at the locations corresponding to the blue dots and in an area on the opposite side (see box A in Fig.~\ref{fig:fig1_mappe}f). The PDFs of the photospheric magnetic field inclinations (Fig.~\ref{fig:histo}b) show a similar distribution for both opposite locations in the umbra, with a field inclination centered around 25$^{\circ}$ -- 30$^{\circ}$. In any case, the small differences between the two distributions, even if statistically significant, cannot play a role in the mode conversion process (see Eq.~\ref{eq:eqT}), nor can they justify the presence of magnetic-like disturbances on only one side. This is true for waves travelling in all directions, which would experience a similar conversion efficiency. However, we cannot rule out the possibility of an asymmetry in the acoustic driver itself. This aspect will be better addressed in Sect. 3.2. For comparison, Fig.~\ref{fig:histo}b displays the chromospheric inclination angles at the two considered sides (dashed histograms).
To better highlight the differences between the chromosphere and the photosphere, we report the chromospheric magnetic field inclinations in relation to their photospheric counterparts, i.e., a measure of the expansion factor, in Fig.~\ref{fig:fig_inversion}b. This map shows that on both sides (i.e., the left and right sides of the umbra) the chromospheric magnetic field is more inclined (by about a factor of 1.5), but on the left side, at the location of the blue dots, this factor reaches values of 3--4. This finding indicates that the left side of the umbra experiences an expansion of the field lines about 2 times faster than the opposite side.
\\
The difference in the magnetic field inclinations at chromospheric heights is further supported by the different magnetic field gradients obtained from the inversions. Figure~\ref{fig:fig_inversion}d displays larger (up to 400~G / $\log_{10}\tau$) negative values at the locations of the blue dots (i.e., the blue histogram in Fig.~\ref{fig:histo}c), meaning more rapid decreases of the magnetic flux with atmospheric height. In fact, the opposite side of the umbra displays smaller negative values (see the red histograms in Fig.~\ref{fig:histo}c). It is worth noting that there are pixels on the right side of the umbra with a positive gradient of the magnetic field, which means that the chromospheric magnetic field is stronger than the photospheric one. Although this finding may appear surprising, it can be explained by considering this area as being part of the central region where the magnetic sensitivity of the photospheric spectral line saturates due to the large magnetic fields, resulting in an underestimation of the photospheric magnetic field. On the other hand, the chromospheric Ca~{\sc{ii}} line is not saturated; hence, the inferred chromospheric magnetic field appears larger than the underestimated photospheric one.
The larger negative value of the magnetic field gradient found at the locations of the blue dots further supports the idea of a faster magnetic field expansion in the chromosphere for the regions associated with the coronal FIP effect, as already seen in Fig.~\ref{fig:fig_inversion}b (blue dots in Fig.~{\ref{fig:fig1_mappe}}f and Fig.~\ref{fig:fig2static_view}). The density ratio between the chromosphere and the photosphere is shown in Fig.~\ref{fig:fig_inversion}c. In agreement with the above findings, this panel and its related histogram (Fig.~\ref{fig:histo}d) illustrate that a significant density drop between the two heights occurs at the same locations as the blue dots.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth, clip, trim=0 40 0 40]{figure/Power_maps_IBIS_BDots.png}
\includegraphics[width=0.8\columnwidth, clip, trim=60 0 0 30]{figure/HMI_power.png}
\caption{Chromospheric IBIS CP (upper-left) and $I_{\mathrm{core}}$ (upper-right) power maps within the $4-10$~mHz range. The two boxes in the CP map indicate where the spectral averaging has been performed for deriving the results shown in the middle panels. The black contour in the $I_{\mathrm{core}}$ map represents the CP power contour where strong magnetic wave power is detected. Plots of the CP (middle-top) and $I_{\mathrm{core}}$ (middle-bottom) wave power averaged across the two boxes drawn in the upper-left panel. For completeness, we also report the spectral averaging for the blue dot locations as green plots. The SDO/HMI velocity $5.5-7.5$~mHz amplification map with respect to the quiet Sun velocity power is displayed in the lower panel. The contour represents the penumbra boundary. The dotted white box in the SDO/HMI velocity amplification map (for $5.5-7.5$~mHz) indicates the IBIS FOV.
}
\label{fig:ibis_power}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth, clip, trim=25 0 12 0]{figure/power_maps_AIA.png}
\caption{Acoustic power in the $5$~mHz band ($1$~mHz width) for the three AIA channels spanning from the upper chromosphere to the lower corona (upper row). Maps of the $5$~mHz to $3$~mHz power ratio for the three AIA channels (lower row). The box in all panels indicates the IBIS FOV.}
\label{fig:power_AIA}
\end{figure}
\subsection{Wave power as a function of height}
In addition to the analysis of the photospheric and chromospheric magnetic field geometries and their relation with the spatial distribution of the blue dots (Fig.~{\ref{fig:fig1_mappe}}f and Fig.~\ref{fig:fig2static_view}), associated with enhanced coronal FIP effects (see Fig.~\ref{fig:fig2static_view}), we have examined the spatial distribution of wave power as a function of atmospheric height.
In particular, by employing chromospheric IBIS Ca~{\sc{ii}} circular polarisation (CP) and line-core Doppler-compensated intensity ($I_{\mathrm{core}}$) measurements, we computed spatially-resolved power maps averaged within the frequency range of $4-10$~mHz, which are shown in the top panels of Fig.~{\ref{fig:ibis_power}}. This frequency range is chosen in such a way as to include the maximum of the power spectrum, which is dominated by frequencies in the range of $4-4.5$~mHz in the chromosphere. These maps display unique changes in Fourier power that are cospatial with the blue dots depicted in Fig.~{\ref{fig:fig1_mappe}}f, which have been previously linked to the coronal FIP effect \citep{Deb20, Stangalini20}.
In particular, at the locations of the blue dots, we find an excess of magnetic wave power (upper-left panel of Fig.~{\ref{fig:ibis_power}}) and a deficit of magneto-acoustic wave power (upper-right panel of Fig.~{\ref{fig:ibis_power}}). Also, after averaging the whole spectra in two isolated boxes (towards the left and right sides of the umbra; see the boxes drawn in the upper-left panel of Fig.~{\ref{fig:ibis_power}}), it is possible to see broader overall magnetic and magneto-acoustic power spectra (i.e., relatively large power over a wider range of frequencies) towards the left side of the umbra (middle panels of Fig.~{\ref{fig:ibis_power}}) compared to those on the right side. For completeness, we also plot the spectra at the location of the blue dots (see the green curves in the two middle panels).\\
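As an indication of how such maps are computed, the following minimal Python sketch derives a band-averaged Fourier power map from a generic data cube ordered as (time, y, x); the 48~s cadence matches our IBIS series, while the random array is merely a placeholder for real CP or $I_{\mathrm{core}}$ data.
\begin{verbatim}
import numpy as np

def band_power_map(cube, dt, f_lo, f_hi):
    # Mean Fourier power per pixel within [f_lo, f_hi] (Hz).
    n_t = cube.shape[0]
    fluct = cube - cube.mean(axis=0)      # remove temporal mean
    power = np.abs(np.fft.rfft(fluct, axis=0)) ** 2
    freqs = np.fft.rfftfreq(n_t, d=dt)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return power[band].mean(axis=0)

# Example: 4-10 mHz band for a 48 s cadence series.
cube = np.random.rand(150, 64, 64)        # placeholder data
pmap = band_power_map(cube, dt=48.0, f_lo=4e-3, f_hi=10e-3)
\end{verbatim}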
Many authors (see, for instance, \citealp{Brown1992}) have reported a high-frequency power enhancement in the $5.5-7.5$~mHz band around active regions at both photospheric and chromospheric heights. This effect is also referred to as an acoustic halo, and is characterized by a power enhancement of up to $40-60\%$ with respect to the nearby quiet Sun \citep{Hindman1998}. It was suggested that this acoustic enhancement could be due to fast-wave refraction by inclined magnetic fields in the proximity of the equipartition layer \citep{Khomenko2009}.\\
In order to check whether the observed differences in the power spectrum between the left and right sides of the umbra are accompanied by a similar asymmetry of the acoustic halo, we computed the amplification map of the Doppler velocity power at high frequency ($5.5-7.5$~mHz), with respect to the quiet Sun, using SDO/HMI Dopplergrams from a larger FOV centered around the sunspot. From the lower panel of Fig.~{\ref{fig:ibis_power}}, a high-frequency acoustic flux imbalance (with respect to the quiet Sun) is detected towards the left side of the sunspot, cospatial with the locations where enhanced magnetic wave activity and suppressed magneto-acoustic wave activity are found in the IBIS data sequence. The acoustic halo displays a power enhancement of the order of $50\%$, thus in agreement with previous studies \citep{Hindman1998}, although asymmetric with respect to what is generally reported. \\
Furthermore, we investigated the wave power extracted from EUV intensity data associated with different temperature responses in the SDO/AIA filtergrams. In particular, we have used the three SDO/AIA channels at 304{\,}{\AA} ($\log_{10}$T$ \approx 4.7$), 171{\,}{\AA} ($\log_{10}$T$ \approx 5.8$), and 335{\,}{\AA} ($\log_{10}$T$ \approx 6.4$), where T is the peak temperature response of the relevant channels \citep{Boerner2012}. As such, the selected SDO/AIA channels sample different approximate heights in the solar atmosphere, ranging from the lower transition region through to the upper corona. In particular, we look for differences in the acoustic flux transmission between these layers that can be linked to the locations where the coronal FIP effect is present (blue dots and FIP bias map shown in Fig.~{\ref{fig:fig2static_view}}). In the upper panels of Fig.~\ref{fig:power_AIA} we show the spatially-resolved power maps for the three SDO/AIA channels at a frequency of 5~mHz, which is the same frequency that dominates the chromospheric locations corresponding to the blue dots in Fig.~{\ref{fig:fig1_mappe}}f. In addition, the lower panels of Fig.~{\ref{fig:power_AIA}} show the ratios of oscillatory power between the 5~mHz and 3~mHz frequencies for each channel. Once again, despite the apparent visual symmetry of the sunspot (see, e.g., Fig.~{\ref{fig:fig1_mappe}}e), we observe an acoustic flux imbalance between the left and right sides of the magnetic structure. Indeed, for the coronal channels (171{\,}{\AA} and 335{\,}{\AA}), we note that there exist specific spatial locations, notably towards the side opposite to the blue dot locations, where there is a 5~mHz power excess, while this is not the case for the 304{\,}{\AA} map, which is formed lower in the solar atmosphere and displays an almost homogeneous ring of oscillatory power \citep[similar to that shown in comparable upper-chromospheric circular-shaped sunspot umbrae;][]{Jess2013}.
Similarly, the lower panels of Fig.~\ref{fig:power_AIA} highlight a power deficit at coronal heights cospatial with the locations displaying the enhanced FIP effect. In particular, we observe that the high-frequency acoustic flux is not able to penetrate the upper layers of the solar atmosphere at all locations, manifesting as a clear deficit of power in the location of the blue dots (shown in Fig.~{\ref{fig:fig1_mappe}}f and Fig.~{\ref{fig:fig2static_view}}). This means that the wave energy is blocked/lost at some point in the lower atmosphere, suggesting that the umbral magneto-acoustic waves linked with the blue dots in Fig.~{\ref{fig:fig1_mappe}}f are unable to reach coronal heights.
It is important to keep in mind that the SDO/AIA filtergrams are sensitive only to intensity (a proxy for density in optically thin media) fluctuations, and it is possible that these fluctuations exist but are smaller than the detection limit of the instrument.
We summarize our main observational findings as follows:
\begin{enumerate}
\item The averaged \textit{photospheric} inclination angles of the magnetic field lines are not significantly different between the left and right sides of the umbra.
\item Despite the apparent symmetry of the umbra, at the locations corresponding to the blue dots in Fig.~{\ref{fig:fig1_mappe}}f (i.e., the locations linked with the coronal FIP effect) we measure a faster expansion of the magnetic field lines and a significant drop in vertical plasma density.
\item High-frequency (i.e., $4-10$~mHz) power maps of chromospheric IBIS CP and $I_{\mathrm{core}}$ measurements display both an excess of magnetic power and a deficit of magneto-acoustic wave power in the locations of the blue dots in Fig.~{\ref{fig:fig1_mappe}}f (i.e., towards the left side of the umbra). This region also displays broadened CP and $I_{\mathrm{core}}$ power spectra.
\item An asymmetric wave flux excess at high frequencies (i.e., the acoustic halo at $5.5-7.5$~mHz) is observed in the photospheric SDO/HMI LOS velocity data.
\item The acoustic flux (at 5 mHz) on the left side of the sunspot (i.e., cospatial with the blue dots depicted in Fig.~{\ref{fig:fig1_mappe}}f) does not efficiently reach coronal heights, suggesting the presence of different wave propagation mechanisms at opposite sides of the sunspot umbra.
\end{enumerate}
\section{Discussion and Conclusions}
In papers A and B, it was found that regions of high FIP bias at coronal heights are magnetically linked to chromospheric locations where magnetic fluctuations are detected. The link between magnetic perturbations and high FIP bias was already proposed by \citet{Laming2015} and \citet{Deb20}, and the aforementioned works represent observational evidence of this. Also, in papers A and B a surprising asymmetry in the spatial distribution of magnetic oscillations was observed, despite the apparent circular symmetry of the sunspot investigated. These magnetic perturbations at chromospheric heights were found to be linked to the high FIP bias locations in the corona. It was also noted that the magnetic perturbations were all located on the same side as the trailing magnetic polarity of AR NOAA 12546. In addition, from the analysis of the same target at chromospheric height, \citet{Houston2020} observed the signatures of intermediate shocks. Although papers A and B were not focused on the investigation of the spatially asymmetric wave power, but only on the detection of the magnetic perturbations and their link with high FIP bias locations in the corona, a few options were put forward as a possible explanation for the excitation of the magnetic perturbations themselves. It was speculated that the magnetic field geometry or the connectivity with the outside diffuse fields could play a role in the wave excitation and propagation. Along the same line, \citet{Houston2020} also argued that the increased level of wave activity found in the same region of the sunspot could be due to the magnetic field geometry, through the mode conversion mechanism, which converts MHD waves into different modes with an efficiency depending on the magnetic field inclination with respect to the wavevector \citep{Schunker2006}. This point was also further commented on in papers A and B. Indeed, the authors in paper A concluded that a possible link between the mode conversion and the detected magnetic disturbances could exist if one takes into account the asymmetry of the distribution of the magnetic fluctuations. In this regard, although this sunspot with its peculiarities is but one case study, the results reported here suggest a plausible explanation.\\
Our results show that the magnetic field inclination at the equipartition layer (i.e., where the mode conversion takes place), located very close to the low photosphere (i.e., $-1\leq\log_{10}(\tau)\leq0$), is not significantly different on the two opposite sides of the sunspot. This means that waves travelling in all directions (e.g., $p$-modes) would experience the same conditions (i.e., attack angle), and therefore their conversion cannot justify the asymmetric wave power observed on the left side of the sunspot (i.e., blue dots). While we can rule out mode conversion for waves travelling in all directions ($p$-modes), we may hypothesize the existence of a driver that acts differently in space. Indeed, the study of the power spectrum at high frequencies for the chromospheric high-resolution CP signal and core intensity from IBIS data suggests an asymmetric photospheric driver. Furthermore, the analysis of the lower synoptic SDO/HMI observations shows an imbalanced LOS velocity power flux on the left side of the whole AR. Indeed, this map has shown
a broadening of the power spectrum, referred to as an acoustic halo, with respect to that of a quiet-Sun area (where the contribution comes from the $p$-modes only). This broadening is also observed inside the umbra at the locations of the blue dots.\\
It was suggested that the halos are due to fast-wave refraction in proximity to the equipartition layer and inclined magnetic fields \citep{Khomenko2009}. However, in contrast to what was previously reported, in this case the halo is asymmetric and cospatial with the locations of the blue dots inside the umbra.\\
An excess of acoustic wave power between the two polarities, due to strong photospheric plasma fluctuations in that region, could result in an excess of incident wave power on the left side of the sunspot itself (see Fig.~\ref{fig:Cartoon}). In this regard, it is interesting to note that an excess of moving magnetic feature activity between the two polarities is seen in HMI imagery covering the same observing window and depicted in Fig.~\ref{fig:Cartoon} (see also the online movie). This provides further evidence of a high level of small-scale plasma dynamics and perturbations that may result in a variation of the acoustic field and power. We recall that MMFs are manifestations of sunspot decay and result from erosion of the sunspot’s magnetic field by turbulent plasma motions \citep[e.g.,][]{Solanki2003}. These waves travelling toward the umbra could experience a largely inclined equipartition layer before entering the umbra, as shown by the purple surface representing the equipartition layer in Fig.~\ref{fig:fig2static_view}. In other words, the attack angle at the equipartition layer is particularly large, and this is an ideal condition for their conversion into magnetic-like waves. In Fig.~\ref{fig:fig2static_view} we also note that the blue dots correspond to regions within the umbra which a wave travelling from the left side would reach immediately after crossing the equipartition layer. In addition, the equipartition layer itself appears asymmetric with respect to the centre of the umbra (i.e., a steeper increase on the left side with respect to the right), thus possibly affecting the conversion coefficient.
However, it is important to note that the computation of the equipartition layer requires knowledge of the gas density (to estimate the Alfv{\'{e}}n speed). The NICOLE inversion code assumes hydrostatic equilibrium for the calculation of this parameter, and this may slightly shift the inferred position of the equipartition layer. Nevertheless, we would like to point out that the most relevant aspect of our interpretation is that the equipartition layer intercepts the photosphere around the umbra, not its exact position. This is guaranteed by the extreme magnetic flux of the studied sunspot. We argue that, if the above mode conversion scenario is true, the FIP effect would be aligned with the regions outside the sunspot where there is a local increase of the acoustic power.
This fact could be directly checked in the Hinode/EIS data or, in the future, in the Solar Orbiter SPICE and Solar-C observations.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth, clip]{figure/Cartoon.png}
\caption{Cartoon of the excitation mechanism of the magnetic-like waves.}
\label{fig:Cartoon}
\end{figure}
However, an additional aspect should be considered. The spectropolarimetric inversions also reveal a very interesting and previously unnoticed feature. On the left side of the sunspot, i.e., where the magnetic perturbations were detected, the field lines undergo a faster expansion with height. As a natural result, this is also accompanied by a stronger decrease of the plasma density with height. We note that these two aspects determine the ideal conditions for the development of magneto-acoustic shocks at low heights in the atmosphere, through which the energy contained in upward-propagating waves can be dissipated well before reaching the temperature minimum, earlier than one would normally expect. In particular, the fast density decrease with height determines the ideal condition for a fast steepening of the wave amplitude. The intermediate shocks reported by \citet{Houston2020} and the intense magneto-acoustic activity and magnetic perturbations reported in papers A and B appear in agreement with this scenario. This is also independently corroborated by SDO/AIA observations indicating that, compared to the right side of the sunspot, acoustic wave energy on the left side is converted/dissipated much lower in the atmosphere. We argue that the large observed density drop could be responsible for the formation of shocks at very low altitudes. However, we note that the larger field inclinations found in the chromosphere towards the left side of the sunspot may provide the necessary conditions to support a high-frequency power halo outside the umbra, similar to that first reported by \citet{Toner1993} and \citet{Braun2009}. As revealed by \citet{Khomenko2009}, such power halos may be the result of wave refraction in the vicinity of the plasma-$\beta=1$ layer. Inspection of Figures~{\ref{fig:fig2static_view}} \& {\ref{fig:fig_inversion}}(d) substantiates this hypothesis, whereby the plasma characteristics present towards the left side of the sunspot act to support wave refraction, mode conversion, and ultimate dissipation of wave energy through shock formation.
Nevertheless, the question remains as to whether the faster expansion of the field lines on the left side is a consequence of the wave energy dissipation at low heights through the aforementioned shocks, or a consequence of the overall magnetic connectivity of the sunspot. Previous studies would suggest it is the latter, as the effect of shocks on magnetic field geometry is localised and any perturbations to the field are temporary as the field relaxes back to equilibrium (e.g., \citealp{delacruz2013, Houston2018, Bose2019}). Further, only the intermediate shocks in \citet{Houston2020} are localised to the FIP region. They detect the intensity signatures associated with slow magneto-acoustic shocks across the entire outer umbra, inferring that if shocks were capable of a bulk change in the umbral field, this would be evident on the side of the sunspot unconnected with the FIP region.
In this regard, it is worth underlining that the context magnetogram obtained by SDO/HMI over a larger FOV shows the presence of small-scale magnetic fields of opposite polarity on the same side, which constitute the trailing polarity of this AR. We argue that, due to the magnetic connectivity, the bending of the field lines is larger, with the creation of low-lying loops. This is in line with the simulations reported in \citet{Dahlburg2016}, which show that the FIP effect is stronger in short, high-temperature loops.
In addition, it is also unclear whether or not the wave steepening and mode conversion are concurrent processes. In this regard, it is worth noting that the absence of intensity fluctuations in correspondence with the magnetic power seems to suggest that at those locations acoustic energy is mostly converted into magnetic-like waves. Further studies are necessary in order to shed light onto this aspect.
Further, we would like to note that this kind of study has additional important implications. The FIP effect could be regarded as a proxy for the wave energy dissipation. This is a long-debated problem \citep{Jess2015}, and the identification of the physical processes by which energy is transferred from waves to the plasma still remains a challenge.
Finally, our results highlight the importance of studying simultaneously different heights of the solar atmosphere, by combining simultaneous or nearly simultaneous ground-based spectropolarimetric observations in the lower atmosphere, with high spectral resolution data of the corona acquired from space. This is important not only for the investigation of the FIP effect, but also for the wave dissipation mechanisms for which the FIP effect itself could be considered as a proxy.
It is worth noting that this possibility will be widened by the new solar mission Solar-C EUVST \citep{Shimizu2011,Suematsu2016}, which will provide an unprecedented view of the corona at high temporal, spatial and spectral resolution. In this regard, this case study can be considered as a pathfinder for the full exploitation of its data in combination with high resolution spectropolarimetric imaging of the lower atmosphere.
\begin{acknowledgements}
The authors are grateful to the anonymous referee for useful comments.
This research received funding from the European Union’s Horizon 2020 Research and Innovation
program under grant agreements No 824135 (SOLARNET) and No 729500 (PRE-EST). This work was supported by the Italian MIUR-PRIN grant 2017 ``Circumterrestrial environment: Impact of Sun-Earth Interaction'' and by the Istituto Nazionale di Astrofisica (INAF).
DBJ and SDTG wish to thank Invest NI and Randox Laboratories Ltd. for the award of a Research \& Development Grant (059RDEN-1), in addition to the UK Science and Technology Facilities Council (STFC) for the consolidated grant ST/T00021X/1.
SJ acknowledges support from the European Research Council under the European Union Horizon 2020 research and innovation program (grant agreement No. 682462) and from the Research Council of Norway through its Centres of Excellence scheme (project No. 262622).
The authors wish to acknowledge scientific discussions with the Waves in the Lower Solar Atmosphere (WaLSA; \href{www.WaLSA.team}{www.WaLSA.team}) team, which is supported by the Research Council of Norway (project number 262622), and The Royal Society through the award of funding to host the Theo Murphy Discussion Meeting ``High-resolution wave dynamics in the lower solar atmosphere'' (grant Hooke18b/SCTM).
D.B. is funded under STFC consolidated grant number ST/S000240/1 and L.v.D.G. is partially funded under the same grant.
The work of D.H.B. was performed under contract to the Naval Research Laboratory and was funded by the NASA Hinode program.
D.M.L. is grateful to the Science Technology and Facilities Council for the award of an Ernest Rutherford Fellowship (ST/R003246/1).
The Italian scientific contribution to Solar-C is supported by the Italian Space Agency (ASI) under contract to the co-financing National Institute for Astrophysics (INAF), Accordo ASI-INAF 2021-12-HH.0 ``Missione Solar-C EUVST -- Supporto scientifico di Fase B/C/D''.
\end{acknowledgements}
\section{Introduction}
Liesegang rings appear as regular patterns in many chemical
precipitation reactions. Their discovery is usually attributed to the
German chemist Raphael Liesegang who, in 1896, observed the emergence
of concentric rings of silver dichromate precipitate in a gel of
potassium dichromate when seeded with a drop of silver nitrate
solution. Related precipitation patterns were in fact observed even
earlier, see \cite{Henisch:1988:CrystalsGL} for a historical
perspective.
{From} the modeling perspective, there are two competing points of
view. One is a ``post-nucleation'' approach in which the patterns
emerge via competitive growth of precipitation germs
\cite{Smith:1984:OstwaldST}, the other a ``pre-nucleation'' approach,
a sophisticated modification of the ``post-nucleation'' approach,
suggested by Keller and Rubinow \cite{KellerR:1981:RecurrentPL} which
is the starting point of the present work. The recent
survey~\cite{DuleyFM:2017:KellerRM} gives a comprehensive summary of
the most important published research on both approaches, including
numerical and theoretical comparisons. A direct and detailed
comparison between the two theories and the history behind can be
found in~\cite{KrugB:1999:MorphologicalCL}.
The Keller--Rubinow model is based on the chain of chemical reactions
\begin{gather*}
A + B \to C \to D
\end{gather*}
with associated reaction-diffusion equations
\begin{subequations}
\begin{gather}
a_t = \nu_a \, \Delta a - k \, a \, b \,, \\
b_t = \nu_b \, \Delta b - k \, a \, b \,, \\
c_t = \nu_c \, \Delta c + k \, a \, b - P(c,d) \,, \\
d_t = P(c,d) \,,
\end{gather}
\end{subequations}
where the rate of the precipitation reaction is described by the
function
\begin{gather}
P(c,d) =
\begin{cases}
0 & \text{if } d=0 \text{ and } c<c^\top \,, \\
\lambda \, (c-c^\bot)_+ &
\text{if } d>0 \text{ or } c \geq c^\top \,.
\end{cases}
\end{gather}
Without loss of generality, we may assume that the precipitation rate
constant $\lambda=1$; this choice is assumed in the remainder of the
paper. The precipitation function $P$ expresses that precipitation
starts only once the concentration $c$ exceeds a supersaturation
threshold $c^\top$ and continues for as long as $c$ exceeds the
saturation threshold $c^\bot$.
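For concreteness, the switching logic of $P$ can be transcribed
directly into code. The following minimal Python sketch (with
$\lambda=1$, as assumed above) is purely illustrative and not part of
the model specification.
\begin{verbatim}
def precipitation_rate(c, d, c_top, c_bot):
    # Keller--Rubinow precipitation rate P(c, d) with lambda = 1.
    # No precipitate yet and c below the supersaturation
    # threshold: no reaction.
    if d == 0.0 and c < c_top:
        return 0.0
    # Otherwise precipitate at rate (c - c_bot)_+, i.e. for as
    # long as c exceeds the saturation threshold.
    return max(c - c_bot, 0.0)
\end{verbatim}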
Using \cite{HilhorstHP:1996:FastRL,HilhorstHP:1997:DiffusionPF,HilhorstHP:2000:NonlinearDP},
Hilhorst \emph{et al.}\ \cite{HilhorstHM:2007:FastRL} studied the case
where $\nu_b=0$, $c^\bot=0$, and the ``fast reaction limit'' where
$k \to \infty$. To simplify matters, they took as the spatial domain
the positive half-axis. This is precisely the setting we shall
consider in our work and which we refer to as the HHMO-model. Writing
$u$ in place of $c$ and choosing dimensions in which $\nu_c=1$, we can
state the model as
\begin{subequations}
\label{e.original}
\begin{gather}
u_t = u_{xx} +
\frac{\alpha \beta}{2 \sqrt t} \, \delta (x - \alpha \sqrt{t})
- p[x,t;u] \, u \,,
\label{e.original.a} \\
u_x(0,t) = 0 \quad \text{for } t \geq 0 \,, \label{e.original.b} \\
u(x,0) = 0 \quad \text{for } x>0 \label{e.original.c}
\end{gather}
\end{subequations}
where the precipitation function $p[x,t;u]$ depends on $x$, $t$, and
nonlocally on $u$ via
\begin{equation}
p[x,t;u] = H
\biggl(
\int_0^t (u(x,\tau) - u^*)_+ \, \d \tau
\biggr) \,.
\label{e.hhmo-p}
\end{equation}
Here, $H$ denotes the Heaviside function with the convention that
$H(0)=0$ and $u^*$ denotes the supersaturation concentration.
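Numerically, the memory integral in \eqref{e.hhmo-p} reduces the relay
at each spatial point to a cumulative test, as the following sketch
illustrates; the trapezoidal quadrature and the function name are our
choices for illustration, not part of the model.
\begin{verbatim}
import numpy as np

def precipitation_indicator(u_history, t_grid, u_star):
    # Evaluate p[x, t; u] = H(int_0^t (u(x, tau) - u*)_+ dtau)
    # at a single point x, given samples u(x, t_k) on t_grid.
    excess = np.maximum(u_history - u_star, 0.0)
    integral = np.trapz(excess, t_grid)
    # With the convention H(0) = 0, the relay switches on only
    # once u has exceeded u_star on a set of positive measure.
    return 1.0 if integral > 0.0 else 0.0
\end{verbatim}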
Hilhorst \emph{et al.}\ \cite{HilhorstHM:2007:FastRL} further
introduce the notion of a weak solution to \eqref{e.original}. Modulo
technical details, their approach is to seek pairs $(u,p)$ that
satisfy \eqref{e.original.a} integrated against a suitable test
function such that
\begin{equation}
\label{e.hhmo-p-weak}
p(x,t) \in H
\biggl(
\int_0^t (u(x,\tau) - u^*)_+ \, \d \tau
\biggr)
\end{equation}
where $H$ now denotes the Heaviside graph
\begin{equation}
\label{p.H.def}
H(y) \in
\begin{cases}
0 & \text{when } y<0 \,, \\
[0,1] & \text{when }y=0 \,, \\
1 & \text{when } y>0 \,.
\end{cases}
\end{equation}
Additionally, they require that $p(x,t)$ takes the value $0$ whenever
$u(x,s)$ is strictly less than the threshold $u^*$ for all
$s\in[0,t]$. This can be stated as
\begin{equation}
\label{e.hhmo-p-weak-alternative}
p(x,t)\in
\begin{cases}
0&\text{ if }\sup_{s\in[0,t]}u(x,s)<u^* \,,\\
[0,1]&\text{ if }\sup_{s\in[0,t]}u(x,s)=u^* \,,\\
1 &\text{ if }\sup_{s\in[0,t]}u(x,s)>u^* \,.
\end{cases}
\end{equation}
While they prove existence of such weak solutions, they cannot assert
uniqueness, nor can they guarantee that the precipitation function $p$
is truly a binary function. Further, under the assumption that a weak
solution satisfies the stronger condition \eqref{e.hhmo-p} rather than
\eqref{e.hhmo-p-weak} and the value $u^*$ is not too big, they prove
the existence of an infinite number of distinct precipitation
regions.
In this paper, we provide evidence that the long-time behavior of
solutions to the HHMO-model is determined by an asymptotic profile
that depends only on the parameters of the equation. Heuristically,
the mechanism of convergence is the following: as soon as the concentration
exceeds the precipitation threshold $u^*$, the reaction ignites and
reduces the reactant concentration. A continuing reaction burns up
enough fuel in its neighborhood to eventually pull the concentration
below the threshold everywhere, so the reaction region cannot grow
further. Eventually, the source location will move sufficiently far
from the active reaction regions that the concentration grows again
and the reaction threshold may be surpassed again. As the source
loses strength with time, the amplitudes of the concentration change
around the source will decrease with time, getting ever closer to the
critical concentration. In fact, both numerical studies and
analytical results on a simplified model suggest that convergence of
concentration to the critical value happens within a bounded region of
space-time \cite{Darbenas:2018:PhDThesis,DarbenasO:2018:BreakdownLP},
so that the process of equilibrization is much more rapid than the typical
approach to a stable equilibrium point in a smooth dynamical system.
In $x$-$t$ coordinates, the source point is moving. To analyze the
time-asymptotic behavior, we must therefore change into parabolic
similarity coordinates, here defined as $\eta=x/\sqrt{t}$ and
$s=\sqrt{t}$. We further write $u(x,t) = v(x/\sqrt{t},\sqrt{t})$ and
$p[x,t;u] = q[\eta,s;v]$ to make transparent which coordinate system
is used at any point in the paper. In similarity coordinates, the
$\delta$-source in \eqref{e.original} is stationary at $\eta=\alpha$
but decreases in strength as time progresses. In what follows, we
look for asymptotic profiles where
\begin{equation}
\lim_{s \to \infty} v(\alpha, s) = u^* \,.
\label{e.valphalimit}
\end{equation}
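For later reference, we record the equation in similarity coordinates;
the computation is elementary bookkeeping. With $\eta=x/\sqrt t$ and
$s=\sqrt t$, we have
$u_t = -\tfrac{\eta}{2s^2} \, v_\eta + \tfrac1{2s} \, v_s$,
$u_{xx} = s^{-2} \, v_{\eta\eta}$, and
$\delta(x-\alpha\sqrt t) = s^{-1} \, \delta(\eta-\alpha)$, so that,
after multiplication by $2s^2$, equation \eqref{e.original.a} becomes
\begin{equation}
  s \, v_s
  = 2 \, v_{\eta\eta} + \eta \, v_\eta
    + \alpha \beta \, \delta(\eta - \alpha)
    - 2 s^2 \, q[\eta,s;v] \, v \,.
\end{equation}
In particular, the precipitation term is independent of $s$ precisely
when $s^2 q$ is, which motivates the specific form of $p$ imposed
below.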
In the classical setting of smooth dynamical systems, the limit
function would correspond to a stable equilibrium of the system in
$\eta$-$s$ coordinates. Here, stationarity is incompatible with the
ignition condition \eqref{e.hhmo-p-weak-alternative}. We thus impose
that $p$ takes a form such that the precipitation term loses its
$s$-dependence. This requirement can only be satisfied when
$p(x,t) = \gamma \, x^{-2} \, H(\alpha^2t-x^2)$ for some non-negative
constant $\gamma$, so the self-similar precipitation function takes
values outside of $[0,1]$; in fact, it is not even bounded.
Nonetheless, for each $\gamma \geq 0$, we can solve the stationary
problem to obtain a profile $\Phi_\gamma$, which, subject to suitable
conditions, is uniquely determined by the condition that
$\Phi_\gamma(\alpha) = u^*$ so the profile is consistent with the
conjectured limit \eqref{e.valphalimit}. Now the following picture
emerges.
With varying source strength (in the following, we will actually think
of varying $u^*$ for given values of $\alpha$ and $\beta$), there are
three distinct open regimes. When the source is insufficient to
ignite the reaction at all (``subcritical regime''), the dynamics
remains trivial. When the source strength is larger but not very
large (``transitional regime''), some reaction will be triggered
initially, but eventually diffusion into the active part of the
reaction overwhelms the source so that no further ignition occurs.
The scenario of asymptotic equilibrization cannot be maintained so that
\eqref{e.valphalimit} does not hold true. We find that solutions
anywhere in the transitional regime will converge to a universal
profile $\Phi_0$. When the source strength is large enough so that
continuing re-ignition is always possible (``supercritical regime''),
we identify a one-parameter family of profiles $\Phi_\gamma$ which
determine the long-time asymptotics of the concentration.
Throughout the paper, we use the following notion of convergence. For
the concentration, we look at the notion of uniform convergence in
$\eta$-$s$ coordinates, i.e., that
\begin{equation}
\lim_{s \to \infty} \sup_{\eta \geq 0} \,
\lvert v(\eta,s) - \Phi_\gamma(\eta) \rvert
= \lim_{t \to \infty} \sup_{\eta \geq 0} \,
\lvert u(\eta \sqrt t, t) - \Phi_\gamma(\eta) \rvert
= 0 \,.
\label{e.uniform}
\end{equation}
For brevity, we shall say that \emph{$u$ converges uniformly to
$\Phi_\gamma$}; the sense of convergence is always understood as
defined here.
For the precipitation function, the notion of convergence is more
subtle. In our statements on convergence, we make use of the
following assumption:
\begin{itemize}
\item [(P)] There exists a measurable function $p^*$ such that for
a.e.\ $x \in \R_+$,
\begin{equation}
\label{p.property}
p(x,t) = p^*(x) \quad \text{for} \quad t > x^2/\alpha^2 \,.
\end{equation}
\end{itemize}
When the concentration passes the threshold transversally, this
condition is satisfied. When the concentration reaches, but does not
exceed, the threshold on sets of positive measure, it is not known
whether weak solutions to the HHMO-model satisfy condition (P). The
problem in general is as difficult as the uniqueness problem, and
remains open. Numerical simulations show that, in similarity
coordinates, the concentration $u(\,\cdot\,,s)$ has a maximum at the
location of the source where $\eta=\alpha$ for every fixed $s$.
Hence, there is no further ignition of precipitation in the region
$\eta<\alpha$, which is expressed by condition (P).
Assuming condition (P), we can define a notion of
convergence for the precipitation function; it is
\begin{equation}
\lim_{x \to \infty} x \int_x^\infty p^*(\xi) \, \d \xi = \gamma \,.
\label{e.p-convergence}
\end{equation}
This means that, in an integral sense, the precipitation function
along the line $\eta = \alpha$ has the same long-time asymptotics as
the precipitation function of the self-similar profile, where
$p^*(x) = \gamma/x^2$.
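Indeed, for the self-similar precipitation function both sides agree
exactly at every $x>0$, as
\begin{equation}
  x \int_x^\infty \frac{\gamma}{\xi^2} \, \d \xi = \gamma \,,
\end{equation}
so that \eqref{e.p-convergence} requires only that $p^*$ reproduce this
behavior asymptotically.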
The results in this paper are the following. First, we derive an
explicit expression for $\Phi_\gamma$ and prove necessary and
sufficient conditions under which it is a solution to the stationary
problem with self-similar precipitation function. Second, we present
numerical evidence that the solution indeed converges to the
stationary profile as described. Third, assuming condition (P), we
prove that $\Phi_0$ is the stationary profile in the transitional
regime. Fourth, in the supercritical regime, we can only give a
partial result which states the following: If there is an asymptotic
profile for the HHMO-solution, it must be $\Phi_\gamma$; moreover, the
precipitation function $p$ is asymptotic to the self-similar profile
in the sense of \eqref{e.p-convergence}. Vice versa, if the
precipitation function is asymptotic to the self-similar profile, then
it also satisfies
\begin{equation}
\lim_{x \to \infty} \frac1x \int_0^x \xi^2 \, p^*(\xi) \, \d \xi
= \gamma
\end{equation}
and the concentration $u$ converges uniformly to the profile
$\Phi_\gamma$.
The main remaining open problem is the proof of unconditional
convergence to a time-asymptotic profile. Part of the difficulty is
that the necessary asymptotic behavior of the precipitation function
is non-local in time. Thus, it is not clear how to pass from
convergence on a subsequence (for example, convergence of the time
average of the concentration is easily obtained via a standard
compactness argument) to convergence in general. On the other hand,
the numerical evidence for rapid convergence is very robust and, as
mentioned earlier, for a simplified version of the HHMO-model, we have
proof that convergence and subsequent breakdown of solutions with
binary precipitation function occurs within a bounded region of
space-time \cite{DarbenasO:2018:BreakdownLP}. On the one hand, it
seems surprising---given the super-exponential convergence seen in the
simplified model---that it is so difficult to prove convergence at
all. On the other hand, the techniques necessary to control
non-locality in time for the simplified model in
\cite{DarbenasO:2018:BreakdownLP} and the complex behavior seen there
offer a glimpse at the analytical issues which still need to be
overcome. There is a second, more general open question. The
derivation of the only compatible asymptotic profile might offer, in a
more general context, an opportunity to coarse-grain dynamical systems
whose microscopic dynamics consists of strongly equilibrizing switches
as we find in the HHMO-model for Liesegang rings. A precise
understanding of the necessary conditions, however, remains wide open.
Let us explain how our work relates to the extensive literature on
relay hysteresis. The precipitation condition can be seen as a
\emph{non-ideal relay} with switching levels $0$ and $u^*$. Its
generalization to non-binary values for $p$ in \eqref{p.H.def} or
\eqref{e.hhmo-p-weak-alternative} can be seen as a \emph{completed
relay} in the sense of Visintin
\cite{Visintin:1986:EvolutionPH,Visintin:1994:DifferentialMH}, see
also Remark~\ref{r.monotonicity}. Local well-posedness of a
reaction-diffusion equation with a non-ideal relay reaction term was
proved by Gurevich \emph{et al.}\ \cite{GurevichST:2013:ReactionDE}
subject to a transversality condition on the initial data. If this
condition is violated, the solution may be continued only in the sense
of a completed relay, where existence of solutions is shown in
\cite{Visintin:1986:EvolutionPH,AikiK:2008:MathematicalMB}, but
uniqueness is generally open. Gurevich and Tikhomirov
\cite{GurevichT:2017:RattlingSD,GurevichT:2018:SpatiallyDR} show that
a spatially discrete reaction-diffusion system with relay-hysteresis
exhibits ``rattling,'' grid-scale patterns of the relay state which
are only stable in the sense of a density function. The question of
optimal regularity of solutions to reaction-diffusion models with
relay hysteresis is discussed in \cite{ApushkinskayaU:2015:FreeBP}.
For an overview of recent developments in the field, see
\cite{CurranGT:2016:RecentAR,Visintin:2014:TenIH}.
The study of the HHMO-model as introduced above shares many features
with the results in the references cited above; it is also marred by
the same difficulties. However, there is also a key difference to the
systems studied elsewhere: the source term in the HHMO-model is local
and, reflecting its origin through a fast-reaction limit, follows
parabolically self-similar scaling. Thus, the nontrivial dynamics
comes from the interplay of the parabolic scaling in the forcing and
the memory of the reaction term which is attached to locations $x$ in
physical space. The parabolic scaling also necessitates studying the
system on an unbounded domain, even though, in practice, the
concentration is rapidly decaying and can be well-approximated on
bounded domains, see Section~\ref{s.numerics} and
Appendix~\ref{a.scheme} below. The HHMO-model has enough symmetries
that a study of the long-time behavior of the solution is possible; we
are not aware of corresponding results for other reaction-diffusion
equations with relay-hysteresis.
Our results in \cite{DarbenasO:2018:BreakdownLP} can be seen as a
proof, not for the HHMO-model, but for a closely related
reaction-diffusion equation, that loss of transversality must happen
in finite time. We expect that the HHMO-model exhibits the same
qualitative behavior, i.e., for the purposes of this paper, we must
think of solutions in the sense of completed relay solutions. Thus,
despite the open issues regarding uniqueness (in the context of the
HHMO-model, see \cite{DarbenasO:2018:UniquenessSK}), and despite the
fact that discrete realizations rattle, we actually see very simple
and regular long-time dynamics. Thus, we conjecture that this
problem, and possibly a large class of related problems, is
potentially amenable to a coarse-grained description in terms of
precipitation density functions which is simpler and more regular than
the description via spatially distributed relays.
The paper is structured as follows. In the preliminary
Section~\ref{s.without}, we rewrite the equations in standard
parabolic similarity variables and derive the similarity solution
without precipitation, which is a prerequisite for defining the notion
of weak solution and is also used as a supersolution in several
proofs. In Section~\ref{weak.solution}, we recall the concept of weak
solution from \cite{HilhorstHM:2009:MathematicalSO} and prove several
elementary properties which follow directly from the definition. In
Section~\ref{s.self-similar}, we introduce the self-similar
precipitation function, derive the stationary solution in similarity
variables and prove necessary and sufficient conditions for their
existence under the required boundary conditions.
Section~\ref{s.numerics} describes the phenomenology of solutions to
the HHMO-model by numerical simulations which confirm the picture
outlined above; details about the numerical code are given in the
appendix. The final two sections are devoted to proving rigorous
results on the long-time asymptotics. In Section~\ref{s.auxiliary},
we study the long-time dynamics of a linear auxiliary problem, in
Section~\ref{s.hhmo} we use the results on the auxiliary problem to
state and prove our main theorems on the long-time behavior of the
HHMO-model.
\section{Self-similar solution without precipitation}
\label{s.without}
We begin writing \eqref{e.original} in terms of the parabolic
similarity coordinates $\eta=x/\sqrt{t}$ and $s=\sqrt{t}$. Setting
$u(x,t) = v(x/\sqrt{t},\sqrt{t})$, $p[x,t;u] = q[\eta,s;v] \equiv q$,
and
$\delta(\eta - \alpha) = \delta_\alpha(\eta) \equiv \delta_\alpha$, we
obtain
\begin{subequations}
\label{e.v}
\begin{gather}
s \, v_s - \eta \, v_\eta = 2 \, v_{\eta\eta}
+ \alpha \beta \, \delta_\alpha - 2 \, s^2 \, q[\eta,s;v] \, v \,,
\label{e.v-a} \\
v_\eta(0,s) = 0 \quad \text{for } s \geq 0 \,. \label{e.v-c}
\end{gather}
\end{subequations}
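For the reader's convenience, we record the chain-rule identities
behind \eqref{e.v-a}: with $\eta = x/\sqrt t$ and $s = \sqrt t$,
\begin{equation}
  u_t = \frac{s \, v_s - \eta \, v_\eta}{2 \, s^2} \,, \qquad
  u_{xx} = \frac{v_{\eta\eta}}{s^2} \,, \qquad
  \delta(x - \alpha \sqrt t) = \frac1s \, \delta_\alpha \,,
\end{equation}
so that \eqref{e.v-a} results from multiplying \eqref{e.original.a}
by $2 \, s^2$.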
Since the change of variables is singular at $s=0$, we cannot
translate the initial condition \eqref{e.original.c} into $\eta$-$s$
coordinates. We will augment system \eqref{e.v} with suitable
conditions when necessary.
Self-similar solutions are steady states in $\eta$-$s$ coordinates. We
first consider the case where $p=0$ or $q=0$, respectively. Then
\eqref{e.v} reduces to the ordinary differential equation
\begin{subequations}
\label{e.v-zerop}
\begin{gather}
\Psi'' + \frac\eta2 \, \Psi' + \frac{\alpha\beta}2 \, \delta(\eta-\alpha)
= 0 \,, \label{e.ode1} \\
\Psi'(0) = 0 \,, \label{e.v-zerop-b} \\
\Psi(\eta) \to 0 \quad \text{as } \eta \to \infty \,.
\label{e.v-zerop-c}
\end{gather}
\end{subequations}
Condition \eqref{e.v-zerop-c} encodes that we seek solutions where the
total amount of reactant is finite. Note that in the full
time-dependent problem, decay of the solution at spatial infinity is
encoded into the initial data and must be shown to propagate in time
within an applicable function space setting.
The integrating factor for \eqref{e.ode1} is $\exp(\tfrac14 \eta^2)$,
so that by integrating with respect to $\eta$ and using
\eqref{e.v-zerop-b} as initial condition, we find
\begin{equation}
\Psi'(\eta) = - \frac{\alpha\beta}2 \,
\e^{\tfrac{\alpha^2-\eta^2}4} \, H(\eta-\alpha) \,.
\end{equation}
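In detail, \eqref{e.ode1} is equivalent to
$\bigl( \e^{\tfrac{\eta^2}4} \, \Psi' \bigr)'
= - \frac{\alpha\beta}2 \, \e^{\tfrac{\eta^2}4} \, \delta(\eta-\alpha)$,
and a single integration over $(0,\eta]$ using \eqref{e.v-zerop-b}
yields the expression above.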
Another integration, this time on the interval $[\eta,\infty)$ using
condition \eqref{e.v-zerop-c}, yields
\begin{align}
\label{self.similar.p0}
\Psi(\eta)
& = \frac{\alpha\beta}2 \, \e^{\tfrac{\alpha^2}4}
\int_\eta^\infty \e^{-\tfrac{\zeta^2}4} \,
H(\zeta-\alpha) \, \d \zeta
\notag \\
& = \frac{\alpha \beta \sqrt\pi}2 \, \e^{\tfrac{\alpha^2}4} \cdot
\begin{cases}
\operatorname{erfc}(\alpha/2) & \text{if } \eta \leq \alpha \,, \\
\operatorname{erfc}(\eta/2) & \text{if } \eta > \alpha \,.
\end{cases}
\end{align}
Translating this result back into $x$-$t$ coordinates and setting
$\psi(x,t)=\Psi(x/\sqrt t)$, we obtain the self-similar,
zero-precipitation solution,
\begin{gather}
\label{psi.def}
\psi(x,t)
= \begin{dcases}
\frac{\alpha\beta}2 \, \e^{\tfrac{\alpha^2}4}
\int_{\alpha\vphantom\int}^\infty
\e^{-\tfrac{\zeta^2}4} \, \d \zeta &
\text{if } x \leq \alpha \sqrt{t} \,, \\
\frac{\alpha\beta}2 \, \e^{\tfrac{\alpha^2}4}
\int_{x/\sqrt{t}}^{\infty\vphantom\int}
\e^{-\tfrac{\zeta^2}4} \, \d \zeta &
\text{if } x > \alpha \sqrt{t} \,.
\end{dcases}
\end{gather}
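In terms of the complementary error function, \eqref{psi.def} can be
written compactly as
\begin{equation}
  \psi(x,t)
  = \frac{\alpha\beta\sqrt\pi}2 \, \e^{\tfrac{\alpha^2}4} \,
    \operatorname{erfc} \biggl( \frac{\max\{x/\sqrt{t}, \alpha\}}2 \biggr) \,,
\end{equation}
which makes evident that $\psi$ is non-increasing in $x$ and
non-decreasing in $t$.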
\section{Weak solutions for the HHMO-model}
\label{weak.solution}
We start with a rigorous definition of a (weak) solution for the
HHMO-model \eqref{e.original}. In this formulation, we allow for
fractional values of the precipitation function $p$ as \emph{a priori}
we do not know whether $p$ is binary, or will remain binary for all
times.
For non-negative integers $n$ and $k$, and $D \subset \R\times\R_+$
open, we write $C(D)$ to denote the set of continuous real-valued
functions on $D$, and
\begin{subequations}
\begin{gather}
C^{n,k}(D)
= \Bigl\{
f \in C(D) \colon
\frac{\partial^nf}{\partial x^n} \in C(D),
\frac{\partial^kf}{\partial t^k} \in C(D)
\Bigr\} \,.
\end{gather}
Similarly, we write $C(\R \times [0,T])$ to denote the continuous
real-valued functions on $\R \times [0,T]$, and
\begin{multline}
C^{n,k}(\R\times[0,T])
= \Bigl\{
f \in C(\R\times[0,T]) \colon \\
\frac{\partial^nf}{\partial x^n} \in C(\R\times[0,T]),
\frac{\partial^kf}{\partial t^k} \in C(\R\times[0,T])
\Bigr\} \,.
\end{multline}
\end{subequations}
It will be convenient to extend the spatial domain of the HHMO-model
to the entire real line by even reflection. We write out the notion
of weak solutions in this sense, knowing that we can always go back to
the positive half-line by restriction.
\begin{definition}
\label{weak.sol.def}
A \emph{weak solution} to problem \eqref{e.original} is a pair $(u,p)$
satisfying
\begin{enumerate}[label={\upshape(\roman*)}]
\item $u$ and $p$ are symmetric in space, i.e.\ $u(x,t)=u(-x,t)$ and
$p(x,t)=p(-x,t)$ for all $x \in \R$ and $t\ge0$,
\item\label{weak.ii} $u-\psi\in C^{1,0}(\R\times[0,T])\cap L^{\infty}(\R\times[0,T])$ for
every $T>0$,
\item\label{weak.iii} $p$ is measurable and satisfies
\eqref{e.hhmo-p-weak-alternative},
\item\label{weak.3.5} $p(x,t)$ is non-decreasing in time $t$ for every
$x \in \R$,
\item\label{weak.iv} the relation
\begin{equation}
\label{weak.sol.def.eq}
\int_0^T\int_\R\varphi_t \, (u-\psi) \, \d y \, \d s
= \int_0^T \int_\R
\bigl(
\varphi_x \, (u-\psi)_x + p \, u \, \varphi
\bigr) \, \d y \, \d s
\end{equation}
holds for every $\varphi\in C^{1,1}(\R\times[0,T])$ that vanishes for
large values of $|x|$ and for time $t=T$.
\end{enumerate}
\end{definition}
\begin{remark}
The regularity class for weak solutions we require here is less strict
than the regularity class assumed by Hilhorst \emph{et al.}\
\cite[Equation~12]{HilhorstHM:2009:MathematicalSO}, who consider
solutions of class
\begin{equation}
u-\psi \in C^{1+\ell,\frac{1+\ell}2}(\R\times[0,T])
\cap H^1_{{\operatorname{loc}}}(\R\times[0,T])
\end{equation}
for every $\ell\in(0,1)$, where $C^{\alpha,\beta}$ denotes the usual
H\"older spaces; see, e.g., \cite{LadyzhenskajaSU:1968:LinearQP}.
They prove existence of a weak solution in this stronger sense.
Clearly, every weak solution in their setting is a solution to our
problem. The question of uniqueness is open for both formulations,
but some partial results are now available
\cite{Darbenas:2018:PhDThesis, DarbenasO:2018:UniquenessSK}.
\end{remark}
\begin{remark}
\label{r.monotonicity}
The monotonicity condition \ref{weak.3.5} is not included in the
definition of weak solutions by Hilhorst \emph{et al.}\
\cite{HilhorstHM:2009:MathematicalSO}. Their construction, however,
always preserves monotonicity so that existence of solutions
satisfying this condition is guaranteed. In the following, it is
convenient to assume monotonicity. We note that, due to condition
\eqref{e.hhmo-p-weak-alternative}, monotonicity only ever becomes an
issue when $u$ grazes, but does not exceed the precipitation threshold
on sets of positive measure in space-time. We do not know if such
highly degenerate solutions exist, but the results in
\cite{DarbenasO:2018:BreakdownLP} suggest that this might be the case.
We also remark that the definition of a \emph{completed relay} by
Visintin \cite{Visintin:1986:EvolutionPH,Visintin:1994:DifferentialMH}
includes the requirement of monotonicity.
\end{remark}
To proceed, we introduce some more notation. When $u^*<\Psi(\alpha)$,
we write $\alpha^*$ to denote the unique solution to
\begin{equation}
\label{alpha.star}
\Psi(\alpha^*)=u^* \,,
\end{equation}
where $\Psi$ is the precipitation-less solution given by equation
\eqref{self.similar.p0}, and we set
\begin{equation}
\label{d.star}
D^* = \{(x,t) \colon 0<\alpha^*\sqrt t<x\} \,.
\end{equation}
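We note that $\alpha^*$ is well-defined: by \eqref{self.similar.p0},
$\Psi$ is constant equal to $\Psi(\alpha)$ on $[0,\alpha]$ and
strictly decreasing to zero on $[\alpha,\infty)$, so that for
$u^*<\Psi(\alpha)$ equation \eqref{alpha.star} has exactly one
solution, and $\alpha^*>\alpha$.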
Further, we abbreviate $[f-g](y,s)=f(y,s)-g(y,s)$ and
$[fg](y,s)=f(y,s) \, g(y,s)$.
In the following, we prove a number of properties which are implied by
the notion of weak solution. In these proofs, as well as further in
this paper, we rely on the fact that we can read
\eqref{weak.sol.def.eq} as the weak formulation of a \emph{linear}
heat equation of the form
\begin{equation}
w_t - w_{xx} = g(x,t) \,,
\label{e.w-strong}
\end{equation}
for a given bounded integrable right-hand function $g$. We shall
write the equations in their classical form \eqref{e.w-strong} where
convenient with the understanding that they are satisfied in the sense
of \eqref{weak.sol.def.eq}. Further, in the functional setting of
Definition~\ref{weak.sol.def}, the solution is regular enough such
that it is unique for fixed $g$, the Duhamel formula holds true, and,
consequently, the subsolution resp.\ supersolution principle is
applicable. For a detailed verification of these statements from first
principles, see, e.g., \cite[Appendix~B]{Darbenas:2018:PhDThesis}.
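Concretely, for \eqref{e.w-strong} with initial data $w(\cdot,0)$, the
Duhamel formula takes the form
\begin{equation}
  w(x,t)
  = \int_\R \HK(x-y,t) \, w(y,0) \, \d y
  + \int_0^t \int_\R \HK(x-y,t-\tau) \, g(y,\tau) \, \d y \, \d \tau \,,
\end{equation}
where $\HK(x,t) = (4\pi t)^{-1/2} \, \e^{-x^2/(4t)}$ for $t>0$ denotes
the standard heat kernel, written out in full in the proof of
Theorem~\ref{weak.sol.ustar.threshold.critical} below.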
\begin{lemma}
\label{u.psi}
Any weak solution $(u,p)$ of \eqref{e.original} satisfies
$[u-\psi](x,0)=0$, $0<u\le\psi$ for $t>0$, and $p=0$ on $D^*$.
\end{lemma}
\begin{proof}
The inequality $u\le\psi$ is a direct consequence of the subsolution
principle. Hence, $u\le\psi<u^*$ on $D^*$, so $p=0$ on $D^*$. Now
consider the weak solution to
\begin{subequations}
\begin{gather}
u^\ell_t = u^\ell_{xx} +
\frac{\alpha \beta}{2 \sqrt t} \, \delta (x - \alpha \sqrt{t})
- u^\ell \,,\\
u^\ell_x(0,t) = 0 \quad \text{for } t \geq 0 \,, \\
u^\ell(x,0) = 0 \quad \text{for } x>0 \,,
\end{gather}
\end{subequations}
which transforms into
\begin{equation}
(\e^t \, u^\ell)_t
= (\e^t \, u^\ell)_{xx} + \e^t \, \frac{\alpha \beta}{2 \sqrt t} \,
\delta (x - \alpha \sqrt{t}) \,.
\end{equation}
As the distribution on the right hand side is positive, the Duhamel
principle implies that $\e^t \, u^\ell$ is positive for $t>0$, and so
is $u^\ell$. Due to the subsolution principle, we find
$u\ge u^\ell > 0$ for $t>0$. Finally, since
$\lim_{t\searrow 0} \psi(x,t)=0$ for $x>0$ fixed, this implies
$[u-\psi](x,0)=0$.
\end{proof}
\begin{lemma}
\label{p.dependent.u}
The precipitation function $p$ is essentially determined by the
concentration field $u$. I.e., if $(u,p_1)$ and $(u,p_2)$ are weak
solutions to \eqref{e.original} on $\R\times[0,T]$, then $p_1= p_2$
almost everywhere on $\R\times[0,T]$.
\end{lemma}
\begin{proof}
Taking the difference of \eqref{weak.sol.def.eq} with $p=p_1$ and
$p=p_2$, we find
\begin{equation}
\int_0^T\int_\R (p_1-p_2)\, u \, \varphi\,\d x\, \d t = 0
\end{equation}
for every $\varphi\in C^{1,1}(\R\times[0,T])$ that vanishes for large
values of $|x|$ and time $t=T$. As such functions are dense in
$L^2(\R\times[0,T])$, we conclude $(p_1-p_2) \, u=0$ a.e.\ in
$\R\times[0,T]$. Moreover, $u>0$ for $t>0$, so that $p_1=p_2$ a.e.\
in $\R\times[0,T]$.
\end{proof}
\begin{theorem}[Weak solutions with subcritical precipitation threshold]
\label{weak.sol.ustar.threshold.subcritical}
When $u^*>\Psi(\alpha)$, then $(\psi,0)$ is the unique weak
solution of \eqref{e.original}.
\end{theorem}
\begin{proof}
We know that $u\le\psi$ from Lemma~\ref{u.psi}. Therefore, the
threshold $u^*$ will never be reached. So $p=0$ and, due to the
uniqueness of weak solutions for linear parabolic equations, $u=\psi$.
\end{proof}
The following result shows that, in general, we cannot expect
uniqueness of weak solutions: When the precipitation threshold is
marginal, the concentration can remain at the threshold for large
regions of space-time. Within such regions, spontaneous onset of
precipitation is possible on arbitrary subsets, thus a large number of
nontrivial weak solutions exists.
The precise result is the following.
\begin{theorem}[Weak solutions with marginal precipitation threshold]
\label{weak.sol.ustar.threshold.critical}
When $u^*=\Psi(\alpha)$, the set of weak solutions to
\eqref{e.original} is equal to the set of pairs $(u,p)$ such that
\begin{enumerate}[label={\upshape(\roman*)}]
\item\label{wsut.i} $p$ is an even measurable function taking values
in $[0,1]$,
\item $p(x,t)$ is non-decreasing in time $t$ for every
$x \in \R$,
\item\label{wsut.ii} there exists $b>0$ such that $p(x,t)=0$ if
$(x,t) \notin U=[-b,b]\times[b^2/\alpha^2, \infty)$,
\item\label{wsut.iii} $(u,p)$ satisfies the weak form of the equation of
motion, i.e., Definition~\textup{\ref{weak.sol.def}}~\ref{weak.iv}
holds true.
\end{enumerate}
\end{theorem}
\begin{proof}
Assume that $(u,p)$ is any pair satisfying
\ref{wsut.i}--\ref{wsut.iii}. To show that $(u,p)$ is a weak
solution, we need to verify that it is compatible with condition
\eqref{e.hhmo-p-weak-alternative}; all other properties are trivially
satisfied by construction. Since $u\le\psi\le\Psi(\alpha)=u^*$, it
suffices to prove that $p(x,t)>0$ implies
$\max_{\tau\in[0,t]}u(x,\tau) = u^*$. We begin by observing that
$u(x,t)=\psi(x,t)$ for all $x\in\R$ if $t\in[0,b^2/\alpha^2]$. Since,
by construction, $p(x,t)>0$ only for $(x,t)\in U$, this implies
\begin{equation}
\max_{t\in[0,b^2/\alpha^2]}u(x,t)
\ge u(x,x^2/\alpha^2)
= \psi(x,x^2/\alpha^2) = u^* \,.
\end{equation}
In other words, $(u,p)$ is compatible with
\eqref{e.hhmo-p-weak-alternative} on $U$. For $(x,t) \notin U$,
$p(x,t)=0$ and \eqref{e.hhmo-p-weak-alternative} is trivially
satisfied. Altogether, this proves that $(u,p)$ is a weak
solution on the whole domain $\R\times\R_+$.
Vice versa, assume that $(u,p)$ is a weak solution. If $p=0$ a.e.,
then $u=\psi$ and \ref{wsut.i}--\ref{wsut.iii} are satisfied for any
$b>0$. Otherwise, define
\begin{gather}
A(t) = \{ (x,\tau) \colon \tau \leq t \text{ and } p(x,\tau)>0 \} \,, \\
T = \inf \{ t>0 \colon m(A(t))>0 \}\,,
\end{gather}
where $m$ denotes the two-dimensional Lebesgue measure. By
definition, $p=0$ a.e.\ on $\R\times[0,T]$ so that $u=\psi$ on
$\R\times[0,T]$. We also note that
\begin{gather}
m(\{(x,t) \colon t\in[T,T+\varepsilon] \text{ and } p(x,t)>0\})>0
\end{gather}
for every $\varepsilon>0$ and that $u(x,t)>0$ for all $t>0$. Then for
every $t>T$, by the Duhamel principle,
\begin{equation}
u(x,t)
= \psi(x,t)
- \int_0^t\int_\R\HK(y,\tau) \, [pu](x-y,t-\tau) \, \d y\,\d \tau
< \psi(x,t) \le \Psi(\alpha) \,,
\label{e.inequ1}
\end{equation}
where $\HK$ is the standard heat kernel
\begin{equation}
\HK(x,t) = \begin{dcases}\frac1{\sqrt{4\pi t}} \,
\e^{-\tfrac{x^2}{4t}}&\text{if }t>0 \,,\\
0&\text{if }t\le0 \,.
\end{dcases}
\end{equation}
We first note that $T>0$. Indeed, if $T$ were zero, \eqref{e.inequ1}
would imply that $u(x,t)<u^*$ for all $x\neq 0$, so that $p=0$ a.e., a
contradiction. Moreover, taking $\abs{x}>\alpha\sqrt{T}$,
\begin{equation}
\max_{t\in[0,T]}u(x,t)
\le \max_{t\in[0,T]} \psi(x,t)
= \psi(x,T)
< \Psi(\alpha) \,.
\label{e.inequ2}
\end{equation}
Inequalities \eqref{e.inequ1} and \eqref{e.inequ2} imply that
$p(x,t)=0$ so that \ref{wsut.i}--\ref{wsut.iii} are satisfied with
$b=\alpha\sqrt{T}>0$.
\end{proof}
\begin{remark}
\label{r.nonunique}
Theorem~\ref{weak.sol.ustar.threshold.critical} illustrates how
non-uniqueness of weak solutions arises in the case of a marginal
precipitation threshold. One obvious solution is $u=\psi$ and $p=0$.
Solutions with nonvanishing precipitation can be constructed as
follows. Fix any $b>0$ and take any even measurable function $p^*$
taking values in $[0,1]$ with $\supp p^* \subset [-b,b]$. Set
$p(x,t) = p^*(x) \, H(t-b^2/\alpha^2)$. Then $p$ satisfies
\ref{wsut.i}--\ref{wsut.ii}. On the time interval $[0,b^2/\alpha^2]$,
$u=\psi$ satisfies the weak form. For $t>b^2/\alpha^2$, determine $u$
as the weak solution to the linear parabolic equation
\eqref{e.original.a} with the given function $p$. Then, by
construction, $(u,p)$ is a weak solution in the sense of
Definition~\ref{weak.sol.def}.
\end{remark}
\begin{remark}
\label{r.prec-condition}
Theorem~\ref{weak.sol.ustar.threshold.critical} admits more weak
solutions than those described in Remark~\ref{r.nonunique}. We note
that, in particular, the precipitation condition
\eqref{e.hhmo-p-weak-alternative} allows ``spontaneous precipitation''
even when the maximum concentration has fallen below the precipitation
threshold everywhere provided the concentration has been at the
threshold at earlier times. This behavior appears unphysical, so that
we suggest that an alternative to \eqref{e.hhmo-p-weak-alternative}
may read
\begin{equation}
\label{e.hhmo-p-weak-alternative2}
p(x,t)\in
\begin{cases}
\sup_{s<t} p(x,s) & \text{ if } u(x,t)<u^* \,,\\
[\sup_{s<t} p(x,s),1] & \text{ if } u(x,t)=u^* \,,\\
1 &\text{ if } u(x,t)>u^*
\end{cases}
\end{equation}
with initial condition $p(x,0)=0$. We note, however, that this does
not fix the uniqueness problem entirely, as the examples of
Remark~\ref{r.nonunique} show. This issue affects only highly
degenerate solutions, like the ones described by
Theorem~\ref{weak.sol.ustar.threshold.critical}, where the
concentration remains at the precipitation threshold on a set of
positive measure. For the results of this paper, we discard solutions
with spontaneous precipitation by imposing condition (P), which is an
even stronger requirement than \eqref{e.hhmo-p-weak-alternative2}.
Existence with modified precipitation conditions is an open question,
although we expect that the construction of
\cite{HilhorstHM:2009:MathematicalSO} can be adapted with minor
modifications.
\end{remark}
The following result shows that the concentration $u$ is uniformly
Lipschitz in $x$-$t$ coordinates. Note, however, that this result
does not imply a uniform Lipschitz estimate with respect to the
spatial similarity coordinate $\eta$; due to the change of coordinates
the constant will grow linearly in $t$. We will not use this result
in the remainder of the paper, but state it here as the best estimate
which we were able to obtain by direct estimation in the Duhamel
formula or using energy methods. However, if the conjectured
asymptotic behavior of the precipitation function holds true, then the
Lipschitz condition can be shown to be uniform in similarity
coordinates, too.
\begin{lemma} \label{l.Lipschitz}
Let $(u,p)$ be a weak solution to \eqref{e.original}. Then, for any
$T>0$, $u$ is uniformly Lipschitz continuous on $\R \times [T,\infty)$.
\end{lemma}
\begin{proof}
Let $w = \psi - u$. A weak solution must satisfy the Duhamel formula
(see, e.g., \cite[Appendix~B]{Darbenas:2018:PhDThesis}),
so
\begin{align}
w(x_2,t) - w(x_1,t)
& = \int_0^t \int_\R
\bigl( \HK(x_2-y, t-\tau) - \HK(x_1-y, t-\tau) \bigr) \,
[pu](y,\tau) \, \d y \, \d \tau
\notag \\
& \equiv W_{[0,t-\delta]} + W_{[t-\delta,t]} \,,
\end{align}
where we split the domain of time-integration into two subintervals
and write $W_I$ to denote the contribution from subinterval $I$. In
the following, we suppose that $x_1<x_2$ and choose
$\delta = \min \{ t, \tfrac14 \}$.
On the subinterval $[0, t-\delta]$, if not empty, we apply the
fundamental theorem of calculus, so that
\begin{equation}
W_{[0,t-\delta]}
= \int_0^{t-\delta} \int_\R \int_{x_1}^{x_2}
\HK_x(\xi-y, t-\tau) \, \d \xi \, [pu](y,\tau) \, \d y \, \d \tau \,.
\label{e.diff-int1}
\end{equation}
Now note that
\begin{align}
\lvert \HK_x(\xi-y, t-\tau) \rvert
& = \frac1{4 \sqrt\pi} \,
\frac{\lvert \xi-y \rvert}{(t-\tau)^{3/2}} \,
\e^{-\tfrac{(\xi-y)^2}{4(t-\tau)}}
\notag \\
& = \frac{\lvert \xi-y \rvert \, \sqrt{t-\tau+\delta}}%
{2 \, (t-\tau)^{3/2}} \,
\e^{-\tfrac{(\xi-y)^2 \, \delta}{4(t-\tau)(t-\tau+\delta)}} \,
\frac1{\sqrt{4 \pi (t-\tau+\delta)}} \,
\e^{-\tfrac{(\xi-y)^2}{4(t-\tau+\delta)}}
\notag \\
& = \frac1{\sqrt\delta} \,
\biggl( 1 + \frac{\delta}{t-\tau} \biggr) \,
\zeta \, \e^{-\zeta^2} \,
K(\xi-y,t-\tau+\delta)
\notag \\
& \leq c(\delta) \, K(\xi-y,t-\tau+\delta) \,,
\label{e.kx-estimate}
\end{align}
where we have defined
\begin{equation}
\zeta = \lvert \xi - y \rvert \,
\frac{\sqrt\delta}{2 \, \sqrt{t-\tau} \, \sqrt{t-\tau+\delta}}
\end{equation}
and, to obtain the final inequality in \eqref{e.kx-estimate}, note
that $\zeta \, \e^{-\zeta^2}$ is bounded and $t-\tau \geq \delta$.
Changing the order of integration in \eqref{e.diff-int1}, taking
absolute values, and inserting estimate \eqref{e.kx-estimate}, we
obtain
\begin{align}
\lvert W_{[0,t-\delta]} \rvert
& \leq c(\delta) \, \int_{x_1}^{x_2} \int_0^{t-\delta} \int_{\R}
K(\xi-y,t-\tau+\delta) \, [pu](y,\tau) \, \d y \, \d \tau \, \d \xi
\notag \\
& \leq c(\delta) \, \int_{x_1}^{x_2} \int_0^{t+\delta} \int_{\R}
K(\xi-y,t+\delta-\tau) \, [pu](y,\tau) \, \d y \, \d \tau \, \d \xi
\notag \\
& \leq c(\delta) \, \lvert x_2 - x_1 \rvert \,
\sup_{\xi \in \R} \, \abs{w(\xi,t+\delta)} \,.
\end{align}
Since $w$ is bounded, we have obtained a uniform-in-time Lipschitz
estimate for $w$ on the first subinterval.
On the subinterval $[t-\delta,t]$, we use the boundedness of $pu$, so
that we can take out this contribution in the space-time $L^\infty$
norm,
\begin{equation}
\lvert W_{[t-\delta,t]} \rvert
\leq \int_{t-\delta}^t \int_\R
\lvert \HK(x_2-y, t-\tau) - \HK(x_1-y, t-\tau) \rvert \,
\d y \, \d \tau \, \lVert pu \rVert_{L^\infty} \,.
\end{equation}
Setting $r = (x_2-x_1)/2$ and changing variables $t-\tau \mapsto \tau$, we
obtain
\begin{align}
\int_{t-\delta}^t \int_\R
& \lvert \HK(x_2-y, t-\tau) - \HK(x_1-y, t-\tau) \rvert \, \d y \, \d \tau
\notag \\
& = \int_0^\delta
\biggl(
\operatorname{erfc} \Bigl(-\frac{r}{2 \sqrt \tau} \Bigr)
- \operatorname{erfc} \Bigl(\frac{r}{2 \sqrt \tau} \Bigr)
\biggr) \, \d \tau
\notag \\
& \leq \int_0^{1/4}
\biggl(
\operatorname{erfc} \Bigl(-\frac{r}{2 \sqrt \tau} \Bigr)
- \operatorname{erfc} \Bigl(\frac{r}{2 \sqrt \tau} \Bigr)
\biggr) \, \d \tau
\notag \\
& = \frac{r}{\sqrt \pi} \, \e^{-r^2}
+ \frac12 \, \operatorname{erf} (r) - r^2 \, (1- \operatorname{erf} r)
\notag \\
& \leq c \, \lvert x_2 - x_1 \rvert \,,
\label{e.estimate-int2}
\end{align}
where the last inequality is based on the observation that $\operatorname{erf}(r)$
is a smooth odd concave function and that $r \, (1-\operatorname{erf} r)$ is
bounded. This proves a uniform-in-time Lipschitz estimate for $w$ on
the second subinterval as well. Since $\psi$ is uniformly Lipschitz
on $\R \times [T,\infty)$ by direct inspection, $u=\psi-w$ is
uniformly Lipschitz on the same domain.
\end{proof}
\begin{remark}
We note that the heat equation with arbitrary $L^\infty$ right-hand
side is not necessarily uniformly Lipschitz. This can be seen by
observing that if we carry out the integration in
\eqref{e.estimate-int2} with arbitrary $\delta$, the constant $c$ will
be proportional to $\sqrt \delta$. Thus, choosing $\delta=t$, thereby
eschewing the separate estimate for the first subinterval, we obtain a
Lipschitz constant which grows like $\sqrt t$. Without recourse to the
particular features of the HHMO-model, this result is sharp, as can be
seen by taking the standard step function as right-hand function for
the heat equation.
\end{remark}
\section{Self-similar solution for self-similar precipitation}
\label{s.self-similar}
The computation of Section~\ref{s.without} can be extended to the case
when the precipitation term in $\eta$-$s$ coordinates does not have
any explicit dependence on $s$. To do so, it is necessary that
precipitation is a function of the similarity variable $\eta$ only,
which requires that $q(\eta,s)=p(s \eta) = \gamma/(s\eta)^2$ for some
constant $\gamma>0$ which we treat as an unknown. This means that we
disregard \eqref{e.hhmo-p-weak-alternative} which defines the
precipitation function in the original HHMO-model. We also disregard
the requirement that $p \in [0,1]$ in the definition of the
generalized precipitation function \eqref{e.hhmo-p-weak}. The
advantage of this ansatz is that the coefficients of the right hand
side of \eqref{e.v} do not depend on $s$. Therefore, as we shall show
in the following, steady states which we call \emph{self-similar
solutions} indeed exist, and we establish sufficient and necessary
conditions for their existence.
As before, we seek a stationary solution for \eqref{e.v}, which now
reduces to
\begin{subequations}
\label{e.v-selfsimilar}
\begin{gather}
\Phi'' + \frac\eta2 \, \Phi' + \frac{\alpha\beta}2 \, \delta(\eta-\alpha)
- \frac\gamma{\eta^2} \, H(\alpha-\eta) \, \Phi = 0 \,,
\label{e.ode2} \\
\Phi(\eta) \to 0 \quad \text{as } \eta \to \infty \,,
\label{e.v-selfsimilar-c} \\
\Phi(\alpha) = u^* \,,
\label{e.v-selfsimilar-d} \\
\Phi'(0) = 0 \,. \label{e.v-selfsimilar-b}
\end{gather}
\end{subequations}
The additional internal boundary condition \eqref{e.v-selfsimilar-d}
models the observation that the HHMO-model drives the solution to the
critical value $u^*$ along the line $\eta=\alpha$. As we will show
below, subject to a certain solvability condition, there will be a
unique pair $(\Phi,\gamma)$ solving this system.
We interpret the derivatives in \eqref{e.ode2} in the sense of
distributions, so that
\begin{gather}
\Phi'(\eta) = \frac{\d \Phi}{\d \eta} + [\Phi(\alpha)] \, \delta_\alpha
\label{e.vprime}
\intertext{and}
\Phi''(\eta) = \frac{\d^2 \Phi}{\d \eta^2}
+ [\Phi'(\alpha)] \, \delta_\alpha + [\Phi(\alpha)] \, \delta'_\alpha \,,
\label{e.vpprime}
\end{gather}
where $[\Phi(\alpha)] = \Phi(\alpha+)-\Phi(\alpha-)$ and $\d/\d \eta$
denotes the classical derivative where the function is smooth, i.e.,
on $(0,\alpha)$ and $(\alpha,\infty)$, and takes any finite value at
$\eta=\alpha$ where the classical derivative may not exist. Inserting
\eqref{e.vprime} and \eqref{e.vpprime} into \eqref{e.ode2}, we obtain
\begin{multline}
\frac{\d^2 \Phi}{\d \eta^2} + \frac\eta2 \, \frac{\d \Phi}{\d \eta}
- \frac\gamma{\eta^2} \, H(\alpha-\eta) \, \Phi
+ \biggl(
\frac{\alpha\beta}2 + \frac\eta2 \, [\Phi(\alpha)]
+ [\Phi'(\alpha)]
\biggr) \, \delta_\alpha
+ [\Phi(\alpha)] \, \delta'_\alpha = 0 \,.
\end{multline}
Going from the most singular to the least singular term, we conclude
first that $[\Phi(\alpha)]=0$, i.e., that $\Phi$ is continuous across
the non-smooth point at $\eta=\alpha$. Second, we obtain a jump
condition for the first derivative, namely
\begin{equation}
[\Phi'(\alpha)] = - \frac{\alpha \beta}2 \,.
\label{e.jump-condition}
\end{equation}
On the interval $(\alpha,\infty)$, we need to solve
\begin{subequations}
\label{e.vupper}
\begin{gather}
\Phi_\r'' + \frac\eta2 \, \Phi_\r' = 0 \,, \label{e.ode4} \\
\Phi_\r(\eta) \to 0 \quad \text{as } \eta \to \infty \,.
\end{gather}
\end{subequations}
As in Section~\ref{s.without}, the solution to \eqref{e.vupper} is of
the form
\begin{subequations}
\label{e.rightsolution}
\begin{equation}
\Phi_\r(\eta) = C_2 \, \operatorname{erfc}(\eta/2)
\end{equation}
where, due to the internal boundary condition $\Phi(\alpha)=u^*$,
\begin{equation}
C_2 = \frac{u^*}{\operatorname{erfc} (\frac\alpha2)} \,.
\label{e.c2}
\end{equation}
\end{subequations}
Its derivative is given by
\begin{equation}
\Phi_\r'(\eta) = -C_2 \, \frac{\exp(-\eta^2/4)}{\sqrt\pi} \,.
\label{e.vupper-deriv}
\end{equation}
Similarly, on the interval $(0,\alpha)$, we need to solve
\begin{subequations}
\begin{gather}
\Phi_\l'' + \frac\eta2 \, \Phi_\l' - \frac\gamma{\eta^2} \, \Phi_\l = 0 \,,
\label{e.ode3} \\
\Phi_\l'(0) = 0 \,.
\end{gather}
\end{subequations}
Equation \eqref{e.ode3} is a particular instance of the general
confluent equation \cite[Equation
13.1.35]{AbramowitzS:1972:HandbookMF}, whose solution is readily
expressed in terms of Kummer's confluent hypergeometric function $M$,
also referred to as the confluent hypergeometric function of the first
kind ${}_1\!F_1$. The two linearly independent solutions are of the
form
\begin{subequations}
\label{e.leftsolution}
\begin{equation}
\Phi_\l(\eta) = C_1 \, \eta^\kappa \,
M \Bigl(\frac\kappa2,\kappa+\frac12, -\frac{\eta^2}4 \Bigr)
\label{e.vlowersol}
\end{equation}
where $\kappa (\kappa-1) = \gamma$ and, due to the internal boundary
condition $\Phi(\alpha)=u^*$,
\begin{equation}
C_1 = \frac{u^*}{\alpha^\kappa
M \bigl(\frac\kappa2,\kappa+\frac12,-\frac{\alpha^2}4 \bigr)} \,.
\label{e.c1}
\end{equation}
\end{subequations}
Solving for $\kappa$, we find that of the two roots
\begin{equation}
\kappa_{1,2}=\frac{1 \pm \sqrt{4\gamma+1}}2 \,,
\label{e.kappa}
\end{equation}
only the larger one is positive, corresponding to regular behavior of
the solution \eqref{e.vlowersol} at the origin. When
$\kappa_2+\frac12$ is not a negative integer, \eqref{e.leftsolution}
provides a second linearly independent solution with $\kappa=\kappa_2$
which we discard as it has a pole at $\eta=0$. When
$\kappa_2+\frac12$ is a negative integer, Kummer's function is not
defined, so that we use the method of reduction of order,
see \cite[Section 3.4]{Teschl:2012:OrdinaryDE}, to obtain a second
linearly independent solution. To do so, we assume that
$\Phi(\eta)=e(\eta) \, \Phi_\l(\eta)$ and obtain an equation for $e$,
\begin{equation}
e''+ \Bigl(2 \, \frac{\Phi_\l'}{\Phi_\l} + \frac\eta2 \Bigr) \, e'
= 0
\end{equation}
on $(0,\alpha]$. Integrating, we obtain
\begin{subequations}
\begin{gather}
e'(\eta) = C_e \, \Phi_\l^{-2}(\eta) \, \e^{-\frac{\eta^2}4} \,, \\
e(\eta) = -C_e \int^{\alpha}_\eta \Phi_\l^{-2}(\zeta) \,
\e^{-\frac{\zeta^2}4} \, \d\zeta + C_e^* \,,
\end{gather}
\end{subequations}
again on $(0,\alpha]$. Hence, the general solution to \eqref{e.ode2}
on $(0,\alpha]$ is
\begin{equation}
\Phi(\eta)
= -C_e\,\Phi_\l(\eta)\,\int^{\alpha}_\eta \Phi_\l^{-2}(\zeta) \,
\e^{-\frac{\zeta^2}4} \, \d\zeta + C_e^* \, \Phi_\l(\eta) \,.
\label{e.vlowersol.pole}
\end{equation}
To obtain a second linearly independent solution, it suffices to take
$C_e=1$ and $C_e^*=0$. We proceed to show that the first term on the
right has again a pole at $\eta=0$. Identity \cite[Equation
13.1.27]{AbramowitzS:1972:HandbookMF} reads
\begin{equation}
M\Bigl(\frac{\kappa_1}2,\kappa_1+\frac12, -\frac{\eta^2}4 \Bigr)
= \e^{-\frac{\eta^2}4} \,
M\Bigl(\frac{\kappa_1}2+\frac12,\kappa_1+\frac12, \frac{\eta^2}4 \Bigr)
> 0 \,.
\end{equation}
Due to \eqref{e.vlowersol}, we can find a positive constant $C$ such
that
\begin{equation}
e(\eta)
\le -C \int^{\alpha}_\eta \zeta^{-2\kappa_1} \, \d\zeta
= - \frac{C}{2\kappa_1-1} \,
\bigl( \eta^{-2\kappa_1+1}-\alpha^{-2\kappa_1+1} \bigr) \,.
\end{equation}
Therefore,
\begin{equation}
\Phi(\eta)
\le -\frac{C\,C_1}{2\kappa_1-1} \,
\bigl( \eta^{-\kappa_1+1}-\alpha^{-2\kappa_1+1} \,
\eta^{\kappa_1} \bigr) \,
M \Bigl(\frac{\kappa_1}2,\kappa_1+\frac12, -\frac{\eta^2}4
\Bigr) \,.
\end{equation}
Thus, the second linearly independent solution again has a pole at
$\eta=0$. Therefore, from here onward we consider only $\kappa=\kappa_1$.
Using the properties of Kummer's function
\cite[Section~13.4]{AbramowitzS:1972:HandbookMF}, the derivative of
\eqref{e.vlowersol} is readily computed as
\begin{equation}
\Phi_\l'(\eta) = C_1 \, \kappa \, \eta^{\kappa-1} \,
M \Bigl(\frac\kappa2+1,\kappa+\frac12, -\frac{\eta^2}4 \Bigr) \,.
\label{e.vlowersold-deriv}
\end{equation}
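Since $M(a,b,0)=1$, formulas \eqref{e.vlowersol} and
\eqref{e.vlowersold-deriv} show that
$\Phi_\l(\eta) \sim C_1 \, \eta^\kappa$ and
$\Phi_\l'(\eta) \sim C_1 \, \kappa \, \eta^{\kappa-1}$ as $\eta \to 0$;
in particular, the Neumann condition \eqref{e.v-selfsimilar-b} holds
if and only if $\kappa>1$ or, equivalently, $\gamma>0$.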
Finally, we use the jump condition \eqref{e.jump-condition} to
determine the constant $\gamma$. Plugging the left-hand and
right-hand solution into \eqref{e.jump-condition}, we find
\begin{equation}
\label{eq.kappa}
u^* \,
\frac{\kappa \,
M \bigl( \frac\kappa2+1,\kappa+\frac12, -\frac{\alpha^2}4 \bigr)}%
{\alpha \,
M \bigl( \frac\kappa2,\kappa+\frac12,-\frac{\alpha^2}4 \bigr)}
+ u^* \, \frac{\exp \bigl(-\frac{\alpha^2}4 \bigr)}%
{\sqrt\pi \, \operatorname{erfc} (\frac\alpha2 )}
= \frac{\alpha\beta}2 \,.
\end{equation}
\begin{figure}
\centering
\includegraphics{psi_phi_plot.pdf}
\caption{Plot of $\Psi$ and of the family of profiles $\Phi_\gamma$
for different precipitation thresholds $u^*$. The profiles in between
$\Psi$ and $\Phi_0$ correspond to the transitional regime where
$\gamma$ is negative, hence they fall outside
of the class of self-similar solutions described by
Theorem~\ref{t.selfsimilar}. Solutions to the HHMO-model in the
transitional regime always converge to $\Phi_0$, not to $\Phi_\gamma$
with $\gamma<0$.}
\label{f.profiles}
\end{figure}
To proceed, we set
\begin{equation}
\label{eq.kappa.2}
u^*_\gamma =
\left(\frac{\kappa \,
M \bigl( \frac\kappa2+1,\kappa+\frac12, -\frac{\alpha^2}4 \bigr)}%
{\alpha \,
M \bigl( \frac\kappa2,\kappa+\frac12,-\frac{\alpha^2}4 \bigr)}
+ \, \frac{\exp \bigl(-\frac{\alpha^2}4 \bigr)}%
{\sqrt\pi \, \operatorname{erfc} (\frac\alpha2 )}\right)^{-1}
\frac{\alpha\beta}2
\end{equation}
and join the right-hand solution \eqref{e.rightsolution} and left-hand
solution \eqref{e.leftsolution} to define a family of functions,
parameterized by $\gamma$, by
\begin{gather}
\Phi_\gamma(\eta)
= \begin{dcases}
\frac{u^*_\gamma\,\eta^\kappa \,
M \bigl(\frac\kappa2,\kappa+\frac12, -\frac{\eta^2}4 \bigr)}
{\alpha^\kappa \, M \bigl(\frac\kappa2,\kappa+\frac12,
-\frac{\alpha^2}4 \bigr)} & \text{ if }\eta<\alpha \,, \\
\frac{u^*_\gamma}{\operatorname{erfc} (\frac\alpha2)} \,
\operatorname{erfc} \Bigl( \frac\eta2 \Bigr)
& \text{ if }\eta\ge\alpha \,.
\end{dcases}
\label{e.phi.gamma}
\end{gather}
For future reference, we note that in $x$-$t$ coordinates, this
function takes the form
\begin{equation}
\label{e.phi.gamma.x-t}
\phi_\gamma(x,t)=\Phi_\gamma(x/{\sqrt t}) \,.
\end{equation}
At this point, we know that each $\Phi_\gamma(\eta)$ satisfies the
differential equation \eqref{e.ode2} and the decay condition
\eqref{e.v-selfsimilar-c}. However, $\Phi_\gamma$ does not
necessarily satisfy the internal boundary condition
\eqref{e.v-selfsimilar-d}, equivalent to the matching condition
\eqref{eq.kappa} which can now be expressed as $u^*_\gamma = u^*$, nor
does it necessarily satisfy the Neumann boundary condition
\eqref{e.v-selfsimilar-b}, which requires $\gamma>0$ or, equivalently,
$\kappa>1$. The following theorem states a necessary and sufficient
condition such that \eqref{eq.kappa} can be solved for $\kappa>1$ or,
equivalently, $u^*_\gamma = u^*$ can be solved for $\gamma>0$. When
this is the case, the resulting matched solution solves the entire
system \eqref{e.v-selfsimilar}.
\begin{theorem}
\label{t.selfsimilar}
Let $\alpha$, $\beta$, and $u^*$ be positive. Then the matching
condition $u^*_\gamma = u^*$ has a unique solution satisfying
$\gamma>0$ if and only if $u^*<u_0^*$. If this is the case, the
unique solution to \eqref{e.v-selfsimilar} is given by
\eqref{e.phi.gamma} with this particular value of $\gamma$.
\end{theorem}
\begin{remark}
We recall that for a \emph{subcritical} precipitation threshold where
$u^* > \Psi(\alpha)$, no precipitation can occur and $\Psi$, defined
in \eqref{self.similar.p0}, provides a self-similar solution without
precipitation. The \emph{marginal} case $u^* = \Psi(\alpha)$ is
discussed in Theorem~\ref{weak.sol.ustar.threshold.critical}. In the
\emph{transitional} regime $u_0^* \leq u^* < \Psi(\alpha)$, there is
some $\gamma \leq 0$ so that \eqref{e.phi.gamma} still solves
(\ref{e.v-selfsimilar}a--c); however, $\gamma <0$ is nonphysical and
the Neumann condition \eqref{e.v-selfsimilar-b} cannot be satisfied in
this regime. For future reference, we call the limiting case
$u^* = u_0^*$ the \emph{critical} precipitation threshold. In this
case, \eqref{e.phi.gamma} takes the form
\begin{gather}
\Phi_0(\eta)
= \begin{dcases}
\frac{u^*_0}{\operatorname{erf} (\frac\alpha2)} \, \operatorname{erf} (\frac\eta2)
& \text{ if }\eta<\alpha \,, \\
\frac{u^*_0}{\operatorname{erfc} (\frac\alpha2)} \, \operatorname{erfc} (\frac\eta2)
& \text{ if } \eta \ge \alpha \,.
\end{dcases}
\label{e.phi0}
\end{gather}
As discussed, this is not a solution, but will emerge as the universal
asymptotic profile for solutions in the transitional regime. Finally,
the \emph{supercritical} regime $u^*<u_0^*$ is the regime where
Theorem~\ref{t.selfsimilar} provides a self-similar solution to the
HHMO-model with self-similar precipitation function. The profiles for
the different cases are summarized in Figure~\ref{f.profiles}.
\end{remark}
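\begin{remark}
For orientation, we record a short computation for the critical
threshold. At $\kappa=1$, i.e.\ $\gamma=0$, Kummer's function
degenerates: using the standard special cases
$M(\frac32,\frac32,z) = \e^z$ and
$M(\frac12,\frac32,-x^2) = \frac{\sqrt\pi}{2x} \, \operatorname{erf}(x)$
recorded in \cite[Section~13.6]{AbramowitzS:1972:HandbookMF},
expression \eqref{eq.kappa.2} collapses to the closed form
\begin{equation}
u_0^* = \frac{\alpha\beta\sqrt\pi}2 \, \e^{\tfrac{\alpha^2}4} \,
\operatorname{erf} \Bigl( \frac\alpha2 \Bigr) \,
\operatorname{erfc} \Bigl( \frac\alpha2 \Bigr)
= \Psi(\alpha) \, \operatorname{erf} \Bigl( \frac\alpha2 \Bigr) \,.
\end{equation}
For $\alpha=\beta=1$, this evaluates to approximately $u_0^* \approx
0.284$ while $\Psi(\alpha) \approx 0.546$, so the simulations of
Section~\ref{s.numerics} with $u^*=0.49$ and $u^*=0.15$ indeed fall
into the transitional and supercritical regimes, respectively.
\end{remark}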
\begin{proof}[Proof of Theorem~\ref{t.selfsimilar}]
The form of the solution is determined by the preceding construction.
It remains to show that when $u^*<u_0^*$, the derivative matching
condition \eqref{eq.kappa} has a unique solution $\kappa>1$. Let us
consider the left-hand solution \eqref{e.leftsolution} as a function
of $\eta$ and $\kappa$, which we denote by $v(\eta,\kappa)$, so that
the leftmost term in \eqref{eq.kappa} is $v_\eta(\alpha,\kappa)$.
We begin by noting that
\begin{equation}
v_\eta(\alpha,1) = u^* \,
\frac{
M \bigl( \frac32 ,\frac32, -\frac{\alpha^2}4 \bigr)}%
{\alpha \,
M \bigl( \frac12, \frac32, -\frac{\alpha^2}4 \bigr)} \,.
\end{equation}
Moreover,
\begin{equation}
\lim_{\kappa\to\infty}
M \bigl(\tfrac\kappa2+1,\kappa+\tfrac12, -\tfrac{\alpha^2}4 \bigr)
= \lim_{\kappa\to\infty}
M \bigl( \tfrac\kappa2,\kappa+\tfrac12, -\tfrac{\alpha^2}4 \bigr)
= \exp \bigl(-\tfrac{\alpha^2}8 \bigr) \,,
\end{equation}
as is easily proved by using the dominated convergence theorem on the
power series representation of Kummer's function. Consequently,
$v_\eta(\alpha,\kappa)$ grows without bound as $\kappa \to \infty$.
Solvability under the condition that $u^*<u_0^*$ is then a simple
consequence of the intermediate value theorem.
To prove uniqueness, we show that $v_\eta(\alpha,\kappa)$ is strictly
monotonic in $\kappa$. For fixed $\kappa_2>\kappa_1$, we define
\begin{equation}
V(\eta) = v(\eta,\kappa_2) - v(\eta,\kappa_1) \,.
\label{def.V}
\end{equation}
Firstly, we note that $v(\eta,\kappa_1)$ and $v(\eta,\kappa_2)$ satisfy
the differential equation \eqref{e.ode3} with respective constants
$\gamma_1<\gamma_2$. Thus,
\begin{equation}
V''(\eta) + \frac\eta2 \, V'(\eta)
= \frac{\gamma_2}{\eta^2} \, V(\eta)
+ \frac{\gamma_2-\gamma_1}{\eta^2} \, v(\eta,\kappa_1) \,.
\label{e.Vpp}
\end{equation}
We note that $V(0)=V(\alpha)=0$. Assume that $V$ attains a local
non-negative maximum at $\eta_0\in(0,\alpha)$. Then $V(\eta_0)\ge0$,
$V'(\eta_0)=0$, and $V''(\eta_0)\le0$. This contradicts \eqref{e.Vpp}
as the left hand side is non-positive and the right hand side is
positive. We conclude that $V$ is negative in the interior of
$[0,\alpha]$.
In particular, this means that $V'(\alpha) \geq 0$. The proof is
complete if we show that this inequality is strict. To proceed,
assume the contrary, i.e., that $V'(\alpha)=0$. However, inserting
$V'(\alpha)=V(\alpha)=0$ into \eqref{e.Vpp}, we see that there must
exist a small left neighborhood of $\alpha$, $(\alpha_0,\alpha)$ say,
on which $V''$ is positive. This implies that $V'$ is negative and
$V$ is positive on $(\alpha_0,\alpha)$, which is a contradiction.
\end{proof}
\begin{figure}
\centering
\includegraphics{convergence-transitional.pdf}
\caption{Plot of the function $u$ for $\alpha=1.0$, $\beta=1.0$ in the
transitional regime with $u^*=0.49$ for different times $s$, together
with the conjectured limit profile $\Phi_0$.}
\label{f.1}
\end{figure}
\begin{figure}
\centering
\includegraphics{convergence-supercritical.pdf}
\caption{Plot of the function $u$ for $\alpha=1.0$, $\beta=1.0$ in the
supercritical regime with $u^*=0.15$ for different times $s$, together
with the conjectured limit profile $\Phi_\gamma$.}
\label{f.2}
\end{figure}
\begin{figure}
\centering
\includegraphics{convergence-at-eta=alpha.pdf}
\caption{Longer-time diagnostics in the supercritical regime. Shown
are two quantities on the line $\eta=\alpha$ relative to their
conjectured limits for the simulation shown in Figure~\ref{f.2}. The
growing oscillations are an effect of the finite constant grid size,
see text.}
\label{f.3}
\end{figure}
\section{Numerical results}
\label{s.numerics}
In the following, we present numerical evidence which suggests that
the profiles $\Phi_\gamma$ derived in the previous section determine
the long-time behavior of the solution to the HHMO-model. As the
concentration is expected to converge uniformly in parabolic
similarity coordinates, it is convenient to formulate the numerical
scheme directly in $\eta$-$s$ coordinates. We use simple implicit
first-order time-stepping for the concentration field and direct
propagation of the precipitation function along its characteristic
lines $x=\text{const}$ which transform to hyperbolic curves in the
$\eta$-$s$ plane. Details of the scheme are provided in
Appendix~\ref{a.scheme}.
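To fix ideas, the following minimal Python sketch shows the structure
of one such implicit first-order step for \eqref{e.v-a} on a truncated
domain $[0,L]$, with a Neumann condition at $\eta=0$ and a homogeneous
Dirichlet condition at $\eta=L$. It is an illustration only, not the
code used for the figures: all grid parameters are arbitrary
illustrative choices and, for brevity, the precipitation switch is
applied pointwise in $\eta$ instead of being propagated along the
characteristics $x=\mathrm{const}$ as described above.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_banded

alpha, beta, u_star = 1.0, 1.0, 0.15     # illustrative parameters
L, N, ds, s_end = 8.0, 800, 1.0e-3, 4.0  # illustrative grid choices

eta = np.linspace(0.0, L, N + 1)
de = eta[1] - eta[0]
f = np.zeros(N + 1)
f[int(round(alpha / de))] = alpha * beta / de  # lumped point source

v = np.zeros(N + 1)        # concentration v(eta, s)
q = np.zeros(N + 1)        # precipitation (simplified relay)
ab = np.zeros((3, N + 1))  # tridiagonal matrix in banded storage
s = 0.0
while s < s_end:
    s += ds
    # implicit Euler: [s/ds - 2 D2 - eta D1 + 2 s^2 q] v_new
    #                 = (s/ds) v_old + alpha beta delta_alpha
    ab[1, :] = s / ds + 4.0 / de**2 + 2.0 * s**2 * q    # diagonal
    ab[0, 1:] = -(2.0 / de**2 + eta[:-1] / (2.0 * de))  # superdiag.
    ab[2, :-1] = -(2.0 / de**2 - eta[1:] / (2.0 * de))  # subdiag.
    ab[0, 1] = -4.0 / de**2          # Neumann at eta = 0, ghost node
    ab[1, -1], ab[2, -2] = 1.0, 0.0  # Dirichlet row at eta = L
    rhs = s / ds * v + f
    rhs[-1] = 0.0
    v = solve_banded((1, 1), ab, rhs)
    q = np.maximum(q, (v >= u_star).astype(float))  # relay switches on
\end{verbatim}
The implicit step is unconditionally stable, which matters here
because the reaction coefficient $2 s^2 q$ in \eqref{e.v-a} grows with
$s$ and would otherwise impose an increasingly severe restriction on
the step size.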
The observed behavior is different in the transitional and in the
supercritical regime. In the transitional regime, the source term is
too weak to maintain precipitation outside of a bounded region on the
$x$-axis; in $\eta$-$s$ coordinates, this corresponds to a
precipitation region which gets squeezed onto the $s$-axis as time
progresses. In this regime, the asymptotic profile is always
$\Phi_0$; a particular example is shown in Figure~\ref{f.1}. Note
that the concentration peak drops well below the precipitation
threshold as time progresses.
Figure~\ref{f.2} shows the long-time behavior of the concentration in
the supercritical case. In this case, the limit profile is
$\Phi_\gamma$, where $\gamma$ is determined as a function of $\alpha$,
$\beta$, and $u^*$ by the solvability condition of
Theorem~\ref{t.selfsimilar}. The convergence is very robust with
respect to compactly supported changes in the initial condition (not
shown). We note that the evolution equation in $\eta$-$s$ coordinates
is singular at $s=0$, so the initial value problem is only
well-defined when the initial condition is imposed at some $s_0>0$.
For the numerical scheme, however, there is no problem initializing at
$s=0$.
We note that along the line $\eta=\alpha$, equivalent to the parabola
$t = x^2/\alpha^2$, on which the source point moves, the concentration
is converging toward the critical concentration $u^*$. At the same
time, the weighted average of the precipitation function,
\begin{equation}
h(x) = \frac1x \int_0^x \xi^2 \, p^*(\xi) \, \d \xi \,,
\end{equation}
is converging to $\gamma$ as $t \to \infty$ or, equivalently,
$x \to \infty$. This behavior is clearly visible in Figure~\ref{f.3},
where convergence in $h$ is much slower than convergence in $u$.
Figure~\ref{f.3} also shows that grid effects become increasingly
dominant as time progresses. This is due to the fact that
precipitation occupies at least one full grid cell on the line
$\eta=\alpha$. However, to be consistent with the conjectured
asymptotics, the temporal width of the precipitation region needs to
shrink to zero. In the discrete approximation, it cannot do this,
resulting in oscillations of the diagnostics with increasing
amplitude. For even larger times, the simulation eventually breaks
down completely. This behavior can be seen as a manifestation of
``rattling,'' described by Gurevich and Tikhomirov
\cite{GurevichT:2017:RattlingSD,GurevichT:2018:SpatiallyDR} in a
related setting. Here, the scaling of the problem and of the
computational domain leads to an increase of the rattling amplitude
with time.
On any fixed finite interval of time, the amplitude of the grid
oscillations vanishes as the spatial and temporal step sizes go to
zero. However, it is impossible to design a code in which this
behavior is uniform in time so long as the precipitation function
takes only binary values, i.e., strictly follows condition
\eqref{e.hhmo-p-weak-alternative}.
\section{Long-time behavior of a linear auxiliary problem}
\label{s.auxiliary}
In this section, we study the nonautonomous linear system
\begin{subequations}
\label{e.auxiliary}
\begin{gather}
u_t = u_{xx} +
\frac{\alpha \beta}{2 \sqrt t} \, \delta (x - \alpha \sqrt{t})
- p \, u \,,
\label{e.auxiliary.a} \\
u_x(0,t) = 0 \quad \text{for } t \geq 0 \,, \label{e.auxiliary.b} \\
u(x,0) = 0 \quad \text{for } x>0 \label{e.auxiliary.c}
\end{gather}
\end{subequations}
on the space-time domain $\R_+\times\R_+$. The equations coincide
with the HHMO-model (\ref{e.original.a}--c). Here, however, we
consider the precipitation function $p(x,t)$ as given, not necessarily
related to $u$ in any way. The goal of this section is to give
conditions on $p$ such that the solution $u$ converges uniformly in
parabolic similarity coordinates to one of the profiles $\Phi_\gamma$
defined in Section~\ref{s.self-similar}.
Throughout, we assume that $p \in \mathcal A$, where
\begin{multline}
\mathcal A
= \{ p\in L^1_{\operatorname{loc}}((0,\infty) \times [0,\infty)) \colon \\
\supp p\cap(\R_+\times[0,T])
\text{ is compact for every } T>0 \} \,.
\label{e.admissible}
\end{multline}
In addition, we will assume that $p$ is non-zero, non-negative,
non-decreasing in time, and satisfies condition (P) stated in the
introduction. In all of the following, we manipulate the equation
formally as if the solution was strong. A detailed verification that
all steps are indeed rigorous can be found in
\cite[Appendix~B]{Darbenas:2018:PhDThesis}; these results can be
transformed into similarity variables as in Appendix~\ref{a.weak}
below. In this context, the condition on the support of $p$ in
\eqref{e.admissible} eases the justification of the exchange of
integration and time-differentiation. More generality is clearly
possible, but this simple assumption covers all cases we need for the
purpose of this paper.
For technical reasons, we distinguish two cases which require
different treatment. In the first case, $p$ is assumed bounded. It
is then easy to show that there exists a weak solution
\begin{gather}
u - \psi \in C^{1,0}(\R_+\times \R_+)\cap L^\infty(\R_+\times\R_+) \,,
\label{e.weak-linear-class}
\end{gather}
satisfying \eqref{weak.sol.def.eq}, where $\psi$ is the solution of
the precipitation-less equation given by \eqref{self.similar.p0}; see,
e.g., \cite[Appendix B]{Darbenas:2018:PhDThesis}.
In the second case, $p$ may be unbounded. In general, the existence
of solutions is then not obvious, so that we assume a solution with
\begin{gather}
u - \psi \in C^{1,0}(\R_+ \times \R_+)\cap
W^{1,1}_{2,{\operatorname{loc}}}(\R_+ \times \R_+)
\label{e.weak-linear-class-unbounded}
\end{gather}
exists, and that this solution satisfies the bound
\begin{gather}
0\le u\le \phi_0
\quad \text{provided} \quad
\int_0^{\infty} p^*(x) \, \d x= \infty \label{u.bounded.infty}
\end{gather}
with $\phi_0$ given by \eqref{e.phi.gamma.x-t}, or
\begin{gather}
0\le u\le \psi
\quad \text{provided} \quad
\int_0^{\infty} p^*(x) \, \d x < \infty \,. \label{u.bounded}
\end{gather}
We remark that when $p$ is bounded, it is easy to prove that solutions
$u$ which decay as $x \to \infty$ satisfy the weaker bound
\eqref{u.bounded}.
\begin{remark}
\label{imposing.the.bound}
Here we will explain why we impose \eqref{u.bounded.infty}.
Proceeding formally, we fix $0<t_0<t_1$ and
$0< x_1< \alpha \sqrt{t_0}$, multiply \eqref{e.auxiliary.a} by $u$,
integrate over $[0,x_1] \times [t_0,t_1]$, and note that the domain of
integration is away from the location of the source, so that
\begin{align}
\int_{0}^{x_1} u^2 \, \d x \bigg|^{t=t_1}_{t=t_0}
& = 2 \int_{t_0}^{t_1}u_x \, u \, \d t \bigg|_{x=0}^{x=x_1}
\notag \\
& \quad
- 2 \int_{t_0}^{t_1} \int_{0}^{x_1} u_x^2 \, \d x \, \d t
- 2 \int_{0}^{x_1} p^*(x) \int_{t_0}^{t_1} u^2 \, \d t \, \d x \,.
\end{align}
As $u$ and $u_x$ are continuous on the domain of integration, the
first three integrals are finite. Thus, the last integral must be
finite, too. When $p^*$ is not integrable near zero, this can only be
true when $u(0,t)=0$ for all $t>0$. Now note that $\phi_0$ satisfies
\eqref{e.auxiliary.a} for $p\equiv0$ with Dirichlet boundary
conditions
\begin{subequations}
\begin{gather*}
\phi_0(0,t) = 0 \quad \text{for } t > 0 \,,\\
\phi_0(x,0) = \lim_{\substack{y\to x\\t\searrow0}} \phi_0(y,t) = 0
\quad \text{for } x > 0 \,.
\end{gather*}
\end{subequations}
Thus, $\phi_0$ is the natural supersolution for $u$ when $p^*$ is not
integrable.
\end{remark}
\begin{lemma}
\label{lemma.p.non-decreasing}
Let $p\in \mathcal A$ be non-negative and non-decreasing in time $t$.
Let $u$ be a weak solution to \eqref{e.auxiliary}. Then $u-\psi$ is
non-increasing in time $t$.
\end{lemma}
\begin{proof}
The proof of \cite[Lemma~3.3]{HilhorstHM:2009:MathematicalSO} applies
literally. We remark that the result in
\cite{HilhorstHM:2009:MathematicalSO} is stated for solutions to the
HHMO-model, but its proof depends only on the assumption that $p$ is
non-decreasing in $t$ and applies here as well.
\end{proof}
\begin{lemma}
\label{l.u0}
Suppose $p \in \mathcal A$ is non-zero, non-negative, and satisfies
condition \textup{(P)}. Let $u$ be a weak solution to
\eqref{e.auxiliary}. Then $u(0,t) \to 0$ as $t \to \infty$.
\end{lemma}
\begin{remark}
This lemma can be applied to weak solutions of the HHMO-model
\eqref{e.original} provided $u^*<\Psi(\alpha)$ under the additional
assumption that \eqref{p.property} is satisfied. Then, by
\cite[Lemma~3.5]{HilhorstHM:2009:MathematicalSO}, there is at least
one non-degenerate precipitation region and the assumptions of the
lemma apply.
\end{remark}
\begin{proof}
We construct a supersolution to $u$ as follows. Fix any $y^*>0$ such
that the support of $p^*$ intersects $[0,y^*]$ on a set of positive
measure. Define $t^*=(y^*/\alpha)^2$ and
\begin{equation}
p^r(x,t)
= \begin{cases}
\min\{p^*(\abs{x}), 1\}
& \text{if } x\in[-y^*,y^*] \text{ and } t\ge x^2/\alpha^2 \,, \\
0 & \text{otherwise} \,. \\
\end{cases}
\label{e.pr}
\end{equation}
Let $u^r$ denote the unique bounded weak solution to
\eqref{e.auxiliary} with $p=p^r$ and extend $u^r$ to the left half-plane by even reflection. Due to
the subsolution principle, $0\le u\le u^r$. Our goal is to show that
$u^r(0,t)\to 0$ as $t\to\infty$. Note that $p^r$ fulfills the conditions of
Lemma~\ref{lemma.p.non-decreasing}. Therefore, $u^r$ is
non-increasing in $t$ on $[-y^*,y^*]\times[t^*,\infty)$ so that
\begin{equation}
\label{C.bar}
\lim_{t \to \infty} \inf_{x\in[-y^*,y^*]}u^r(x,t) \equiv c
\end{equation}
exists. We now express $u^r(0,t)$ for $t>t^*$ via the Duhamel
formula, bound $u^r$ from below by $c$ for $\tau \ge t^*$, note that
$\HK(-y,t-\tau)$ is non-increasing in $\abs{y}$, and discard the
nonnegative contribution from $\tau < t^*$ to estimate
\begin{align}
u^r(0,t)
& = \psi(0,t)-\int_0^t\int_{-y^*}^{y^*} \HK(-y,t-\tau) \,
p^r(y) \, u^r(y,\tau) \, \d y \, \d \tau
\notag \\
& \le \Psi(0)- c \int_{-y^*}^{y^*} p^r(y) \,\d y
\int_{t^*}^t \HK(y^*,t-\tau) \, \d \tau \,.
\label{e.ur-estimate}
\end{align}
Changing variables $\tau\to t\tau^\prime$ in the second integral on the
right, we find that
\begin{align}
\int_{t^*}^t \HK(y^*,t-\tau) \, \d \tau
& = \sqrt t \int_{t^*/t}^1
\frac1{\sqrt{4\pi(1-\tau^\prime)}} \,
\e^{-\tfrac{{y^*}^2}{4t(1-\tau^\prime)}}\,\d \tau^\prime
\notag \\
& \sim \sqrt t \int_0^1\frac1{\sqrt{4\pi(1-\tau^\prime)}}\,\d \tau^\prime
= \sqrt{\frac{t}\pi}
\end{align}
as $t \to \infty$. This implies $c=0$ as otherwise
$u^r(0,t)\to-\infty$ as $t\to\infty$. Then the Harnack inequality for
the function $u^r$ on some spatial domain containing the interval
$[-y^*,y^*]$ implies that there exists a constant $C>0$ such that
\begin{equation}
\label{u.conv}
u^r(0,t)
\le \sup_{y\in[-y^*,y^*]}u^r(y,t)
\le C \, \inf_{y\in[-y^*,y^*]} u^r(y,t+\delta)
\to 0 \text{ as } t \to \infty
\end{equation}
for any fixed $\delta>0$; see, e.g.,
\cite[Section~7.1.4.b]{Evans:2010:PartialDE} and
\cite{Lieberman:1996:SecondOP}. Hence, $u(0,t)\to0$ as well.
\end{proof}
\begin{lemma}
\label{peak.lemma}
Let $p \in \mathcal A$ be non-negative and non-decreasing in time $t$.
Let $u$ be a bounded weak solution to \eqref{e.auxiliary} where, as
before, we write $u(x,t) = v(x/\sqrt{t},\sqrt{t})$. Then for every
$d>0$ and $\gamma\ge0$, the following is true.
\begin{enumerate}[label=\textup{(\alph*)}]
\item\label{peak.i} There exists $\omega\in(0,1)$ such that for every
$(\eta,s)$ with $v(\eta, s)-\Phi_\gamma(\eta)\ge d$,
\begin{equation}
\min_{s'\in[\omega s,s]} \max_{\eta'\in\R_+}
\{ v(\eta', s')-\Phi_\gamma(\eta')\} \ge d/2 \,.
\label{e.peak.i}
\end{equation}
\item\label{peak.ii} There exists $\omega\in(1,\infty)$ such that
for every $(\eta,s)$ with $v(\eta, s)-\Phi_\gamma(\eta)\le -d$,
\begin{equation}
\max_{s'\in[s,\omega s]} \min_{\eta'\in\R_+}
\{v(\eta', s')-\Phi_\gamma(\eta') \}\le -d/2 \,.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Set $V(\eta)=\Psi(\eta)-\Phi_\gamma(\eta)$. By direct inspection, we
see that $V$ is strictly decreasing on $\R_+$. In case \ref{peak.i},
\begin{equation}
d \le v(\eta, s)-\Phi_\gamma(\eta)\le V(\eta) \,.
\end{equation}
Therefore, the possible values of $\eta$ for which the assumption of
case \ref{peak.i} can be satisfied are bounded from above by some
$\eta^*=\eta^*(d,\gamma)$. By the mean value theorem, for
$\omega \in (0,1)$,
\begin{equation}
V(\eta) - V(\eta/\omega)
\leq \max_{\xi \in [\eta,\eta/\omega]} \lvert V'(\xi) \rvert \,
\Bigl( \frac\eta{\omega} - \eta \Bigr)
\leq \eta^* \, \max_{\xi \in [0, \eta^*/\omega]} \lvert V'(\xi) \rvert \,
\frac{1-\omega}\omega
\leq \frac{d}2
\label{e.Vdiff}
\end{equation}
where, in the last inequality, $\omega$ has been fixed sufficiently
close to $1$. This choice is independent of $\eta$. Now recall that
$t=s^2$ and $x=\eta s$. Choose any $s'\in[\omega s,s]$, set
$t'={s'}{\vphantom s}^2$ and $\eta'=x/s'$; since $s' \ge \omega s$, we
have $\eta' = \eta s/s' \le \eta/\omega$. Then
\begin{align}
d - (u(x,t') - \phi_\gamma(x,t'))
& \leq (u(x,t) - \phi_\gamma(x,t)) - (u(x,t') - \phi_\gamma(x,t'))
\notag \\
& = (u(x,t) - \psi(x,t)) - (u(x,t') - \psi(x,t'))
+ V(\eta) - V(\eta')
\notag \\
& \leq V(\eta) - V(\eta/\omega) \leq d/2 \,,
\end{align}
where the first inequality is due to the assumption of case
\ref{peak.i}, the second inequality is due to
Lemma~\ref{lemma.p.non-decreasing} which states that $u-\psi$ is
non-increasing in $t$ for $x$ fixed. We further used monotonicity of
$V$ in the second inequality. The last inequality is due to
\eqref{e.Vdiff}. Altogether, we see that
\begin{equation}
v(\eta', s') - \Phi_\gamma(\eta')
= u(x,t') - \phi_\gamma(x,t')
\geq d/2 \,.
\end{equation}
This proves \eqref{e.peak.i}. The proof in case \ref{peak.ii} is
similar. Notice that
\begin{equation}
v(\eta, s) \le \Phi_\gamma(\eta) - d < \Phi_\gamma(\eta).
\end{equation}
Therefore, the possible values of $\eta$ for which the assumption of
case \ref{peak.ii} can be satisfied are bounded from below by some
$\eta^*=\eta^*(d,\gamma)>0$. The rest of the argument is analogous to
case \ref{peak.i}, with the inequalities reversed.
\end{proof}
\begin{figure}
\centering
\includegraphics{domainsketch.pdf}
\caption{Sketch of the region $D_{\eta,y}$ when $\eta>\alpha$.}
\label{f.4}
\end{figure}
In the following, for positive real numbers $\eta$ and $y$, we
define
\begin{gather}
D_{\eta,y}
= \{(x,t) \colon x\ge y, 0\le t\le \eta^{-2}x^2\} \,.
\label{D.eta.y.infty}
\end{gather}
See Figure~\ref{f.4} for an illustration.
\begin{theorem}
\label{t.linear-asymptotics}
Let $p \in \mathcal A$ be non-zero, non-negative, and non-decreasing
in time, and satisfy condition \textup{(P)}. Assume further that for each
$\eta>\alpha$ there exists $y=y(\eta)$ such that $p\equiv0$ on
$D_{\eta,y}$, and that there exists $\gamma\ge 0$ such that
\begin{equation}
\lim_{x \to \infty} x \int^{\infty}_x p^*(\xi) \, \d \xi = \gamma \,,
\label{e.p-assumption}
\end{equation}
where $p^*$ denotes the values of $p$ along the line $\eta=\alpha$ as
defined in condition \textup{(P)}.
Let $u$ be a weak solution of class \eqref{e.weak-linear-class} to the
linear non-autonomous equation \eqref{e.auxiliary} with $p$ fixed as
stated. If $p$ is unbounded, assume further that $u$ is of class
\eqref{e.weak-linear-class-unbounded} and satisfies the bounds
\eqref{u.bounded.infty} or \eqref{u.bounded}. Then $u$ converges
uniformly to $\Phi_\gamma$.
\end{theorem}
\begin{proof}
Set $w=v-\Phi_\gamma$. Subtracting \eqref{e.ode2} from \eqref{e.v-a}
and noting that, by assumption, $q(\eta,s)=p^*(s \eta)$ for $\eta <
\alpha$, we obtain
\begin{subequations}
\label{w.equat}
\begin{multline}
\tfrac12 \, s \, w_s - \tfrac12 \, \eta \, w_{\eta}
= w_{\eta\eta} - s^2 \, q(\eta,s) \, w \\
+ \Bigl( \frac\gamma{\eta^2} - s^2 \, p^*(s\eta) \Bigr) \,
\Phi_\gamma \, H(\alpha-\eta)
- s^2 \, q(\eta,s) \, \Phi_\gamma \, H(\eta-\alpha)
\label{e.w.a}
\end{multline}
with assumed bounds on $w$, namely
\begin{gather}
-\Phi_\gamma\le w\le \Phi_0-\Phi_\gamma
\quad \text{provided} \quad
\int_0^{\infty} p^*(x) \, \d x= \infty
\label{e.bound-case1}
\intertext{or}
-\Phi_\gamma\le w\le \Psi-\Phi_\gamma
\quad \text{provided} \quad
\int_0^{\infty} p^*(x) \, \d x < \infty \,.
\label{e.bound-case2}
\end{gather}
\end{subequations}
To avoid boundary terms
when integrating by parts, we introduce a fourth-power function with
cut-off near zero which is defined, for every $\eps>0$, by
\begin{equation}
J_\eps(z) = \begin{cases}
0 & \text{if } |z|<\eps \,, \\
(|z|-\eps)^4 & \text{if } |z|\ge \eps \,.
\end{cases}
\end{equation}
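For $|z| \ge \eps$, the first two derivatives are given by
\begin{equation}
  J_\eps'(z) = 4 \operatorname{sgn}(z) \, (|z|-\eps)^3 \,,
  \qquad
  J_\eps''(z) = 12 \, (|z|-\eps)^2 \,,
\end{equation}
and both vanish for $|z| \le \eps$; in particular,
$4 \, (|z|-\eps)^2 = \tfrac13 \, J_\eps''(z)$ whenever $|z| \ge \eps$,
a relation used in the concluding uniform-convergence argument below.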
$J_\eps$ is at least twice continuously differentiable, even, positive,
and strictly convex on $(\eps,\infty)$. We now consider separately the
case in which $p^*$ is not integrable and the case in which it is.
\begin{case}[$p^*$ is not integrable on $\R_+$]
In this case, we have the bound \eqref{e.bound-case1}, so that
$|w| \le \Phi_0+\Phi_\gamma$. Hence, for $\eps>0$, arbitrary but fixed in
the following, there are $\eta_0=\eta_0(\eps)$ and $\eta_1=\eta_1(\eps)$
with $0<\eta_0<\eta_1<\infty$ such that $|w|\le \eps$, hence
$J_\eps(w) = 0$, for $\eta \notin (\eta_0,\eta_1)$ and all $s>0$.
We multiply \eqref{e.w.a} by $J_\eps^\prime(w)$, integrate on $\R_+$, and
examine the resulting expression term-by-term. The contribution from
the first term reads
\begin{equation}
\label{eq.r.1}
\frac12 \int_0^\infty s \, w_s \, J^\prime_\eps(w) \, \d\eta
= \frac{s}2 \, \frac{\d}{\d s}
\int_{\eta_0}^\infty J_\eps(w) \, \d\eta
\end{equation}
and the second term contributes
\begin{align}
\frac12 \int_0^\infty \eta \, w_{\eta} \, J^\prime_\eps(w) \, \d\eta
& = \frac12 \int_{\eta_0}^{\eta_1} \eta \, w_{\eta} \,
J^\prime_\eps(w) \, \d\eta
\notag \\
& = -\frac12 \int_{\eta_0}^{\eta_1} J_\eps(w) \, \d\eta
= -\frac12 \int_{\eta_0}^\infty J_\eps(w) \, \d\eta \,.
\label{eq.l.2}
\end{align}
Combining both expressions, we obtain
\begin{equation}
\label{comb.2}
\frac{s}2 \, \frac{\d}{\d s} \int_{\eta_0}^\infty J_\eps(w) \, \d\eta
+ \frac12 \int_{\eta_0}^\infty J_\eps(w) \, \d\eta
= \frac{\d}{\d s}
\biggl(
\frac{s}2 \int_{\eta_0}^\infty J_\eps(w) \, \d\eta
\biggr) \,.
\end{equation}
The contribution from the first term on the right of \eqref{e.w.a}
reads
\begin{align}
\int_0^\infty w_{\eta\eta} \, J^\prime_\eps(w) \, \d\eta
& = \int_{\eta_0}^{\eta_1}w_{\eta\eta} \, J^\prime_\eps(w) \, \d\eta
\notag \\
& = - \int_{\eta_0}^{\eta_1}w_\eta^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
= - \int_{\eta_0}^{\infty}w_\eta^2 \, J^{\prime\prime}_\eps(w) \, \d\eta\,.
\label{eq.l.1}
\end{align}
The contribution from the second term on the right of \eqref{e.w.a}
satisfies
\begin{equation}
\label{eq.l.3}
-\int_0^\infty s^2 \, q(\eta,s) \, w \, J^\prime_\eps(w) \, \d\eta
\leq 0
\end{equation}
because the product $w \, J^\prime_\eps(w)$ is clearly non-negative. To
investigate the contribution coming from the third term on the right
of \eqref{e.w.a}, we integrate by parts, so that
\begin{align}
& \int_0^\infty
\Bigl( \frac{\gamma}{\eta^2} - s^2 \, p^*(s\eta) \Bigr) \,
\Phi_\gamma \, H(\alpha-\eta) \, J^\prime_\eps(w) \, \d\eta
\notag \\
& = \int_{\eta_0}^\alpha
\Bigl( \frac{\gamma}{\eta^2} - s^2 \, p^*(s\eta) \Bigr) \,
\Phi_\gamma \, J^\prime_\eps(w) \, \d\eta
\notag \\
& = g(\alpha,s) \, \Phi_\gamma(\alpha) \, J^\prime_\eps(w(\alpha,s))
- \int_{\eta_0}^\alpha g \, \Phi^\prime_\gamma \,
J^\prime_\eps(w) \, \d\eta
- \int_{\eta_0}^\alpha g \, \Phi_\gamma \,
w_\eta \, J^{\prime\prime}_\eps(w) \, \d\eta \,,
\label{eq.l.4}
\end{align}
where $g$ is an anti-derivative of the term in parentheses, namely
\begin{equation}
g(\eta,s)
= s^2\int_{\eta}^\infty p^*(s\kappa)\, \d\kappa
- \frac\gamma\eta
= s\int_{s\eta}^\infty p^*(\zeta) \, \d\zeta
- \frac\gamma\eta \,.
\end{equation}
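In terms of the function $\Gamma$ introduced in \eqref{Gamma.def}
below, this reads $g(\eta,s) = (\Gamma(s\eta) - \gamma)/\eta$.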
We note that
for fixed $\eta>0$, due to \eqref{e.p-assumption}, $g(\eta, s)\to0$ as
$s\to\infty$.
When $\eps<u^*$, the equation $\Phi_0(\eta)=\eps$ has one root
$\eta \leq \alpha$. Since $v \leq \Phi_0$ by \eqref{e.bound-case1}, we can set $\eta_0=\eta$
so that $\eta_0 \leq \alpha$, which we assume henceforth. Combining
\eqref{eq.l.1} with the last term in \eqref{eq.l.4}, we obtain
\begin{align}
- \int_{\eta_0}^\infty & w_{\eta}^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
- \int_{\eta_0}^\alpha g \, \Phi_\gamma \, w_\eta \,
J^{\prime\prime}_\eps(w) \, \d\eta
\notag \\
& = - \int_{\alpha}^{\infty} w_\eta^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
- \int_{\eta_0}^{\alpha} \bigl( w_\eta + \tfrac12 \, g \,
\Phi_\gamma \bigr)^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
\notag \\
& \quad + \frac14 \int_{\eta_0}^\alpha g^2 \,
\Phi_\gamma^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
\notag \\
& = \frac12 \int_{\eta_0}^\alpha g^2 \,
\Phi_\gamma^2 \, J^{\prime\prime}_\eps(w) \, \d\eta - G^*(s)
\label{comb.1}
\end{align}
where
\begin{align}
G^*(s)
& = \int_{\alpha}^{\infty} w_\eta^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
+ \int_{\eta_0}^{\alpha}
\bigl( w_\eta+ \tfrac12 \, g \, \Phi_\gamma \bigr)^2 \,
J^{\prime\prime}_\eps(w) \, \d\eta
+ \frac14 \int_{\eta_0}^{\alpha}
g^2 \, \Phi_\gamma^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
\notag\\
& \ge \int_{\alpha}^{\infty} w_\eta^2 \, J^{\prime\prime}_\eps(w) \, \d\eta
+ \frac12 \int_{\eta_0}^{\alpha} w_\eta^2 \,
J^{\prime\prime}_\eps(w) \, \d\eta
\notag\\
& \ge \frac12 \int_0^{\infty} w_\eta^2 \,
J^{\prime\prime}_\eps(w) \, \d\eta \,.
\label{e.gstar}
\end{align}
We note that the first inequality of this lower bound estimate uses
the elementary inequality $(a+b)^2 \ge \tfrac12 \, a^2 - b^2$, a
consequence of the convexity of the square function.
Finally, the last term on the right of \eqref{e.w.a} is treated as
follows. We define
\begin{equation}
F(s)
= s^2 \int_0^\infty q(\eta,s) \, \Phi_\gamma \, J_\eps'(w) \,
H(\eta-\alpha) \, \d\eta
\end{equation}
and
\begin{equation}
\Gamma(x)=x\int_x^\infty p^*(\xi)\,\d \xi \,.
\label{Gamma.def}
\end{equation}
For fixed $\eta^*>\alpha$ we can find $y=y(\eta^*)$ such that
$p\equiv0$ on $D_{\eta^*,y}$, i.e., $q(\eta, s)=0$ for all
$\eta>\eta^*$ and $s\ge y/\eta^*$.
Then, for
$s\ge s_0 \equiv y/\eta^*$,
\begin{align}
F(s) \le \Phi_\gamma(\alpha) \, J_\eps'(\Psi(\alpha))
\int_\alpha^{\eta^*} s^2 \, q(\eta,s) \, \d\eta \,.
\label{e.F-bound1}
\end{align}
Since $p$ is non-decreasing in time $t$ and $s^2 \le (\eta s)^2/\alpha^2$
for $\eta \ge \alpha$, we have $q(\eta,s) \le p^*(\eta s)$ and estimate
\begin{equation}
\int_\alpha^{\eta^*} s^2 \, q(\eta, s) \, \d\eta
\leq \int_\alpha^{\eta^*} s^2 \, p^*(\eta s) \, \d\eta
= s \int_{s\alpha}^{s\eta^*} p^*(\kappa) \, \d\kappa
= \frac{\Gamma(s\alpha)}{\alpha}
- \frac{\Gamma(s\eta^*)}{\eta^*} \,.
\end{equation}
Inserting this bound into \eqref{e.F-bound1} and noting that, due to
\eqref{e.p-assumption}, $\lim_{x\to\infty}\Gamma(x)=\gamma$, we find
that
\begin{equation}
\limsup_{s \to \infty} F(s)
\le \Phi_\gamma(\alpha) \, J_\eps'(\Psi(\alpha)) \,
\Bigl( \frac\gamma\alpha - \frac\gamma{\eta^*} \Bigr) \,.
\label{e.F-bound2}
\end{equation}
Adding up the individual contributions and neglecting the clearly
non-positive terms on the right hand side as indicated, we obtain
altogether
\begin{equation}
\frac{\d}{\d s}
\biggl( \frac{s}2 \int_0^\infty J_\eps(w) \, \d\eta \biggr)
\leq G(s) - G^*(s) + F(s)
\label{e.diff-ineq}
\end{equation}
with
\begin{multline}
G(s)
= g(\alpha,s) \, \Phi_\gamma(\alpha) \, J^\prime_\eps(w(\alpha,s))
- \int_{\eta_0}^\alpha g \, \Phi^\prime_\gamma
J^\prime_\eps(w) \, \d\eta
+ \frac12 \int_{\eta_0}^\alpha
g^2 \, \Phi_\gamma^2 \, J^{\prime\prime}_\eps(w) \, \d\eta \,.
\label{G.exp}
\end{multline}
We note that $G(s) \to 0$ as $s \to \infty$. Indeed, the first term
converges to zero because $g$ converges to zero. For the two integrals,
we note that, in addition, on the interval $[\eta_0, \alpha]$
the function $g$ satisfies the uniform bound
\begin{equation}
g(\eta,s)
= \frac{\Gamma(\eta s) - \gamma}\eta
\leq \frac1\eta \, \sup_{x \geq \eta_0 s_0} \Gamma(x)
\end{equation}
which, together with the known bounds on $\Phi$, $\Phi'$, and $w$,
implies that the dominated convergence theorem is applicable. Hence,
each of the integrals converges to zero.
Integrating \eqref{e.diff-ineq} from $s_0$ to $s$, we obtain
\begin{equation}
\int_0^\infty J_\eps(w(\eta,s)) \, \d\eta
- \frac{s_0}s \int_0^\infty J_\eps(w(\eta,s_0)) \, \d\eta
\leq \frac{2}s \int_{s_0}^s
\bigl( G(\sigma) - G^*(\sigma) + F(\sigma) \bigr) \,
\d \sigma \,.
\label{e.integrated}
\end{equation}
We now take $\limsup_{s\to\infty}$. The second term on the left
vanishes trivially. Since $G$ converges to zero, so does its time
average, so its contribution is negligible in the limit. $G^*$ is
non-negative, hence can be neglected. For the contribution from $F$,
we refer to \eqref{e.F-bound2}. Hence,
\begin{equation}
\limsup_{s\to\infty}\int_0^\infty J_\eps(w) \, \d\eta
\le 2 \Phi_\gamma(\alpha) \, J_\eps'(\Psi(\alpha)) \,
\Bigl( \frac\gamma\alpha - \frac\gamma{\eta^*} \Bigr) \,.
\label{limsup.J_l.F}
\end{equation}
Since $\eta^*>\alpha$ is arbitrary, we conclude that
\begin{equation}
\lim_{s\to\infty} \int_0^\infty J_\eps(w(\eta,s)) \, \d\eta = 0 \,.
\label{e.Jl-convergence}
\end{equation}
This implies that for every fixed $\eps>0$,
\begin{equation}
m(\{ \eta \colon \lvert w \rvert>2\eps\} ) \, J_\eps(2\eps)
\leq \int_{\{ \eta \colon \lvert w \rvert>2\eps\}} J_\eps(w)\,\d \eta
\leq \int_0^\infty J_\eps(w) \, \d \eta \to 0
\label{J_l.in.measure}
\end{equation}
as $s\to\infty$, where $m$ is the Lebesgue measure on the real line,
i.e., $\lvert w \rvert$ converges to zero in measure. Due to the
bound on $w$, the dominated convergence theorem with convergence in
measure, e.g.\ \cite[Corollary~2.8.6]{Bogachev:2007:MeasureT}, implies
that $v\to\Phi_\gamma$ in $L^r(\R_+)$ for every $r \in [1,\infty)$.
\end{case}
\begin{case}[$p^*$ is integrable on $\R_+$]
\label{c.2}
When $p^*$ is integrable, we only have the weaker bound on $w$ given
by \eqref{e.bound-case2}. Thus, we must take $\eta_0=0$. On the
other hand, due to Lemma~\ref{l.u0}, $v(0,s) = u(0,s^2)$ converges to $0$ as
$s\to\infty$. Thus, we fix $\eps>0$ and choose $s_0=s_0(\eps)$ satisfying
$v(0,s_0)<\eps$. Then $J_\eps(w(0,s))=J_\eps^\prime(w(0,s))=0$ for all $s>s_0$
so that the boundary terms when integrating by parts in \eqref{eq.l.1},
\eqref{eq.l.2}, and \eqref{eq.l.4} vanish as before, and all
computations from Case~1 up to equation \eqref{G.exp} remain valid.
The bound on $g$ now takes the form
\begin{equation}
g(\eta,s)
= \frac{\Gamma(\eta s) - \gamma}\eta
\leq \frac1\eta \, \sup_{x \geq 0} \Gamma(x)
\end{equation}
where, as before, $\Gamma$ is given by \eqref{Gamma.def}. This
implies that the integrands in the second and third term in
\eqref{G.exp} satisfy bounds on the interval $[0,\alpha]$ which take
the form
\begin{subequations}
\begin{gather}
\lvert g \, \Phi^\prime_\gamma \, J^\prime_\eps(w)\rvert
\le C_1 \, \eta^{\kappa-2} \,,
\label{G.2nd.member.bound} \\
\lvert g^2 \, \Phi_\gamma^2 \, J^{\prime\prime}_\eps(w)\rvert
\le C_2 \, \eta^{2(\kappa-1)} \,,
\label{G.3rd.member.bound}
\end{gather}
\end{subequations}
where $C_1$ and $C_2$ are positive constants. When $\gamma>0$ so that
$\kappa>1$, both bounds are integrable on $[0,\alpha]$ and the
dominated convergence theorem applies as before, proving that
$G(s)\to0$ as $s\to\infty$. When $\gamma=0$ so that $\kappa=1$, the
second bound \eqref{G.3rd.member.bound} is still integrable, but the
first is not. Thus, for the second term on the right of
\eqref{G.exp}, we change the strategy as follows.
Observe that when $\gamma=0$, then
\begin{equation}
g(\eta,s) = s \int_{s\eta}^\infty p^*(\zeta) \, \d\zeta \ge 0 \,.
\end{equation}
Thus, the second term in \eqref{G.exp} is bounded above by
\begin{equation}
- \int_0^{\alpha} g(\eta,s) \, \Phi_0'(\eta) \,
J_\eps'(w(\eta,s)) \, \d \eta
\le \int_0^{\alpha} \I_{\{w(\eta,s)<0\}}(\eta) \, g(\eta,s) \,
\Phi_0'(\eta) \, \lvert J_\eps'(w(\eta,s)) \rvert \, \d \eta \,.
\label{e.g-second-term}
\end{equation}
Note that $w(\eta,s)<0$ if and only if $v(\eta,s)<\Phi_0(\eta)$.
Moreover, $\Phi_0(\eta) = O(\eta)$ as $\eta \to 0$ so that
$\I_{\{w<0\}} \, J_\eps'(w) = O(\eta^3)$. Altogether, there exists $C_3>0$
such that
\begin{equation}
\I_{\{w(\eta,s)<0\}}(\eta) \, g(\eta,s) \, \Phi_0'(\eta) \,
\lvert J_\eps'(w(\eta,s)) \rvert
\leq C_3 \, \eta^2
\end{equation}
which provides an integrable upper bound for the integrand on the
right of \eqref{e.g-second-term}. The dominated convergence theorem
then proves that the integral on the right of \eqref{e.g-second-term}
converges to zero as $s \to \infty$.
Thus, we find in all cases that
$\limsup_{\sigma\to\infty}G(\sigma)\le 0$, so that the argument from
\eqref{e.integrated} to \eqref{e.Jl-convergence} proceeds as before
and \eqref{J_l.in.measure} is valid for every $\eps>0$. This shows that
$\lim_{s\to\infty}w=0$ in $L^r$.
\end{case}
In the final step, we bootstrap from $L^r$-convergence to uniform
convergence on $\R_+$. We argue by contradiction and for both cases
at once.
Suppose convergence is not uniform. Then there exists $d>0$ such that
\begin{equation}
\limsup_{s\to\infty}\max_{\eta\in\R_+}w(\eta,s) \ge 2d
\end{equation}
or
\begin{equation}
\liminf_{s\to\infty}\min_{\eta\in\R_+}w(\eta,s) \le -2d \,.
\label{e.uniform-case2}
\end{equation}
Suppose that the first alternative holds; the second alternative is
addressed at the end of the proof. By
Lemma~\ref{peak.lemma}\ref{peak.i}, there exists $\omega \in (0,1)$, a
sequence $s_i \to \infty$, and a sequence $\eta_i$ such that for every
$i \in \N$,
\begin{equation}
\min_{s \in [\omega s_i,s_i]} w(\eta_i,s) \geq d/2 \,.
\end{equation}
Due to the uniform bound on $w$ which decays as $\eta \to \infty$, the
sequence $\eta_i$ must be contained in a compact interval of length
$L$ (possibly dependent on $d$). In the following, fix $\eps < d/4$.
For fixed $s \in [\omega s_i,s_i]$, let
\begin{equation}
\eta_0 = \max \{ \eta < \eta_i \colon J_\eps(w(\eta,s)) = 0 \} \,.
\end{equation}
(By continuity, the maximum exists and is less than $\eta_i$; in
Case~2 we may need to take $i$ large enough so that $\omega s_i > s_0$.)
Due to the fundamental theorem of calculus,
\begin{equation}
J_\eps^{1/2}(w(\eta_i,s)) - J_\eps^{1/2}(w(\eta_0,s))
= \int_{\eta_0}^{\eta_i} \partial_\eta
J_\eps^{1/2} (w(\eta,s)) \, \d \eta
\end{equation}
so that, noting that $J_\eps^{1/2}(w(\eta_0,s))=0$,
$J_\eps^{1/2}(w(\eta_i,s))\ge (d/2-\eps)^2$, and $d/2-\eps\ge d/4$ on the left
and using the Cauchy--Schwarz inequality on the right, we obtain
\begin{equation}
\Bigl( \frac{d}4 \Bigr)^2
\leq (\eta_i - \eta_0)^{\tfrac12} \,
\biggl(
\int_{\eta_0}^{\eta_i} 4 \, (\lvert w \rvert - \eps)^2 \,
w_\eta^2 \, \d \eta
\biggr)^{\tfrac12}
\leq \sqrt L \,
\biggl(
\frac13 \int_{\R_+} J_\eps^{\prime\prime} (w) \,
w_\eta^2 \, \d \eta
\biggr)^{\tfrac12} \,.
\end{equation}
We conclude that the integral on the right is bounded below by some
strictly positive constant, say $b$, which only depends on $d$. Due
to \eqref{e.gstar}, $b/2$ is also a lower bound on $G^*$. Thus,
returning to \eqref{e.integrated} with $s=s_i$ and $s_0=\omega s_i$, we
obtain
\begin{align}
\int_0^\infty J_\eps(w(\eta,s_i)) \, \d\eta
& - \omega \int_0^\infty J_\eps(w(\eta,\omega s_i)) \, \d\eta
\leq \frac{2}{s_i} \int_{\omega s_i}^{s_i}
\bigl( G(\sigma) - G^*(\sigma) + F(\sigma) \bigr) \, \d \sigma
\notag \\
& \leq
- (1-\omega) \, b + \frac{2}{s_i} \int_{\omega s_i}^{s_i}
\bigl( G(\sigma) + F(\sigma) \bigr) \, \d \sigma \,.
\end{align}
We now let $i \to \infty$ and observe that, due to
\eqref{e.Jl-convergence}, the two terms on the left converge to zero.
For the integral on the right, we apply the same asymptotic bounds as
in the first part of the argument, so that
\begin{equation}
0
\le - (1-\omega) \, b + \Phi_\gamma(\alpha) \, J_\eps'(\Psi(\alpha)) \,
\Bigl( \frac\gamma\alpha - \frac\gamma{\eta^*} \Bigr) \,.
\end{equation}
Since $\eta^*>\alpha$ is arbitrary, we reach a contradiction.
Alternative \eqref{e.uniform-case2} can be argued similarly, with
reference to Lemma~\ref{peak.lemma}\ref{peak.ii}. This completes the
proof of uniform convergence.
\end{proof}
\begin{remark}
The use of the cut-off function $J_\eps$ is a technical necessity to
avoid boundary terms when integrating by parts. Our particular choice
of $J_\eps$ amounts essentially to an $L^4$ estimate; the exponent $4$
was chosen purely for the convenience of an easy explicit cut-off
construction. The implication of $L^r$-convergence for any
$r \in [1,\infty)$ can then be understood as a consequence of
boundedness of $w$ and $L^p$-interpolation.
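Schematically, for any $r \ge 4$, the elementary estimate
\begin{equation}
  \int_0^\infty \lvert w \rvert^r \, \d\eta
  \le \lVert w \rVert_{L^\infty}^{r-4}
  \int_0^\infty \lvert w \rvert^4 \, \d\eta
\end{equation}
transfers the (essentially $L^4$) information encoded in $J_\eps$ to
$L^r$; the range $r \in [1,4)$ additionally uses the decay of the
bounding profiles in \eqref{e.bound-case1} and \eqref{e.bound-case2}.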
\end{remark}
\section{Long-time behavior of the HHMO-model}
\label{s.hhmo}
In this section, we turn to studying the long-time behavior of
solutions to the actual HHMO-model \eqref{e.original}. We first prove
a series of simple results,
Theorems~\ref{self.similar.does.not.exists}--\ref{no.interring.infinite},
which are all based on constructing suitable sub- and supersolutions
whose long-time behavior can be described by
Theorem~\ref{t.linear-asymptotics}. We then turn to maximum principle
arguments which show that the onset of precipitation in the HHMO-model
is asymptotically close to the line $\eta=\alpha$, so that a statement
like Theorem~\ref{t.linear-asymptotics} also holds true for
HHMO-solutions. Finally, we prove the main result of this section,
which can be seen as a converse statement, the identification of the
only possible limit profile for the HHMO-model. The two main
statements are summarized as Theorem~\ref{convergence.conclusion} at
the end of the section.
\begin{theorem}[Long-time behavior in the transitional regime]
\label{self.similar.does.not.exists}
Let $(u,p)$ be a weak solution to \eqref{e.original} in the transitional
regime where $u^*_0<u^*<\Psi(\alpha)$. Then $p(x,t)=0$ for all $x$
large enough. Moreover, $u$ converges uniformly to the profile
$\Phi_0$.
\end{theorem}
\begin{proof}
Set $Y=X_1$, the right endpoint of the first precipitation region, see
\cite[Lemma~3.5]{HilhorstHM:2009:MathematicalSO}, provided that it is
finite. If it were infinite, we would set $Y=1$. (The theorem shows
that this case is impossible, but at this point in the argument we
cannot yet exclude it.) We then define
\begin{equation}
p_1(x,t)
= \begin{cases}
H(t-x^2/\alpha^2) & \text{for } x \le Y \,,\\
0 & \text{otherwise} \,.
\end{cases}
\end{equation}
and note that $p_1\le p$. This function satisfies condition (P) with
$p_1^*(x)=H(Y-x)$ for $x \ge 0$ as well as the conditions of
Theorem~\ref{t.linear-asymptotics}; we note, in particular, that
\begin{equation}
x \int_x^\infty p_1^*(\xi) \, \d\xi = 0
\end{equation}
for $x \ge Y$ so that \eqref{e.p-assumption} holds with $\gamma=0$.
Let $u_1$ denote the weak solution to the linear non-autonomous
problem \eqref{e.auxiliary} with $p=p_1$. By construction, $u_1$ is a
supersolution to $u$ and by Theorem~\ref{t.linear-asymptotics}, $u_1$
converges uniformly to $\Phi_0$. This implies that there exists $T>0$
such that for all $t>T$,
\begin{equation}
u(x,t) \leq u_1(x,t)
\leq \tfrac12 \, (\Phi_0(\alpha)+u^*)
= \tfrac12 \, (u_0^*+u^*) < u^* \,.
\end{equation}
Further, due to Lemma~\ref{u.psi}, $u(x,t)<u^*$ for all $(x,t)$ with
$x>\alpha^* \sqrt{T}$ and $t \leq T$. Combining these two bounds, we
find that $u(x,t)<u^*$ for all $x>\alpha^*\sqrt{T}$ and therefore
no ignition of precipitation is possible in this region.
Let $p_2 (x,t)= \I_{[0,\alpha^*\sqrt{T}]}(x)$. By the same argument
as before, $p_2$ satisfies the conditions of
Theorem~\ref{t.linear-asymptotics} with $\gamma=0$. Let $u_2$ denote
the solution to \eqref{e.auxiliary} corresponding to $p=p_2$. Since
$p_2 \geq p$, $u_2$ is a subsolution of $u$.
Theorem~\ref{t.linear-asymptotics} implies that $u_2$ converges
uniformly to $\Phi_0$. Altogether, as $u_2\le u \le u_1$, we conclude
that $u$ converges uniformly to $\Phi_0$ as well.
\end{proof}
\begin{remark}
A similar argument can be made in case of a marginal precipitation
threshold. In Theorem~\ref{weak.sol.ustar.threshold.critical}, we
have already seen that marginal solutions are not unique. For the
long-time behavior, there are two possible cases: If $p$ remains zero
a.e., then $u=\psi$ everywhere, so the long-time profile in $\eta$-$s$
coordinates is $\Psi$. As soon as spontaneous precipitation occurs on
a set of positive measure, the long-time profile is $\Phi_0$ instead.
To see this, let $c\in(0,1]$ be such that $p\ge c$ on some
subset of $\R\times\R_+$ of positive measure. Select $t^*$ such that
$p(\,\cdot\,, t^*)\ge c$ on some subset $A \subset \R$ of positive
measure. Set $p_1(x,t)=c \, \I_A(x) \, H(t-t^*)$ and let $u_1$ denote
the associated bounded solution to the auxiliary problem
\eqref{e.auxiliary}; $u_1$ is a supersolution for $u$. Even though
condition (P) does not hold literally, the argument in the proof of
Theorem~\ref{t.linear-asymptotics} still works when restricted to
$s\ge \sqrt{t^*}$. Hence, $u_1$ converges uniformly to $\Phi_0$. A
subsolution, also converging to $\Phi_0$, can be constructed as in the
proof of Lemma~\ref{l.u0}.
\end{remark}
The next theorem states that it is impossible to have a precipitation
ring of infinite width in the strict sense that $u$ permanently
exceeds the precipitation threshold in some neighborhood of the source
point. A similar theorem is stated in \cite[Theorem
3.10]{HilhorstHM:2009:MathematicalSO}, albeit under a certain
technical assumption on the weak solution. The theorem here does not
depend on this assumption.
\begin{theorem}[No ring of infinite width]
\label{no.ring.infinite}
Let $(u,p)$ be a weak solution to \eqref{e.original}. Then
\begin{equation}
\liminf_{x \to \infty} u(x, x^2/\alpha^2) \le u^*
\label{e.u-claim}
\end{equation}
and there exist precipitation gaps for arbitrarily large $x$ in the
following sense: for every $Y> 0$,
\begin{equation}
\essinf_{\substack{x \ge Y \\ t \ge x^2/\alpha^{2}}} p(x,t) < 1 \,.
\label{e.p-claim}
\end{equation}
\end{theorem}
\begin{proof}
Suppose the contrary, i.e., that there exists $Y>0$ such that
$p(x,t)=1$ for almost all pairs $(x,t)$ with $x \geq Y$ and
$t \geq x^2/\alpha^2$. Choose $\gamma>0$ such that
$\Phi_\gamma(\alpha) < u^*$. This is always possible because the
argument used in the proof of Theorem~\ref{t.selfsimilar} shows that
$\Phi_\gamma(\alpha) = u_\gamma^* \to 0$ as $\gamma \to \infty$. Now
increase $Y$ such that $Y \geq \sqrt \gamma$, if necessary, and set
\begin{equation}
p_3(x,t) =
\begin{dcases}
\frac\gamma{x^2} &
\text{for } x \ge Y \text{ and } t\ge x^2/\alpha^2 \,, \\
0 & \text{otherwise} \,.
\end{dcases}
\end{equation}
Then $p_3 \leq p$ and $p_3$ clearly satisfies the assumptions of
Theorem~\ref{t.linear-asymptotics} with the chosen value of $\gamma$.
Let $u_3$ denote the weak solution to the linear nonautonomous problem
\eqref{e.auxiliary} with $p=p_3$. By construction, $u_3$ is a
supersolution to $u$ and by Theorem~\ref{t.linear-asymptotics}, $u_3$
converges uniformly to $\Phi_\gamma$. This implies that there exists
$T>0$ such that for all $t>T$,
\begin{equation}
u(x,t) \leq u_3(x,t)
\leq \tfrac12 \, (\Phi_\gamma(\alpha)+u^*) < u^* \,.
\end{equation}
Further, due to Lemma~\ref{u.psi}, $u(x,t)<u^*$ for all $(x,t)$ with
$x>\alpha^* \sqrt{T}$ and $t \leq T$. Combining these two bounds, we
find that $u(x,t)<u^*$ for all $x>\alpha^*\sqrt{T}$. Therefore,
$p\equiv 0$ in this region, a contradiction. This proves that
\eqref{e.p-claim} holds true for every $Y>0$.
To prove \eqref{e.u-claim}, assume the contrary, i.e., that
$\liminf_{x \to \infty} u(x, x^2/\alpha^2) > u^*$. Then there exists
$Y>0$ such that $u(x, x^2/\alpha^2)>u^*$ for all $x\ge Y$, so that
\begin{equation}
\essinf_{\substack{x \ge Y \\ t \ge x^2/\alpha^{2}}} p(x,t) = 1 \,.
\end{equation}
As this contradicts \eqref{e.p-claim}, the proof is complete.
\end{proof}
In the supercritical regime, we also have the converse: there is no
precipitation gap of infinite width, i.e., the reaction will always
re-ignite at large enough times. The following theorem mirrors
\cite[Theorem~3.13]{HilhorstHM:2009:MathematicalSO} but does not
require the technical condition assumed there.
\begin{theorem}[No gap of infinite width in the supercritical regime]
\label{no.interring.infinite}
Let $(u,p)$ be a weak solution to \eqref{e.original} in the
supercritical regime where $u^*<u^*_0<\Psi(\alpha)$. Then there is
ignition of precipitation for arbitrarily large $x$ in the following
sense: for every $Y>0$,
\begin{equation}
\esssup_{\substack{x \ge Y \\ t\in\R_+}} p(x,t) > 0 \,.
\label{e.nogap}
\end{equation}
\end{theorem}
\begin{proof}
Assume the contrary, i.e., there exists $Y>0$ such that $p=0$ a.e.\ on
$[Y,\infty)\times\R_+$. We construct the supersolution $u_1$ as in
the proof of Theorem~\ref{self.similar.does.not.exists}. In
particular, $u_1$ converges uniformly to $\Phi_0$.
We set $p_2(x,t)=\I_{[-Y,Y]}(x)$ and let $u_2$ be the associated weak
solution to \eqref{e.auxiliary} with given $p_2$. Since
$p_2\ge p$, $u_2$ is a subsolution of $u$. Further, we note that
$p_2$ satisfies condition (P) with $p^*_2(x)=\I_{[0,Y]}(x)$. Hence,
\begin{equation}
x\int_x^\infty p^*_2(\xi)\,\d\xi = 0\text{ for }x\ge Y
\end{equation}
so that the pair $(u_2,p_2)$ satisfies the conditions of
Theorem~\ref{t.linear-asymptotics} for $\gamma=0$. Therefore, $u_2$
converges uniformly to $\Phi_0$.
Altogether, $u$ converges uniformly to $\Phi_0$, in particular,
$\lim_{t\to\infty}u(\alpha\sqrt t, t)=\Phi_0(\alpha)=u^*_0>u^*$. This
contradicts Theorem~\ref{no.ring.infinite}, so \eqref{e.nogap} holds
for every $Y>0$.
\end{proof}
\begin{remark}
\label{r.critical}
Between Theorem~\ref{self.similar.does.not.exists} and
Theorem~\ref{no.interring.infinite}, we cannot say anything about the
critical case when $u_0^*=u^*$. This case is highly degenerate, so
that both arguments above fail. We believe that the problem is of a
technical nature, i.e., a matter of treating the degeneracy in the
proof. We have
no indication that the qualitative behavior is different from the
neighboring cases and conjecture that the asymptotic profile is
$\Phi_0$ as well.
\end{remark}
\begin{lemma}
\label{p=0-on-D.infty}
Let $(u,p)$ be a weak solution to \eqref{e.original}. Suppose
$\eta\geq \alpha$ and $t_0\geq 0$ are such that
$u(\eta\sqrt t, t) \leq u^*$ for all $t\ge t_0$. Then
\begin{enumerate}[label={\upshape(\roman*)}]
\item \label{i.p0.i} There exists $z\geq 0$ such that $u<u^*$ and
$p\equiv 0$ in the interior of $D_{\eta, z}$.
\item \label{i.p0.iii} If $\eta=\alpha$ and the bound $u(\eta\sqrt t,
t) < u^*$ for all $t\ge t_0$ holds with strict inequality, then
$u(x,t) < u^*$ and $p(x,t)=0$ for all $x\ge z$ and $t \geq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
Select $z \geq \eta \sqrt{t_0}$ such that $u(z,t) \leq u^*$ for all
$t\in[0,z^2/\eta^2]$. This is always possible, for otherwise, due to
\eqref{e.hhmo-p-weak-alternative}, the solution $(u,p)$ would have a
ring of infinite width. By assumption, we also have
$u(x,x^2/\eta^2) < u^*$ for all $x\geq z$. Since $u(x,0)=0$, the
parabolic maximum principle then implies that $u$ takes its maximum on
the boundary of $D_{\eta, z}$ where it is bounded above by
$u^*$, and that $u<u^*$ everywhere in the interior. This implies $p=0$
in the interior of $D_{\eta, z}$, so that the proof of case
\ref{i.p0.i} is complete.
(To see how this derives from the standard statement of the maximum
principle, take, for every $x \geq z$, the cylinder
\begin{equation}
U_x = [x, X(x)] \times [0,x^2/\eta^2] \,,
\end{equation}
where, due to the upper bound $u \leq \psi$ from Lemma~\ref{u.psi}, we
can choose $X(x)$ large enough so that the maximum of $u$ on
$\partial U_x$ does not lie on the right boundary. Then $u$ takes its
maximum on the parabolic boundary of $U_x$; by construction, the
maximum must lie on the left-hand boundary
$\{ x \} \times [0, x^2/\eta^2]$. Moreover, as $u$
cannot be a constant, it is strictly smaller than its maximum
everywhere in the interior of $U_x$. Since $x \geq z$ is arbitrary,
the maximum must lie on any of the left side boundaries which is not
itself an interior point for some other $U_x$. The set of all such
points is contained in the boundary of $D_{\eta, z}$.)
When $\eta=\alpha$, we recall that, by
Lemma~\ref{lemma.p.non-decreasing}, $u(x,t)$ is non-increasing in $t$ for
$t\geq x^2/\alpha^2$. This implies \ref{i.p0.iii}.
\end{proof}
\begin{theorem}
\label{u.uniform.convergence}
Let $(u,p)$ be a weak solution to \eqref{e.original} with
$u^*<\Psi(\alpha)$. Assume that $p$ satisfies condition \textup{(P)}
and that there exists $\gamma \geq 0$ such that
\begin{equation}
\lim_{x \to \infty}
x \int^{\infty}_x p^*(\xi) \, \d \xi = \gamma \,.
\label{e.uniform-convergence-assumption}
\end{equation}
Then $u$ converges uniformly to $\Phi_\gamma$. Furthermore,
$\Phi_\gamma(\alpha)=u^*$ if $\gamma>0$ and $0<\Phi_0(\alpha)\le u^*$
if $\gamma=0$.
\end{theorem}
\begin{proof}
We shall show that for every $\eta>\alpha$ there exists $y$ such that
$p\equiv0$ on $D_{\eta,y}$. Uniform convergence of $u$ to
$\Phi_\gamma$ is then a direct consequence of
Theorem~\ref{t.linear-asymptotics}. To do so, assume the contrary,
i.e., that there exists $\eta_*>\alpha$ such that for every $y \in \R$
we have $p>0$ somewhere in $D_{\eta_*,y}$. Due to
Lemma~\ref{p=0-on-D.infty}, this implies that there exists a sequence
$t_i\to\infty$ such that $u(x_i, t_i) \ge u^*$ with
$x_i=\eta_*\sqrt{t_i}$.
We now claim that $p^*(x)=1$ for every $x \in (\alpha \sqrt{t_i}, x_i)$.
To prove the claim, fix $t_i$ and choose $X$ large enough such that
$\max_{t\in[0,t_i]}u(X,t)\le \psi(X,t_i)<u^*/2$. Fix
$x \in (\alpha \sqrt{t_i}, x_i)$ and consider the cylinder
$U=(x, X)\times(0,t_i)$ with parabolic boundary $\Gamma$. By the
parabolic maximum principle,
\begin{equation}
\label{p=1}
\max_{\Gamma}u=\max_{\bar U}u\ge u(x_i,t_i)
\end{equation}
with equality only if $u$ is constant, which is incompatible with the
initial condition. Hence,
\begin{equation}
\max_{t\in[0,t_i]} u(x,t) > u(x_i,t_i) \ge u^* \,.
\end{equation}
Since $p$ satisfies \eqref{e.hhmo-p-weak-alternative}, this implies
$p(x,t_i) = p^*(x) = 1$ as claimed. Next, for $z_i=\alpha\sqrt{t_i}$,
we estimate
\begin{equation}
z_i \int_{z_i}^\infty p^*(\xi)\,\d\xi
\ge z_i\int_{z_i}^{x_i} p^*(\xi)\,\d\xi
= z_i \, (x_i-z_i) = \alpha \, (\eta_*-\alpha) \, t_i
\to \infty
\end{equation}
as $i \to \infty$. This contradicts
\eqref{e.uniform-convergence-assumption}. We conclude that $p\equiv0$
on $D_{\eta_*,y}$ for some $y>0$.
To prove the final claim of the theorem, we note that
$\Phi_\gamma(\alpha)>u^*$ would imply the existence of a ring with
infinite width, which is impossible due to
Theorem~\ref{no.ring.infinite}. Hence,
$0<\Phi_\gamma(\alpha)\le u^*$. When $\gamma=0$, this is all that is
claimed. So suppose that $\gamma>0$ and $\Phi_\gamma(\alpha)< u^*$.
Then Lemma~\ref{p=0-on-D.infty}\ref{i.p0.iii} implies that
$p(\xi,t)=p^*(\xi)=0$ for all $\xi$ big enough, say, when $\xi\ge R$,
and therefore
\begin{equation}
x\int^{\infty}_x p^*(\xi) \, \d \xi = 0
\end{equation}
for $x\ge R$, contradicting \eqref{e.uniform-convergence-assumption}.
Hence, $\Phi_\gamma(\alpha)=u^*$ when $\gamma>0$.
\end{proof}
We now prove a result which provides a converse to
Theorem~\ref{u.uniform.convergence}. We assume that a solution to
\eqref{e.original} has a limit profile in $\eta$-$s$ coordinates and
conclude that this limit can only be the self-similar profile
$\Phi_\gamma(\eta)$ from \eqref{e.phi.gamma}.
\begin{theorem}
\label{limit.profile.th}
Let $(u,p)$ be a weak solution to \eqref{e.original} with
$u^*<\Psi(\alpha)$. Assume that $p$ satisfies condition \textup{(P)}
and that for a.e.\ $\eta \geq 0$ the limit
\begin{equation}
V(\eta)=\lim_{t\to\infty}u(\eta\sqrt t,t)
= \lim_{s \to \infty} v(\eta,s)
\label{e.limit-assumption}
\end{equation}
exists. Then the limits
\begin{equation}
\lim_{x \to \infty} \frac1x \int_0^x \xi^2 \, p^*(\xi) \, \d\xi
\end{equation}
and
\begin{equation}
\lim_{x \to \infty} x \int_x^\infty p^*(\xi) \, \d\xi
\end{equation}
exist and are equal; writing $\gamma$ for their common value, we have
$V(\eta)=\Phi_\gamma(\eta)$, and $u$ converges uniformly to
$\Phi_\gamma$. Further, $\Phi_\gamma(\alpha)=u^*$ if $\gamma>0$ and
$0<\Phi_\gamma(\alpha)\le u^*$ if $\gamma=0$.
\end{theorem}
\begin{proof}
Write $U_V$ to denote the domain of definition of $V$, change the
coordinate system into $\eta=x/\sqrt t$ and $s=\sqrt t$, and set
$w=v-\Psi$ and $W=V-\Psi$. As detailed in Appendix~\ref{a.weak}, the
weak formulation of the HHMO-model in these similarity variables can
be stated as
\begin{align}
A(S; S_0, f)
& = \frac{S_0}S\int_\R w(\eta, S_0) \, f(\eta) \, \d\eta\notag
- \int_\R w(\eta, S) \, f(\eta) \, \d\eta\notag\\
& \quad
- \frac1S\int_{S_0}^S\int_\R \eta \, w \, f_\eta \, \d \eta \, \d s
- \frac2{S}\int_{S_0}^S\int_\R w_\eta \, f_\eta\, \d \eta \, \d s
\label{A.equation.eta-s}
\end{align}
for all $0<S_0<S$ and $f \in H^1(\R)$ with compact support, where
\begin{equation}
A(S; S_0, f)
= \frac2S \int_{S_0}^S \int_\R s^2 \, q(\eta, s) \,
v(\eta, s) \, f(\eta) \, \d \eta \, \d s \,.
\label{e.Adef.eta-s}
\end{equation}
Writing
\begin{align}
A(S; S_0, f)
& = \frac{S_0}S\int_\R w(\eta, S_0) \, f(\eta) \, \d\eta
- \int_\R w(\eta, S) \, f(\eta) \, \d\eta\notag\\
& \quad
- \frac1S\int_{S_0}^S\int_\R \eta \, w \, f_\eta \,\d \eta \, \d s
+ \frac2{S}\int_{S_0}^S\int_\R w \, f_{\eta\eta}\, \d \eta \, \d s \,,
\label{A.equation.temporary.eta-s}
\end{align}
we note that the limit $S\to\infty$ exists for each term on the right
of \eqref{A.equation.temporary.eta-s}, so that
$\lim_{S\to\infty} A(S; S_0, f)$ exists for $S_0$ and $f$ fixed.
Moreover, for every $b>0$ fixed, definition \eqref{e.Adef.eta-s}
implies that $A(S;S_0,f)$ is bounded uniformly for all $S \geq S_0$
and $f\in L^1$ that satisfy $0\le f\le \I_{[-b,b]}$. Indeed, if
$g \geq \I_{[-b,b]}$ is smooth with compact support, then
\begin{equation}
A(S; S_0, f)\le A(S; S_0, g)\le \sup_{S\ge S_0} A(S; S_0, g) < \infty
\end{equation}
since $\lim_{S\to\infty} A(S; S_0, g)$ exists.
By Lemma~\ref{lemma.p.non-decreasing}, $u-\psi$ is non-increasing in
time $t$ for $x$ fixed. This implies that, in $\eta$-$s$ coordinates,
for $\eta_1, \eta_2 \in U_V$ with $0<\eta_1 < \eta_2$,
\begin{equation}
W(\eta_1)
= \lim_{s \to \infty} w(\eta_1,s)
\leq \lim_{s \to \infty} w(\eta_2,s)
= W(\eta_2) \,,
\label{e.lim-comparison.eta-s}
\end{equation}
and for any fixed $\eta\in(\eta_1,\eta_2)$
\begin{align}
W(\eta_1)
& = \lim_{s \to \infty} w(\eta_1,s)
\leq \liminf_{s \to \infty} w(\eta,s) \notag\\
& \leq \limsup_{s \to \infty} w(\eta,s)
\leq \lim_{s \to \infty} w(\eta_2,s) = W(\eta_2) \,.
\label{e.lim-comparison.middle.eta-s}
\end{align}
By Lemma~\ref{l.u0}, $V(0) = 0$, so that
\begin{equation}
W(0)=V(0)-\Psi(0)=-\Psi(0)\le V(\eta)-\Psi(\eta)=W(\eta)
\end{equation}
for all $\eta\in U_V$. Altogether, we find that $W=V-\Psi$ is
non-decreasing on $U_V$.
Now we will show that $W$ is locally Lipschitz continuous on $U_V$.
Fix $b>0$. For every $\eta_0\in[0,b]$, take the family of compactly
supported test functions $f_\eps(\eta)$ whose derivative is given by
\begin{equation}
f_\eps'(\eta)
= \begin{dcases}
\eps^{-1} & \text{for } \eta \in[-\eps,0] \,, \\
- \eps^{-1} & \text{for } \eta\in[\eta_0,\eta_0+\eps] \,, \\
0 & \text{ otherwise} \,.
\end{dcases}
\end{equation}
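Integrating, $f_\eps$ itself is the piecewise linear approximation of
$\I_{[0,\eta_0]}$ given by
\begin{equation}
f_\eps(\eta)
= \begin{dcases}
1 + \eta/\eps & \text{for } \eta \in[-\eps,0] \,, \\
1 & \text{for } \eta\in[0,\eta_0] \,, \\
1 - (\eta-\eta_0)/\eps & \text{for } \eta\in[\eta_0,\eta_0+\eps] \,, \\
0 & \text{otherwise} \,.
\end{dcases}
\end{equation}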
We insert $f_\eps$ into \eqref{A.equation.eta-s} and let $\eps\searrow0$.
Clearly,
\begin{equation}
\lim_{\eps\searrow0} A(S;S_0,f_\eps)=A(S;S_0,\I_{[0,\eta_0]}(\eta))
\end{equation}
and
\begin{equation}
\int_\R w(\eta, s) \, f_\eps(\eta)\,\d \eta
\to \int_0^{\eta_0} w(\eta, s)\,\d \eta \,.
\end{equation}
Moreover,
\begin{equation}
\int_\R \eta \, w(\eta,s) \, f_\eps'(\eta)\, \d\eta
\to - \eta_0 \, w(\eta_0,s)
\end{equation}
and
\begin{equation}
\int_\R w_\eta(\eta, s) \, f_\eps'(\eta)\, \d\eta
\to w_\eta(0,s) - w_\eta(\eta_0,s) = - w_\eta(\eta_0,s) \,.
\end{equation}
(Recall that $w_\eta$ is space-time continuous due to the definition
of weak solution.) Altogether, we find that \eqref{A.equation.eta-s}
converges to
\begin{align}
A(S; S_0, \I_{[0,\eta_0]}(\eta))
& = \frac{S_0}S\int_0^{\eta_0} w(\eta, S_0)\, \d\eta
- \int_0^{\eta_0} w(\eta, S)\, \d\eta\notag\\
& \quad
+ \frac{\eta_0}S\int_{S_0}^Sw(\eta_0, s)\, \d s
+ \frac2{S}\int_{S_0}^S
w_\eta(\eta_0, s) \, \d s \,.
\end{align}
Noting that $0\le \I_{[0,\eta_0]}\le f_\eps \le \I_{[-1,b+1]}$ for
$0<\eps\leq 1$, we see that the left hand side is bounded uniformly
for all $\eta_0\in[0,b]$ and $S \geq S_0$. By direct inspection, so
are the first three terms on the right hand side. We conclude
that
\begin{equation}
\frac1S \int_{S_0}^S w_\eta(\eta_0,s) \, \d s \leq C_b
\end{equation}
for some constant $C_b$ independent of $\eta_0\in[0,b]$ and
$S\ge S_0$. Then, for any pair $\eta_1,\eta_2 \in U_V\cap[0,b]$ with
$\eta_1<\eta_2$,
\begin{align}
0 \leq W(\eta_2)-W(\eta_1)
& = \lim_{S\to\infty} \frac1S \int_{S_0}^S w(\eta_2,s) \, \d s
- \lim_{S\to\infty} \frac1S\int_{S_0}^S w(\eta_1,s) \, \d s
\notag \\
& = \lim_{S\to\infty} \frac1S \int_{S_0}^S \int_{\eta_1}^{\eta_2}
w_\eta(\eta,s) \, \d\eta \, \d s
\notag \\
& = \lim_{S\to\infty} \int_{\eta_1}^{\eta_2}
\frac1S \int_{S_0}^S w_\eta(\eta,s) \, \d s \, \d\eta
\notag \\
& \le C_b \, \abs{\eta_2-\eta_1} \,.
\end{align}
Due to \eqref{e.lim-comparison.middle.eta-s}, we conclude that $W$ is
locally Lipschitz continuous, defined on $U_V=\R_+$, and
non-decreasing. In particular, $V(\alpha)$ is well-defined and
strictly positive. To see the latter, suppose the contrary, i.e.,
that $V(\alpha)=0$. Then Lemma~\ref{p=0-on-D.infty}\ref{i.p0.iii}
implies that $p(x,t)=0$ for all $x$ large enough. It follows that we
can take $\gamma=0$ in Theorem~\ref{t.linear-asymptotics} to conclude
that $V=\Phi_0$, contradicting $V(\alpha)=0$.
Since $V(\alpha)>0$, there is a neighborhood
$I=(\eta_0,\eta_1)\subset (0,\alpha)$ such that $V>\frac12V(\alpha)>0$
on $I$. Further, set
\begin{subequations}
\begin{gather}
v_+(\eta;S_0) = \sup_{s\ge S_0}v(\eta,s) \,, \\
v_-(\eta;S_0) = \inf_{s\ge S_0}v(\eta,s)
\end{gather}
\end{subequations}
and choose $S_0^*$ large enough such that
$v_-(\eta_0, S_0^*)>\frac12V(\eta_0)>0$. Since $u-\psi$ is
non-increasing in time in $x$-$t$ coordinates and $\Psi$ is constant
on $I$, we have $v_+(\eta;S_0)\ge v_-(\eta;S_0)\ge\frac12V(\eta_0)$
for all $S_0\ge S_0^*$ and $\eta \in I$. Take $g \in H^1(\R)$ with
$\supp g\subset I$. Noting that, due to \eqref{p.property},
$q(\eta,s) = p^*(\eta s)$, we estimate
\begin{align}
A(S; S_0, g)
& = \int_{\eta_0}^{\eta_1} \frac2S\int_{S_0}^{S}
s^2 \, p^*(\eta s) \, v(\eta,s) \,
g(\eta) \, \d s \, \d \eta
\notag \\
& \le \int_{\eta_0}^{\eta_1} g(\eta) \, v_+(\eta; S_0) \,
\frac2S\int_{S_0}^{S} s^2 \, p^*(\eta s) \,\d s \, \d \eta
\notag \\
& = \int_{\eta_0}^{\eta_1} \frac{g(\eta)}{\eta^3} \,
v_+(\eta; S_0) \, \frac2S\int_{S_0\eta}^{S\eta}
\xi^2 \, p^*(\xi)\,
\d\xi \, \d\eta
\notag \\
& \le \int_{\eta_0}^{\eta_1} \frac{2g(\eta)}{\eta^3} \,
v_+(\eta; S_0) \, \d \eta \,
\frac1S\int_{0}^{S\eta_1} \xi^2 \, p^*(\xi)\, \d\xi
\end{align}
where, in the second equality, we have used the change of variables
$\xi=s\eta$. Taking $\liminf_{S \to \infty}$, we infer that
\begin{equation}
\lim_{S\to\infty} A(S; S_0, g)
\le \gamma^- \, \eta_1
\int_{\eta_0}^{\eta_1} \frac{2g(\eta)}{\eta^3} \,
v_+(\eta;S_0) \, \d\eta \,,
\label{e.ubound.eta-s}
\end{equation}
where
\begin{equation}
\gamma^-
= \liminf_{S\to\infty} \frac1S \int_0^S \xi^2 \,
p^*(\xi) \, \d\xi \,.
\end{equation}
Similarly,
\begin{align}
A(S; S_0, g)
& \ge \int_{\eta_0}^{\eta_1} g(\eta)\, v_-(\eta;S_0)\,
\frac2S\int_{S_0}^Ss^2 \,
p^*(\eta s) \, \d s \, \d\eta
\notag \\
& = \int_{\eta_0}^{\eta_1} \frac{g(\eta)}{\eta^3} \,
v_-(\eta;S_0) \,
\frac2S \int_{S_0\eta}^{S\eta}
\xi^2 \, p^*(\xi) \, \d\xi \, \d\eta
\notag \\
& \ge \int_{\eta_0}^{\eta_1} \frac{2g(\eta)}{\eta^3} \,
v_-(\eta;S_0) \, \d\eta \,
\frac1S \int_{S_0\eta_1}^{S\eta_0}
\xi^2 \, p^*(\xi) \, \d\xi \,,
\end{align}
so that
\begin{equation}
\lim_{S\to\infty} A(S; S_0, g)
\ge \gamma^+ \, \eta_0\int_{\eta_0}^{\eta_1}
\frac{2g(\eta)}{\eta^3} \, v_-(\eta;S_0) \, \d\eta
\label{e.lbound.eta-s}
\end{equation}
with
\begin{equation}
\gamma^+
= \limsup_{S\to\infty} \frac1S \int_0^S \xi^2 \,
p^*(\xi) \, \d\xi
\geq \gamma^- \,.
\end{equation}
Equation \eqref{e.lbound.eta-s} also implies that $\gamma^+<\infty$.
Since the bounds \eqref{e.ubound.eta-s} and \eqref{e.lbound.eta-s} are
valid for arbitrary $S_0\ge S_0^*$, we can now let $S_0 \to \infty$,
so that
\begin{equation}
\gamma^+ \, \eta_0 \int_{\eta_0}^{\eta_1}
\frac{2g(\eta)}{\eta^3} \, V(\eta) \, \d\eta
\le \gamma^- \, \eta_1 \int_{\eta_0}^{\eta_1}
\frac{2g(\eta)}{\eta^3} \, V(\eta) \, \d\eta \,.
\end{equation}
Since $V>0$ on $I$, we can divide out the integral to conclude that
$\gamma^+ \, \eta_0 \le \gamma^- \, \eta_1$. Further, we can take
$\eta_0$ and $\eta_1$ arbitrarily close to each other by taking a test
function $g$ with arbitrarily narrow support, so that
$\gamma^+=\gamma^-$ and both are equal to
\begin{equation}
\gamma = \lim_{S\to\infty} \frac1S \int_0^S \xi^2 \,
p^*(\xi) \, \d\xi < \infty \,.
\label{e.gamma}
\end{equation}
To proceed, we define
\begin{equation}
\Gamma(x)
= x\int_x^\infty p^*(\xi) \, \d \xi
\end{equation}
as in the proof of Theorem~\ref{t.linear-asymptotics}, introduce its
average
\begin{equation}
\bar \Gamma(x)
= \frac1x \int_0^x \Gamma(\xi) \, \d \xi \,,
\end{equation}
and set
\begin{equation}
h(x)
= \frac1x \int_0^x \xi^2 \, p^*(\xi) \, \d \xi \,.
\end{equation}
In \eqref{e.gamma}, we have already shown that $h(x) \to \gamma$ as
$x \to \infty$. It remains to prove that $\Gamma(x) \to \gamma$ as
well. We first note that $p^*$ is integrable so that $\Gamma$ is
well-defined. To see this, we write
\begin{equation}
\int_1^x p^*(\xi) \, \d \xi
= \int_1^x \frac1{\xi^2} \, \xi^2 \, p^*(\xi) \, \d \xi
= \frac{h(x)}x - h(1)
+ 2 \int_1^x \frac{h(\xi)}{\xi^2} \, \d \xi \,,
\end{equation}
where we have integrated by parts, noting that $x\, h(x)$ is an
anti-derivative of $x^2\, p^*(x)$. As $h(x)$ converges and $p^*$ is
non-negative, $p^*$ is integrable on $\R_+$.
Next, by direct calculation,
\begin{equation}
\xi^2 \, p^*(\xi) = \Gamma(\xi) - \xi \, \Gamma'(\xi) \,.
\end{equation}
Inserting this expression into the definition of $h$ and integrating
by parts, we find that
\begin{equation}
h(x) = 2 \, \bar \Gamma(x) - \Gamma(x) \,.
\label{e.limit-identity1}
\end{equation}
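Indeed, an integration by parts gives
$\frac1x \int_0^x \xi \, \Gamma'(\xi) \, \d\xi
= \Gamma(x) - \bar \Gamma(x)$, so that
$h(x) = \bar \Gamma(x) - (\Gamma(x) - \bar \Gamma(x))$, which is
\eqref{e.limit-identity1}.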
First, divide \eqref{e.limit-identity1} by $x$ and observe that
$h(x)/x$ and $\Gamma(x)/x$ both converge to zero as $x\to\infty$.
Consequently,
\begin{equation}
\lim_{x \to \infty} \frac{\bar \Gamma(x)}x = 0 \,.
\label{e.bargammalimit}
\end{equation}
Second, note that \eqref{e.limit-identity1} can be written in the form
\begin{equation}
\frac{h(x)}{x^2} = -\frac{\d}{\d x} \frac{\bar \Gamma(x)}x \,.
\end{equation}
Integrating from $x$ to $\infty$ and using \eqref{e.bargammalimit}, we
find that
\begin{equation}
\bar \Gamma(x)
= x \int_x^\infty \frac{h(\xi)}{\xi^2} \, \d \xi \,.
\end{equation}
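Since $h(x) \to \gamma$ as $x \to \infty$, l'H\^opital's rule applied
to the quotient
$\int_x^\infty h(\xi) \, \xi^{-2} \, \d\xi \big/ x^{-1}$ yields
\begin{equation}
\lim_{x \to \infty} \bar \Gamma(x)
= \lim_{x \to \infty} \frac{- h(x) \, x^{-2}}{- x^{-2}}
= \gamma \,.
\end{equation}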
Thus, by \eqref{e.limit-identity1},
$\Gamma(x) \to \gamma$ as well. We recall that, due to
\cite[Lemma~3.5]{HilhorstHM:2009:MathematicalSO}, $p$ has at least one
non-degenerate precipitation region, so $p$ is non-zero. Hence, we
can finally apply Theorem~\ref{u.uniform.convergence} which asserts
uniform convergence of $u$ to $\Phi_\gamma$.
\end{proof}
We summarize the results of this section in the following theorem.
\begin{theorem}
\label{convergence.conclusion}
Let $(u,p)$ be a weak solution to \eqref{e.original} with
$u^*<\Psi(\alpha)$. Assume that $p$ satisfies condition \textup{(P)}.
Then the following statements are equivalent.
\begin{enumerate}[label={\upshape(\roman*)}]
\item\label{equiv.0}
$\displaystyle \lim_{x \to \infty} \frac1x \int_0^x \xi^2 \, p^*(\xi)
\, \d \xi = \gamma$,
\item\label{equiv.i}
$\displaystyle \lim_{x \to \infty} x \int^{\infty\vphantom\int}_x
p^*(\xi) \, \d \xi = \gamma$,
\item\label{equiv.ii} $u$ converges uniformly to $\Phi_\gamma$ with
$\Phi_\gamma(\alpha)=u^*$ if $\gamma>0$ and
$0<\Phi_\gamma(\alpha)\le u^*$ if $\gamma=0$,
\item\label{equiv.iii} $u$ converges to some limit profile $V$
pointwise a.e.\ in $\eta$-$s$ coordinates.
\end{enumerate}
\end{theorem}
\begin{proof}
Statement \ref{equiv.i} implies \ref{equiv.ii} by
Theorem~\ref{u.uniform.convergence}, and \ref{equiv.ii} trivially
implies \ref{equiv.iii}. Conversely, \ref{equiv.iii} implies
\ref{equiv.0} and \ref{equiv.0} implies \ref{equiv.i} by
Theorem~\ref{limit.profile.th} and its proof.
\end{proof}
Accretion disks are ubiquitous in astrophysical systems, feeding objects ranging in size from planets to black holes. Rapid disk accretion requires rapid angular momentum transport, but only a few transport mechanisms are known: self-gravity, turbulence from the magnetorotational instability, driven hydrodynamical turbulence, and (in rare circumstances) magneto-centrifugal winds. Our focus is on the dynamics of disks around young, rapidly accreting protostars, for which self-gravity is the key ingredient \citep{1987MNRAS.225..607L, Gam2001, KMK08}. Gravitational instability (hereafter GI) is important whenever disks are cold enough or massive enough to trigger it, as they typically are during the early phases of star formation.
GI plays a strong role in AGN disks as well, and possibly in other contexts where disks are cold and accretion is fast.
The role of GI in angular momentum transport is complicated by the fact that it can lead to runaway collapse. Indeed, this makes GI an attractive mechanism for the formation of stars with companions
\citep[binaries, brown dwarfs, even planets in some circumstances:][]{1994MNRAS.269L..45B,1994MNRAS.271..999B,2006astro.ph..2367W}. \citet[hereafter KM06]{KM06} and \citet[hereafter KMK08]{KMK08}
found that disk fragmentation and binary formation are increasingly likely as one considers more and more massive stars, whereas disks in low-mass star formation are relatively stable \citep{ML2005}.
The increased frequency of giant planet formation around A stars relative to F and G stars, and its lower sensitivity to the stellar metallicity \citep{2007ApJ...665..785J}, may also be related.
During the earliest phases of star formation, protostellar disks are deeply embedded within their natal clouds.
Observing this stage has been difficult because the source is only visible at wavelengths where resolution is poor. But because it sets the initial conditions for stellar and planetary systems, an understanding of this phase is critical. Models of rapidly accreting disks, like those presented here, will be useful for the interpretation of observations from future facilities such as ALMA and the EVLA.
Although semi-analytical and low-dimensional studies can illuminate trends and provide useful approximate results, disk fragmentation is inherently a nonlinear and multidimensional process.
For this reason we have embarked on
a survey of global, three-dimensional, numerical experiments to examine the role of GI as the mediator of the accretion rate in self-gravitating disks, and as a mechanism for creating
disk-born companions.
Any such project faces a central difficulty: the GI is famously sensitive to the disk's thermodynamics \citep{2000ApJ...528..325B,Gam2001,2003ApJ...590.1060P,Rice05,2005ApJ...619.1098M,Krum2006b,2008ApJ...673.1138C}. While it is possible and valuable to incorporate detailed heating and cooling into numerical simulations as has been explored by the authors above, there is a cost: simulations with these important physical processes cannot be scaled to represent a whole range of physical environments, whereas those without them can. We choose to separate the dynamical problem from the thermal one. We exclude thermal physics from our simulations entirely, while scanning a thermal parameter in our survey.
By this means we reduce the physical problem to two dimensionless parameters: one for the disk's temperature, another for its rotation period -- both in units determined by its mass accretion rate. We hold these fixed in each simulation by choosing well-controlled initial conditions corresponding to self-similar core collapse. This parameterization is a central aspect of our work: it forms the basis for our numerical survey; it allows us to treat astrophysically relevant disks, including fragmentation and the formation of binary companions, while also maintaining generality; and it distinguishes our work from previous numerical studies of core collapse, disk formation, GI, and fragmentation.
This paper, the first in a series, focuses on the broad conclusions we can draw from our parameter space study; subsequent papers will discuss the detailed behavior of multiple systems, three-dimensional effects such as turbulence and vertical flows, and non-linear GI mode coupling. We begin here by introducing our dimensionless parameters in \S \ref{survey}. We describe the initial conditions and the numerical code used in \S \ref{code}. In \S \ref{scalings} we derive analytic predictions for the behavior of disks as a function of our parameters. We describe the main results from our numerical experiments in \S \ref{results}, with more detailed analysis in \S \ref{caveats}. We compare them in detail to other numerical and analytic models of star formation in \S \ref{prevwork}.
\section{A New Parameter Space for Accretion} \label{survey}
We consider the gravitational collapse of a rotating, quasi-spherical gas core onto a central pointlike object, mediated by
a disk. In the idealized picture we will explore in this paper, the disk and the mass flows into and out of it can be characterized by a
few simple parameters. At any given time, the central point mass (or
masses, in cases where fragmentation occurs) has mass $M_*$, the disk
has mass $M_d$, and the combined mass of the two is $M_{*d}$. The disk
is characterized by a constant sound speed $c_{s,d}$. Material from
the core falls onto the disk with a mass accretion rate $\dot{M}_{\rm
in}$, and this material carries mean specific angular momentum
$\langle j \rangle_{\rm in}$, and as a result it circularizes and goes
into Keplerian rotation at some radius $R_{k, \rm in}$; the angular
velocity of the orbit is $\Omega_{k, \rm in}$. In general in what
follows, we refer to quantities associated with the central object
with subscript *, quantities associated with the disk with a subscript
d, quantities associated with infall with subscript in. Angle brackets
indicate mass-weighted averages over the disk (with subscript d) or over infalling mass
(with subscript in).
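For reference, circularization at the Keplerian radius ties these
quantities together through
\begin{equation}
R_{k, \rm in} = \frac{\langle j \rangle_{\rm in}^2}{G M_{*d}} \,, \qquad
\Omega_{k, \rm in} = \left( \frac{G M_{*d}}{R_{k, \rm in}^3} \right)^{1/2}
= \frac{G^2 M_{*d}^2}{\langle j \rangle_{\rm in}^3} \,,
\end{equation}
which is the origin of the second equality in equation~(\ref{gamma}) below.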
Our simple decomposition of the problem is motivated by the work of \cite{Gam2001}, \cite{ML2005}, and KMK08. We characterize our numerical experiments using two dimensionless parameters which are well-adapted to systems undergoing rapid accretion. We encapsulate the complicated physics of heating and cooling through the thermal parameter
\begin{equation}\label{xi}
\xi = \frac{\Mdot_{\rm in} G}{c_{s,d}^3},
\end{equation}
which relates the infall mass accretion rate $\Mdot_{\rm in}$ to the characteristic sound speed $c_{s,d}$ of the disk material. Our parameter $\xi$ is also related to the physics of core collapse leading to star formation. If the initial core is characterized by a signal speed $c_{{\rm eff},c}$ then $\Mdot_{\rm in}\sim c_{{\rm eff},c}^3/G$, implying $\xi\sim c_{{\rm eff},c}^3/c_{s,d}^3$ -- although there can be large variations around this value \citep{1972MNRAS.156..437L,1993ApJ...416..303F}.
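As a concrete illustration (with fiducial numbers chosen here purely for orientation, not tied to any particular source), an infall rate $\Mdot_{\rm in} = 10^{-5} \, M_\odot \, {\rm yr}^{-1}$ onto a disk with $c_{s,d} \simeq 0.2 \, {\rm km \, s^{-1}}$ (corresponding to $T \approx 11$ K for molecular gas of mean molecular weight 2.33) yields
\begin{equation}
\xi = \frac{\Mdot_{\rm in} G}{c_{s,d}^3} \approx 5 \,.
\end{equation}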
The second, rotational parameter
\begin{equation}\label{gamma}
\Gamma = \frac{\Mdot_{\rm in}}{M_{*d} \Omega_{k,{\rm in}} } = \frac{\Mdot_{\rm in} \left<j\right>_{\rm in}^3 }{G^2 M_{*d}^3},
\end{equation}
compares the system's mass growth rate $\Mdot_{\rm in}/M_{*d}$, the inverse of the accretion timescale, to the orbital frequency $\Omega_{k,{\rm in}}$ of infalling gas. Unlike $\xi$, $\Gamma$ is independent of disk heating and cooling, depending only on the core structure and velocity field. In general, $\Gamma$ compares the relative strength of rotation and gravity in the core. Systems with a large value of $\Gamma$ (e.g. accretion-induced collapse of a white dwarf) gain a significant amount of mass in each orbit, and tend to be surrounded by thick, massive accretion disks, while those with very low $\Gamma$ (e.g. active galactic nuclei) grow over many disk lifetimes, and tend to harbor thin disks with little mass relative to the central object. We consider characteristic values for our parameters in \S \ref{thepars}, and their evolution in the isothermal collapse of a rigidly rotating Bonnor-Ebert sphere in \S \ref{bonnorebert}.
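Continuing the illustration above (again with assumed fiducial values), a system with $M_{*d} = 1 \, M_\odot$ accreting at $\Mdot_{\rm in} = 10^{-5} \, M_\odot \, {\rm yr}^{-1}$, with material circularizing at $R_{k, \rm in} = 100$ AU ($\Omega_{k, \rm in} \approx 2 \times 10^{-10} \, {\rm s}^{-1}$, an orbital period of $\sim 10^3$ yr), has
\begin{equation}
\Gamma = \frac{\Mdot_{\rm in}}{M_{*d} \Omega_{k,{\rm in}}} \approx 1.6 \times 10^{-3} \,,
\end{equation}
at the low end of the range discussed below for star-forming cores.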
In addition to being physically motivated, our parameters are practical from a numerical and observational perspective. The thermal parameter $\xi$ is straightforward to calculate in other simulations, theoretical models, and observed disks. Accretion rates are routinely estimated from measurements of infall velocities in cores \citep[although some uncertainties persist;][]{cesaroni07a}. Disk temperatures can also be measured using infrared and submillimeter disk detections. By contrast, measuring classic dimensionless disk parameters such as Toomre's $Q = c_s \Omega/ (\pi G \Sigma)$ \citep{Toom1964} can be difficult. In observed disks, estimating $Q$ is challenging due to the current resolution and sensitivity of even the best instruments. While constraining disk temperatures is possible, measuring accurate (within a factor of $\sim 3$) surface densities is not \citep{2008ApJ...683..304E}. The rotation parameter $\Gamma$ is similarly practical: one need not know the density distribution of the initial core, nor the radial and angular distribution of the velocity profile -- a mean value for $j$ and an estimate of the core mass is sufficient to make an estimate for the disk size, and thus $\Gamma$.
To study the evolution of systems as they accrete, we hold $\xi$ and $\Gamma$ fixed for each experiment via the self-similar collapse of a rotating, isothermal sphere (\S \ref{selfsim}). This strategy allows us to map directly between the input parameters and relevant properties of the system. Specifically, we expect dimensionless properties like the disk-to-star mass ratio, Toomre parameter, stellar multiplicity, etc., to fluctuate around well-defined mean values (see \S \ref{selfsim_idea}).
We aim to use our parameters $\xi$ and $\Gamma$ to: (a) explore the parameter space relevant to a range of star formation scenarios; (b) better understand the disk parameters, both locally and globally, which dictate the disk accretion rate and fragmentation properties; (c) make predictions for small scale disk behavior based on larger scale, observable quantities; and (d) allow the results of more complicated and computationally expensive simulations to be extended into other regimes.
\subsection{Characteristic values of the accretion parameters} \label{thepars}
We base our estimates of $\Gamma$ and $\xi$ on observations of core rotation in low-mass and massive star-forming regions \citep{1992ApJ...396..631M, 1993ApJ...406..528G,1999ApJ...511..208W}, as well as the analytical estimates of core rotation and disk temperature in \cite{ML2005}, \cite{Krum2006b}, KM06, and KMK08. Using simple models of core collapse in which angular momentum is conserved in the collapse process and part of the matter is cast away by protostellar outflows \citep{MM2000}, we find that both $\xi$ and $\Gamma$ are higher in massive star formation than in low-mass star formation. In our models, the characteristic value of $\Gamma$ rises from $\sim 0.001$ to $\sim 0.03$ as one considers increasingly massive cores for which turbulence is a larger fraction of the initial support.
The value of $\xi$ is more complicated, as it reflects the disk's thermal state as well as the infalling accretion rate, but the models of KMK08 and \cite{Krum2006b} indicate that its characteristic value increases from $\lesssim 1$ to $\sim 10$ as one considers higher and higher mass cores -- although the specific epoch in the core's accretion history is also important. In the case of massive stars, such rapid accretion has been observed directly \citep{2006Natur.443..427B, 2008arXiv0812.1789B}. Numerical simulations also find rapid accretion rates from cores to disks. Simulations such as those of \cite{BanPud07} report $\xi \sim 10$ at early times in both magnetized and non-magnetized models. We note that $\Gamma$ has significant fluctuations from core to core when turbulence is the source of rotation, and both $\xi$ and $\Gamma$ are affected by variations of the core accretion rate around its characteristic value \citep{1993ApJ...416..303F}.
A major goal of this work is to probe the evolution of disks with $\xi \geq 1$, as mass accretion at this rate cannot be accommodated by the \cite{SS1973} model with $\alpha<1$. Values of $\alpha$ exceeding unity imply very strong GI, and possibly fragmentation.
\section{Numerical Methodology}\label{code}
\subsection{Numerical Code}\label{code_details}
We use the code ORION to conduct our numerical experiments \citep{Truelove98,Klein99, 2002PhDT.........5F}. ORION is a parallel adaptive mesh refinement (AMR), multi-fluid, radiation-hydrodynamics code with self-gravity and Lagrangian sink particles \citep{Krumholz04}. Radiation transport and multi-fluids
are not used in the present study. The gravito-hydrodynamic equations are solved using a conservative, Godunov scheme, which is second order accurate in both space and time. The gravito-hydrodynamic equations are:
\begin{eqnarray}
\frac{\partial}{\partial t} \rho & = & -\nabla \cdot (\rho\mathbf{v}) - \sum_i \dot{M}_i W(\mathbf{x}-\mathbf{x}_i)
\label{masscons}
\\
\frac{\partial}{\partial t} (\rho\mathbf{v}) & = & -\nabla \cdot (\rho\mathbf{v}\vecv) - \nabla P - \rho\nabla \phi \nonumber \\
& & -\, \sum_i \dot{\mathbf{p}}_i W(\mathbf{x}-\mathbf{x}_i)
\label{gasmom}
\\
\frac{\partial}{\partial t} (\rho e) & = & -\nabla \cdot [(\rho e + P)\mathbf{v}] + \rho \mathbf{v} \cdot \nabla \phi \nonumber \\
& & -\, \sum_i \dot{\mathcal{E}}_i W(\mathbf{x}-\mathbf{x}_i)
\label{gasen}
\end{eqnarray}
Equations (\ref{masscons})-(\ref{gasen}) are the equations of mass, momentum, and energy conservation, respectively. In the equations above, $\Mdot_i$, $\dot{\mathbf{p}}_i$, and $\dot{\mathcal{E}}_i$ describe the rates at which mass, momentum, and energy are transferred from the gas onto the $i$th Lagrangian sink particle. Summations in these equations are over all sink particles present in the calculation. $W(\mathbf{x})$ is a weighting function that defines the spatial region over which the particles interact with gas. The corresponding evolution equations for sink particles are
\begin{eqnarray}
\frac{d}{dt} M_i &=& \Mdot_i \\
\frac{d}{dt} \mathbf{x}_i & = & \frac{\mathbf{p}_i}{M_i}\\
\frac{d}{dt} \mathbf{p}_i &=& -M_i \nabla \phi +\dot{\mathbf{p}}_i.
\end{eqnarray}
These equations describe the motion of the point particles under the influence of gravity while accreting mass and momentum from the surrounding gas.
The potential $\phi$ is given by the Poisson equation, which we solve with multilevel elliptic solvers via the multigrid method:
\begin{equation}
\label{poisson}
\nabla^2 \phi = 4 \pi G \left[\rho + \sum_i M_i \delta(\mathbf{x}-\mathbf{x}_i)\right],
\end{equation}
and the gas pressure $P$ is given by
\begin{equation}
\label{pres}
P = \frac{\rho k_{\rm B} T_{\rm g}}{\mu} = (\gamma - 1) \rho \left(e - \frac{1}{2} v^2\right),
\end{equation}
where $T_{\rm g}$ is the gas temperature, $\mu$ is the mean particle mass, and $\gamma$ is the ratio of specific heats in the gas. We adopt $\mu=2.33 m_{\rm H}$, which is appropriate for standard cosmic abundances of a gas of molecular hydrogen and helium.
We use the sink particle implementation described in \cite{Krumholz04} to replace cells which become too dense to resolve. Sink particle creation and AMR grid refinement are based on the Truelove criterion \citep{Truelove97} which defines the maximum density that can be well resolved in a grid code as:
\begin{equation}\label{truelove}
\rho < \rho_j = \frac{N_J^2\pi c_s^2 } {G (\Delta x^l)^2},
\end{equation}
where $N_J$ is the Jeans number, here set to $0.125$ for refinement and $0.25$ for sink creation, and $\Delta x^l$ is the cell size on level $l$. When a cell violates the Jeans criterion, the local region is refined to the next highest grid level. If the violation occurs on the maximum level specified in the simulation, a sink particle is formed. Setting $N_J$ to 0.125 is also consistent with the resolution criterion of \cite{Nelson2006}. Sink particles within 4 cells of each other are merged in order to suppress unphysical $N$-body interactions due to limited resolution. At low resolution, unphysical sink particle formation and merging can cause rapid advection of sink particles inwards onto the central star, generating spurious accretion. Moreover, because an isothermal, rotating gas filament will collapse infinitely to a line \citep{Truelove97}, an entire spiral arm can fragment and be merged into a single sink particle. To alleviate this problem, we implement a small barotropic switch in the gas equation of state such that
\begin{eqnarray}\label{baroswitch}
\gamma &=& 1.0001,~ \rho < \rho_{J^{s}}/4 \\
\gamma &= &1.28,~ \rho_{J^s}/4 < \rho < \rho_{J^s},
\end{eqnarray}
where the $J^s$ subscript indicates the Jeans criterion used for sink formation.
With this prescription, gas is almost exactly isothermal until fragmentation is imminent, at which point it stiffens somewhat.
This modest stiffening helps turn linear filaments into resolved spheres just prior to collapse and provides separation between newborn sink particles.
The primary effect of this stiffening is to increase the resolution of the most unstable wavelength in a given simulation, at the expense of some dynamical range. We describe the influence of this stiffening on our results in \S \ref{thermo}, where we conduct some experiments in which it is turned off.
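For concreteness, the interplay of the refinement, sink-creation, and stiffening criteria can be sketched as follows (a schematic in Python, not ORION code; the function names and array-based bookkeeping are ours):
\begin{verbatim}
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def jeans_density(c_s, dx, N_J):
    # Maximum well-resolved density on cells of size dx
    # (the Truelove criterion above).
    return N_J**2 * np.pi * c_s**2 / (G * dx**2)

def gamma_eff(rho, rho_Js):
    # Barotropic switch: nearly isothermal until collapse is imminent.
    return np.where(rho < rho_Js / 4.0, 1.0001, 1.28)

def flag_cells(rho, c_s, dx, on_max_level):
    # Refinement and sink-creation masks for an array of cell densities.
    refine = rho > jeans_density(c_s, dx, N_J=0.125)
    sink = on_max_level & (rho > jeans_density(c_s, dx, N_J=0.25))
    return refine, sink
\end{verbatim}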
As described via equations (\ref{masscons})-(\ref{gasen}), sink particles both accrete from and interact with the gas and each other via gravity. Accretion rates are computed using a modified Bondi-Hoyle formula which prevents gas which is not gravitationally bound to the particles from accreting. See \cite{Krumholz04} and \cite{Offner08} for a detailed study of the effects of sink particle parameters. Note that we also use a secondary, spatial criterion for AMR refinement based on an analytic prediction for the disk size as a function of time (see \S \ref{domain}).
\subsection{Initial conditions}\label{selfsim}
We initialize each run with an isothermal core:
\begin{eqnarray}\label{iso_shu}
\rho (r) = \frac {A c_{s,{\rm core}}^2}{4 \pi G r^2}.
\end{eqnarray}
There is a small amount of rotational motion in our initial conditions, but no radial motion. A core with this profile is out of virial balance when $A>2$, and accretes at a rate
\begin{equation}\label{MdotCore}
\dot{M} = {c_{s, {\rm core}}^3\over G} \times
\left\{
\begin{array}{lc}
0.975, ~& (A=2) \\
(2A)^{3/2} /\pi. &(A \gg 2)
\end{array}
\right.
\end{equation}
The value for $A=2$ represents the \cite{Shu77} inside-out collapse solution, whereas the limit $A\gg 2$ is derived assuming pressureless collapse of each mass shell.
It is possible to predict $\dot{M}$ analytically \citep{Shu77}, but in practice we initialize our simulations with a range of values $A>2$ and measure $\dot{M}$ just outside the disk.
Because our equation of state is isothermal up to densities well above the typical disk density ($c_{s,d}= c_{s,{\rm core}}$), $\dot{M} G/c_{s,{\rm core}}^3$ is equivalent to our parameter $\xi$.
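For reference, the two limits of equation (\ref{MdotCore}) are easy to tabulate (a sketch; the $A \gg 2$ branch is only approximate at intermediate $A$, which is one reason we measure $\dot{M}$ directly in practice):
\begin{verbatim}
import numpy as np

def mdot_in_units_of_cs3_over_G(A):
    # Piecewise rate from the text; exact only in the quoted limits.
    if np.isclose(A, 2.0):
        return 0.975                  # Shu (1977) inside-out collapse
    return (2.0 * A)**1.5 / np.pi     # pressureless limit, A >> 2

for A in (2.0, 4.0, 8.0):
    # Since c_{s,d} = c_{s,core} here, this number is xi itself.
    print(A, mdot_in_units_of_cs3_over_G(A))   # ~0.98, 7.2, 20.4
\end{verbatim}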
In order to set the value of our rotational parameter $\Gamma$ and hold it fixed, we initialize our cores with a constant, subsonic rotational velocity:
\begin{equation}\label{omega}
\Omega = {2A c_s\over \varpi} \left(\frac{\Gamma}{\xi}\right)^{1/3} ,
\end{equation}
where $\varpi$ is the cylindrical radius. We arbitrarily choose a constant velocity rather than rigid rotation on spheres in order to concentrate accretion near the outer disk radii. Our definition of $\Gamma$ in terms of the mean value of $j_{\rm in}$ rather than its maximum value is intended to reduce the sensitivity of our results to the choice of rotational profile.
Given these initial conditions, our parameters $\xi$ and $\Gamma$ remain constant throughout the simulation, while the collapsed mass and disk radius (as determined by the Keplerian circularization radius of the infalling material) increase linearly with time. We define a resolution parameter,
\begin{equation}
\lambda = {R_{\rm k,in}\over dx_{\rm{min}}},
\end{equation}
to quantify the influence of numerics on our results. Because we hold the minimum grid spacing $dx_{\rm min}$ constant, $\lambda$ increases $\propto t$ as the simulation progresses.
By artificially controlling the infall parameters of our disks, and then watching them evolve to increasing resolution, we gain insight into the physical behavior of accretion with given values of $\xi$ and $\Gamma$, as captured in a numerical simulation with a given dynamical range ($\lambda$). Our initial conditions are necessarily idealized, which allows us to perform controlled experiments; realistic star-forming cores will undoubtedly be somewhat turbulent, with time-variable infall rates.
\subsection{Domain and Resolution} \label{domain}
Due to the dimensionless nature of these experiments, we do not use physical units to analyze our runs. The base computational grid is $128^3$ cells, and for standard runs we use nine levels of refinement, with a factor of two increase in resolution per level: this gives an effective resolution of $65,536^3$. More relevant to our results, however, is the resolution with which our disks are resolved: $\lambda \lesssim 10^2$. To compare this to relevant scales in star formation, this is equivalent to sub-AU resolution in disks of $\sim 50-100$ AU.
The initial core has a diameter equal to one half of the full grid on
the base level. The gravity solver obeys periodic boundary conditions on the
largest scale; as the disk is 2.5 to 3 orders of magnitude smaller
than the grid boundaries, disk dynamics are unaffected by this choice.
The initial radius of the current infall is $(\pi \Gamma)^{-2/3}
R_{\rm k,in}$ (from equations (\ref{gamma}), (\ref{iso_shu}), and (\ref{MdotCore})); although this is
much larger than the disk itself, it is still $\sim 15-40$ times
smaller than the initial core and $\sim 30-80$ times smaller than the
base grid. Tidal distortions of the infall are therefore very small,
although they may be the dominant seeds for the GI. We return to this
issue in \S \ref{reso}, where we compare two runs in which
only the tidal effects should be different.
In addition to the density criterion for grid refinement described in \S \ref{code}, we also refine spatially to ensure that the entire disk is resolved at the highest grid level. We use $\xi$ and $\Gamma$ to predict the outer disk radius (see \S \ref{scalings}), and refine to $150\%$ of this value in the plane of the disk. In the vertical direction, we refine to $40\%$ of the disk radius: this value is larger than the expected scaleheight for any of our disks by at least $\sim 15\%$. We find that we accurately capture the vertical and radial extent of the disk with this prescription, and the density criterion ensures that any matter at disk-densities extending beyond these radii will be automatically refined.
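In pseudocode, the spatial criterion amounts to the following per-cell test (a sketch; the names are ours, and \texttt{R\_disk\_pred} stands for the analytic disk-radius prediction of \S \ref{scalings}):
\begin{verbatim}
def needs_spatial_refinement(x, y, z, R_disk_pred):
    # Force the predicted disk volume onto the finest AMR level:
    # out to 150% of the predicted radius in the disk plane, and to
    # 40% of that radius vertically (above any expected scale height).
    in_plane = (x**2 + y**2)**0.5 <= 1.5 * R_disk_pred
    vertical = abs(z) <= 0.4 * R_disk_pred
    return in_plane and vertical
\end{verbatim}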
\subsection{Dynamical Self-Similarity}\label{selfsim_idea}
Because our goal is to conduct a parameter study isolating the effects of our parameters $\xi$ and $\Gamma$, we hold each fixed during a single run. At a given resolution $\lambda$, we expect the simulation to produce consistent results regarding the behavior of the accretion disk, the role of the GI, and fragmentation into binary or multiple stars. Since $\lambda$ increases linearly in time, each simulation serves as a resolution study in which numerical effects diminish in importance as the run progresses. Because the GI is an intrinsically unsteady phenomenon, a disk should fluctuate around its mean values even when all three of $\Gamma$, $\xi$, and $\lambda$ are fixed. Because of this, and because $\lambda$ changes over the run, we expect our runs to be self-similar, but only in a limited, statistical sense.
To illustrate how we expect resolution to affect our results, consider another unsteady
problem: the turbulent wake of a solid body in air. Our parameters $\xi$
and $\Gamma$ are analogous to the macroscopic parameters of the problem,
such as the body's aspect ratio and the Mach number of its motion,
whereas $\lambda$ is analogous to the Reynolds number of the
flow. With all the parameters fixed, the flow field fluctuates around
a well-defined average state. The average flow properties do depend
on Reynolds number, but increasingly slowly in the high-Reynolds
limit. In our simulations the resolution parameter is never fixed,
but increases linearly in time; therefore we expect the disk to settle
into an approximate steady state in which each ten orbits
resemble the previous ten. Unlike a Sedov blast wave, we should \emph{not} expect our disks to be exactly invariant under scaling: this would not be consistent with the turbulent saturation of non-linear instabilities.
Moreover, whereas many physical systems are captured perfectly in the limit of infinite resolution ($\lambda\rightarrow\infty$), this is not true of isothermal, gravitational gas dynamics, in which the minimum mass and spacing of fragments both scale as $\lambda^{-1}$ \citep{1992ApJ...388..392I}. For this reason we quote the resolution $\lambda$ whenever reporting on the state of the disk-star system.
We note that there exists a minimum scale in real accretion disks as well, namely the opacity-limited minimum fragment mass \citep{1976MNRAS.176..483R}. The finite dynamical range of our numerical simulations is therefore analogous to a phenomenon of Nature, albeit for entirely different reasons.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{fig1a.pdf}
\includegraphics[scale=0.8]{fig1b.pdf}
\caption{Two examples of single, binary, and multiple systems. The resolution across each panel is $328\times328$ grid cells. The single runs are $\xi=2.9, \Gamma=0.018$ (top), $\xi = 1.6, \Gamma = 0.009$ (bottom). The binaries are $\xi = 4.2, \Gamma = 0.014$ (top), $\xi = 23.4, \Gamma = 0.008$ (bottom). The multiples are $\xi = 3.0, \Gamma = 0.016$ (top), $\xi = 2.4, \Gamma = 0.01$ (bottom). Black circles with plus signs indicate the locations of sink particles. These correspond to runs 5, 1, 9, 16, 7, and 4, respectively.}
\label{prettypics}
\end{figure*}
\section{Disk properties in terms of the accretion parameters} \label{scalings}
To assess the physical importance of $\xi$ and $\Gamma$, it is useful to consider the case of a single star and its accretion disk. Because many $\xi$, $\Gamma$ pairs lead to fragmentation, this assumption is only self-consistent within a subregion of our parameter space; nevertheless it helps to guide our interpretation of the numerical results. In order to connect our parameters with those of previous studies, we also derive expressions for disk-averaged quantities such as $Q$ and the disk-to-system mass ratio $\mu$ as functions of $\xi$ and $\Gamma$.
The combination
\begin{equation} \label{HonRfromGamma/Xi}
\left(\Gamma\over\xi\right)^{1/3} = {\left<j\right>_{\rm in} c_{s,d}\over G M_{*d} } = {c_{s,d}\over v_{\rm k,in}}
\end{equation}
is particularly useful, since it provides an estimate for the disk's aspect ratio (the scale height compared to the circularization radius). Being independent of $\Mdot$, it is more a property of the disk than of the accretion flow.
The other important dimensionless quantity whose mean value depends primarily on $\xi$ and $\Gamma$ (and slowly on resolution) is the disk-to-system mass ratio
\begin{equation}\label{mu}
\mu = {M_d\over M_{*d}}.
\end{equation}
When the disk is the sole repository of angular momentum, the specific angular momentum stored in the disk is related to the infalling angular momentum via:
\begin{equation}\label{j_d_vs_j_in}
j_d = \left(J_{\rm in} \over \left<j\right>_{\rm in} M_{*d} \right)
{\left<j\right>_{\rm in}\over \mu}
\end{equation}
where $J_{\rm in}$ is the total angular momentum accreted, so that $J_{\rm in}/(\left<j\right>_{\rm in} M_{*d}) = 1/(l_j+1)$ in an accretion scenario where $\left<j\right>_{\rm in}\propto M_{*d}^{l_j}$. In our simulations $l_j=1$, so $j_d=\left<j\right>_{\rm in}/(2\mu)$. Given the relation between $j_d$ and $\left<j\right>_{\rm in}$, we can define
\begin{eqnarray}\label{R_d_Omega_d}
R_d &=& \left[(l_j+1)\mu\right]^{-2} R_{\rm k,in}, \nonumber \\
\Omega_d &=& \left[(l_j+1)\mu\right]^{3}\, \Omega_{\rm k,in},
\end{eqnarray}
which relate the disk's characteristic quantities ({\em not} the location of its outer edge) to conditions at the current circularization radius $R_{\rm k,in} = \left<j\right>_{\rm in}^2/(G M_{*d})$. Such ``characteristic'' quantities are valuable for describing properties of the disk as a whole, rather than at a single location, with an effective mass weighting. If we further suppose that the disk's column density varies with radius as $\Sigma(r) \propto r^{-k_\Sigma}$ (we expect $k_\Sigma\simeq 3/2$ for a constant-$Q$, isothermal disk), we may define its characteristic column density $\Sigma_d = (1-k_\Sigma/2) M_d/(\pi R_d^2)$:
\begin{equation}\label{Sigma_d}
\Sigma_d \simeq f_\Sigma
{G^2 M_{*d}^3\over \left<j\right>_{\rm in}^4 }
\mu^5
\end{equation}
where $f_\Sigma = {(1-k_\Sigma/2) (1+l_j)^4 /\pi }$. Using equations (\ref{HonRfromGamma/Xi}) and (\ref{R_d_Omega_d})-(\ref{Sigma_d}), we can rewrite the Toomre stability parameter $Q$ (ignoring the difference between $\Omega$ and the epicyclic frequency for simplicity):
\begin{eqnarray}\label{Q_d}
Q & = & \frac{c_s \kappa}{\pi G \Sigma} \rightarrow \frac{c_{s,d}\, \Omega_d}{\pi G \Sigma_d}, \nonumber \\
Q_d &\simeq& \frac{f_Q^{-1}}{\mu^{2}}\, \frac{c_{s,d} \left<j\right>_{\rm in}}{G M_{*d}}
= \left(\frac{\Gamma}{\xi}\right)^{1/3} \frac{f_Q^{-1}}{\mu^{2}},
\end{eqnarray}
where $f_Q = (1-k_\Sigma/2)(1 + l_j) $.
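For reference, the algebra behind equation (\ref{Q_d}) is short. Using $\Omega_{\rm k,in} = G^2 M_{*d}^2/\left<j\right>_{\rm in}^3$ and substituting equations (\ref{R_d_Omega_d}) and (\ref{Sigma_d}),
\begin{equation*}
Q_d = \frac{c_{s,d}\,\Omega_d}{\pi G \Sigma_d}
= \frac{c_{s,d}\left[(1+l_j)\mu\right]^{3}\, G^2 M_{*d}^2/\left<j\right>_{\rm in}^3}
{\pi G\, f_\Sigma\, \mu^5\, G^2 M_{*d}^3/\left<j\right>_{\rm in}^4}
= \frac{c_{s,d}\left<j\right>_{\rm in}}{G M_{*d}}\,\frac{(1+l_j)^3}{\pi f_\Sigma\,\mu^2},
\end{equation*}
and $\pi f_\Sigma = (1-k_\Sigma/2)(1+l_j)^4 = f_Q\,(1+l_j)^3$ collapses the prefactor to $f_Q^{-1}/\mu^2$.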
To the extent that we expect $Q_d\sim1$ in any disk with a strong GI, this suggests $\mu\sim (\Gamma/\xi)^{1/6} (1-k_\Sigma/2)^{-1/2}(1+l_j)^{-1/2}$; and because we expect that $\mu$ has an upper limit of around 0.5 \citep[see \S \ref{results} and discussion in KMK08 and][]{Sling1990}, we see there is an upper limit to $\xi/\Gamma$ above which the system is likely to become binary or multiple. This is not surprising, as $\mu$ is proportional to scale height when $Q$ is constant; equation (\ref{Q_d}) simply accounts self-consistently for the fact that $\mu$ also affects $R_d$.
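As a concrete illustration, adopting the fiducial choices $k_\Sigma = 3/2$, $l_j = 1$, and $\mu \lesssim 0.5$ (the resulting number is indicative only),
\begin{equation*}
\mu \simeq \sqrt{2}\left(\frac{\Gamma}{\xi}\right)^{1/6} \lesssim 0.5
\quad\Longrightarrow\quad
\frac{\xi}{\Gamma} \gtrsim \left(2\sqrt{2}\right)^{6} \approx 5\times 10^{2}.
\end{equation*}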
To go any further with analytical arguments, we must introduce the \cite{SS1973} $\alpha$ viscosity parameterization, in which steady accretion occurs at a rate
\begin{equation}\label{Mdot_SS73}
\Mdot_d(r) = {3\alpha(r)\over Q(r)} {c_s(r)^3\over G}.
\end{equation}
Using the definition of $\xi$, this implies
\begin{equation} \label{xi_constrains_alpha/Q}
\xi \sim {3\alpha(r) \over Q(r)} {c_s(r)^3 \over c_{s,d}^3}.
\end{equation}
Insofar as $Q\sim1$ when the GI is active, the effective value of $\alpha$ induced by a strong GI is directly proportional to $\xi$.
The magnitude of $\Gamma$ has important implications for disk evolution. As discussed previously by KMK08, $\Gamma$ (called $\Re_{\rm in}$ there) affects $\mu$ through the relation
\begin{eqnarray}\label{muEvolution}
{\dot{\mu}\over \mu \Omega_{\rm k,in}} &=& \Gamma\left(\frac{1}{\mu} -1\right) - {\dot{M}_*\over M_d\Omega_{\rm k, in}} \nonumber \\
&\simeq& \Gamma\left(\frac{1}{\mu} -1\right) - 3\left(1-\frac{k_\Sigma}{2}\right)(1+l_j)\,\alpha\,\mu\left(\Gamma\over\xi\right)^{2/3},
\end{eqnarray}
where the second line uses disk-averaged quantities to construct a mean accretion rate from equation (\ref{Mdot_SS73}). In our simulations $\dot{\mu}\simeq 0$ so we expect $\mu$ to saturate at the value for which the two terms on the right of equation (\ref{muEvolution}) are equal,
\begin{equation}\label{muSolution}
\mu \rightarrow (B^2 + 2B)^{1/2} - B, ~~{\rm where}~~ B = {\Gamma^{1/3} \xi^{2/3}\over 3(2-k_\Sigma) (1+l_j) \alpha}.
\end{equation}
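Equation (\ref{muSolution}) is simply the positive root of the quadratic obtained by equating the two terms: since $3(1-k_\Sigma/2)(1+l_j)\,\alpha\,(\Gamma/\xi)^{2/3} = \Gamma/(2B)$ with $B$ as defined above,
\begin{equation*}
\Gamma\left(\frac{1}{\mu}-1\right) = \frac{\Gamma\,\mu}{2B}
\quad\Longrightarrow\quad
\mu^2 + 2B\mu - 2B = 0.
\end{equation*}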
The disk mass fraction $\mu$ increases with $B$, so both $\Gamma$ and $\xi$ have a positive effect on $\mu$, whereas $\alpha$ tends to suppress the disk mass. Note that, when $B$ is small and $\mu\simeq \sqrt{2B}$, equation (\ref{Q_d}) implies $Q_d\simeq 3\alpha/\xi$ in accordance with equation (\ref{Mdot_SS73}). Because the effective value of $\alpha$ induced by the GI is a function of disk parameters, we cannot say more without invoking a model for $\alpha(\Gamma, \xi)$ or $\alpha(Q,\mu)$ as in KMK08.
The scalings of disk properties with the dimensionless parameters
of the problem are in accord with intuitive expectations. An
increase in $\xi$ corresponds to an increase in accretion rate at
fixed disk sound speed, and as a result the equilibrium disk mass
rises. An increase in $\Gamma$ corresponds to an increase in the mean
angular momentum of the infall at fixed sound speed, leading to larger
disks that must transport more angular momentum, and thus again become
more massive. An increase in $\alpha$ corresponds to an increase in
the rate at which the disk can transport angular momentum and mass at
a fixed rate of mass and angular momentum inflow, allowing the disk to
drain and reducing its relative mass.
We use the above relations to guide our interpretation of our simulation results, specifically the dependence of disk parameters like $\mu$, $Q_d$, $\alpha$, and the fragmentation boundary, on $\xi$ and $\Gamma$.
\section{Results}\label{results}
Each of our simulations produces either a disk surrounding a single star, or a binary or multiple star system formed via disk fragmentation; Figure \ref{prettypics} depicts examples of each outcome. We use these three possible morphologies to organize our description of the simulations. We explore the properties of each type of disk below, and examine the conditions at the time of fragmentation.
The division between single and fragmenting disks in $\xi$ and $\Gamma$ is relatively clear from our simulation results, as shown in
Figure \ref{xi_gamma}. Several trends are easily identified. First, there is a critical $\xi$ beyond which disks fragment independent of the value of $\Gamma$. Below this critical $\xi$ value, there is a weak stabilizing effect of increasing $\Gamma$. As $\xi$ increases, disks transition from singles into multiples, and finally into binaries. We discuss the distinction between binaries and multiples in \S \ref{binstuff}.
In table \ref{restab} we list the properties of the final state for all of our runs: their final multiplicity (S, B, or M for single, binary, or multiple, respectively), the disk-to-star mass ratio $\mu$ measured at the time at which we stop each experiment, and the maximum resolution $\lambda_n$. Note that the disk extends somewhat beyond $R_{\rm k,in}$; therefore the disk as a whole is somewhat better resolved than the value of $\lambda_n$ would suggest. For the disks which fragment, we also list the values of $\mu_f$, $\lambda_f$, and $Q$ just before fragmentation occurs.
\begin{table}[htdp]
\begin{center}
\begin{tabular}{c l c c c c c c c}
\hline
\hline
$\#$ & $\xi$ & $ {10^2} \Gamma$ & $N_*$ & $\mu_f$ &$\lambda_f$&$Q_{2D}$ & $\mu $ & $\lambda_n$ \\
\hline
1&1.6 & 0.9 & S & ... &...&...&0.49&99\\
2& 1.9 & 0.8 & S & ...&...&...&0.40&88\\
3&2.2 & 2.5 & S & ...& ... &...&0.56&82\\
4&2.4 & 1.0 & M & 0.43 &77&0.69&0.16&98\\
5&2.9 & 1.8 & S & ... &...&...&0.53&86\\
6& 2.9 & 0.8 & M & 0.40&51&0.72&0.14&78 \\
7&3.0 & 0.4 & M & 0.33 &50&0.48&0.11&77\\
8&3.4 & 0.7 & M & 0.40 &66&0.37&0.16&70\\
9&4.2 & 1.4 & B & 0.51 &56&0.19&0.33&72\\
10&4.6 & 2.1 & M & 0.54 &71&0.42&0.23&123\\
11&4.6& 0.7 & B & 0.35 &28&0.52&0.12&52\\
12&4.9 & 0.9 & B & 0.37&26&0.74&0.19&59 \\
13&5.4 & 0.4 & B & 0.38&38&0.33&0.19&64\\
14& 5.4 & 0.7 & B & 0.31 &49&0.85&0.21&62\\
15&5.4 & 7.5 & B & 0.72 &99&0.20&0.59&129\\
16*&23.4 & 0.8 & B & 0.25 &5&0.83&0.10&84\\
17*&24.9 & 0.4 & B & 0.15 &3&0.59&0.11&61\\
18*&41.2 & 0.8 & B & 0.13 &5&1.33&0.10&58\\
\hline
\end{tabular}
\caption{Each run is labelled by $\xi$, $\Gamma$, the multiplicity outcome, the final value of the disk-to-star mass ratio $\mu$, and the final resolution $\lambda_n$. Values of $\Gamma$ are quoted in units of $10^{-2}$. For fragmenting runs, the disk resolution $\lambda_f$, $Q_{2D}$ (equation \ref{Q_estimators}), and $\mu_f$ at the time of fragmentation are listed as well. S runs are single objects with no physical fragmentation, B runs are binaries which form two distinct objects each with a disk, and M runs are those with three or more stars which survive for many orbits. * indicates runs which are not sufficiently well resolved at the time of fragmentation to make meaningful measurements of $\mu_f$ and $Q$.}
\label{restab}
\end{center}
\end{table}%
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig2.pdf}
\caption{Distribution of runs in $\xi-\Gamma$ parameter space. The single stars are confined to the low-$\xi$ region of parameter space, although increasing $\Gamma$ has a small stabilizing effect near the transition around $\xi = 2$ due to the increasing ability of the disk to store mass at higher values of $\Gamma$. The dotted line shows the division between single and fragmenting disks: $\Gamma = \xi^{2.5}/850$. As $\xi$ increases, disks fragment to form multiple systems. At even higher values of $\xi$, disks fragment to make binaries. We discuss the distinction between different types of multiples in \S \ref{binstuff}. The shaded region of parameter space shows where isothermal cores no longer collapse due to the extra support from rotation.}
\label{xi_gamma}
\end{figure}
In table \ref{singletab} we describe those disks which do not fragment: we list the analytic estimate of the characteristic Toomre parameter, $Q_d$ (equation \ref{Q_d}), the measured minimum of $Q_{2D}$ (equation \ref{Q_estimators}), the radial power law $k_\Sigma$ which characterizes $\Sigma(r)$ over a range of radii extending from the accretion zone of the inner sink particle to the circularization radius $R_{\rm k,in}$, the final disk resolution $\lambda_n$, and the characteristic disk radius $R_d$ (equation \ref{R_d_Omega_d}).
\begin{table}[htdp]
\begin{center}
\begin{tabular}{c l c c c c c c c}
\hline
\hline
$\#$ & $\xi$ & ${10^2}\Gamma$ & $\mu $ &$Q_d$&$Q_{2D}$&$k_\Sigma$ &$\lambda_n$& $R_d$ \\
\hline
1&1.6 & 0.9 &0.49&1.6&0.96&1.5&99 &103\\
2& 1.9 & 0.8 & 0.40&1.5&1.10&1.3&88&138\\
3&2.2 & 2.5 & 0.56 & 3.7&0.83&1.8&82&65\\
5&2.9 & 1.8 & 0.53&2.2&0.56&1.7&86&77\\
\hline
\end{tabular}
\caption{Single runs (numbers as in table \ref{restab}). We list values for the characteristic predicted value of Toomre's $Q$, $Q_d$ (equation \ref{Q_d}), as well as the measured disk minimum, $Q_{2D}$ (equation \ref{Q_estimators}). We also list the slope of the surface density profile, $k_\Sigma$, averaged over several disk orbits, the final resolution, and $R_d$ at the end of the run (equation \ref{R_d_Omega_d}).}
\label{singletab}
\end{center}
\end{table}
\subsection{The Fragmentation Boundary and $Q$}\label{Qstuff}
It is difficult to measure a single value of $Q$ to characterize a disk strongly perturbed by GI, so we consider two estimates: a two dimensional measurement $Q_{\rm 2D}$, and a one-dimensional measure $Q_{\rm av}(r)$ based on azimuthally-averaged quantities.
\begin{eqnarray}\label{Q_estimators}
Q_{\rm 2D} (r,\phi)&= &\frac{ c_s \kappa}{\pi G \Sigma},\\
Q _{\rm av}(r) &= &\frac{\bar{c}_s(r)\bar{\kappa} (r) }{\pi G \bar{\Sigma}(r)}
\end{eqnarray}
(bars represent azimuthal averages).
As Figure \ref{Qaz} shows, the two-dimensional estimate shows a great deal of structure which is not captured by the azimuthal average, let alone by $Q_d$. Moreover, while the minimum of the averaged quantity is close to two, the two dimensional quantity drops to $Q \sim 0.3$. We find that the best predictor of fragmentation is the minimum of a smoothed version of the two-dimensional quantity (smoothed over a local Jeans length to exclude meaningless fluctuations), although $Q_d$ shows a similar trend. We use this quantity in table \ref{restab}, and compare it to the analytic estimate $Q_d$ in table \ref{singletab} for non-fragmenting disks.
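Schematically, the two estimators are computed from midplane maps as follows (a post-processing sketch with assumed array inputs; the local-Jeans-length smoothing is approximated here by a fixed Gaussian width):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

G = 6.674e-8  # cgs

def q_2d(sigma, c_s, kappa, smooth_px=0):
    # Two-dimensional Toomre map, optionally smoothed.
    q = c_s * kappa / (np.pi * G * sigma)
    return gaussian_filter(q, smooth_px) if smooth_px else q

def q_av(sigma, c_s, kappa, r, nbins=64):
    # Azimuthally averaged estimator built from ring averages.
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges), 1, nbins)
    ring = lambda f: np.array([f.ravel()[idx == i].mean()
                               for i in range(1, nbins + 1)])
    return ring(c_s) * ring(kappa) / (np.pi * G * ring(sigma))
\end{verbatim}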
\begin{figure}
\centering
\includegraphics[scale=.7]{fig3.pdf}
\caption{Top: $Q_{\rm av}$ in a disk with $\xi = 2.9, \Gamma = 0.018$. The current disk radius, $R_{\rm k,in}$, is shown as well. Bottom: $\log(Q_{2D})$ (equation \ref{Q_estimators}) in the same disk. While the azimuthally averaged quantity changes only moderately over the extent of the disk, the full two-dimensional quantity varies widely at a given radius. $Q$ is calculated using $\kappa$ derived from the gravitational potential, which generates the artifacts observed at the edges of the disk. Here and in all figures, we use $\delta_x$ to signify the resolution.}
\label{Qaz}
\end{figure}
We emphasize that the critical values of $Q$ at which fragmentation sets in depend on the exact method used for calculation (e.g. $Q_{\rm av}$ or $Q_{2D}$). Moreover, we do not expect to reproduce fragmentation at the canonical order-unity boundary, which only marks the critical case for the axisymmetric $m=0$ unstable mode in razor-thin disks \citep{Toom1964}. As discussed by numerous authors, the fragmentation criterion is somewhat different for thick disks \citep{GLB1965,Laughlin:1997fk,1998ApJ...504..945L} and for the growth of higher-order azimuthal modes \citep{ARS89,Sling1990,Laughlin:1996uq}.
Another consequence of trying to describe thick disks with multiple unstable modes is that the fragmentation boundary cannot be drawn in $Q$-space alone. We use $Q_{2D}$ and $\mu$ in Figure \ref{q_mu} to demarcate the fragmentation boundary. Labeled curves illustrate that the critical $Q$ for fragmentation depends on the disk scale height (equation \ref{HonRfromGamma/Xi}). At a given value of $Q$, a disk with a larger value of $\mu$ will have a larger aspect ratio, and will therefore be more stable. Recall from equation (\ref{HonRfromGamma/Xi}) that the disk aspect ratio is proportional to $\left(\Gamma/\xi\right)^{1/3}$.
This trend is consistent with the results of \cite{GLB1965} for thick disks; because the column of material is spread out over a larger distance, $H$, its self-gravity is somewhat diluted. The fact that two parameters are necessary to describe fragmentation is also apparent in Figure \ref{xi_gamma}, where the boundary between single and multiple systems is a diagonal line through the parameter space.
Although two criteria are necessary to prescribe the fragmentation boundary, we observe a direct correspondence between $\mu$ and $\Gamma$, and between $\xi$ and Toomre's $Q$. Figure \ref{mu_gam_comp} shows that $\mu \approx 2\Gamma^{1/3}$ both for single star disks and just prior to the onset of fragmentation in disks that form binaries and multiples. We find a similar correspondence between $\xi$ and the combination $Q_d \mu$, which amounts to a direct correlation between $\xi$ and $Q$ defined with respect to the disk circularization radius (using $R_d$ in the definition of $Q_d$ brings in an extra factor of $\mu$).
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig4.pdf}
\caption{ Steady-state and pre-fragmentation values of $Q$ and $\mu$ for single stars and fragmenting disks respectively. We use the minimum of $Q_{2D}$ as described in \S \ref{Qstuff}. Symbols indicate the morphological outcome. Note that the non-fragmenting disks (large triangles) have the highest value of $\mu$ for a given $Q$. Contours show the predicted scaleheight as a function of $Q$ and $\mu$. It is clear that the single disks lie at systematically higher scale heights. We have assumed $k_\Sigma = 3/2$ in calculating scaleheight contours as a function of $Q$ and $\mu$.}
\label{q_mu}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.8]{fig5.pdf}
\caption{At left: $Q_d \mu$ vs. $\xi$ with the scaling $Q \propto \xi^{-1/3}$ overplotted. At right: $\Gamma$ vs. $\mu$ with the fit in equation (\ref{mugam}) overplotted. Runs 16, 17, and 18 are omitted, as the low resolution at the time of fragmentation makes measurements of $\mu$, and therefore $Q_d$, unreliable.}
\label{mu_gam_comp}
\end{figure*}
\subsection{Properties of non-fragmenting disks}\label{mustuff}
Although we quote a single power law value for the surface density profiles of disks in table \ref{singletab}, the surface density structure is somewhat more complex. We find that the disks show some evidence of a broken power law structure: an inner region, characterized by $k_\Sigma$, where disk material is being accreted inwards, and an outer region characterized by a steep, variable power law due to the outward spread of low-density, high angular momentum material. We find disks characterized by slopes in the range $k_\Sigma = 1$--$2$. Clustering around $k_\Sigma = 3/2$ is expected, as this is the steady-state slope for a constant-$Q$, isothermal disk. Our measurements of $Q(r)$ (equation \ref{Q_estimators}) show fluctuating but roughly constant values over the disk radius. Note that the slope of the inner disk region tends to increase with $\Gamma$. Figure \ref{sigmaprofs} shows normalized radial profiles for the non-fragmenting disks. Profiles are averaged over approximately three disk orbital periods. The flattening at small radii is due to the increasing numerical viscosity in this region (\S \ref{reso}).
We find an upper mass limit of $\mu \sim 0.55$ for single stars, which means that disks do not grow significantly more massive than their central star. A maximum disk mass has been predicted by \cite{Sling1990} as a consequence of the SLING mechanism. Such an upper limit is expected because eccentric gravitational instabilities in massive disks shift the center of mass of the system away from the central object. Indeed, we observe this wobble in binary-forming runs. The subsequent orbital motion of the primary object acts as an indirect potential, exciting strong $m=1$ mode perturbations which can induce binary formation \citep{Sling1990}. We find that our maximum value is consistent with their prediction.
Using the analytic expressions above, we can also derive an expression for an effective Shakura-Sunyaev $\alpha$. In this regime of parameter space, $\xi$ and $\Gamma$ are always such that $B \ll1$ (assuming $\alpha$ does not stray far from unity). We therefore expect that $\mu \propto \Gamma^{1/6}\xi^{1/3}\alpha^{-1/2}$. Using this relation we can find a functional form of $\alpha(\xi,\Gamma)$. Our fit to the data shown in Figure \ref{mu_gam_comp} implies
\begin{equation}\label{mugam}
\mu \approx 2 \Gamma^{1/3},
\end{equation}
with some scatter for both single disks and fragmenting disks just prior to fragmentation. We can use this fit to infer a scaling relation for $\alpha$ using equation (\ref{muSolution}) in the limit $\mu \sim \sqrt{2B}$:
\begin{equation}\label{alphascale}
\alpha_d \approx \frac{1}{18(2-k_\Sigma)^2(1+l_j)^2} \frac{\xi^{2/3}}{\Gamma^{1/3}}.
\end{equation}
The scaling is consistent with our expectation that driving the disk with a higher $\xi$ causes it to process material more rapidly, while increasing $\Gamma$ decreases the efficiency with which the disk accretes. Equation (\ref{alphascale}) predicts disk averaged values of $\alpha$ for single star disks between $\sim 0.3-0.8$. These values are consistent with the observed accretion rates and numerically calculated torques (\S \ref{torques}).
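For orientation, with the fiducial $k_\Sigma = 3/2$ and $l_j = 1$, equation (\ref{alphascale}) reduces to $\alpha_d \approx \xi^{2/3}\Gamma^{-1/3}/18$; illustrative (not run-specific) values $\xi = 3$ and $\Gamma = 0.01$ then give
\begin{equation*}
\alpha_d \approx \frac{3^{2/3}\,(0.01)^{-1/3}}{18} \approx \frac{2.1\times 4.6}{18} \approx 0.5,
\end{equation*}
squarely within the range quoted above.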
\begin{figure}
\centering
\includegraphics[scale=0.55]{fig6.pdf}
\caption{Normalized density profiles for the single-star disks. Profiles are azimuthal averages of surface densities over the final $\sim 3$ disk orbital periods. We find that while the inner regions are reasonably approximated by power law slopes, the slope steepens towards the disk edge. For comparison, slopes of $k_\Sigma= 1,1.5,$ and $ 2$ are plotted as well. Runs are labelled according to their values in table \ref{restab}. }
\label{sigmaprofs}
\end{figure}
\subsection{ The formation of binaries and multiples}\label{binstuff}
As shown by Figure \ref{xi_gamma}, a large swath of our parameter space is characterized by binary and multiple formation. We find that the division between fragmenting and non-fragmenting disks can be characterized by a minimum value of $\Gamma$ at which disks of a given $\xi$ are stable. In Figure \ref{xi_gamma} we have plotted this boundary as $\Gamma = \xi^{2.5}/850$.
While we do not claim that our numerical experiments are a true representation of the binary formation process, we do expect to find binaries in much of the parameter space characteristic of star formation, as nearly half of all stars are in binaries \citep{1991A&A...248..485D,2007prpl.conf..379D}. Moreover, as the binary forming parameters are typical of higher mass star formation, where binaries and multiples are expected to comprise perhaps 75\% of systems, these findings are encouraging \citep{1998AJ....115..821M, 2009AJ....137.3358M}. We discuss several general trends here, but defer a detailed analysis of binary evolution and application to observations to a later paper.
Are these equal mass binaries? Low mass stellar companions? Or perhaps even massive planets? In a self-similar picture it is difficult to tell. In an actively accreting multiple system, as long as the mass reservoir has enough angular momentum that the circularization radius of the infalling material is comparable to the separation between objects, the smaller object, which is further from the center of mass, will accrete preferentially due to the torque imbalance \citep{Bate97,1994MNRAS.271..999B}. Similarly, in thick, gravitationally unstable disks, the isolation mass approaches the stellar mass:
\begin{equation}
M_{\rm iso} = 4 \pi f_H r_H r_d \Sigma \approx 30 \frac{f_H}{3.5} \left(\frac{H}{R}\right)^{3/2}Q^{-3/2}M_*.
\end{equation}
Here, $r_H = r_d\,(M_s / 3 M_*)^{1/3}$ is the Hill radius, $M_s$ and $M_*$ are the masses of the secondary and primary, and the numerical factor $f_H$ represents how many Hill radii an object can feed from in the disk -- numerical simulations suggest $f_H \sim 3.5$ \citep{1987Icar...69..249L, 2002ApJ...572..566R}. Therefore the evolution of these objects in our models is clear: they tend to equalize in mass. The binary separation will also grow if any of the infalling angular momentum is transferred to the orbits as opposed to the circumstellar disks. These trends are borne out in our experiments: binary mass ratios asymptote to values of $\sim 0.8-0.9$ and separations to $\sim 60\%$ of $R_{\rm{k,in}}$. In a realistic model for star formation, the parameters that characterize a single run in this paper will represent only one phase in the life of a newborn system. The trajectory through $\xi-\Gamma$ space which the systems take following binary formation will strongly influence the outcome in terms of separation and mass ratio. For example, should the disk stabilize and accretion trail off quickly following binary formation, it is quite likely that a large disparity in masses would persist, as the disk drains preferentially onto the primary object once the secondary reaches its isolation mass. By contrast, in systems which fragment before most of the final system mass has accreted, we expect more equal mass ratios.
\subsubsection{ Hierarchical multiples and resolution dependence}
Disks which are at the low $\xi$ end of the binary forming regime tend to form binaries at later times, and therefore at higher disk resolution. One consequence of this is the formation of hierarchical multiples. When disks become violently unstable, they fragment into multiple objects. Because of the numerical algorithm which forces sink particles within a gravitational softening length of each other to merge, at lower resolution many of these particles merge, leaving only two distinct objects behind. At higher resolution, while some of the particles ultimately merge, we find that three or four objects typically survive this process. We cannot distinguish between merging and the formation of very tight binaries. In addition to merging, small mass fragments are occasionally ejected from the system entirely. This appears to be a stochastic process, though we have not performed sufficient runs to confirm this conclusion.
Disks which form binaries at early times and develop two distinct disks can also evolve into multiples when each disk becomes large enough and sufficiently unstable to fragment. In general, once a binary forms, the system becomes characterized by new values of $\xi$ and $\Gamma$ which are less than those in the original disk. As the distribution of mass and angular momentum evolves in the new system, the relative values of $\xi$ and $\Gamma$ evolve as well. However, once the mass ratios have reached equilibrium, as is the case for run \#16 shown in the bottom center of Figure \ref{prettypics}, each disk sees a $\xi$ of roughly half the original value, which for an initial $\xi \sim 24$ is still well into the fragmenting regime. As a result, the fact that the two disks ultimately fragment is expected. By contrast, for the lowest $\xi$ binary runs, once one fragmentation event occurs, the new $\xi$ may be sufficiently low to suppress further fragmentation. The evolution of $\Gamma$ in the newly formed disks is more complicated, depending on how much angular momentum is absorbed into the orbit as compared to the circumstellar disks. We defer a discussion of this to a later paper.
It is clear that there is a numerical dependence to this phenomenon which we discuss in \S \ref{reso}, but there is a correspondence with the physical behavior of disks as well. The radius and mass of a fragmenting disk are likely to influence the multiplicity outcome of a real system. Cores with high values of $\xi$ that form binaries early in our numerical experiments correspond to cores whose disks fragment into binaries at small physical size scales, where the disk may only be a few fragment Hill radii wide, and contain a relatively small number of Jeans masses. It is possible that at these size scales, numerous bound clumps in a disk might well merge leaving behind a lower multiplicity system than one in which the ratio of disk size to Hill radii or mass to Jeans mass is higher.
\begin{figure}
\centering
\includegraphics[scale=0.60]{fig7.pdf}
\caption{ Azimuthal averages of different components of torque expressed as an effective $\alpha$ (equation \ref{stressalpha}) for run \#8. The straight line, $\alpha_d$ (equation \ref{alphascale}) is plotted for comparison. The agreement between the analytic value of $\alpha_d$ and the combined contribution from the other components is best near the expected disk radius $R_{\rm k,in}$.}
\label{azimalpha}
\end{figure}
\subsection{ Gravitational Torques and Effective $\alpha$} \label{torques}
We verify that the accretion observed in our disks is generated by physical torques by computing the net torque in the disk.
It is convenient to analyze the torques in terms of the stress tensor, $T_{R\phi}$, which is made up of two components: large scale gravitational torques and Reynolds stresses. Following \cite{LodRi05} we define:
\begin{equation} \label{stresstens}
T_{R\phi} = \int{\frac{g_R g_\phi}{4 \pi G} dz} + \Sigma \delta \mathbf{v}_R \delta \mathbf{v}_\phi,
\end{equation}
where $\delta \mathbf{v} = \mathbf{v} - \bar{v}$. In practice, we set $\delta \mathbf{v}_R = \mathbf{v}_R$, while $\delta \mathbf{v}_\phi$ is calculated with respect to the azimuthal average of the rotational velocity at each radius. In reality there is an extra viscous term attributable to numerical diffusion. We discuss the importance of this term in \S \ref{reso}.
The first term in equation (\ref{stresstens}) represents torques due to large scale density fluctuations in spiral arms, while the second is due to Reynolds stresses from deviations in the velocity field from a Keplerian (or at least radial) velocity profile. To facilitate comparison with analytic models, the torques can be represented as an effective $\alpha$ where:
\begin{equation}\label{stressalpha}
T_{R\phi} = \left|{\frac{{\rm d ln}\Omega}{{\rm d ln} R}}\right| \alpha \Sigma c_s^2
\end{equation}
We can compare these torques to the characteristic disk $\alpha_d$ in equation (\ref{alphascale}). Although there is variability in the disk accretion with time, it is consistent with a constant rate over long timescales.
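In outline, this measurement reduces to the following post-processing step (a sketch with assumed array inputs; the vertical integral is approximated by a cell sum, $\delta v_R = v_R$ as in the text, and a Keplerian $|{\rm d}\ln\Omega/{\rm d}\ln R| = 3/2$ is taken as the default):
\begin{verbatim}
import numpy as np

G = 6.674e-8  # cgs

def effective_alpha(gR, gphi, vR, vphi, vphi_bar, sigma, c_s, dz,
                    dlnOmega_dlnR=-1.5):
    # Gravitational stress: vertical sum of g_R g_phi / (4 pi G).
    grav = (gR * gphi).sum(axis=-1) * dz / (4.0 * np.pi * G)
    # Reynolds stress: Sigma dv_R dv_phi, with dv_phi measured
    # relative to the azimuthally averaged rotation at each radius.
    reyn = sigma * vR * (vphi - vphi_bar)
    return (grav + reyn) / (abs(dlnOmega_dlnR) * sigma * c_s**2)
\end{verbatim}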
Figure \ref{azimalpha} compares $\alpha_d$ to the azimuthal average of the physical torques for one of our runs. We also show the expected contribution from numerical diffusion (see \S \ref{reso}). The accretion expected from these three components is consistent with the time averaged total accretion rate onto the star. Due to the short term variability of the accretion rate, the two do not match up exactly. It is interesting to note the radial dependence of the Reynolds stress term, which in the inner region decays rapidly, before rising again, due to the presence of spiral arms. In both the azimuthal average and the two dimensional distribution we see that at small radii numerical diffusion dominates, whereas at large radii deviations in the azimuthal velocity which generate Reynolds stresses are spatially correlated with the spiral arms.
\subsection{ Vertical Structure} \label{vertical_struct}
When the disks reach sufficient resolution, we can resolve the vertical motions and structure of the disk. We defer a detailed analysis of the vertical structures to a later paper, but discuss several general trends here. Depending on the run parameters, the disk scale height is ultimately resolved by 10-25 grid cells. We observe only moderate transonic motions in the vertical direction of order $\mathcal{M} \sim 1-2$. Figure \ref{non-super} shows two slices of the z-component of the velocity field for a single system, one through the X-Z plane, and the other through the disk midplane. Although there is significant substructure, the motions are mostly transonic.
\begin{figure}
\centering
\includegraphics[scale=0.7]{fig8.pdf}
\caption{Cuts along the vertical axis and disk midplane of the vertical velocity, normalized to the disk sound speed. Clearly most of the vertical motions in the disk are transonic, although at the edges of the disk the velocities exceed $\mathcal{M} \sim 1$. }
\label{non-super}
\end{figure}
We also observe a dichotomy in the vertical structure between single and binary disks. Although the values of $\xi$ and $\Gamma$ should dictate the scaleheight (see equation \ref{HonRfromGamma/Xi}), and therefore higher $\xi$ disks which become binaries should have smaller scaleheights to begin with, we observe a transition in scaleheight when a disk fragments and becomes a binary. Large plumes seen in single disks, like those shown in Figure \ref{scaleheight}, contain relatively low density, high angular momentum material being flung off of the disk. The relatively sharp outer edges are created by the accretion shock of infalling material onto these plumes. We observe small scale circulation patterns which support these long lived structures. Disks surrounding binaries, by comparison, remain relatively thin; in particular, while the circumprimary disks are slightly puffier than expected from pure thermal support, the circumbinary disk (when present) is sufficiently thin that we do not consider it well resolved. This implies that the effective $\Gamma$ that binary disks see declines more than the effective $\xi$ does, according to equation (\ref{HonRfromGamma/Xi}). This is consistent with the statement that some of the infalling angular momentum is transferred into the orbit instead of onto the disks themselves.
\section{ Caveats and Numerical Effects}\label{caveats}
\subsection{Isothermal equation of state} \label{thermo}
Many simulations have shown the dramatic effects that thermodynamics has on disk behavior \citep{2000ApJ...528..325B, Gam2001, Rice05, LodRi05, 2006ApJ...651..517B, Krumholz2007a, 2009arXiv0904.2004O}. Since we are concerned with fragmentation, we must be aware of the potential dependencies of the fragmentation boundary on cooling physics. Starting with \cite{Gam2001}, there has been much discussion of the ``cooling time constraint'', which states that a disk with $Q \sim 1$ will only fragment if the cooling time is short. While this is a valuable analysis tool for predicting the evolution of a system from a snapshot and for quantifying the feedback from gravito-turbulence, for most of the protostellar disks that we are modeling, the cooling time at the location of fragmentation is short. In the outer radii of protostellar disks in general, irradiation is the dominant source of heating \citep{1997ApJ...474..397D,ML2005,Krumholz2007a,KMK08}. In fact, even a low temperature radiation bath can contribute significantly to the heat budget of disks \citep{2008ApJ...673.1138C}. Such passively heated (through irradiation) disks behave more like isothermal disks than barotropic disks, because the energy generation due to viscous dissipation is small compared to the energy density due to radiation. Consequently, feedback from accretion in the midplane does not alter the disk temperature significantly. For realistic opacity laws, disks which are dominated by irradiation cool on timescales much faster than the orbital period at large radii. Numerical simulations such as those of \cite{Krumholz2007a} find that strongly irradiated disks have a nearly isothermal equation of state. Indeed, our morphological outcomes are similar to those of \cite{Krumholz2007a} for comparable values of $\xi$.
Another possible concern is the lack of a radial temperature gradient, independent of the equation of state. Both passively and actively (through viscous dissipation) heated disks will be warmer at small radii, though the radial dependence changes with the heating mechanism. Actively heated disks typically have steeper gradients. In either case, it is possible that the warmer inner disk would be stabilized against gravitational instability, slowing down accretion. In these experiments, we find that spiral arms persist in regions where the average value of $Q$ is well above that at which instability is presumed to set in, with local values exceeding this by an order of magnitude. It seems plausible that, due to the global nature of the low-$m$ spiral modes, angular momentum transport may still occur in regions one would assume stable against GI. As discussed by \cite{ARS89}, $m=1$ modes can have appreciable growth rates for remarkably high values of $Q$, even when the evanescent region extends to as much as $70\%$ of the disk radius. In the event that the GI does shut off due to increasing temperature, material from the outer, unstable portion of the disk will likely accumulate until the critical surface density for GI is reached, causing $k_\Sigma$ to steepen. Further numerical investigation of this is necessary; we point readers to high resolution studies of disks with radial temperature gradients such as \citet{2009SciKrum}, \citet{2007ApJ...665.1254B}, \citet{2008ApJ...673.1138C}, and \citet{Krumholz2007a}.
In order to test the effects of the gas stiffening we introduced to avoid
unphysical merging of our sink particles (see \S \ref{code_details}), we
have conducted several purely isothermal experiments in which it is
turned off. The removal of the barotropic switch artificially enhances accretion at early times, because sink particles formed via numerical fragmentation merge with the central star. Removing the barotropic switch is equivalent to increasing the resolution of the fragmentation process, but decreasing the resolution of the scale of fragmentation relative to $\lambda$, the disk resolution. Using a barotropic switch allows the disk to reach a higher $\lambda$ before fragmentation sets in for a given set of parameters.
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig9.pdf}
\caption{Density slices showing vertical structure in a single and a binary disk. The top panel is a single star with $\xi = 1.6, \Gamma = 0.009$, while the bottom is a fragmenting binary system with $\xi = 23.4, \Gamma = 0.008$. The extended material in the binary system is generated by a combination of large scale circumbinary torques and the infalling material. The colorscale is logarithmic. The box sizes are scaled to $1.5 R_{\rm k,in}$ in the plane of the disk.}
\label{scaleheight}
\end{figure}
\subsection{Insensitivity of disk dynamics to core temperature}\label{isothermal}
Our parameterization of disk dynamics is based on the idea that thermodynamics can be accounted for by one parameter, $\xi$, which compares the accretion rate to the disk sound speed $c_{s,d}$. A basic corollary of this notion is that the core temperature has no effect on disk dynamics, except insofar as it affects the accretion rate. Because our disks and cores are at the same temperature in the standard runs, $\xi$ could equally well have been defined with respect to the core sound speed; the question therefore arises whether the accretion rate should be normalized to the disk sound speed, $c_{s,d}$, or to the core sound speed, $c_{s,{\rm core}}$. To test this, we ran simulations in which $c_{s,d}$ and $c_{s,{\rm core}}$ differed: we imposed a change in temperature over a range of radii in which the infall is highly supersonic.
To demonstrate that $\xi$ defined with respect to the disk sound speed is indeed the better predictor of the morphological and physical behavior of the disk, we compare $\lambda$ at the time of fragmentation, $\lambda_f$, to both $\xi$ and the equivalent parameter defined in terms of the core sound speed, $\xi_{\rm core} = G \dot{M}/c_{s,{\rm core}}^3$, for runs with similar values of $\Gamma$. We observe a correlation between resolution at the time of fragmentation and $\xi$ at fixed $\Gamma$, and so if core temperature is irrelevant, these runs should follow the same trend.
Figure \ref{heat_complot}, shows that $\lambda_f$ correlates extremely well with $\xi$ at similar values of $\Gamma$, but poorly with $\xi_{\rm core}$ for the heated runs. The scaling of $\lambda_f$ with $\xi$ is also related to the existence of an upper limit on $\mu$ as a function of $\Gamma$: disks with higher $\xi$ approach this critical value of $\mu$ faster, and thus at lower $\lambda$.
\begin{figure}
\centering
\includegraphics[scale=0.65]{fig10.pdf}
\caption{Correlation between $\lambda_f$ and the infalling accretion rate for heated and non-heated runs with comparable $\Gamma$. Plus symbols indicate non-heated runs; crosses are heated runs. The arrows and red crosses indicate the positions of the runs evaluated with respect to $\xi_{\rm core}$. Runs shown have $\Gamma$ values ranging from 0.006 to 0.009. The shaded region illustrates the scaling $\lambda_f \propto \xi^{-1}$. This scaling is related not only to the existence of a critical value of $\mu$, but is also tied to the effect of resolution on fragmentation.}
\label{heat_complot}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.65]{fig11.pdf}
\caption{At left: a snapshot of the standard resolution of run \#16 shortly after binary formation. At right, the same run at double the resolution. Because of the self-similar infall prescription, we show the runs at the same numerical resolution, as time and resolution are interchangeable. In this case the high-resolution run has taken twice the elapsed ``time'' to reach this state. The two runs are morphologically similar and share expected disk properties. }
\label{reso_snapshot}
\end{figure*}
\subsection{Resolution}\label{reso}
We have shown in \S \ref{torques} that the observed accretion is consistent with the combined gravitational torques and Reynolds stresses, and that these are dominant over those expected purely from numerical diffusion. Because of the self-similar infall, convergence to a steady state within a given run is a good indicator that numerics are not determining our result; in effect, every run is a resolution study. That we observe a range of behavior at the same resolution but different input parameters also implies that numerical effects are sub-dominant. We consider our disks to begin to be resolved when they reach radii such that $R_{\rm k,in}/\Delta x \geq 30$. The effective numerical diffusivity, which we plot in Figure \ref{azimalpha}, has been estimated by \cite{Krumholz04} for ORION. Specifically they find that:
\begin{equation} \label{alpha_num}
\alpha_{\rm num} \approx 78 \frac{r_B}{\Delta x} \left({\frac{r}{\Delta x}}\right)^{-3.85}.
\end{equation}
where
\begin{equation}
r_B = \frac{G M_* }{ c_s^2}
\end{equation}
is the standard Bondi radius.
For our typical star and disk parameters, this implies numerical $\alpha$'s of order $0.1-0.3$ at the minimum radius at which we are resolved. Thus, for our ``low'' accretion rate cases, at most $1/3$ of our effective alpha could be attributed to numerical effects at low resolution. See discussions by \cite{ Offner08,Krumholz2007a, Krumholz04} for a detailed analysis of disk resolution requirements. At our resolution of 50-100 radial cells across the disk, the dominant effect of numerical diffusion is likely a suppression of fragmentation \citep{2006ApJ...647..997S,Nelson2006}. Because the isothermal spiral arms can become very narrow prior to fragmentation, numerical diffusion across an arm may smear out some overdensities faster than they collapse. Therefore the conclusions regarding the fragmentation boundary are likely conservative.
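As a quick illustration of the magnitude of this effect, the following sketch (Python; the ratio $r_B/\Delta x \sim 10^3$ is an assumed, illustrative value rather than one taken from a particular run) evaluates Equation \eqref{alpha_num} at the minimum resolved radius:
\begin{verbatim}
def alpha_num(rB_over_dx, r_over_dx):
    # Krumholz et al. (2004) estimate of the numerical diffusivity
    return 78.0 * rB_over_dx * r_over_dx**(-3.85)

# Evaluated at the minimum resolved radius r = 30 dx, an assumed Bondi
# radius of ~1e3 grid cells gives alpha_num ~ 0.16, within the quoted range.
print(alpha_num(1.0e3, 30.0))
\end{verbatim}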
We explicitly demonstrate morphological convergence in one of our binary runs. We rerun run \#16 (as labelled in Table \ref{restab}) at double the resolution ($128^3$ with 10 levels of refinement as opposed to 9). Increasing the physical resolution also decreases the code time step proportionally, so that the ratio of the timestep to the orbital period as a function of $\lambda$ is preserved. In fact, little can differ between the runs at the two resolutions at the same effective $\lambda$.
The two runs have the same morphology and characteristic disk properties as a function of $\lambda$, as expected. We show in Figure \ref{reso_snapshot} snapshots of the standard and high resolution runs. The standard resolution run (left) is at twice the elapsed ``time'' of the high resolution one (right), and so at the same numerical resolution, $\lambda$. We confirm that the mass accretion rate is consistent between the two runs: at the snapshots shown the mass ratio of the lower resolution run is 0.46, while the higher resolution run is 0.48. We consider this variation to be within the expected variation of the parameters (see \S \ref{selfsim_idea}). To the extent that numerical artifacts are seeding instabilities, we expect some stochasticity in the details of the fragmentation between any two runs. Although the effect is small, it is also possible that since the physical size of the disk (and the radius from which material is currently accreting) relative to the box size is larger at the same value of $\lambda$ for the low resolution run, the large scale quadrupole potential from the image masses is stronger in the low resolution case.
We also rerun one of our multiple-system runs at half the resolution. Again we find the same morphological outcome. We find that the disks behave equivalently at the same radial resolution, although the elapsed time and dimensional masses are different.
The scaling of $\lambda_f$ with $\xi$ in Figure \ref{heat_complot} also demonstrates that resolution plays a role in determining when disks fragment. Although the infall is self-similar, the disks approach a steady state as parameters like $Q$ and $\mu$ evolve toward constant values. This evolution, and sometimes fragmentation, is influenced by the interplay between decaying numerical viscosity and increasing gravitational instability in the disk as a function of $\lambda$.
\section{Comparison to Previous Studies}\label{prevwork}
The literature is replete with useful simulations of protostellar and protoplanetary disks at various stages of evolution; however, most involve isolated disks, without infall at large radii \citep{1994ApJ...436..335L,1996ApJ...456..279L,Rice05, LodRi05, 2006A&A...457..343F, 2006ApJ...647..997S, 2006ApJ...651..517B, 2007MNRAS.374..590L, 2008ApJ...673.1138C}. These simulations include a wide range of physics, from magnetic fields to radiative transfer, but due to the lack of infalling matter, they neither develop disk profiles (surface density, temperature) self-consistently, nor do they enter the regime of interest in this work: rapid accretion in the embedded phase. For a review of many of the issues addressed by current GI disk simulations, see \cite{2007prpl.conf..607D}.
There are a few simulations of self-consistent growth and evolution \citep{VorBas07, Vorobyov08}. These are ideal for following the long term evolution of more quiescent, lower mass disks. However, because they are two-dimensional and lack a moving central potential, they cannot follow the evolution of non-axisymmetric modes which are driven by the displacement of the central star from the center of mass, nor can they accurately simulate the formation of multiple systems. Other authors have investigated the initial stage of core collapse onto disks \citep{BanPud07,1999ApJ...526..307T}; however, these authors focus on the effects of magnetic fields and on fragmentation of the core prior to disk formation, respectively. \cite{1999ApJ...523L.155T} and \cite{2003ApJ...595..913M} have also investigated the collapse of cores into disks and binaries, though they do not investigate many disk properties (see \S \ref{bonnorebert} for detailed comparisons). \cite{Krumholz2007a} and \cite{2009SciKrum} have conducted three dimensional radiative transfer calculations, but due to computational cost can only investigate a small number of initial conditions.
In addition to numerical work, there is a range of semi-analytic models that follow the time evolution of accreting disks \citep[KMK08]{2005A&A...442..703H}. KMK08 examined the evolution of embedded, massive disks in order to predict regimes in which gravitational instability, fragmentation of the disk, and binary formation were likely. They concluded that disks around stars greater than $1$--$2\,M_{\odot}$ were likely subject to strong gravitational instability, and that a large fraction of O and B stars might be in disk-born binary systems. \cite{2005A&A...442..703H} have also made detailed models of disk evolution, though they examine less massive disks and do not explicitly include gravitational instability or disk irradiation.
In KMK08 we hypothesized that the disk fragmentation boundary could be drawn in $Q$--$\mu$ parameter space, where small scale fragmentation was characterized by low values of $Q$ and binary formation by high values of $\mu$. Due to the self-similar nature of these simulations, the distinction between these two types of fragmentation is difficult, as the continued accretion of high angular momentum material causes the newly formed fragment to preferentially accrete material and grow in mass \citep{1994MNRAS.269L..45B}. Moreover, because the disks are massive and thick, the isolation mass of fragments is comparable to the disk mass, and so there is little to limit the continued growth of fragments.
\subsection{ The evolution of the accretion parameters in the isothermal collapse of a Bonnor-Ebert Sphere}\label{bonnorebert}
\begin{figure}
\centering
\includegraphics[scale=0.4]{fig12.pdf}
\caption{Trajectory of a Bonnor-Ebert sphere through $\xi-\Gamma$ space. The two lines show values of $\beta = 0.02, 0.08$ as defined in \cite{2003ApJ...595..913M}. Arrows indicate the direction of time evolution from $t/t_{\rm ff,0} = 0-5$. $t_{\rm ff,0}$ is evaluated with respect to the central density, and arrows are labelled with the fraction of the total Bonnor-Ebert mass which has collapsed up to this point. The dotted line shows the fragmentation boundary from Figure \ref{xi_gamma}. }
\label{bonnortrack}
\end{figure}
While self-similar scenarios are useful for numerical experiments, they do not accurately capture the complexities of star formation. In particular, in realistic cores, $\xi$ and $\Gamma$ evolve in time. Therefore it is interesting to chart the evolution of a more realistic (though still idealized) core through our parameter space. We consider the isothermal collapse of a Bonnor-Ebert sphere initially in solid-body rotation \citep{1956MNRAS.116..351B}. Such analysis allows us to compare our results with other numerical simulations that have considered global collapse and binary formation such as \cite{2003ApJ...595..913M} via the parameters laid out in \cite{1999ApJ...526..307T}.
We use the collapse calculation of a 10\% overdense, non-rotating Bonnor-Ebert sphere from \cite{1993ApJ...416..303F}, and impose angular momentum on each shell to emulate solid-body rotation. Figure \ref{bonnortrack} shows the trajectory of a rotating Bonnor-Ebert sphere through $\xi-\Gamma$ parameter space as a function of the freefall time $ t/t_{\rm ff,0}$, for two different rotation rates corresponding to $\beta = E_{\rm{rot}}/ E_{\rm{grav}} = 0.02, 0.08$. The free fall time is evaluated with respect to the central density.
The early spike in $\xi$ is due to the collapse of the inner flattened core at early times. Similarly, the corresponding decline in $\Gamma$ is a result of the mass enclosed increasing more rapidly than the infalling angular momentum. The long period of decreasing $\xi$ and constant $\Gamma$ arises from the balance between larger radii collapsing to contribute more angular momentum, and the slow decline of the accretion rate. This trajectory may explain several features of the fragmentation seen in \cite{2003ApJ...595..913M}. Although not accounted for in Figure \ref{bonnortrack}, cores with high values of $\beta$ have accretion rates suppressed at early times due to the excess rotational support, while those with low $\beta$ collapse at the full rate seen in \cite{1993ApJ...416..303F}. In cores with small $\beta$, the high value of $\xi$ may drive fragmentation while the disk is young. Alternatively, for modest values of $\beta$, $\Gamma$ may be sufficiently low while $\xi$ is declining that the disk mass surpasses the critical fragmentation threshold, and fragments via the so-called satellite formation mechanism. For very large values of $\beta$, a core which is only moderately unstable will oscillate and not collapse, as seen in \cite{2003ApJ...595..913M} for $\beta > 0.3$.
\section{Discussion}
We have examined the behavior of gravitationally unstable accretion disks using three-dimensional, AMR numerical experiments with the code ORION. We characterize each experiment as a function of two dimensionless parameters, $\xi$ and $\Gamma$, which are dimensionless accretion rates comparing the infall rate to the disk sound speed and the orbital period, respectively. We find that these two global variables can be used to predict disk behavior, morphological outcomes, and disk-to-star accretion rates and mass ratios. In this first paper in a series we discuss the main effects of varying these parameters. Our main conclusions are:
\begin{itemize}
\item{Disks can process material falling in at up to $\xi \sim 2-3$ without fragmenting. Although increasing $\Gamma$ stabilizes disks at fixed values of $\xi$, those fed at $\xi > 3$ for many orbits tend to fragment into a multiple or binary system.}
\item{Disks can reach a statistical steady state where mass is processed through the disk at a fixed fraction of the accretion rate onto the disk. The discrepancy between these two rates, $\mu$, scales with $\Gamma$; disks with larger values of $\Gamma$ can sustain larger maximum disk masses before becoming unstable. The highest disk mass reached in a non-fragmenting system is $\mu \approx 0.55$ or $M_* \sim M_d$.}
\item{Gravitational torques can easily produce effective accretion rates consistent with a time averaged $\alpha \approx 1$.}
\item{The minimum value of $Q$ at which disks begin to fragment is roughly inversely proportional to the disk scale height. It is therefore important to consider not only $Q$ but another dynamical parameter when predicting fragmentation, at least in disks which are not thin and dominated by axisymmetric modes.}
\item{The general disk morphology and multiplicity is consistent between isothermal runs and irradiated disks with similar effective values of $\xi$.}
\end{itemize}
These conclusions are subject to the qualification that fragmentation occurs for lower values of $\xi$ as the disk resolution increases, and so it is possible that the location of the fragmentation boundary will shift with increasing resolution. However, we expect that our results are representative of real disks and other numerical simulations insofar as they have a comparable dynamic range of the parameters relevant to fragmentation, such as $\lambda_J/\lambda$.
\section{Acknowledgments}
The authors would like to thank Chris McKee, Jonathan Dursi, Stella Offner, Andrew Cunningham, Norman Murray, and Yanqin Wu for insightful discussions and technical assistance. KMK was funded in part by a U. of T. fellowship. CDM received support through an Ontario Early Research Award, and by NSERC Canada. MRK received support for this work from an Alfred P. Sloan Fellowship, from NASA, as part of the Spitzer Theoretical
Research Program, through a contract issued by the JPL, and from the National Science Foundation, through grant AST-0807739. RIK received support for this work provided by the US Department of Energy at Lawrence Livermore National Laboratory under contract B-542762; NASA through ATFP grants NAG 05-12042 and NNG 06-GH96G and NSF through grant AST-0606831. All computations were performed on the Canadian Institute for Theoretical Astrophysics Sunnyvale cluster, which is funded by the Canada Foundation for Innovation, the Ontario Innovation Trust, and the Ontario Research Fund. This research was supported in part by the National Science Foundation under Grant No. PHY05-51164.
\section{Record gaps between primes}
\begin{center}TABLE 1 \\Maximal gaps between primes below $4\times10^{18}$ \
\cite[OEIS A005250]{nicely,toes,oeis}
\\[0.5em]
\small
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Consecutive primes} & Gap $g$ &
\multicolumn{2}{r}{Consecutive primes~\phantom{\fbox{$11111^1$}}} & Gap $g$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
2 & 3 & 1 & 25056082087 & 25056082543 & 456 \\
3 & 5 & 2 & 42652618343 & 42652618807 & 464 \\
7 & 11 & 4 & 127976334671 & 127976335139 & 468 \\
23 & 29 & 6 & 182226896239 & 182226896713 & 474 \\
89 & 97 & 8 & 241160624143 & 241160624629 & 486 \\
113 & 127 & 14 & 297501075799 & 297501076289 & 490 \\
523 & 541 & 18 & 303371455241 & 303371455741 & 500 \\
887 & 907 & 20 & 304599508537 & 304599509051 & 514 \\
1129 & 1151 & 22 & 416608695821 & 416608696337 & 516 \\
1327 & 1361 & 34 & 461690510011 & 461690510543 & 532 \\
9551 & 9587 & 36 & 614487453523 & 614487454057 & 534 \\
15683 & 15727 & 44 & 738832927927 & 738832928467 & 540 \\
19609 & 19661 & 52 & 1346294310749 & 1346294311331 & 582 \\
31397 & 31469 & 72 & 1408695493609 & 1408695494197 & 588 \\
155921 & 156007 & 86 & 1968188556461 & 1968188557063 & 602 \\
360653 & 360749 & 96 & 2614941710599 & 2614941711251 & 652 \\
370261 & 370373 & 112 & 7177162611713 & 7177162612387 & 674 \\
492113 & 492227 & 114 & 13829048559701 & 13829048560417 & 716 \\
1349533 & 1349651 & 118 & 19581334192423 & 19581334193189 & 766 \\
1357201 & 1357333 & 132 & 42842283925351 & 42842283926129 & 778 \\
2010733 & 2010881 & 148 & 90874329411493 & 90874329412297 & 804 \\
4652353 & 4652507 & 154 & 171231342420521 & 171231342421327 & 806 \\
17051707 & 17051887 & 180 & 218209405436543 & 218209405437449 & 906 \\
20831323 & 20831533 & 210 & 1189459969825483 & 1189459969826399 & 916 \\
47326693 & 47326913 & 220 & 1686994940955803 & 1686994940956727 & 924 \\
122164747 & 122164969 & 222 & 1693182318746371 & 1693182318747503 & 1132 \\
189695659 & 189695893 & 234 & 43841547845541059 & 43841547845542243 & 1184 \\
191912783 & 191913031 & 248 & 55350776431903243 & 55350776431904441 & 1198 \\
387096133 & 387096383 & 250 & 80873624627234849 & 80873624627236069 & 1220 \\
436273009 & 436273291 & 282 & 203986478517455989 & 203986478517457213 & 1224 \\
1294268491 & 1294268779 & 288 & 218034721194214273 & 218034721194215521 & 1248 \\
1453168141 & 1453168433 & 292 & 305405826521087869 & 305405826521089141 & 1272 \\
2300942549 & 2300942869 & 320 & 352521223451364323 & 352521223451365651 & 1328 \\
3842610773 & 3842611109 & 336 & 401429925999153707 & 401429925999155063 & 1356 \\
4302407359 & 4302407713 & 354 & 418032645936712127 & 418032645936713497 & 1370 \\
10726904659 & 10726905041 & 382 & 804212830686677669 & 804212830686679111 & 1442 \\
20678048297 & 20678048681 & 384 &1425172824437699411 & 1425172824437700887 & 1476 \\
22367084959 & 22367085353 & 394 & & & \\
\hline
\end{tabular}
\end{center}
\normalsize
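The records in Table 1 are straightforward to verify by direct enumeration over modest ranges; a minimal sketch (Python with SymPy; the bound $10^6$ is illustrative, far below the $4\times10^{18}$ search limit of the table) is:
\begin{verbatim}
from sympy import nextprime

def record_prime_gaps(limit):
    # Yield (p, q, q - p) whenever the gap to the next prime sets a record.
    best, p = 0, 2
    while p < limit:
        q = nextprime(p)
        if q - p > best:
            best = q - p
            yield p, q, best
        p = q

for row in record_prime_gaps(10**6):
    print(row)   # reproduces Table 1 up to (492113, 492227, 114)
\end{verbatim}
The analogous search over prime constellations reproduces the tuple tables in the following sections.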
\newpage
\section{Record gaps between twin primes}
\begin{center}TABLE 2 \\Maximal gaps between twin primes $\{p$, $p+2\}$ \
\cite[OEIS A113274]{fischer,kourbatov2013,rr,oeis} \\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in twin pairs} & Gap $g_2$ &
\multicolumn{2}{r}{Initial primes in twin pairs~\phantom{\fbox{$1^1$}}} & Gap $g_2$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
3 & 5 & 2 & 24857578817 & 24857585369 & 6552\\
5 & 11 & 6 & 40253418059 & 40253424707 & 6648\\
17 & 29 & 12 & 42441715487 & 42441722537 & 7050\\
41 & 59 & 18 & 43725662621 & 43725670601 & 7980\\
71 & 101 & 30 & 65095731749 & 65095739789 & 8040\\
311 & 347 & 36 & 134037421667 & 134037430661 & 8994\\
347 & 419 & 72 & 198311685749 & 198311695061 & 9312\\
659 & 809 & 150 & 223093059731 & 223093069049 & 9318\\
2381 & 2549 & 168 & 353503437239 & 353503447439 & 10200\\
5879 & 6089 & 210 & 484797803249 & 484797813587 & 10338\\
13397 & 13679 & 282 & 638432376191 & 638432386859 & 10668\\
18539 & 18911 & 372 & 784468515221 & 784468525931 & 10710\\
24419 & 24917 & 498 & 794623899269 & 794623910657 & 11388\\
62297 & 62927 & 630 & 1246446371789 & 1246446383771 & 11982\\
187907 & 188831 & 924 & 1344856591289 & 1344856603427 & 12138\\
687521 & 688451 & 930 & 1496875686461 & 1496875698749 & 12288\\
688451 & 689459 & 1008 & 2156652267611 & 2156652280241 & 12630\\
850349 & 851801 & 1452 & 2435613754109 & 2435613767159 & 13050\\
2868959 & 2870471 & 1512 & 4491437003327 & 4491437017589 & 14262\\
4869911 & 4871441 & 1530 & 13104143169251 & 13104143183687 & 14436\\
9923987 & 9925709 & 1722 & 14437327538267 & 14437327553219 & 14952\\
14656517 & 14658419 & 1902 & 18306891187511 & 18306891202907 & 15396\\
17382479 & 17384669 & 2190 & 18853633225211 & 18853633240931 & 15720\\
30752231 & 30754487 & 2256 & 23275487664899 & 23275487681261 & 16362\\
32822369 & 32825201 & 2832 & 23634280586867 & 23634280603289 & 16422\\
96894041 & 96896909 & 2868 & 38533601831027 & 38533601847617 & 16590\\
136283429 & 136286441 & 3012 & 43697538391391 & 43697538408287 & 16896\\
234966929 & 234970031 & 3102 & 56484333976919 & 56484333994001 & 17082\\
248641037 & 248644217 & 3180 & 74668675816277 & 74668675834661 & 18384\\
255949949 & 255953429 & 3480 & 116741875898981 & 116741875918727 & 19746\\
390817727 & 390821531 & 3804 & 136391104728629 & 136391104748621 & 19992\\
698542487 & 698547257 & 4770 & 221346439666109 & 221346439686641 & 20532\\
2466641069 & 2466646361 & 5292 & 353971046703347 & 353971046725277 & 21930\\
4289385521 & 4289391551 & 6030 & 450811253543219 & 450811253565767 & 22548\\
19181736269 & 19181742551 & 6282 & 742914612256169 & 742914612279527 & 23358\\
24215097497 & 24215103971 & 6474 &1121784847637957 & 1121784847661339& 23382\\
\hline
\end{tabular}
\end{center}
\normalsize
\newpage
\section{Record gaps between prime triplets}
\begin{center}TABLE 3.1 \\Maximal gaps between prime triplets $\{p$, $p+2$, $p+6\}$ \ [OEIS A201598] \\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_3$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_3$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
5 & 11 & 6 & 242419361 & 242454281 & 34920 \\
17 & 41 & 24 & 913183487 & 913222307 & 38820 \\
41 & 101 & 60 & 1139296721 & 1139336111 & 39390 \\
107 & 191 & 84 & 2146630637 & 2146672391 & 41754 \\
347 & 461 & 114 & 2188525331 & 2188568351 & 43020 \\
461 & 641 & 180 & 3207540881 & 3207585191 & 44310 \\
881 & 1091 & 210 & 3577586921 & 3577639421 & 52500 \\
1607 & 1871 & 264 & 7274246711 & 7274318057 & 71346 \\
2267 & 2657 & 390 & 33115389407 & 33115467521 & 78114 \\
2687 & 3251 & 564 & 97128744521 & 97128825371 & 80850 \\
6197 & 6827 & 630 & 99216417017 & 99216500057 & 83040 \\
6827 & 7877 & 1050 & 103205810327 & 103205893751 & 83424 \\
39227 & 40427 & 1200 & 133645751381 & 133645853711 & 102330 \\
46181 & 47711 & 1530 & 373845384527 & 373845494147 & 109620 \\
56891 & 58907 & 2016 & 412647825677 & 412647937127 & 111450 \\
83267 & 86111 & 2844 & 413307596957 & 413307728921 & 131964 \\
167621 & 171047 & 3426 & 1368748574441 & 1368748707197 & 132756 \\
375251 & 379007 & 3756 & 1862944563707 & 1862944700711 & 137004 \\
381527 & 385391 & 3864 & 2368150202501 & 2368150349687 & 147186 \\
549161 & 553097 & 3936 & 2370801522107 & 2370801671081 & 148974 \\
741677 & 745751 & 4074 & 3710432509181 & 3710432675231 & 166050 \\
805031 & 809141 & 4110 & 5235737405807 & 5235737580317 & 174510 \\
931571 & 937661 & 6090 & 8615518909601 & 8615519100521 & 190920 \\
2095361 & 2103611 & 8250 & 10423696470287 & 10423696665227 & 194940 \\
2428451 & 2437691 & 9240 & 10660256412977 & 10660256613551 & 200574 \\
4769111 & 4778381 & 9270 & 11602981439237 & 11602981647011 & 207774 \\
4938287 & 4948631 & 10344 & 21824373608561 & 21824373830087 & 221526 \\
12300641 & 12311147 & 10506 & 36385356561077 & 36385356802337 & 241260 \\
12652457 & 12663191 & 10734 & 81232357111331 & 81232357386611 & 275280 \\
13430171 & 13441091 & 10920 &186584419495421 & 186584419772321 & 276900 \\
14094797 & 14107727 & 12930 &297164678680151 & 297164678975621 & 295470 \\
18074027 & 18089231 & 15204 &428204300934581 & 428204301233081 & 298500 \\
29480651 & 29500841 & 20190 &450907041535541 & 450907041850547 & 315006 \\
107379731 & 107400017 & 20286 &464151342563471 & 464151342898121 & 334650 \\
138778301 & 138799517 & 21216 &484860391301771 & 484860391645037 & 343266 \\
156377861 & 156403607 & 25746 &666901733009921 & 666901733361947 & 352026 \\
\hline
\end{tabular}
\end{center}
\normalsize
\newpage
\begin{center}TABLE 3.2 \\Maximal gaps between prime triplets $\{p$, $p+4$, $p+6\}$ \ [OEIS A201596]
\\[0.5em]
\small
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_3$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_3$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
7 & 13 & 6 & 2562574867 & 2562620653 & 45786 \\
13 & 37 & 24 & 2985876133 & 2985923323 & 47190 \\
37 & 67 & 30 & 4760009587 & 4760057833 & 48246 \\
103 & 193 & 90 & 5557217797 & 5557277653 & 59856 \\
307 & 457 & 150 & 10481744677 & 10481806897 & 62220 \\
457 & 613 & 156 & 19587414277 & 19587476563 & 62286 \\
613 & 823 & 210 & 25302582667 & 25302648457 & 65790 \\
2137 & 2377 & 240 & 30944120407 & 30944191387 & 70980 \\
2377 & 2683 & 306 & 37638900283 & 37638972667 & 72384 \\
2797 & 3163 & 366 & 49356265723 & 49356340387 & 74664 \\
3463 & 3847 & 384 & 49428907933 & 49428989167 & 81234 \\
4783 & 5227 & 444 & 70192637737 & 70192720303 & 82566 \\
5737 & 6547 & 810 & 74734558567 & 74734648657 & 90090 \\
9433 & 10267 & 834 & 111228311647 & 111228407113 & 95466 \\
14557 & 15643 & 1086 & 134100150127 & 134100250717 & 100590 \\
24103 & 25303 & 1200 & 195126585733 & 195126688957 & 103224 \\
45817 & 47143 & 1326 & 239527477753 & 239527584553 & 106800 \\
52177 & 54493 & 2316 & 415890988417 & 415891106857 & 118440 \\
126487 & 130363 & 3876 & 688823669533 & 688823797237 & 127704 \\
317587 & 321817 & 4230 & 906056631937 & 906056767327 & 135390 \\
580687 & 585037 & 4350 & 926175746857 & 926175884923 & 138066 \\
715873 & 724117 & 8244 & 1157745737047 & 1157745878893 & 141846 \\
2719663 & 2728543 & 8880 & 1208782895053 & 1208783041927 & 146874 \\
6227563 & 6237013 & 9450 & 2124064384483 & 2124064533817 & 149334 \\
8114857 & 8125543 & 10686 & 2543551885573 & 2543552039053 & 153480 \\
10085623 & 10096573 & 10950 & 4321372168453 & 4321372359523 & 191070 \\
10137493 & 10149277 & 11784 & 6136808604343 & 6136808803753 & 199410 \\
18773137 & 18785953 & 12816 & 18292411110217 & 18292411310077 & 199860 \\
21297553 & 21311107 & 13554 & 19057076066317 & 19057076286553 & 220236 \\
25291363 & 25306867 & 15504 & 21794613251773 & 21794613477097 & 225324 \\
43472497 & 43488073 & 15576 & 35806145634613 & 35806145873077 & 238464 \\
52645423 & 52661677 & 16254 & 75359307977293 & 75359308223467 & 246174 \\
69718147 & 69734653 & 16506 & 89903831167897 & 89903831419687 & 251790 \\
80002627 & 80019223 & 16596 &125428917151957 & 125428917432697 & 280740 \\
89776327 & 89795773 & 19446 &194629563521143 & 194629563808363 & 287220 \\
90338953 & 90358897 & 19944 &367947033766573 & 367947034079923 & 313350 \\
109060027 & 109081543 & 21516 &376957618687747 & 376957619020813 & 333066 \\
148770907 & 148809247 & 38340 &483633763994653 & 483633764339287 & 344634 \\
1060162843 & 1060202833 & 39990 &539785800105313 & 539785800491887 & 386574 \\
1327914037 & 1327955593 & 41556 & \multicolumn{3}{c}{} \\
\hline
\end{tabular}
\end{center}
\normalsize
\newpage
\section{Record gaps between prime quadruplets}
\begin{center}TABLE 4 \\Maximal gaps between prime quadruplets $\{p$, $p+2$, $p+6$, $p+8\}$ \ [OEIS A113404]
\\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_4$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_4$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
5 & 11 & 6 & 3043111031 & 3043668371 & 557340 \\
11 & 101 & 90 & 3593321651 & 3593956781 & 635130 \\
191 & 821 & 630 & 5675642501 & 5676488561 & 846060 \\
821 & 1481 & 660 & 25346635661 & 25347516191 & 880530 \\
2081 & 3251 & 1170 & 27329170151 & 27330084401 & 914250 \\
3461 & 5651 & 2190 & 35643379901 & 35644302761 & 922860 \\
5651 & 9431 & 3780 & 56390149631 & 56391153821 & 1004190 \\
25301 & 31721 & 6420 & 60368686121 & 60369756611 & 1070490 \\
34841 & 43781 & 8940 & 71335575131 & 71336662541 & 1087410 \\
88811 & 97841 & 9030 & 76427973101 & 76429066451 & 1093350 \\
122201 & 135461 & 13260 & 87995596391 & 87996794651 & 1198260 \\
171161 & 187631 & 16470 & 96616771961 & 96618108401 & 1336440 \\
301991 & 326141 & 24150 & 151023350501 & 151024686971 & 1336470 \\
739391 & 768191 & 28800 & 164550390671 & 164551739111 & 1348440 \\
1410971 & 1440581 & 29610 & 171577885181 & 171579255431 & 1370250 \\
1468631 & 1508621 & 39990 & 210999769991 & 211001269931 & 1499940 \\
2990831 & 3047411 & 56580 & 260522319641 & 260523870281 & 1550640 \\
3741161 & 3798071 & 56910 & 342611795411 & 342614346161 & 2550750 \\
5074871 & 5146481 & 71610 & 1970587668521 & 1970590230311 & 2561790 \\
5527001 & 5610461 & 83460 & 4231588103921 & 4231591019861 & 2915940 \\
8926451 & 9020981 & 94530 & 5314235268731 & 5314238192771 & 2924040 \\
17186591 & 17301041 & 114450 & 7002440794001 & 7002443749661 & 2955660 \\
21872441 & 22030271 & 157830 & 8547351574961 & 8547354997451 & 3422490 \\
47615831 & 47774891 & 159060 & 15114108020021 & 15114111476741 & 3456720 \\
66714671 & 66885851 & 171180 & 16837633318811 & 16837637203481 & 3884670 \\
76384661 & 76562021 & 177360 & 30709975578251 & 30709979806601 & 4228350 \\
87607361 & 87797861 & 190500 & 43785651890171 & 43785656428091 & 4537920 \\
122033201 & 122231111 & 197910 & 47998980412211 & 47998985015621 & 4603410 \\
132574061 & 132842111 & 268050 & 55341128536691 & 55341133421591 & 4884900 \\
204335771 & 204651611 & 315840 & 92944027480721 & 92944033332041 & 5851320 \\
628246181 & 628641701 & 395520 & 412724560672211 &412724567171921 & 6499710 \\
1749443741 & 1749878981 & 435240 & 473020890377921 &473020896922661 & 6544740 \\
2115383651 & 2115824561 & 440910 & 885441677887301 &885441684455891 & 6568590 \\
2128346411 & 2128859981 & 513570 & 947465687782631 &947465694532961 & 6750330 \\
2625166541 & 2625702551 & 536010 & 979876637827721 &979876644811451 & 6983730 \\
2932936421 & 2933475731 & 539310 & & & \\
\hline
\end{tabular}
\end{center}
\normalsize
\newpage
\section{Record gaps between prime quintuplets}
\begin{center}TABLE 5.1 \\
Maximal gaps between prime quintuplets $\{p$, $p+2$, $p+6$, $p+8$, $p+12\}$ \\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_5$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_5$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
5 & 11 & 6 & 107604759671 & 107616100511 & 11340840 \\
11 & 101 & 90 & 140760439991 & 140772689501 & 12249510 \\
101 & 1481 & 1380 & 162661360481 & 162673773671 & 12413190 \\
1481 & 16061 & 14580 & 187735329491 & 187749510491 & 14181000 \\
22271 & 43781 & 21510 & 327978626531 & 327994719461 & 16092930 \\
55331 & 144161 & 88830 & 508259311991 & 508275672341 & 16360350 \\
536441 & 633461 & 97020 & 620537349191 & 620554105931 & 16756740 \\
661091 & 768191 & 107100 & 667672901711 & 667689883031 & 16981320 \\
1461401 & 1573541 & 112140 & 1079628551621 & 1079646141851 & 17590230 \\
1615841 & 1917731 & 301890 & 1104604933841 & 1104624218981 & 19285140 \\
5527001 & 5928821 & 401820 & 1182148717481 & 1182168243071 & 19525590 \\
11086841 & 11664551 & 577710 & 1197151034531 & 1197173264711 & 22230180 \\
35240321 & 35930171 & 689850 & 2286697462781 & 2286720012251 & 22549470 \\
53266391 & 54112601 & 846210 & 2435950632251 & 2435980618781 & 29986530 \\
72610121 & 73467131 & 857010 & 3276773115431 & 3276805283951 & 32168520 \\
92202821 & 93188981 & 986160 & 5229301162991 & 5229337555061 & 36392070 \\
117458981 & 119114111 & 1655130 & 9196865051651 & 9196903746881 & 38695230 \\
196091171 & 198126911 & 2035740 & 14660925945221 & 14660966101421 & 40156200 \\
636118781 & 638385101 & 2266320 & 21006417451961 & 21006458070461 & 40618500 \\
975348161 & 977815451 & 2467290 & 22175175736991 & 22175216733491 & 40996500 \\
1156096301 & 1158711011 & 2614710 & 22726966063091 & 22727007515411 & 41452320 \\
1277816921 & 1281122231 & 3305310 & 22931291089451 & 22931338667591 & 47578140 \\
1347962381 & 1351492601 & 3530220 & 31060723328351 & 31060771959221 & 48630870 \\
2195593481 & 2199473531 & 3880050 & 85489258071311 & 85489313115881 & 55044570 \\
3128295551 & 3132180971 & 3885420 & 90913430825291 & 90913489290971 & 58465680 \\
4015046591 & 4020337031 & 5290440 & 96730325054171 & 96730390102391 & 65048220 \\
8280668651 & 8286382451 & 5713800 & 199672700175071 &199672765913051 & 65737980 \\
9027127091 & 9033176981 & 6049890 & 275444947505591 &275445018294491 & 70788900 \\
15686967971 & 15693096311 & 6128340 & 331992774272981 &331992848243801 & 73970820 \\
18901038971 & 18908988011 & 7949040 & 465968834865971 &465968914851101 & 79985130 \\
21785624291 & 21793595561 & 7971270 & 686535413263871 &686535495684161 & 82420290 \\
30310287431 & 30321057581 & 10770150 & 761914822198961 &761914910291531 & 88092570 \\
\hline
\end{tabular}
\medskip
[OEIS A201073]
\end{center}
\normalsize
\newpage
\begin{center}TABLE 5.2 \\Maximal gaps between prime quintuplets $\{p$, $p+4$, $p+6$, $p+10$, $p+12\}$ \\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_5$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_5$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
7 & 97 & 90 & 15434185927 & 15440743597 & 6557670\\
97 & 1867 & 1770 & 17375054227 & 17381644867 & 6590640\\
3457 & 5647 & 2190 & 17537596327 & 17544955777 & 7359450\\
5647 & 15727 & 10080 & 25988605537 & 25997279377 & 8673840\\
19417 & 43777 & 24360 & 66407160637 & 66416495137 & 9334500\\
43777 & 79687 & 35910 & 74862035617 & 74871605947 & 9570330\\
101107 & 257857 & 156750 & 77710388047 & 77723371717 & 12983670\\
1621717 & 1830337 & 208620 & 144124106167 & 144138703987 & 14597820\\
3690517 & 3995437 & 304920 & 210222262087 & 210238658797 & 16396710\\
5425747 & 5732137 & 306390 & 585234882097 & 585252521167 & 17639070\\
8799607 & 9127627 & 328020 & 926017532047 & 926036335117 & 18803070\\
9511417 & 9933607 & 422190 & 986089952917 & 986113345747 & 23392830\\
16388917 & 16915267 & 526350 & 2819808136417 & 2819832258697 & 24122280\\
22678417 & 23317747 & 639330 & 3013422626107 & 3013449379477 & 26753370\\
31875577 & 32582437 & 706860 & 3538026326827 & 3538053196957 & 26870130\\
37162117 & 38028577 & 866460 & 4674635167747 & 4674662545867 & 27378120\\
64210117 & 65240887 & 1030770 & 5757142722757 & 5757171559957 & 28837200\\
119732017 & 120843637 & 1111620 & 7464931087717 & 7464961813867 & 30726150\\
200271517 & 201418957 & 1147440 & 8402871269197 & 8402904566467 & 33297270\\
203169007 & 204320107 & 1151100 & 9292699799017 & 9292733288557 & 33489540\\
241307107 & 242754637 & 1447530 & 10985205390997 & 10985239010737 & 33619740\\
342235627 & 344005297 & 1769670 & 12992848206847 & 12992884792957 & 36586110\\
367358347 & 369151417 & 1793070 & 15589051692667 & 15589094176627 & 42483960\\
378200227 & 380224837 & 2024610 & 24096376903597 & 24096421071127 & 44167530\\
438140947 & 440461117 & 2320170 & 37371241083097 & 37371285854467 & 44771370\\
446609407 & 448944487 & 2335080 & 38728669335607 & 38728728308527 & 58972920\\
711616897 & 714020467 & 2403570 & 91572717670537 & 91572784840627 & 67170090\\
966813007 & 970371037 & 3558030 & 109950817237357 & 109950886775827 & 69538470\\
2044014607 & 2048210107 & 4195500 & 325554440818297 & 325554513360487 & 72542190\\
3510456787 & 3514919917 & 4463130 & 481567288596127 & 481567361629087 & 73032960\\
4700738167 & 4705340527 & 4602360 & 501796510663237 & 501796584764467 & 74101230\\
5798359657 & 5803569847 & 5210190 & 535243109721577 & 535243185965557 & 76243980\\
7896734467 & 7902065527 & 5331060 & 657351798174427 & 657351876771637 & 78597210\\
12654304207 & 12659672737 & 5368530 & 818872754682547 & 818872840949077 & 86266530\\
13890542377 & 13896088897 & 5546520 & 991851356676277 & 991851464273767 & 107597490\\
14662830817 & 14668797037 & 5966220 & & & \\
\hline
\end{tabular}
\medskip
[OEIS A201062]
\end{center}
\normalsize
\newpage
\section{Record gaps between prime sextuplets}
\begin{adjustwidth}{-9mm}{}
\begin{center}TABLE 6 \\Maximal gaps between prime sextuplets $\{p$, $p+4$, $p+6$, $p+10$, $p+12$, $p+16\}$ \\[0.5em]
\normalsize
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_6$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_6$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
7 & 97 & 90 & 422088931207 & 422248594837 & 159663630\\
97 & 16057 & 15960 & 427190088877 & 427372467157 & 182378280\\
19417 & 43777 & 24360 & 610418426197 & 610613084437 & 194658240\\
43777 & 1091257 & 1047480 & 659829553837 & 660044815597 & 215261760\\
3400207 & 6005887 & 2605680 & 660863670277 & 661094353807 & 230683530\\
11664547 & 14520547 & 2856000 & 853633486957 & 853878823867 & 245336910\\
37055647 & 40660717 & 3605070 & 1089611097007 & 1089869218717 & 258121710\\
82984537 & 87423097 & 4438560 & 1247852774797 & 1248116512537 & 263737740\\
89483827 & 94752727 & 5268900 & 1475007144967 & 1475318162947 & 311017980\\
94752727 & 112710877 & 17958150 & 1914335271127 & 1914657823357 & 322552230\\
381674467 & 403629757 & 21955290 & 1953892356667 & 1954234803877 & 342447210\\
1569747997 & 1593658597 & 23910600 & 3428196061177 & 3428617938787 & 421877610\\
2019957337 & 2057241997 & 37284660 & 9367921374937 & 9368397372277 & 475997340\\
5892947647 & 5933145847 & 40198200 & 10254799647007 & 10255307592697 & 507945690\\
6797589427 & 6860027887 & 62438460 & 13786576306957 & 13787085608827 & 509301870\\
14048370097 & 14112464617 & 64094520 & 21016714812547 & 21017344353277 & 629540730\\
23438578897 & 23504713147 & 66134250 & 33157788914347 & 33158448531067 & 659616720\\
24649559647 & 24720149677 & 70590030 & 41348577354307 & 41349374379487 & 797025180\\
29637700987 & 29715350377 & 77649390 & 72702520226377 & 72703333384387 & 813158010\\
29869155847 & 29952516817 & 83360970 & 89165783669857 & 89166606828697 & 823158840\\
45555183127 & 45645253597 & 90070470 & 122421000846367 &122421855415957 & 854569590\\
52993564567 & 53086708387 & 93143820 & 139864197232927 &139865086163977 & 888931050\\
58430706067 & 58528934197 & 98228130 & 147693859139077 &147694869231727 & 1010092650\\
93378527647 & 93495691687 & 117164040 & 186009633998047 &186010652137897 & 1018139850\\
97236244657 & 97367556817 & 131312160 & 202607131405027 &202608270995227 & 1139590200\\
240065351077 & 240216429907 & 151078830 & 332396845335547 &332397997564807 & 1152229260\\
413974098817 & 414129003637 & 154904820 & 424681656944257 &424682861904937 & 1204960680\\
419322931117 & 419481585697 & 158654580 & 437804272277497 &437805730243237 & 1457965740\\
\hline
\end{tabular}
\medskip
[OEIS A200503]
\end{center}
\end{adjustwidth}
\normalsize
\phantom{\vspace{1cm}}
\newpage
\section{Record gaps between prime septuplets}
\begin{adjustwidth}{-9mm}{}
\begin{center}TABLE 7.1
\\Maximal gaps between prime 7-tuples $\{p$, $p+2$, $p+8$, $p+12$, $p+14$, $p+18$, $p+20\}$ \\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_7$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_7$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
5639 & 88799 & 83160 & 1554893017199 & 1556874482069 & 1981464870\\
88799 & 284729 & 195930 & 2088869793539 & 2090982626639 & 2112833100\\
284729 & 626609 & 341880 & 2104286376329 & 2106411289049 & 2124912720\\
1146779 & 6560999 & 5414220 & 2704298257469 & 2706823007879 & 2524750410\\
8573429 & 17843459 & 9270030 & 3550904257709 & 3553467600029 & 2563342320\\
24001709 & 42981929 & 18980220 & 4438966968419 & 4442670730019 & 3703761600\\
43534019 & 69156539 & 25622520 & 9996858589169 & 10000866474869 & 4007885700\\
87988709 & 124066079 & 36077370 & 21937527068909 & 21942038052029 & 4510983120\\
157131419 & 208729049 & 51597630 & 29984058230039 & 29988742571309 & 4684341270\\
522911099 & 615095849 & 92184750 & 30136375346249 & 30141383681399 & 5008335150\\
706620359 & 832143449 & 125523090 & 32779504324739 & 32784963061379 & 5458736640\\
1590008669 & 1730416139 & 140407470 & 40372176609629 & 40377635870639 & 5459261010\\
2346221399 & 2488117769 & 141896370 & 42762127106969 & 42767665407989 & 5538301020\\
3357195209 & 3693221669 & 336026460 & 54620176867169 & 54626029928999 & 5853061830\\
11768282159 & 12171651629 & 403369470 & 63358011407219 & 63365153990639 & 7142583420\\
30717348029 & 31152738299 & 435390270 & 79763188368959 & 79770583970249 & 7395601290\\
33788417009 & 34230869579 & 442452570 & 109974651670769 &109982176374599 & 7524703830\\
62923039169 & 63550891499 & 627852330 & 145568747217989 &145576919193689 & 8171975700\\
68673910169 & 69428293379 & 754383210 & 196317277557209 &196325706400709 & 8428843500\\
88850237459 & 89858819579 & 1008582120 & 221953318490999 &221961886287509 & 8567796510\\
163288980299 & 164310445289 & 1021464990 & 249376874266769 &249385995968099 & 9121701330\\
196782371699 & 197856064319 & 1073692620 & 290608782523049 &290618408585369 & 9626062320\\
421204876439 & 422293025249 & 1088148810 & 310213774327979 &310225023265889 & 11248937910\\
427478111309 & 428623448159 & 1145336850 & 471088826892779 &471100312066829 & 11485174050\\
487635377219 & 489203880029 & 1568502810 & 631565753063879 &631578724265759 & 12971201880\\
994838839439 & 996670266659 & 1831427220 & 665514714418439 &665530090367279 & 15375948840\\
\hline
\end{tabular}
\medskip
[OEIS A201251]
\end{center}
\end{adjustwidth}
\normalsize
\newpage
\begin{adjustwidth}{-9mm}{}
\begin{center}TABLE 7.2 \\
Maximal gaps between prime 7-tuples $\{p$, $p+2$, $p+6$, $p+8$, $p+12$, $p+18$, $p+20\}$ \\[0.5em]
\begin{tabular}{rrr|rrr}
\hline
\multicolumn{2}{r}{Initial primes in tuples} & Gap $g_7$ &
\multicolumn{2}{r}{Initial primes in tuples~\phantom{\fbox{$1^1$}}} & Gap $g_7$ \\
[0.5ex]\hline
\vphantom{\fbox{$1^1$}}
11 & 165701 & 165690 & 382631592641 & 383960791211 & 1329198570\\
165701 & 1068701 & 903000 & 711854781551 & 714031248641 & 2176467090\\
1068701 & 11900501 & 10831800 & 2879574595811 & 2881987944371 & 2413348560\\
25658441 & 39431921 & 13773480 & 3379186846151 & 3381911721101 & 2724874950\\
45002591 & 67816361 & 22813770 & 5102247756491 & 5105053487531 & 2805731040\\
93625991 & 124716071 & 31090080 & 5987254671311 & 5990491102691 & 3236431380\\
257016491 & 300768311 & 43751820 & 7853481899561 & 7857040317011 & 3558417450\\
367438061 & 428319371 & 60881310 & 11824063534091 & 11828142800471 & 4079266380\\
575226131 & 661972301 & 86746170 & 16348094430581 & 16353374758991 & 5280328410\\
1228244651 & 1346761511 & 118516860 & 44226969237161 & 44233058406611 & 6089169450\\
1459270271 & 1699221521 & 239951250 & 54763336591961 & 54771443197181 & 8106605220\\
2923666841 & 3205239881 & 281573040 &154325181803321 &154333374270191 & 8192466870\\
10180589591 & 10540522241 & 359932650 &157436722520921 &157445120715341 & 8398194420\\
15821203241 & 16206106991 & 384903750 &281057032201481 &281065611322031 & 8579120550\\
23393094071 & 23911479071 & 518385000 &294887168565161 &294896169845351 & 9001280190\\
37846533071 & 38749334621 & 902801550 &309902902299701 &309914040972071 &11138672370\\
158303571521 & 159330579041 & 1027007520 &419341934631071 &419354153220461 &12218589390\\
350060308511 & 351146640191 & 1086331680 &854077393259801 &854090557643621 &13164383820\\
\hline
\end{tabular}
\medskip
[OEIS A201051]
\end{center}
\end{adjustwidth}
\normalsize
\section{Introduction}
Index tracking aims at replicating the performance and risk profile of a given market index, and
constructs a tracking portfolio such that the performance of the portfolio is as close as possible to that
of the market index. The index tracking problem has received a great deal of attention in the literature (see,
for example, \cite{Alexander,Ammann,Beasley2003,Brodie,Korn,Beasley,DeMiguel,Fan,Gilli,lobo,Roll,Rudolf,
Tabata,Evolutionary,GaoLi}). An obvious approach is full replication of the index, but it can incur high
administrative and transaction costs. Also, in the practical business environment, portfolio managers
often face business-driven requirements that limit the number of constituents in their tracking
portfolio. Therefore, a sparse tracking portfolio can reduce transaction costs and avoid holding small and
illiquid assets when the index has a large number of constituents.
In this paper we consider a natural model for index tracking, which minimizes a
quadratic tracking error while enforcing an upper bound on the number of assets in the portfolio.
When short selling is not allowed, this model can be formulated mathematically as
\begin{equation} \label{index-track}
\min\limits_{x\in\cFr} TE(x) := \|y - Rx\|^2/T.
\end{equation}
Here, $x \in \Re^n$ is the weight vector of $n$ index constituents; $y\in\Re^T$ is a sample vector of
index returns over a period of length $T$; $R\in\Re^{T \times n}$ consists of the sample returns of
index constituents over the same period,
\beq \label{cFr}
\cFr := \left\{x\in\Re^n: \ba{l}
\sum_{i = 1}^n {x_i } = 1, \ \|x\|_0 \le r \\ [4pt]
0 \le x_i \le u, \ i=1,\ldots, n
\ea \right\},
\eeq
$\|x\|_0$ denotes the number of nonzero entries of $x$; and $u \in [1/r,1]$ is an upper bound on
the weight of each index constituent. The sum of error squares is used here
to measure the tracking error between the returns of the index and the returns of a portfolio. We
shall mention that another possible tracking error measure is the weighted sum of error squares. Recently,
Gao and Li \cite{GaoLi} studied a related but different cardinality constrained portfolio selection model,
which minimizes the variance of the portfolio subject to a given expected return and a cardinality
restriction on the assets. They developed some efficient lower bounding schemes and proposed a
branch-and-bound algorithm to solve the model.
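For the gradient-based method developed below, note that $\nabla TE(x) = \frac{2}{T}R^T(Rx-y)$. A minimal sketch of the objective and its gradient (Python with NumPy; array shapes as in \eqref{index-track}) is:
\begin{verbatim}
import numpy as np

def tracking_error(x, R, y):
    # TE(x) = ||y - R x||^2 / T
    res = y - R @ x
    return res @ res / len(y)

def tracking_error_grad(x, R, y):
    # gradient of TE: (2/T) R^T (R x - y)
    return 2.0 / len(y) * (R.T @ (R @ x - y))
\end{verbatim}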
The index tracking problem \eqref{index-track} involves a cardinality constraint and is generally
NP-hard. It is thus highly challenging to find a global optimal solution to this problem.
Recently, Fastrich et al.\ \cite{cardinality} studied a relaxation of \eqref{index-track} by
replacing the cardinality constraint in \eqref{index-track} by imposing an upper bound on the
$l_q$-norm ($0<q<1$) \cite{cxy} of the vector of portfolio weights. Xu et al. \cite{L1/2} considered
a special case of this relaxation with $q=1/2$ and proposed a hybrid half thresholding algorithm for
solving this $l_{1/2}$ regularized index tracking model. Lately, Chen et al.\ \cite{Chen14}
proposed a new relaxation of problem \eqref{index-track}, which minimizes the $l_q$-norm regularized
tracking error. They also proposed an interior point method to solve the model. On the other hand, a
local optimal solution of \eqref{index-track} can be found by the penalty decomposition method and the
iterative hard thresholding method that were proposed in \cite{PD,IHT}, respectively. However, they are
generic methods for a more general class of cardinality-constrained optimization problems. When
applied to problem \eqref{index-track}, these methods may not be efficient since they cannot
exploit the specific structure of the feasible region of problem \eqref{index-track}.
Nonmonotone projected gradient (NPG) methods have widely been studied in the literature, which
incorporate the nonmonotone line search technique proposed in \cite{GrLaLu86} into projected gradient
methods. For example, Birgin et al.\ \cite{E.G.Birgin} studied the convergence of an NPG method for minimizing
a smooth function over a closed convex set. Dai and Fletcher \cite{DaFl05} studied an NPG method for solving
box-constrained quadratic programs in which Barzilai and Borwein's scheme \cite{BB} is used to choose the
initial stepsize. Recently, Francisco and Baz\'an \cite{ncNPG} proposed an NPG method for minimizing
a smooth objective over a general nonconvex set and showed that it converges to a generalized stationary point that
is a fixed point of a certain proximal mapping. It is known that NPG methods generally outperform the classical
(monotone) projected gradient methods in terms of speed and/or solution quality (see, for example, \cite{E.G.Birgin,DaFl05,AuSiTe07,TaZh12}). In this paper, we
propose a simple NPG method for solving problem \eqref{index-track}. At each iteration, our method usually
solves several projected gradient subproblems. By exploiting the specific structure of the feasible region of
problem \eqref{index-track}, we show that each projected gradient subproblem has a closed-form solution,
which can be computed in {\it linear} time. Moreover, we show that any accumulation point of the sequence
generated by our method is an optimal solution of a related convex optimization problem. Under some suitable
assumption, we further establish that such an accumulation point is a local minimizer of problem
\eqref{index-track}. We also conduct empirical tests to compare our method with the other two approaches
proposed in \cite{Evolutionary,L1/2} for index tracking. The computational results demonstrate that our approach
generally produces sparse portfolios with smaller out-of-sample tracking error and higher consistency between
in-sample and out-of-sample tracking errors. Moreover, our method outperforms the other two approaches in
terms of speed.
The rest of the paper is organized as follows. In section \ref{method} we propose a nonmonotone
projected gradient method for solving a class of optimization problems that include problem
\eqref{index-track} as a special case and establish its convergence. In section \ref{result} we conduct
empirical tests to compare our method with the other two existing approaches for index tracking. We
present some concluding remarks in section \ref{conclude}.
\section{Nonmonotone projected gradient method}
\label{method}
In this section we propose a nonmonotone projected gradient (NPG) method for solving the problem
\beq \label{sparse-prob}
\min\limits_{x\in\cFr} f(x),
\eeq
where $\cFr$ is defined in \eqref{cFr} and $f:\Re^n \to \Re$ is Lipschitz continuously differentiable, that is, there is a constant $L_f > 0$ such that
\beq \label{lipschitz}
\left\| {\nabla f(x) - \nabla f(y)} \right\| \le L_f\left\| {x - y} \right\| \quad \forall x, y \in \Re^n.
\eeq
Throughout this paper, $\|\cdot\|$ denotes the standard Euclidean norm. It is clear that
problem \eqref{sparse-prob} includes \eqref{index-track} as a special case. Therefore, the NPG
method proposed below can be suitably applied to solve problem
\eqref{index-track}.
\gap
\noindent
{\bf Nonmonotone projected gradient (NPG) method for \eqref{sparse-prob}} \\ [5pt]
Let $0< L_{\min} < L_{\max}$, $\tau>1$, $c>0$, integer $M \ge 0$ be given. Choose an
arbitrary $x^0 \in \cFr$ and set $k=0$.
\begin{itemize}
\item[1)] Choose $L^0_k \in [L_{\min}, L_{\max}]$ arbitrarily. Set $L_k = L^0_k$.
\bi
\item[1a)] Solve the subproblem
\beq \label{subprob}
x^{k+1} \in \Arg\min\limits_{x \in \cFr} \left\{\nabla f(x^k)^T (x-x^k) + \frac{L_k}{2} \|x-x^k\|^2\right\}
\eeq
\item[1b)] If
\beq \label{descent}
f(x^{k+1}) \le \max\limits_{[k-M]_+ \le i \le k} f(x^i) - \frac{c}{2} \|x^{k+1}-x^k\|^2
\eeq
is satisfied, then go to step 2).
\item[1c)] Set $L_k \leftarrow \tau L_k$ and go to step 1a).
\ei
\item[2)]
Set $k \leftarrow k+1$ and go to step 1).
\end{itemize}
\noindent
{\bf end}
\gap
\begin{remark}
\begin{itemize}
\item[(i)]
When $M=0$, the sequence $\{f(x^k)\}$ is monotonically decreasing. Otherwise, it may increase at some iterations and thus the above method is generally a nonmonotone method.
\item[(ii)] A popular choice of $L^0_k$ is by the following formula proposed by Barzilai and Borwein \cite{BB}
(see also \cite{E.G.Birgin}):
\[
L^0_k = \max \left\{ L_{\min } ,\min \left\{ L_{\max } ,\frac{(s^k)^T y^k}{\|s^k\|^2}\right\} \right\},
\]
where $s^k = x^k - x^{k - 1}$, $y^k=\nabla f(x^k)-\nabla f(x^{k - 1})$.
\end{itemize}
\end{remark}
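To make the scheme concrete, the following sketch (Python with NumPy) implements the outer and inner loops together with the above BB choice of $L^0_k$. Here \texttt{proj} is an assumed black-box routine returning a closest point of $\cFr$, i.e.\ solving subproblem \eqref{subprob} in its equivalent projection form, and the stopping rule on $\|x^{k+1}-x^k\|$ is a practical addition, not part of the method as stated:
\begin{verbatim}
import numpy as np

def npg(f, grad, proj, x0, L_min=1e-4, L_max=1e4, tau=2.0,
        c=1e-4, M=4, max_iter=1000, tol=1e-12):
    x, g, L0 = x0.copy(), grad(x0), 1.0
    hist = [f(x)]                 # objective values of last M+1 iterates
    for _ in range(max_iter):
        L = L0
        while True:               # steps 1a)-1c): grow L until descent holds
            x_new = proj(x - g / L)
            if f(x_new) <= max(hist) - 0.5 * c * np.dot(x_new - x, x_new - x):
                break
            L *= tau
        s = x_new - x
        if np.dot(s, s) <= tol:   # practical stopping rule (an addition)
            return x_new
        g_new = grad(x_new)
        sy = np.dot(s, g_new - g)
        if sy > 0:                # Barzilai-Borwein initial stepsize L_k^0
            L0 = min(L_max, max(L_min, sy / np.dot(s, s)))
        x, g = x_new, g_new
        hist = (hist + [f(x)])[-(M + 1):]
    return x
\end{verbatim}
With the tracking-error objective and gradient sketched in section 1 and the projection sketched after \eqref{proj-subprob} below, this applies directly to problem \eqref{index-track}.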
\gap
We first show that for each outer iteration of the above NPG method, the number of its inner iterations
is finite.
\begin{theorem} \label{inner-convergence}
For each $k \ge 0$, the inner termination criterion \eqref{descent} is satisfied after at most
\[
\max\left\{\left\lfloor \frac{\log(L_f+c)-\log(L_{\min})}{\log \tau} +1\right\rfloor,1\right\}
\]
inner iterations.
\end{theorem}
\begin{proof}
Let $\bar L_k$ denote the final value of $L_k$ at the $k$th outer iteration, and let
$n_k$ denote the number of inner iterations for the $k$th outer iteration. We divide the proof
into two separate cases.
Case 1): $\bar L_k=L^0_k$. It is clear that $n_k=1$.
Case 2): $\bar L_k > L^0_k$. Let $H(x)$ denote the objective function of \eqref{subprob}.
By the definition of $x^{k+1}$,
we know that $H(x^{k+1}) \le H(x^k)$, which implies that
\[
\nabla f(x^k)^T(x^{k+1}-x^k) +
\frac{L_k}{2}\|x^{k+1}-x^k\|^2 \le 0.
\]
In addition, it follows from \eqref{lipschitz} that
\[
f(x^{k+1}) \ \le \ f(x^k)+\nabla f(x^k)^T(x^{k+1}-x^k) +
\frac{L_f}{2}\|x^{k+1}-x^k\|^2.
\]
Combining these two inequalities, we obtain that
\[
f(x^{k+1}) \le f(x^k) - \frac{L_k-L_f}{2}\|x^{k+1}-x^k\|^2.
\]
Hence, \eqref{descent} holds whenever $L_k \ge L_f+c$. This together with the definition of $\bar L_k$
implies that $\bar L_k/\tau < L_f+c$, that is, $\bar L_k <\tau(L_f+c)$. In view of the definition of $n_k$,
we further have
\[
L_{\min} \tau^{n_k-1} \le L^0_k \tau^{n_k-1} = \bar L_k < \tau(L_f+c).
\]
Hence, $n_k \le \left\lfloor \frac{\log(L_f+c)-\log(L_{\min})}{\log \tau} +1\right\rfloor$.
Combining the above two cases, we see that the conclusion holds.
\end{proof}
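For instance, with the illustrative constants $\tau=2$, $L_{\min}=10^{-4}$, $c=10^{-4}$ and $L_f=10^2$, the bound evaluates to
\[
\left\lfloor \frac{\log(10^2+10^{-4})-\log(10^{-4})}{\log 2} +1\right\rfloor = 20,
\]
so each outer iteration requires at most $20$ solves of subproblem \eqref{subprob}.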
\gap
We next establish convergence of the outer iterations of the NPG method.
\begin{theorem} \label{converge}
Let $\{x^k\}$ be the sequence generated by the above NPG method. There hold:
\begin{itemize}
\item[(1)] $\{f(x^k)\}$ converges and $\{\|x^k-x^{k-1}\|\} \to 0$.
\item[(2)] Let $x^*$ be an arbitrary accumulation point of $\{x^k\}$ and $J^*=\{j: x^*_j \neq 0\}$. Then $x^*$ is a stationary point of the problem
\begin{equation} \label{stat-pt}
\begin{array}{ll}
\min\limits_x & f(x) \\
\mbox{s.t.} & \sum_{i=1}^n x_i = 1, \ 0 \le x_j \le u, \ j \in J^*; \\
& x_j = 0, \ j \notin J^*.
\end{array}
\end{equation}
Suppose further that $f$ is convex. Then
\begin{itemize}
\item[(2a)]
$x^*$ is a local minimizer of problem \eqref{sparse-prob}
if $\|x^*\|_0=r$;
\item[(2b)]
$x^*$ is a minimizer of problem \eqref{stat-pt} if $\|x^*\|_0<r$.
\end{itemize}
\end{itemize}
\end{theorem}
\begin{proof}
(1) Notice that $f$ is continuous in
$\Delta =\{x\in\Re^n: \sum^n_{i=1} x_i = 1, \ 0 \le x_i \le u \ \forall i\}$. Since $\{x^k\} \subset \Delta$, it
follows that $\{f(x^k)\}$ is bounded below. Let $\ell(k)$ be an integer such that $[k-M]_+ \le \ell(k) \le k$ and
\[
f(x^{\ell(k)}) = \max\limits_{[k-M]_+ \le i \le k} f(x^i).
\]
It is not hard to observe from \eqref{descent} that $f(x^{\ell(k)})$ is decreasing. Hence, $\lim_{k \to \infty} f(x^{\ell(k)})=\hat f$ for some $\hat f \in \Re$. Using this relation, \eqref{descent}, and a similar
induction argument as used in \cite{WrNoFi09}, one can show that for all $j\ge 1$,
\[
\lim\limits_{k\to\infty} d^{\ell(k)-j} = 0, \quad\quad \lim\limits_{k\to\infty} f(x^{\ell(k)-j})=\hat f,
\]
where $d^k = x^{k+1}-x^k$ for all $k\ge 0$. In view of these equalities, the uniform continuity of $f$
over $\Delta$, and a similar argument in \cite{WrNoFi09}, we can conclude that $\{f(x^k)\}$ converges and
$\{\|x^k-x^{k-1}\|\} \to 0$.
(2) Let $x^*$ be an arbitrary accumulation point of $\{x^k\}$. Then there exists a subsequence
$\cal K$ such that $\{x^k\}_\cK \to x^*$, which together with $\|x^{k}-x^{k-1}\| \to 0$ implies that
$\{x^{k-1}\}_\cK \to x^*$. By considering a convergent subsequence of $\cK$ if necessary, assume
without loss of generality that there exists some index set $J$ such that $x^k_j =0$ for every
$j \notin J, k\in \cK$ and $x^k_j > 0$ for all $j \in J, k\in \cK$. Let $\bar L_k$ denote the final value of
$L_k$ at the $k$th outer iteration. From the proof of Theorem \ref{inner-convergence}, we know that
$\bar L_k \in [L_{\min}, \tau(L_f+c)]$. By the definition of $x^k$, one can see that $x^k$ is a minimizer
of the problem
\[
\min\limits_{x \in \cFr} \left\{ \nabla f(x^{k-1})^T (x-x^{k-1}) + \frac{\bar L_{k-1}}{2} \|x-x^{k-1}\|^2\right\}.
\]
Using this fact and the definition of $J$, one can observe that $x^k$ is also the minimizer of
the problem
\beq \label{subprob1}
\min\limits_{x \in \Omega} \left\{\nabla f(x^{k-1})^T (x-x^{k-1}) + \frac{\bar L_{k-1}}{2} \|x-x^{k-1}\|^2 \right\},
\eeq
where
\[
\Omega = \left\{x \in \Re^n: \ba {l}
\sum_{i=1}^n x_i = 1, \ 0 \le x_j \le u, \ j \in J, \\
x_j = 0, \ j \notin J.
\ea\right\}.
\]
By the first-order optimality conditions of \eqref{subprob1}, we have
\beq \label{1st-cond-k}
-\nabla f(x^{k-1}) - \bar L_{k-1} (x^k-x^{k-1}) \in \cN_\Omega(x^k) \ \ \ \forall k \in \cK,
\eeq
where $\cN_\Omega(x)$ denotes the normal cone of $\Omega$ at $x$. Using
$\bar L_{k-1} \in [L_{\min}, \tau(L_f+c)]$, $\{x^{k-1}\}_\cK \to x^*$, $\|x^k-x^{k-1}\| \to 0$,
outer continuity of $\cN_\Omega(\cdot)$, and taking limits on both sides of \eqref{1st-cond-k} as $k \in \cK \to \infty$,
one can obtain that
\beq \label{stat-cond}
-\nabla f(x^*) \in \cN_\Omega(x^*).
\eeq
Let $\tilde \Omega$ be the feasible region of problem \eqref{stat-pt}. Clearly, $J^* \subseteq J$ and
hence $\tilde \Omega \subseteq \Omega$, which implies that $\cN_\Omega(x^*) \subseteq
\cN_{\tilde \Omega}(x^*)$. It then follows from \eqref{stat-cond} that $-\nabla f(x^*) \in
\cN_{\tilde \Omega}(x^*)$. Hence, $x^*$ is a stationary point of problem \eqref{stat-pt}.

We next prove statements (2a) and (2b) under the assumption that $f$ is convex.

(2a) Suppose that $\|x^*\|_0=r$ and $f$ is convex. We will show that $x^*$ is a local minimizer
of problem \eqref{sparse-prob}. Let $\epsilon = \min\{x^*_j: j\in J^*\}$,
\[
\tilde \cO(x^*;\epsilon) = \{x\in\tilde \Omega: \|x-x^*\| < \epsilon\}, \quad\quad \cO(x^*;\epsilon) = \{x\in\cFr: \|x-x^*\| < \epsilon\},
\]
where $\tilde \Omega$ is defined above. Since $f$ is convex and $x^*$ is a stationary point of
\eqref{stat-pt}, one can conclude that $x^*$ is a minimizer of problem \eqref{stat-pt}, which implies
that $f(x) \ge f(x^*)$ for all $x\in \tilde \cO(x^*;\epsilon)$. In addition, using the definition of
$\epsilon$ and $|J^*|=r$, it is not hard to observe that $\cO(x^*;\epsilon)=\tilde \cO(x^*;\epsilon)$.
It then follows that $f(x) \ge f(x^*)$ for all $x\in \cO(x^*;\epsilon)$, which implies that $x^*$ is
a local minimizer of problem \eqref{sparse-prob}.

(2b) Suppose that $\|x^*\|_0<r$ and $f$ is convex. Recall from above that $x^*$ is a stationary point
of \eqref{stat-pt}. Moreover, notice that problem \eqref{stat-pt} becomes a convex optimization problem when
$f$ is convex. Therefore, the conclusion of this statement immediately follows.
\end{proof}
\gap
One can observe that problem \eqref{subprob} is equivalent to
\[
x^{k+1} \in \Arg\min\limits_{x \in \cFr} \left\{\left\|x-\left(x^k-\frac{1}{L_k}\nabla f(x^k)\right)\right\|^2\right\},
\]
which is a special case of a more general problem
\beq \label{proj-subprob}
\min\limits_{x \in \cFr} \|x-a\|^2
\eeq
for some $a\in\Re^n$.
In the remainder of this section we will show that problem \eqref{proj-subprob} has a
closed-form solution, and moreover, it can be found in linear time. Before proceeding,
we review a technical lemma established in \cite{PD}.
\begin{lemma} \label{lem1}
Let $\cX_i \subseteq \Re$ and $\phi_i: \Re \to \Re$ for $i=1,\ldots,n$ be given. Suppose
that $r$ is a positive integer and $0 \in \cX_i$ for all $i$. Consider the following
$l_0$ minimization problem:
\beq \label{l0-p1}
\min\left\{\phi(x) = \sum^n_{i=1} \phi_i(x_i): \|x\|_0 \le r, \ x \in \cX_1 \times
\cdots \times \cX_n \right\}.
\eeq
Let $\tx^*_i\in \Arg\min\{\phi_i(x_i): x_i \in \cX_i\}$ and $I^* \subseteq \{1,\ldots, n\}$ be
the index set corresponding to the $r$ largest values of $\{v^*_i\}^n_{i=1}$, where
$v^*_i = \phi_i(0)-\phi_i(\tx^*_i)$ for $i=1, \ldots, n$. Then $x^*$ is an optimal solution of
problem \eqref{l0-p1}, where $x^*$ is defined as follows:
\[
x^*_i = \left\{\ba{ll}
\tx^*_i & \mbox{if} \ i \in I^*; \\
0 & \mbox{otherwise},
\ea\right. \quad i=1, \ldots, n.
\]
\end{lemma}
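As a quick illustration of Lemma~\ref{lem1}, the Python sketch below solves a separable $l_0$ problem by computing the per-coordinate minimizers and keeping the $r$ coordinates with the largest savings $v^*_i$. This is our own illustration (the name \texttt{separable\_l0\_min} and the grid-based per-coordinate minimization are ours), not part of the algorithm of \cite{PD}.
\begin{verbatim}
import numpy as np

def separable_l0_min(phis, boxes, r, grid=10_000):
    # Solve min sum_i phi_i(x_i) s.t. ||x||_0 <= r, x_i in boxes[i],
    # assuming 0 lies in every box; per-coordinate minimizers are
    # found on a grid purely for illustration.
    n = len(phis)
    x_best, savings = np.zeros(n), np.zeros(n)
    for i, (phi, (lo, hi)) in enumerate(zip(phis, boxes)):
        ts = np.linspace(lo, hi, grid)
        j = np.argmin(phi(ts))
        x_best[i] = ts[j]
        savings[i] = phi(0.0) - phi(ts[j])  # v*_i = phi_i(0) - phi_i(x~*_i)
    keep = np.argsort(savings)[-r:]         # indices of r largest savings
    x = np.zeros(n)
    x[keep] = x_best[keep]
    return x

# example: phi_i(t) = (t - a_i)^2 with a = (3, -1, 0.5), boxes [0,1], r = 2
a = np.array([3.0, -1.0, 0.5])
phis = [lambda t, ai=ai: (t - ai) ** 2 for ai in a]
print(separable_l0_min(phis, [(0.0, 1.0)] * 3, r=2))  # approx. [1, 0, 0.5]
\end{verbatim}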
\gap
We are now ready to establish that problem \eqref{proj-subprob} has a closed-form solution that
can be computed efficiently.
\begin{theorem} \label{closed-form}
Given any $a\in \Re^n$, let $I^* \subseteq \{1,\ldots, n\}$ be the index set corresponding to
the $r$ largest values of $\{a_i\}^n_{i=1}$. Suppose that $\lambda^*\in \Re$ is such that
\beq \label{lambdas}
\sum\limits_{i \in I^*} \Pi_{[0,u]}(a_i+\lambda^*)=1,
\eeq
where
\[
\Pi_{[0,u]}(t) = \left\{\ba{ll}
0 & \mbox{if} \ t \le 0; \\
t & \mbox{if} \ 0 < t <u; \\
u & \mbox{if} \ t \ge u
\ea\right. \quad\quad \forall t\in\Re .
\]
Then $x^*$ is an optimal solution of problem \eqref{proj-subprob}, where $x^*$ is defined as
follows:
\[
x^*_i = \left\{\ba{ll}
\Pi_{[0,u]}(a_i+\lambda^*) & \mbox{if} \ i \in I^*; \\
0 & \mbox{otherwise},
\ea\right. \quad i=1, \ldots, n.
\]
\end{theorem}
\begin{proof}
Let $d(x)$ and $d^*$ denote the objective function and the optimal value of \eqref{proj-subprob},
respectively, and $x^*$ be defined above. We can observe that $\|x^*\|_0 \le r$, $\sum^n_{i=1} x^*_i =1$ and $0 \le x^*_j \le u$ for all $j$, which implies that $x^*$ is a feasible solution of
\eqref{proj-subprob}, namely, $x^*\in\cFr$. Hence, $d(x^*) \ge d^*$. Let
$\psi(t)=t^2-(t-\Pi_{[0,u]}(t))^2$ for every $t\in\Re$. It is not hard to see that $\psi$ is
differentiable, and moreover,
\[
\psi'(t) = 2t- 2(t-\Pi_{[0,u]}(t)) = 2\Pi_{[0,u]}(t) \ge 0.
\]
Hence, $\psi(t)$ is increasing in $(-\infty,\infty)$. Let $\phi_i(x_i)=(x_i-a_i-\lambda^*)^2$,
$\cX_i=[0,u]$, $\tx^*_i=\arg\min\{\phi_i(x_i): x_i \in \cX_i\}$ and $v^*_i=\phi_i(0)-\phi_i(\tx^*_i)$
for all $i$. One can observe that $\tx^*_i=\Pi_{[0,u]}(a_i+\lambda^*)$
and $v^*_i=\psi(a_i+\lambda^*)$ for all $i$. By the definition of $I^*$ and the
monotonicity of $\psi$, we conclude that $I^*$ is the index set corresponding to the $r$ largest
values of $\{v^*_i\}^n_{i=1}$. In view of Lemma \ref{lem1} and the definitions of $x^*$ and $\tx^*$,
one can see that $x^*$ is an optimal solution to the problem
\[
\underline{d}^*=\min\limits_{0 \le x \le u,\|x\|_0\leq r} \left\{\|x-a\|^2 - 2 \lambda^*(\sum_{i=1}^n x_i-1)\right\},
\]
and hence,
\[
\underline{d}^*=\|x^*-a\|^2 - 2 \lambda^*(\sum_{i=1}^n x_i^*-1)=\|x^*-a\|^2 =d(x^*).
\]
In addition, we can observe that $d^* \ge \underline{d}^*$. It then follows that $d^* \ge d(x^*)$. Recall that $d(x^*) \ge d^*$. Hence, we have $d(x^*) = d^*$. Using this relation and $x^*\in\cFr$, we
conclude that $x^*$ is an optimal solution of problem \eqref{proj-subprob}.
\end{proof}
\gap
We next show that a $\lambda^*$ satisfying \eqref{lambdas} can be computed in linear time, which
together with Theorem \ref{closed-form} implies that problem \eqref{proj-subprob} can be solved in
linear time as well.
\begin{theorem} \label{thm:lambdas}
For any $a\in\Re^n$ and $u \ge 1/n$, the equation
\beq \label{root-finding}
h(\lambda) := \sum\limits^n_{i=1} \Pi_{[0,u]}(a_i+\lambda)-1=0
\eeq
has at least one root $\lambda^*$, and moreover, such a root can be computed in $O(n)$ time.
\end{theorem}
\begin{proof}
One can observe that $h$ is continuous in $(-\infty,\infty)$, and moreover, $h(\lambda) = -1$
when $\lambda$ is sufficiently small and $h(\lambda) =nu-1 \ge 0$ when $\lambda$ is
sufficiently large. Hence, \eqref{root-finding} has at least one root $\lambda^*$.

We next show that a root $\lambda^*$ of \eqref{root-finding} can be computed in $O(n)$ time. Indeed,
it is not hard to observe that $h$ is a piecewise linear increasing function in
$(-\infty,\infty)$ with breakpoints $\{-a_1,\ldots,-a_n,-a_1+u,\ldots,-a_n+u\}$.
Suppose that only $k$ of these breakpoints are distinct and they are arranged in strictly increasing
order $\{\lambda_1 < \ldots < \lambda_k\}$. The value of $h$ at each $\lambda_i$ and the slope
of each piece of $h$ can be evaluated iteratively. Indeed, let $\lambda_0=-\infty$. Observe that
$h(\lambda)=-1$ for all $\lambda \le \lambda_1$. Hence, $h$ has slope $s_0=0$ in $(-\infty, \lambda_1]$ and $h(\lambda_1)=-1$. Suppose that $h$ has slope $s_{i-1}$ in $(\lambda_{i-1}, \lambda_i]$, that $h(\lambda_i)$ has already been computed, and that
exactly $m_i$ of the points $\{-a_1,\ldots,-a_n\}$ and $n_i$ of the points $\{-a_1+u,\ldots,-a_n+u\}$ are equal to $\lambda_i$. Then the slope of $h$ in $(\lambda_{i},\lambda_{i+1}]$ is $s_i=s_{i-1}+m_i-n_i$, which yields
$h(\lambda_{i+1})=h(\lambda_i)+s_i(\lambda_{i+1}-\lambda_i)$ for $i=1, \ldots,k-1$. Since $h(\lambda_1)=-1$, $h(\lambda_k)=nu-1 \ge 0$ and $h$ is increasing, there exists some
$1 \le j <k$ such that $h(\lambda_j) < 0$ and $h(\lambda_{j+1}) \ge 0$. If $h(\lambda_{j+1})=0$, then
$\lambda^*=\lambda_{j+1}$ is a root to \eqref{root-finding}. Otherwise, $\lambda^* \in (\lambda_j,\lambda_{j+1})$ and $h(\lambda^*)=0$. Using these facts and the relation $h(\lambda)=h(\lambda_j)+s_j(\lambda-\lambda_j)$ for $\lambda \in (\lambda_j,\lambda_{j+1})$,
we obtain
\[
\lambda^* = \lambda_j-h(\lambda_j)/s_j.
\]
In addition, one can observe that the arithmetic operation cost of this root-finding procedure is
$O(n)$.
\end{proof}
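For concreteness, the projection described in Theorems~\ref{closed-form} and \ref{thm:lambdas} can be implemented in a few lines; the following Python sketch is our own illustration (the name \texttt{project\_cFr} is ours). For simplicity it sorts the breakpoints, which gives an $O(n\log n)$ implementation instead of the $O(n)$ procedure above, but the output coincides with the closed-form solution.
\begin{verbatim}
import numpy as np

def project_cFr(a, r, u):
    # Project a onto {x: sum(x)=1, 0<=x<=u, ||x||_0 <= r}; needs r*u >= 1.
    idx = np.argsort(a)[-r:]                 # I*: indices of r largest a_i
    b = a[idx]
    h = lambda lam: np.clip(b + lam, 0.0, u).sum() - 1.0
    # h is piecewise linear and nondecreasing with breakpoints -b_i, -b_i+u;
    # scan them in sorted order and interpolate on the first sign change
    bps = np.sort(np.concatenate([-b, -b + u]))
    lam_lo = bps[0]                          # here h(lam_lo) = -1
    for lam_hi in bps[1:]:
        if h(lam_hi) >= 0.0:                 # root lies in [lam_lo, lam_hi]
            slope = (h(lam_hi) - h(lam_lo)) / (lam_hi - lam_lo)
            lam = lam_lo - h(lam_lo) / slope
            break
        lam_lo = lam_hi
    x = np.zeros_like(a)
    x[idx] = np.clip(b + lam, 0.0, u)
    return x

x = project_cFr(np.random.randn(20), r=5, u=0.5)
print(x.sum(), np.count_nonzero(x))          # ~1.0 and at most 5
\end{verbatim}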
\section{Numerical results} \label{result}
In this section, we conduct numerical experiments to compare the performance of the NPG method
proposed in Section \ref{method} with the hybrid evolutionary algorithm \cite{Evolutionary} and the
hybrid half thresholding algorithm \cite{L1/2} for solving index tracking problems. It should be
mentioned that the NPG method solves the $l_0$ constrained model \eqref{index-track} with $u=0.5$
while the hybrid evolutionary algorithm solves a mixed integer programming model and the hybrid half
thresholding algorithm \cite{L1/2} solves an $l_{1/2}$ regularized index tracking model. These
three methods were coded in Matlab, and all computations were performed on an HP dx7408 PC (Intel
Core E4500 CPU, 2.2 GHz, 1 GB RAM) with Matlab 7.9 (R2009b).
The data sets used in our experiments are selected from the standard ones in
the OR-library \cite{OR_library} and the CSI 300 index from the China Shanghai-Shenzhen stock market.
For the standard data sets, the weekly prices of the stocks from 1992 to 1997 of the Hang Seng (Hong Kong),
DAX 100 (Germany), FTSE (Great Britain), Standard and Poor's 100 (USA), Nikkei (Japan),
Standard and Poor's 500 (USA), Russell 2000 (USA) and Russell 3000 (USA) indices are used. For the CSI 300 index,
the daily prices of 300 stocks from 2011 to 2013 in the Chinese stock market are considered.
According to the sample size, we divide the above data sets into two categories:
small data sets, including Hang Seng, DAX 100, FTSE, Standard and Poor's 100 and the Nikkei index; and
large data sets, including CSI 300, Standard and Poor's 500, Russell 2000 and Russell 3000. As in
Torrubiano and Alberto \cite{Evolutionary}, each data set is partitioned into two subsets: a training set
and a testing set. The training set, called the in-sample set, consists of the first half of the data and
is used to compute the optimal index tracking portfolio. We also use the in-sample set and the
formula for $TE$ given in \eqref{index-track} to calculate the tracking error, which is called the in-sample
tracking error ($TEI$) of the portfolio. The testing set, called the out-of-sample set, contains the rest of the
data and is used to test the performance of the resulting optimal index tracking portfolio. In particular,
we use the formula for $TE$ in \eqref{index-track} with $(R,y)$ replaced by the out-of-sample set to
calculate the tracking error, which is called the out-of-sample tracking error ($TEO$) of the portfolio.
In addition, we denote by $S_{true}$ the true sparsity of the optimal output generated by each method.
For the NPG method, we set $L_{\min}=10^{-8}$, $L_{\max}=10^8$, $\tau=2$, $c=10^{-4}$, and
$M=3$ for small data sets and $M=5$ for large data sets. For the hybrid half thresholding algorithm,
the lower and upper bounds are chosen to be 0 and 0.5, respectively. We terminate
these methods when the absolute change of the approximate solutions over two consecutive iterations
is below $10^{-6}$ or the maximum number of iterations, $10,000$, is reached. For the hybrid evolutionary algorithm,
we set the lower bound to 0, the upper bound to 0.5, initial population size to $100$, mutation
probability to $1\%$, crossover probability to $50\%$, and the maximum number of iterations to $10,000$. In addition,
we randomly choose a feasible point of problem \eqref{index-track} as a common initial point for these
three methods.
In order to measure the out-of-sample performance and the consistency between in-sample and
out-of-sample tracking errors, we introduce the following two criteria.
\bi
\item
Consistency: The consistency between in-sample and out-of-sample tracking errors of a portfolio
given by a method $A$ is defined as
\begin{equation*}
Cons(A)=|TEI_A-TEO_A|,
\end{equation*}
where $TEI_A$ and $TEO_A$ are the in-sample and out-of-sample tracking errors of a portfolio
generated by the method $A$.
Clearly, a smaller value of $Cons(A)$ means that the portfolio given by
$A$ has more consistency between in-sample and out-of-sample tracking errors and thus it is more
robust (or less sensitive) with respect to the sample data used for model \eqref{index-track}.
\item
Superiority of out-of-sample: We define
\begin{equation*}
SupO(A,B)=\frac{TEO_{B}-TEO_{A}}{TEO_{B}}\times 100\%,
\end{equation*}
where $TEO_{A}$ and $TEO_{B}$ are the out-of-sample tracking errors of the portfolios given by methods $A$
and $B$, respectively. One can see that if $SupO(A,B)>0$, then $TEO_{A}$ is smaller than $TEO_{B}$,
i.e., the portfolio by method $A$ is superior to that by method $B$ in terms of out-of-sample
tracking error; and it is very likely that the portfolio by $A$ has a smaller expected tracking error
and thus it is a better estimation to the underlying statistical regression model.
\ei
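For concreteness, the two criteria translate into the following trivial Python sketch (our own illustration); the example values are taken from the first row of Tables 1 and 2 below, and the small discrepancy in $Cons$ is due to the rounding of the tabulated values.
\begin{verbatim}
def cons(tei, teo):
    # consistency between in-sample and out-of-sample tracking errors
    return abs(tei - teo)

def supo(teo_a, teo_b):
    # superiority (in %) of method A over method B out of sample
    return (teo_b - teo_a) / teo_b * 100.0

# Hang Seng, density 5: l_0 has TEI = 6.23e-5, TEO = 5.17e-5;
# MIP has TEO = 8.87e-5
print(cons(6.23e-5, 5.17e-5))   # 1.06e-5 (Table 2 reports 1.05e-5)
print(supo(5.17e-5, 8.87e-5))   # 41.7, matching Table 2
\end{verbatim}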
\subsection{Results on small data sets}\label{test1}
In this subsection, we compare the performance of the NPG method
with the hybrid evolutionary algorithm \cite{Evolutionary} and the hybrid half thresholding algorithm \cite{L1/2} on five small data sets, which are Hang Seng, DAX 100, FTSE, Standard and
Poor's 100, and Nikkei 225. For convenience of presentation, we abbreviate these three
approaches as $l_0$, MIP and $l_{1/2}$ since they are the methods for $l_0$, MIP and $l_{1/2}$ models,
respectively. In order to compare the performance of these methods fairly, we tailor their model
parameters so that the resulting portfolios have the same density (i.e., the same number of nonzero
entries).
Numerical results are presented in Tables 1 and 2, where $N$ denotes the number of assets
in a data set. In particular, we report in Table 1 the in-sample and out-of-sample errors of
the portfolios generated by the aforementioned three methods. In Table 2, we report the
consistency between in-sample and out-of-sample errors, and the superiority of out-of-sample
errors for the portfolios generated by these methods. The number of nonzero entries of the portfolios given by
these methods is listed in the column named ``density''. From Table 2, we can make the following
observations.
\bi
\item[(i)]
The $l_0$-based method (i.e., NPG method) generally has higher consistency between in-sample error
and out-of-sample error than the MIP- and $l_{1/2}$-based methods (namely, hybrid evolutionary and
half thresholding algorithms), since $Cons(l_0)<Cons(MIP)$ holds for 100\% (30/30) of the
instances and $Cons(l_0)<Cons(l_{1/2})$ holds for 73.3\% (22/30) of the instances.
\item[(ii)]
The $l_0$-based method is generally superior to the MIP- and $l_{1/2}$-based methods in terms of
out-of-sample error, since $SupO(l_0,MIP)>0$ holds for 90\% (27/30) of the instances and
$SupO(l_0,l_{1/2})>0$ holds for 93.3\% (28/30) of the instances.
\ei
\begin{table}[ht]\label{biao1}
\caption{\footnotesize{The in-sample and out-of-sample tracking errors on small data sets.}}
{\scriptsize \ \centering \renewcommand\arraystretch{1.2} }
\par
\begin{center}
{\scriptsize
\begin{tabular}{ccccccccccc}
\hline
Index & Density & \multicolumn{3}{c}{$l_0$} &
\multicolumn{3}{c}{MIP} & \multicolumn{3}{c}{$l_{1/2}$} \\
& & $TEI$ & $TEO$ & $S_{true}$ & $TEI$ & $TEO$ & $S_{true}$ &$TEI$ & $TEO$ & $S_{true}$ \\ \hline
Hang & 5 & 6.23e-5 & 5.17e-5 & 5 & 5.69e-5 & 8.87e-5 & 5 & 8.36e-5 & 7.07e-5 & 5 \\
Seng & 6 & 4.29e-5 & 3.45e-5 & 6 & 4.85e-5 & 7.82e-5 & 6 & 8.58e-5 & 7.19e-5 & 6\\
($N$=31) & 7 &2.37e-5 &3.83e-5 & 7 & 3.26e-5 & 5.38e-5 & 7 & 6.45e-5 & 4.59e-5 & 7\\
& 8 & 2.38e-5 & 2.50e-5 & 8 & 2.06e-5 & 3.09e-5 & 8 & 3.20e-5 & 2.95e-5 & 8\\
& 9 & 2.00e-5 & 2.16e-5 & 9 & 1.95e-5 & 2.80e-5 & 9 & 3.96e-5 & 2.44e-5 & 9\\
& 10 & 1.58e-5 & 1.55e-5 & 10 & 1.86e-5 & 2.77e-5 & 10 & 2.33e-5 & 2.34e-5 & 10
\vspace{0.2cm} \\
DAX & 5 & 4.10e-5 & 1.08e-4 & 5 & 2.21e-5 & 1.02e-4 & 5 & 4.88e-5 & 1.18e-4 & 5 \\
($N$=85) & 6 & 3.07e-5 & 1.00e-4 & 6 & 1.82e-5 & 9.43e-5 & 6 & 3.86e-5 & 1.13e-4 & 6 \\
& 7 & 2.56e-5 & 9.68e-5 & 7 & 1.47e-5 & 1.02e-4 & 7 & 2.47e-5 & 1.04e-4 & 7\\
& 8 & 1.68e-5 & 8.71e-5 & 8 & 1.48e-5 & 8.78e-5 &8 & 2.66e-5 & 9.36e-5 & 8\\
& 9 & 1.54e-5 & 8.23e-5 & 9 & 1.05e-5 & 8.63e-5 & 9 & 3.44e-5 & 9.72e-5 & 9\\
& 10 & 1.88e-5 & 8.11e-5 & 10 & 8.21e-6 & 7.76e-5 & 10 & 2.23e-5 & 1.03e-4 & 10
\vspace{0.2cm}\\
FTSE & 5 & 1.05e-4 & 8.43e-5 & 5 & 6.92e-5 & 9.87e-5 & 5 & 1.22e-4 & 8.80e-5 & 5\\
($N$=89) & 6 & 7.29e-5 & 8.74e-5 & 6 & 5.50e-5 & 9.14e-5 & 6 & 1.04e-4 & 8.78e-5 & 6 \\
& 7 & 6.83e-5 & 8.18e-5 & 7 & 4.15e-5 & 1.02e-4 & 7 & 6.70e-5 & 9.67e-5 & 7 \\
& 8 & 5.81e-5 & 6.00e-5 & 8 & 3.50e-5 & 7.44e-5 & 8 & 6.11e-5 & 7.10e-5 & 8\\
& 9 & 6.51e-5 & 5.67e-5 & 9 & 2.49e-5 & 8.59e-5 & 9 & 7.08e-5 & 5.72e-5 & 9\\
& 10 & 6.70e-5 & 6.94e-5 & 10 & 2.18e-5 & 8.01e-5 & 10 & 5.43e-5 & 7.27e-5 & 10
\vspace{0.2cm}\\
S\&P & 5 &8.74e-5 & 8.94e-5 & 5 & 4.50e-5 & 1.14e-4 & 5 & 1.02e-4 & 1.14e-4 & 5\\
($N$=98) & 6 & 5.87e-5 & 8.47e-5 & 6 & 3.37e-5 & 1.01e-4 & 6 & 7.93e-5 & 8.88e-5 & 6 \\
& 7 & 3.51e-5 & 7.69e-5 & 7 & 3.36e-5 & 8.93e-5 & 7 & 6.70e-5 & 7.58e-5 & 7 \\
& 8 & 5.50e-5 & 5.75e-5 & 8 & 2.51e-5 & 7.35e-5 & 8 & 6.41e-5 & 6.58e-5 & 8\\
& 9 & 3.71e-5 & 5.09e-5 & 9 & 2.11e-5 & 5.92e-5 & 9 & 5.78e-5 & 6.56e-5 & 9 \\
& 10 & 2.93e-5 & 4.57e-5 & 10 & 1.85e-5 & 5.10e-5 & 10 & 5.22e-5 & 5.07e-5 & 10
\vspace{0.2cm} \\
Nikkei & 5 & 1.34e-4 & 1.32e-4 & 5 & 6.02e-5 & 1.44e-4 & 5 & 1.22e-4 & 1.43e-4& 5\\
($N$=225) & 6 & 9.48e-5 & 9.92e-5 & 6 & 5.13e-5 & 1.20e-4 & 6 & 8.26e-5 & 9.71e-5 & 6\\
& 7 & 7.72e-5 & 9.77e-5 & 7 & 3.93e-5 & 1.11e-4 & 7 & 6.89e-5 & 1.11e-4 & 7\\
& 8 & 9.24e-5 & 8.70e-5 & 8 & 3.12e-5 & 1.18e-4 & 8 & 7.09e-5 & 9.09e-5 &8 \\
& 9 & 4.87e-5 & 7.68e-5 & 9 & 2.78e-5 & 1.18e-4 & 9 & 4.52e-5 & 8.22e-5 & 9\\
& 10 & 6.39e-5 & 6.75e-5 & 10 & 2.36e-5 & 8.25e-5 & 10 & 5.37e-5 & 6.77e-5 & 10\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[ht]\label{biao2}
\caption{\footnotesize{The comparison on small data sets.}}
{\scriptsize \ \centering \renewcommand\arraystretch{1.2} }
\par
\begin{center}
{\scriptsize
\begin{tabular}{ccccccc}
\hline
Index & Density & \multicolumn{1}{c}{$Cons(l_0)$} &
\multicolumn{1}{c}{$Cons(MIP)$} & \multicolumn{1}{c}{$Cons(l_{1/2})$}& $SupO(l_0,MIP)$ &$SupO(l_0,l_{1/2})$ \\ \hline
Hang & 5 & 1.05e-5 & 3.18e-5 & 1.29e-5 & 41.7 & 26.8 \\
Seng & 6 & 8.37e-6 & 2.97e-5 & 1.39e-5 & 55.9 & 52.1 \\
($N$=31) & 7 & 1.46e-5 & 2.13e-5 & 1.86e-5 & 28.8 & 16.4 \\
& 8 & 1.23e-6 & 1.03e-5 & 2.43e-6 & 19.0 & 15.3 \\
& 9 & 1.66e-6 & 8.50e-6 & 1.52e-5 & 22.9 & 11.4 \\
& 10 & \textbf{3.54e-7} & 9.15e-6 & \textbf{8.50e-8} & 44.3 & 33.9
\vspace{0.2cm} \\
DAX & 5 & 6.72e-5 & 7.97e-5 & 6.94e-5 & \textbf{-6.28} & 8.47 \\
($N$=85) & 6 & 6.95e-5 & 7.61e-5 & 7.49e-5 & \textbf{-6.27} & 11.7 \\
& 7 & 7.12e-5 & 8.69e-5 & 7.96e-5 & 4.72 & 7.26 \\
& 8 & \textbf{7.03e-5} & 7.30e-5 & \textbf{6.70e-5} & 0.79 & 6.96 \\
& 9 & \textbf{6.69e-5} & 7.58e-5 & \textbf{6.28e-5} & 4.68 & 15.3 \\
& 10 & 6.23e-5 & 6.94e-5 & 8.11e-5 & \textbf{-4.52} & 21.6
\vspace{0.2cm} \\
FTSE & 5 & 2.11e-5 & 2.95e-5 & 3.40e-5 & 14.6 & 4.27 \\
($N$=89) & 6 & 1.45e-5 & 3.64e-5 & 1.66e-5 & 4.41 & 0.42 \\
& 7 & 1.35e-5 & 6.05e-5 & 2.98e-5 & 19.8 & 15.4 \\
& 8 & 1.85e-6 & 3.94e-5 & 9.95e-6 & 19.3 & 15.5 \\
& 9 & 8.39e-6 & 6.11e-5 & 1.36e-5 & 34.0 & 0.74 \\
& 10 & 2.46e-6 & 5.83e-5 & 1.85e-5 & 13.3 & 4.52
\vspace{0.2cm}\\
S\&P & 5 & 2.10e-6 & 6.93e-5 & 1.17e-5 & 21.7 & 21.3 \\
($N$=98) & 6 & \textbf{2.60e-5} & 6.70e-5 & \textbf{9.48e-6} & 15.9 & 4.66 \\
& 7 & \textbf{4.18e-5} & 5.57e-5 & \textbf{8.80e-6} & 13.9 & \textbf{-1.40} \\
& 8 & \textbf{2.58e-6} & 4.83e-5 & \textbf{1.70e-6} & 21.7 & 12.6\\
& 9 & \textbf{1.38e-5} & 3.81e-5 & \textbf{7.81e-6} & 14.0 & 22.4 \\
& 10 & \textbf{1.64e-5} & 3.25e-5 & \textbf{1.49e-6} & 10.4 & 9.96
\vspace{0.2cm} \\
Nikkei & 5 & 2.10e-6 & 8.39e-5 & 2.14e-5 & 8.28 & 7.81 \\
($N$=225) & 6 & 4.38e-6 & 6.83e-5 & 1.46e-5 & 17.0 & \textbf{-2.11} \\
& 7 & 2.05e-5 & 7.16e-5 & 4.19e-5 & 11.9 & 11.8 \\
& 8 & 5.40e-6 & 8.64e-5 & 2.00e-5 & 26.1 & 4.29 \\
& 9 & 2.81e-5 & 8.98e-5 & 3.70e-5 & 34.8 & 6.60 \\
& 10 & 3.60e-6 & 5.89e-5 & 1.39e-5 & 18.1 & 0.23 \\
\hline
\end{tabular}
}
\end{center}
\end{table}
\subsection{Results on large data sets}\label{test2}
In this subsection, we compare the performance of the $l_0$-based method (i.e., NPG method) with
the MIP- and $l_{1/2}$-based methods (namely, hybrid evolutionary and half thresholding algorithms)
on four large data sets, which are Standard and Poor's 500, Russell 2000, Russell 3000 and the
Chinese index CSI 300. For a fair comparison of the performance of these methods, we tailor
their model parameters so that the resulting portfolios have the same density (i.e., the same number of
nonzero entries).
\begin{table}[ht]
\caption{\footnotesize{The in-sample and out-of-sample tracking errors on large data sets.}}
\label{biao3}{\scriptsize \ \centering \renewcommand\arraystretch{1.2} }
\par
\begin{center}
{\scriptsize
\begin{tabular}{ccccccccccc}
\hline
Index & Density & \multicolumn{3}{c}{$l_0$} &
\multicolumn{3}{c}{MIP} & \multicolumn{3}{c}{$l_{1/2}$} \\
& & $TEI$ & $TEO$ &$S_{true}$ & $TEI$ & $TEO$ &$S_{true}$ &$TEI$ & $TEO$ &$S_{true}$ \\ \hline
& 5 &3.34e-5 &2.19e-5 & 5 &1.21e-5 & 2.43e-5 & 5 & 2.39e-5 & 1.99e-5 & 5 \\
CSI 300 & 6 &2.34e-5 &2.11e-5 & 6 &1.17e-5 & 2.37e-5 & 6 & 1.91e-5 & 2.11e-5 &6 \\
($N$=300) & 7 &1.86e-5 &1.98e-5 & 7 &7.84e-6 &2.36e-5 & 7 & 1.51e-5 &2.09e-5 &7 \\
& 8 &1.67e-5 &1.68e-5 & 8 &7.68e-6 & 2.04e-5 & 8 & 1.42e-5 &1.92e-5 &8 \\
& 9 &1.67e-5 &1.54e-5 & 9 &7.23e-6 &1.85e-5 & 9 & 1.26e-5 &1.63e-5 & 9\\
& 10 &1.13e-5 &1.21e-5 & 10 &6.42e-6 &1.51e-5 & 10 & 1.32e-5 &1.33e-5&10\\
& 20 &6.29e-6 &7.29e-6 & 20 &2.92e-6 &7.65e-6 & 20 & 6.40e-6 &7.64e-6 & 20\\
& 30 &3.72e-6 &5.14e-6 & 30 &2.07e-6 &5.20e-6 & 30 & 4.15e-6 &5.55e-6 & 30\\
& 40 &2.39e-6 &4.17e-6 & 40 &1.58e-6 &7.63e-6 & 40 & 3.05e-6 &5.30e-5 & 40\\
& 50 &2.87e-6 &3.28e-6 & 50 &1.90e-6 &5.00e-6 & 50 & 2.03e-6 &4.53e-6 & ~50
\vspace{0.2cm}\\
& 80 &2.85e-6 &7.82e-5 & 80 &2.65e-6 &9.98e-5 & 80 & 1.37e-5 &9.85e-5 & 80 \\
S\&P & 90 &2.43e-6 &7.52e-5 & 90 &3.01e-6 & 1.24e-4 & 90 & 1.08e-5 &9.98e-5 & 90\\
($N$=457) & 100 &2.13e-6 &7.39e-5 & 100 &2.50e-6 & 9.69e-5 & 100 & 9.08e-6 & 1.04e-4 & 100 \\
& 120 &1.66e-6 &7.59e-5 & 120 & 2.58e-5 & 1.04e-4 & 120 & 6.42e-6 & 9.35e-5 &120 \\
& 150 &1.52e-6 &7.95e-5 & 150 &5.64e-6 & 1.25e-4 & 150 & 5.18e-6 & 1.07e-4 & 150\\
& 200 &1.57e-6 &7.94e-5 & 200 & 2.13e-6 & 9.80e-5 & 200 & 2.72e-6 & 9.09e-5 &~200
\vspace{0.2cm}\\
& 80 &4.02e-6 &2.07e-4 & 80 &3.62e-6 &2.89e-4 & 80 &2.92e-5 &2.34e-4 & 80 \\
Russell 2000 & 90 &3.51e-6 &2.08e-4 & 90 &4.95e-6 &2.76e-4 & 90 &2.76e-5 &2.45e-4 & 90\\
($N$=1318) & 100 &3.18e-6 &1.70e-4 & 100 &2.61e-6 &2.60e-4 & 100 &2.09e-5 &2.13e-4 & 100 \\
& 120 &2.32e-6 &1.68e-4 & 120 &2.80e-6 &2.49e-4 & 120 &1.71e-5 &2.61e-4 & 120\\
& 150 &1.99e-6 &1.94e-4 & 150 &1.16e-5 &2.68e-4 & 150 &1.20e-5 &2.66e-4 & 150 \\
& 200 &9.83e-7 &2.28e-4 & 200 &1.42e-6 &3.31e-4 & 200 &6.89e-6 &3.18e-4 &~200
\vspace{0.2cm}\\
& 80 &6.24e-6 &1.34e-4 & 80 &3.90e-6 &1.70e-4 & 80 &2.62e-5 & 1.64e-4 & 80 \\
Russell 3000 & 90 &5.49e-6 &1.14e-4 & 90 &3.33e-6 &1.21e-4 & 90 &1.99e-5 & 1.47e-4 & 90\\
($N$=2151) & 100 &4.10e-6 &1.05e-4 & 100 &3.48e-6 &1.05e-4 &100 &1.87e-5 & 1.37e-4 & 100 \\
& 120 &2.78e-6 &9.82e-5 & 120 &3.01e-6 &1.06e-4 & 120 &1.66e-5 & 1.26e-4 & 120\\
& 150 &1.63e-6 &1.00e-4 & 150 &2.48e-6 &1.10e-4 & 150 &1.46e-5 & 1.23e-4 & 150\\
& 200 &1.41e-6 &1.06e-4 & 200 &3.22e-6 &1.09e-4 & 200 &1.03e-5 & 1.57e-4 &200\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[ht]\label{biao4}
\caption{The comparison on large data sets.}
{\scriptsize \ \centering \renewcommand\arraystretch{1.2} }
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccccccc}
\hline
Index & Density & \multicolumn{3}{c}{Time} & \multicolumn{1}{c}{$Cons(l_0)$} &
\multicolumn{1}{c}{$Cons(MIP)$} & \multicolumn{1}{c}{$Cons(l_{1/2})$} & $SupO$ &$SupO$ \\
& &$l_0$ & MIP & $l_{1/2}$ & &&& $(l_0,MIP)$ & $(l_0,l_{1/2})$ \\ \hline
& 5 &0.0114 &26.7 &1.96 &4.05e-6&1.22e-5&5.65e-6 & 18.2 &\textbf{-5.24} \\
CSI 300 & 6 &0.0113 &36.0 &2.17 &\textbf{2.32e-6}&1.20e-5&\textbf{1.96e-6} &10.9 &\textbf{-0.08} \\
($N$=300) & 7 &0.0039 &42.1 &2.31 &1.22e-6&1.58e-5&5.83e-6 &16.3 &5.41 \\
& 8 &0.0097 &13.5 &2.18 &1.91e-7&1.27e-5&4.94e-6 &17.3 &12.1 \\
& 9 &0.0078 &17.1 &2.50 &1.35e-6&1.12e-5&3.66e-6 &16.7 &5.44 \\
& 10 &0.0053 &14.6 &2.71 &\textbf{7.95e-7}&8.67e-6&\textbf{1.28e-7} &19.7 &8.72 \\
& 20 &0.0078 &2.84 &4.30 &1.00e-6&4.73e-6&1.23e-6 &4.65 &4.49 \\
& 30 &0.0060 &1.97 &6.47 &\textbf{1.42e-6}&3.13e-6&\textbf{1.40e-6 } &1.21 &7.43 \\
& 40 &0.0064 &2.20 &6.85 &1.78e-7&6.05e-6&2.25e-6 &45.4 &21.4 \\
& 50 &0.0083 &1.76 &7.65 &4.10e-7&3.11e-6&2.50e-6 &34.5 &27.8
\vspace{0.2cm}\\
& 80 &0.0271 &63.6 &8.64 &7.53e-5&9.72e-5&8.48e-5 &21.7 &20.7 \\
S\&P & 90 &0.0207 &49.0 &10.2 &7.28e-5&1.21e-4&8.90e-5 &39.1 &24.6 \\
($N$=457) & 100 &0.0199 &77.0 &15.3 &7.17e-5&9.44e-5&9.47e-5 &23.7 &28.8 \\
& 120 &0.0187 &86.9 &13.3 &7.42e-5&1.02e-4&8.71e-5 &27.3 &18.8 \\
& 150 &0.0184 &58.7 &13.5 &7.80e-5&1.20e-4&1.01e-4 &36.6 &25.4 \\
& 200 &0.0197 &689.3 &13.7 &7.78e-5&9.58e-5&8.82e-5 &19.0 &12.7
\vspace{0.2cm}\\
& 80 &0.153 &577.7 &35.7 &2.03e-4&2.85e-4&2.05e-4 &28.3 &11.6 \\
Russell 2000 & 90 &0.137 &352.6 &27.5 &2.04e-4&2.71e-4&2.17e-4 &24.7 &15.0 \\
($N$=1318) & 100 &0.148 &657.8 &38.4 &1.67e-4&2.58e-4&1.92e-4 &34.6 &20.1 \\
& 120 &0.149 &449.1 &47.2 &1.65e-4&2.46e-4&2.44e-4 &32.6 &35.6 \\
& 150 &0.113 &50.6 &56.5 &1.92e-4&2.56e-4&2.54e-4 &27.6 &27.3 \\
& 200 &0.095 &1352.7 &46.4 &2.27e-4&3.29e-4&3.11e-4 &30.9 &28.2
\vspace{0.2cm}\\
& 80 &0.626 &861.1 &37.1 &1.28e-4&1.66e-4&1.38e-4 &21.0 &18.6 \\
Russell 3000 & 90 &0.267 &1039.5 &47.9 &1.08e-4&1.18e-4&1.27e-4 &6.00 &22.3 \\
($N$=2151) & 100 &0.269 &913.1 &48.5 &1.01e-4&1.02e-4&1.19e-4 &0.05 &23.5 \\
& 120 &0.248 &658.7 &88.0 &9.54e-5&1.03e-4&1.09e-4 &7.26 &21.8 \\
& 150 &0.216 &878.7 &74.9 &9.83e-5&1.08e-4&1.09e-4 &9.34 &18.9 \\
& 200 &0.342 &1999.9 &97.9 &1.05e-4&1.05e-4&1.47e-4 &2.30 &32.4 \\
\hline
\end{tabular}
}
\end{center}
\end{table}
Numerical results are reported in Tables 3 and 4, where $N$ denotes the number of assets
in a data set. In particular, we present in Table 3 the in-sample
and out-of-sample errors of the portfolios generated by the above three methods. In
Table 4, we present the CPU time of these methods, as well as the consistency and the superiority of out-of-sample errors of
the portfolios given by these methods. The number of nonzero entries of the portfolios given by these methods is
listed in the column named ``density''. We can make the following observations from Table 4.
\bi
\item[(i)]
The $l_0$-based method (i.e., NPG method) generally has higher consistency between in-sample error
and out-of-sample error than the MIP- and $l_{1/2}$-based methods (namely, hybrid evolutionary and
half thresholding algorithms), since $Cons(l_0)<Cons(MIP)$ holds for 100\% (28/28) of the
instances and $Cons(l_0)<Cons(l_{1/2})$ holds for 89.3\% (25/28) of the instances.
\item[(ii)]
The $l_0$-based method is generally superior to the MIP- and $l_{1/2}$-based methods in terms of
out-of-sample error, since $SupO(l_0,MIP)>0$ holds for all instances and $SupO(l_0,l_{1/2})>0$ holds for 92.9\% (26/28) of the instances.
\item[(iii)]
The $l_0$-based method also generally outperforms the MIP- and $l_{1/2}$-based methods
in terms of speed.
\ei
\section{Concluding remarks} \label{conclude}
In this paper we proposed an index tracking model with budget, no-short-selling and cardinality constraints. We also developed an efficient nonmonotone projected gradient (NPG) method for solving
this model. At each iteration, this method usually solves several projected gradient subproblems. We
showed that each subproblem has a closed-form solution, which can be computed in linear time.
Under some suitable assumptions, we showed that any accumulation point of the sequence
generated by the NPG method is a local minimizer of the cardinality-constrained index tracking problem. We also conducted empirical tests on the data sets from OR-library \cite{OR_library} and
the CSI 300 index from China Shanghai-Shenzhen stock market to compare our method with the
hybrid evolutionary algorithm \cite{Evolutionary} and the hybrid half thresholding algorithm
\cite{L1/2} for index tracking. The computational results demonstrate that our approach
generally produces sparse portfolios with smaller out-of-sample tracking error and higher
consistency between in-sample and out-of-sample tracking errors. Moreover, our method
outperforms the other two approaches in terms of speed.
We shall mention that the proposed NPG method in this paper can be used to solve the subproblems arising
in the penalty method or the augmented Lagrangian method when applied to the more general problem
\[
\ba{ll}
\min\limits_{x \in \cFr} & f(x) \\
\mbox{s.t.} & g(x) \le 0, \ h(x) = 0
\ea
\]
for some $g:\Re^n \to \Re^p$ and $h:\Re^n \to \Re^q$, where $\cFr$ is given in \eqref{cFr}.
\section*{Acknowledgment}
The authors would like to thank the two anonymous referees for their constructive comments which
substantially improved the presentation of the paper.
\section{Introduction}\label{s:introduction}
Functionals of the supremum of fractional Brownian motion
play a crucial role in many problems arising both in theoretical probability and in its applications,
such as statistics, financial mathematics, risk theory and queueing theory; see e.g.
\cite{AdT07,DeM15, Man07, Pit96}.
Unfortunately, despite intensive research on their properties, apart from some particular cases --- reduced mostly to standard Brownian motion --- the exact values of such functionals are not known.
Let $ \{B_H (t): t \in \mathbb{R}_+ \} $ be a fractional Brownian motion (fBm) with Hurst parameter $ H \in (0,1) $,
\begin{equation}\label{eq:fbm_cov}
\cov(B_H(s), B_H(t)) = \tfrac{1}{2}\Big(|s|^{2H} + |t|^{2H} - |s-t|^{2H}\Big)
\end{equation}
for all $s,t\in\mathbb{R}_+$. In this manuscript, we consider the following two families of functionals
\begin{equation}\label{def:MH_PH}
\mathscr M_H(T,a) := \mathbb{E} \Big(\sup_{t\in[0,T]} B_H(t)-a t\Big),
\quad \mathscr P_H(T,a) :=\mathbb{E}\Big(\exp\big\{\sup_{t\in[0,T]} \sqrt{2}B_H(t)-a t^{2H}\big\}\Big),
\end{equation}
where $a\in\mathbb{R}$ is the intensity of the drift and $T\in\mathbb{R}_+\cup\{\infty\}$ is the time horizon.
These functionals cover a range of interesting quantities in the
extreme value theory of Gaussian processes. In particular:
\begin{itemize}
\item[(i)] For $a\in\mathbb{R}$ and $T>0$,
the quantity $\mathscr M_H(T,a)$ is the expected value of the workload in the
fluid queue with an fBm input at time $T$ under the assumption that at time $0$ the system starts off empty.
Analogously, if $a>0$ and $T=\infty$, then $\mathscr M_H(\infty,a)$
is the expected stationary workload of a queue with an fBm input, see e.g.
\cite{Man07, DeM15};
\item[(ii)] for $T>0$, the quantity $\mathscr H_H(T) := \mathscr P_H(T,1)$ is known as \emph{Wills functional} (\textit{truncated Pickands constant}),
see e.g. \cite{Vit96, DeH20} and references therein;
\item[(iii)] for $a>1$, the quantity $\mathscr P_H(\infty,a)$ is known as \emph{Piterbarg constant}; see e.g.
\cite{PiP78, BDHL18} and references therein;
\item[(iv)] the quantity $\mathscr H_H := \lim_{T\to\infty} \frac{1}{T} \mathscr H_H(T)$ is known as \emph{Pickands constant};
see e.g. \cite{Pit96,PiP78}.
\end{itemize}
The values of functionals $\mathscr M_H(T,a)$ and $\mathscr P_H(T,a)$ are notoriously difficult to find in cases other than $H\in\{\tfrac{1}{2},1\}$.
Thus, most of the work is
focused on
finding upper and lower bounds for these quantities (see, e.g. \cite{Sha96,DMR03, DeK08,BDHL18, Bor17, Bor18, BDM21})
or determining their asymptotic behavior in various settings
(as $H$ goes to $0$, $H$ goes to $1$, $T$ grows large, $a$ goes to $0$ etc.);
see, e.g., \cite{harper2017, BDM21}.
We note that in many cases simulation methods do not help in estimation of $\mathscr P_H(T,a)$.
For example, the random variable whose expectation defines $\mathscr P_H(\infty,a)$ has no second moment when $a<2$,
which makes it very difficult to assess error bounds in this case.
When $H\to0$, the approximation error resulting from simulation becomes overwhelming, see \cite{DiY14, DiM15}.
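For moderate $H$, finite $T$ and the functional $\mathscr M_H(T,a)$, plain Monte Carlo is nevertheless feasible. The sketch below (our own illustration; the function name \texttt{mc\_M} and all parameter choices are ours) simulates fBm on a grid via a Cholesky factorization of the covariance \eqref{eq:fbm_cov}; it is adequate only away from the problematic regimes discussed above and carries a downward discretization bias.
\begin{verbatim}
import numpy as np

def mc_M(H, T, a, n=500, reps=2000, seed=0):
    # crude Monte Carlo estimate of M_H(T,a) = E sup_{[0,T]} (B_H(t) - a t)
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)[1:]               # grid without t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    Z = rng.standard_normal((reps, n))
    paths = Z @ L.T - a * t                          # rows: B_H(t_i) - a t_i
    return np.maximum(paths.max(axis=1), 0.0).mean() # sup includes t = 0

print(mc_M(0.5, 1.0, 0.0))   # ~ sqrt(2/pi) = 0.7979 for Brownian motion
\end{verbatim}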
In recent years, Delorme, Wiese and others studied the behavior of the supremum of fractional Brownian motion and its location for $H\approx \tfrac{1}{2}$ using perturbation theory \cite{DelormeEtAl2017Pickands, DelormeandWiese2016Extreme, DelormeWiese2015Maximum, delorme2016perturbative}. Our work was initially inspired by their result in \cite{DelormeEtAl2017Pickands}, where the following expansion for the Pickands constant (see item (iv) above) was derived:
\begin{equation}\label{eq:pickands_expansion}
\mathscr H_H = 1-2 (H-\tfrac{1}{2}) \gamma_{\rm E} + O\big((H-\tfrac{1}{2})^2\big), \quad H\to\tfrac{1}{2},
\end{equation}
where $\gamma_{\rm E}$ is the Euler-Mascheroni constant.
The main goal of this contribution is to develop tools for deriving expansions similar to \eqref{eq:pickands_expansion} for the functionals introduced in Eq.~\eqref{def:MH_PH}. In particular, we find explicit formulas for the derivatives
of these functionals evaluated at $H=\tfrac{1}{2}$, i.e.
\begin{equation}\label{def:MH_PH_derivatives}
\mathscr M_{1/2}'(T,a) := \frac{\partial}{\partial H} \mathscr M_H(T,a)\Big|_{H=1/2} \quad \text{and} \quad \mathscr P_{1/2}'(T,a) := \frac{\partial}{\partial H}\mathscr P_H(T,a)\Big|_{H=1/2}.
\end{equation}
For these purposes, we consider a probability space on which fractional Brownian motions with all values of
$H\in(0,1)$ are coupled in a non-trivial way. In particular, we consider the Mandelbrot \& van Ness (MvN) fractional
Brownian motion introduced in the seminal work \cite{mandelbrot1968fractional};
see Eq.~\eqref{eq:def_X} below for the precise definition. Following
\cite{peltier1995multifractional} the realizations of MvN field can be viewed as two-dimensional \emph{random surfaces} $(H,t)\mapsto B_H(t)$, as opposed to one-dimensional trajectories $t\mapsto B_H(t)$ for each fixed $H\in(0,1)$. While these random surfaces are non-differentiable in the time direction (with respect to $t$), they turn out to be smooth functions of the Hurst parameter (with respect to $H$). Therefore, following Stoev and Taqqu \cite{StoevTaqqu04, StoevTaqqu05} we define the derivatives of the MvN field with respect to the Hurst parameter $H$. Intuitively speaking, the concept of $H$-derivative of fBm allows us to rigorously write $\frac{\partial}{\partial H}\mathbb{E} B_H(\tau) = \mathbb{E} \frac{\partial}{\partial H}B_H(\tau)$, where $\tau$ is some (well-behaved) random time and $\{\frac{\partial}{\partial H}B_H(t), t\in\mathbb{R}_+\}$ is a certain explicitly defined Gaussian process. It will turn out that in our context, the latter expression (i.e. $\mathbb{E} \frac{\partial}{\partial H}B_H(\tau)$) is tractable (explicitly computable).
We note that Stoev and Taqqu considered a broader class of fractional $\alpha$-stable fields, while for the purposes of this paper we limit ourselves to the Gaussian case, i.e. $\alpha=2$, which corresponds to fractional Brownian motion and, in particular, the MvN field. See also \cite[Eq.~(1.9)]{StoevTaqqu05} with $(a_+,a_-) = (1,0)$ and \cite[Chapter~7.4]{samorodnitsky1994stable} for more information on fractional $\alpha$-stable fields. Focusing on the Gaussian case, we strengthen some of the results derived by the authors. In particular, we show sample path continuity of the derivative fields (Proposition~\ref{prop:PWZ_continuous}) and strengthen \cite[Lemma 4.1]{StoevTaqqu04} to almost sure convergence for all $(H,t)\in(0,1)\times\mathbb{R}_+$ (Proposition~\ref{prop:nth_derivative_limit_PWZ}). These propositions are then used in the proofs of main results.
Finally, we propose a Paley-Wiener-Zygmund (PWZ) representation of MvN field and its derivatives. We show that PWZ field is a continuous modification of MvN field (Proposition~\ref{prop:continuous_modification}) and the difference quotients of $n$-th derivative converge everywhere to $(n+1)$-st derivative almost surely (Proposition~\ref{prop:nth_derivative_limit_PWZ}). The PWZ representation is defined in terms of Riemann integrals, as opposed to stochastic integrals in the original Mandelbrot \& van Ness' definition. In the context of this manuscript this representation is more tractable and allows us to express the main quantities of interest, cf.~\eqref{def:MH_PH_derivatives}, as integrals of elementary functions, which we then calculate explicitly in special cases, see Section~\ref{s:functionals_supremum}.
The manuscript is organized as follows.
In Section~\ref{s:preliminaries} we define fractional Brownian motion through its MvN and PWZ representations. We also recall the facts related to the joint distribution of the supremum of drifted Brownian motion and its argmax. In particular, we introduce the conditional distribution of Brownian motion, conditioned on the value of the supremum and its argmax, which follows the law of the generalized 3-dimensional Bessel bridge.
In Section~\ref{sec:main} we state main results of this paper. In Theorem 1 we give a formula for $\mathscr M'_{1/2}(T,a)$ and in Theorem 2 for $\mathscr P'_{1/2}(T,a)$ in terms of integrals of explicit elementary functions. More explicit formulae for
$\mathscr M'_{1/2}(T,0)$, $\mathscr M'_{1/2}(\infty,a)$ and $\mathscr P'_{1/2}(\infty,a)$ are given in Corollary~\ref{cor:explicit_expressions}. Additionally, in Corollary~\ref{pit.in} we show that Piterbarg constants $\mathscr P_H(\infty,a)$ are monotone as functions of Hurst parameter. While this result is not directly related to our topic, it is a direct consequence of Proposition~\ref{pit.ine} and it might be of independent interest.
In Section~\ref{s:MvN} we define and examine the $H$-derivatives of fractional Brownian motion both with the use of MvN and PWZ representations.
In Section~\ref{s:proofs_main_theorems} we give the proofs of main theorems. Since they require quite a lot of different results, throughout this section we introduce various preliminary results. Some of them, for the argmax of $B_H(t)-at$ and $B_H(t)-at^{2H}$ are presented in Proposition~\ref{prop:tau_uniform_tightness} and Proposition~\ref{prop:tau_continuity} respectively. More technical results are deferred to appendices.
In Appendix~\ref{appendixA} we show the equivalence between $\mathcal L^2$ and Paley-Wiener-Zygmund stochastic integrals. In Appendix~\ref{appendix:proofs} we present proofs of results from Section~\ref{s:preliminaries} and Section~\ref{s:MvN}. In Appendix~\ref{appendix:auxiliary_results} we state results needed in the proof of Theorem~\ref{thm:supremum_derivative_limit} and Theorem~\ref{thm:supremum_derivative_pickands_limit}, which can be also of independent interest such as monotonicity of Piterbarg constants and bounds on moments of the supremum of a Gaussian process at a random time. Finally, in Appendix~\ref{appendix:calculations} we write down all the calculations needed to prove Corollary~\ref{cor:explicit_expressions}.
\section{Preliminaries}\label{s:preliminaries}
Let $\{B(t) : t\in\mathbb{R}\}$ be a standard, two-sided Wiener process and let $\mathbb{R}_+ := [0,\infty)$.
In their seminal paper \cite{mandelbrot1968fractional}, Mandelbrot and van Ness introduced a family of processes $\{X_H(t), t\in\mathbb{R}_+\}$ for $H\in(0,1)$, where
\begin{equation}\label{eq:def_X}
X_H(t) = \int_{-\infty}^0 \big[(t-s)^{H-\tfrac{1}{2}} - (-s)^{H-\tfrac{1}{2}}\big]{\rm d}B(s) + \int_0^t (t-s)^{H-\tfrac{1}{2}}{\rm d}B(s).
\end{equation}
For each $H\in(0,1)$, $\{X_H(t) : t\in\mathbb{R}_+\}$ is a centered Gaussian process with
\begin{align*}
\cov(X_H(s), X_H(t)) = V(H) \cdot \cov(B_H(s), B_H(t)),
\end{align*}
where, $B_H(t)$ is a fractional Brownian motion introduced in \eqref{eq:fbm_cov}, and
\begin{align*}
V(H) := \int_0^\infty \left((1+s)^{H-\tfrac{1}{2}}-s^{H-\tfrac{1}{2}}\right)^2{\rm d}s + \int_0^1 s^{2H-1}{\rm d}s = \frac{\Gamma(\tfrac{1}{2}+H)\Gamma(2-2H)}{2H\Gamma(\tfrac{3}{2}-H)},
\end{align*}
see e.g. \cite[Appendix A]{mishura2008stochastic},
where the explicit formula for $V(H)$ is derived.
This shows that, up to the scaling factor, for each $H\in(0,1)$, process $\{X_H(t):t\in\mathbb{R}_+\}$ is a fractional Brownian motion. Therefore, we call $\{X_H(t):(H,t)\in(0,1)\times\mathbb{R}_+\}$ the \textit{Mandelbrot \& van Ness} (MvN) fractional Brownian field. At the same time, there exists another representation $\{\tilde X_H(t) : (H,t)\in(0,1)\times\mathbb{R}_+\}$ of MvN field,
\begin{equation}\label{def:PWZ_n0}
\begin{split}
\tilde X_H(t) & := (H-\tfrac{1}{2})\int_{-\infty}^0 \Big[(t-s)^{H-\tfrac{3}{2}}-(-s)^{H-\tfrac{3}{2}}\Big]B(s){\rm d}s + t^{H-\tfrac{1}{2}}B(t)\\
&\quad - (H-\tfrac{1}{2})\int_0^t (t-s)^{H-\tfrac{3}{2}}\big(B(t)-B(s)\big){\rm d}s,
\end{split}
\end{equation}
which we call the \textit{Paley-Wiener-Zygmund} (PWZ) representation. In Section~\ref{s:MvN} it is shown that the field $\{\tilde X_H(t) : (H,t)\in(0,1)\times\mathbb{R}_+\}$ is a modification of the MvN field whose sample paths $(H,t)\mapsto \tilde X_H(t)$ are continuous almost surely; see Proposition~\ref{prop:continuous_modification}.
From now on, the fractional Brownian motion is defined through the process
$X_H(\cdot)$, i.e. with $D(H) := (V(H))^{-1/2}$, we put
\begin{equation}\label{eq:fbm}
B_H(t) := D(H) \cdot X_H(t).
\end{equation}
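As a sanity check, the closed form for $V(H)$ can be verified numerically against its defining integrals; the Python sketch below is our own (the names \texttt{V\_integral} and \texttt{V\_closed} are ours), and for $H<\tfrac{1}{2}$ the quadrature may emit accuracy warnings due to the integrable singularity at the origin.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def V_integral(H):
    i1, _ = quad(lambda s: ((1 + s)**(H - 0.5) - s**(H - 0.5))**2,
                 0, np.inf)
    return i1 + 1.0 / (2 * H)              # int_0^1 s^{2H-1} ds = 1/(2H)

def V_closed(H):
    return gamma(0.5 + H) * gamma(2 - 2*H) / (2 * H * gamma(1.5 - H))

for H in (0.25, 0.5, 0.75):
    print(H, V_integral(H), V_closed(H))   # the two columns agree
\end{verbatim}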
For any $a\in\mathbb{R}$ we define
\begin{equation*}
Y_H(t; a) := B_H(t) - a t, \quad \text{and} \quad Y^*_H(t; a) := B_H(t) - at^{2H}.
\end{equation*}
Notice that $Y_{1/2}^* = Y_{1/2}$ for all $a\in\mathbb{R}$. Furthermore, we define the suprema of these processes and their locations:
\begin{align*}
\overline Y_H(T, a) & := \sup_{t\in[0,T]} Y_H(t;a), \quad \tau_H(T,a) := \argmax_{t\in[0,T]} Y_H(t; a), \\
\overline Y^*_H(T, a) & := \sup_{t\in[0,T]} Y^*_H(t;a), \quad \tau^*_H(T,a) := \argmax_{t\in[0,T]} Y^*_H(t; a).
\end{align*}
It is known that when $a\in\mathbb{R}$ and $T<\infty$, or $a>0$ and $T=\infty$ then $\overline Y$, $\overline Y^*$ are almost surely finite and $\tau_H$ and $\tau^*_H$ are well-defined (unique) and almost surely finite; see \cite{Ferger1999uniqueness} citing \cite{Lifshits1982absolute}.
Next, we recall the formulae for the joint density of the supremum of (drifted) Brownian motion over $[0,T]$ and its argmax. In the following, for $z\in\mathbb{R}$ we define error function and complementary error function respectively:
\begin{align*}
\erf(z) := \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}{\rm d}t, \quad \erfc(z) := 1-\erf(z).
\end{align*}
For brevity of exposition, in this section we write $Y(t) := Y_{1/2}(t;a)$, $\overline Y(T) = \overline Y_{1/2}(T,a)$, and $\tau(T) := \tau_{1/2}(T,a)$. When $a\in\mathbb{R}$ and $T>0$, according to \cite{shepp1979joint} we have:
\begin{align*}
\frac{\mathbb{P}(\tau(T)\in{\rm d}t, \bar Y(T)\in{\rm d}y, Y(T)\in{\rm d}x)}{{\rm d}t\,{\rm d}y\,{\rm d}x} = \frac{1}{\pi} \frac{y(y-x)}{t^{3/2}(T-t)^{3/2}}\exp\left\{-\frac{y^2}{2t} - \frac{(y-x)^2}{2(T-t)}\right\}e^{-ax - a^2T/2}.
\end{align*}
Integrating the above density with respect to $x$ over the domain $x<y$, we recover the joint density of the pair $(\tau(T), \bar Y(T))$:
\begin{align}\nonumber
p(t,y; T,a) & := \frac{\mathbb{P}(\tau(T)\in{\rm d}t, \bar Y(T)\in{\rm d}y)}{{\rm d}t\,{\rm d}y}\\
\label{eq:density:T} & = \frac{ye^{-\tfrac{(y+ta)^2}{2t}}}{\sqrt{\pi}t^{3/2}} \cdot\left(\frac{e^{-a^2(T-t)/2}}{\sqrt{\pi(T-t)}} + \frac{a}{\sqrt{2}}\cdot \erfc\big(-a\sqrt{\tfrac{T-t}{2}}\big)\right)
\end{align}
for $(t,y)\in(0,T)\times\mathbb{R}_+$; see also \cite[2.1.13.4]{borodin2002handbook}. When $a>0$, then the pair $(\tau(\infty), \bar Y(\infty))$ is well-defined, with density
\begin{equation}\label{eq:density:infty}
p(t,y;\infty,a) = \frac{\sqrt{2}\,aye^{-\tfrac{(y+ta)^2}{2t}}}{\sqrt{\pi}t^{3/2}}
\end{equation}
for $(t,y)\in\mathbb{R}^2_+$; see e.g. \cite[2.1.13.4(1)]{borodin2002handbook}.
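For later numerical use, the densities \eqref{eq:density:T} and \eqref{eq:density:infty} translate directly into code; the Python sketch below is our own (the names \texttt{p\_T} and \texttt{p\_inf} are ours), and the final mass check is slow but converges.
\begin{verbatim}
import numpy as np
from scipy.special import erfc
from scipy.integrate import dblquad

def p_T(t, y, T, a):
    # joint density of (argmax, sup) of B(s) - a*s on [0,T]
    core = y * np.exp(-(y + t*a)**2 / (2*t)) / (np.sqrt(np.pi) * t**1.5)
    tail = (np.exp(-a**2 * (T - t) / 2) / np.sqrt(np.pi * (T - t))
            + a / np.sqrt(2) * erfc(-a * np.sqrt((T - t) / 2)))
    return core * tail

def p_inf(t, y, a):
    # T = infinity case, valid for a > 0
    return (np.sqrt(2) * a * y * np.exp(-(y + t*a)**2 / (2*t))
            / (np.sqrt(np.pi) * t**1.5))

# mass check for a = 1: the density integrates to 1
print(dblquad(lambda y, t: p_inf(t, y, 1.0), 0, np.inf, 0, np.inf)[0])
\end{verbatim}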
When $a\in\mathbb{R}$ and $T>0$, or $a>0$ and $T=\infty$, then, conditionally on $\bar Y(T) = y, \tau(T) = t$, the process $\{Y(t)-Y(t-s)\}_{s\in[0,t]}$ has the law of the \textit{3-dimensional Bessel bridge from $(0,0)$ to $(t,y)$}; see e.g. \cite[Prop.~2]{asmussen1995discretization}. The law of this process depends neither on the value of the drift $a$ nor on the time horizon $T$. Moreover, the density of the marginal distribution of this process is known. In the following, for $t,y>0$ and $s\in(0,t)$ we define
\begin{equation*}
g(x,s;t,y) := \frac{\mathbb{P}(Y(t)-Y(t-s)\in{\rm d}x \mid \tau(T)=t, \bar Y(T)=y)}{{\rm d}x}
\end{equation*}
Then, according to e.g. \cite[Theorem 1(ii)]{imhof1984density} we have
\begin{equation}\label{eq:W_dens}
g(x,s;t,y) = \frac{\frac{x}{s^{3/2}} \exp\{-\frac{x^2}{2s}\}}{\frac{y}{t^{3/2}}\exp\{-\frac{y^2}{2t}\}} \cdot \frac{1}{\sqrt{2\pi(t-s)}}\left[e^{-\frac{(x-y)^2}{2(t-s)}} - e^{-\frac{(x+y)^2}{2(t-s)}} \right]\mathds{1}{x < y}.
\end{equation}
The following functional of the 3-dimensional Bessel bridge from $(0,0)$ to $(t,y)$ will be important later on
\begin{equation}\label{def:I}
I(t,y) := \mathbb{E} \left(\int_0^t \frac{Y(t)-Y(t-s)}{s}{\rm d}s \mid \tau(T) = t, \overline Y(T) = y\right).
\end{equation}
Using the fact that we have an explicit formula for the density $g(\cdot)$ in Eq.~\eqref{eq:W_dens}, we can express $I(t,y)$ as a double integral; see the proposition below, whose proof is given in Appendix~\ref{appendix:proofs}.
\begin{proposition}\label{prop:I(t,y)}
For any $t,y>0$ we have
\begin{equation*}
I(t,y) := \sqrt{\frac{2}{\pi}} \cdot ty^{-1} \int_0^\infty\int_0^\infty \frac{x^2}{q(1+q^2)^2}\left(e^{-(x-yq/\sqrt{t})^2/2} - e^{-(x+yq/\sqrt{t})^2/2}\right){\rm d}x{\rm d}q.
\end{equation*}
\end{proposition}
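Proposition~\ref{prop:I(t,y)} makes $I(t,y)$ directly computable by numerical quadrature, e.g. with the following Python sketch (our own; the function name \texttt{I} is ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

def I(t, y):
    # double integral of Proposition prop:I(t,y); c = y / sqrt(t)
    c = y / np.sqrt(t)
    f = lambda x, q: x**2 / (q * (1 + q**2)**2) * (
            np.exp(-(x - c*q)**2 / 2) - np.exp(-(x + c*q)**2 / 2))
    val, _ = dblquad(f, 0, np.inf, 0, np.inf)   # q outer, x inner
    return np.sqrt(2 / np.pi) * t / y * val

print(I(1.0, 1.0))
\end{verbatim}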
\section{Main results}\label{sec:main}
\label{s:functionals_supremum}
Recall the definitions of the functionals $\mathscr M_H$ and $\mathscr P_H$ introduced in the Introduction in Eq.~\eqref{def:MH_PH}. We can rewrite them in terms of the expectations of the random variables $\overline Y$ and $\overline Y^*$, i.e.
\begin{equation*}
\mathscr M_H(T,a) = \mathbb{E} \Big(\overline Y_H(T,a)\Big), \quad \mathscr P_H(T,a) = \mathbb{E} \Big(\exp\{\sqrt{2}\cdot\overline Y^*_H(T,\tfrac{a}{\sqrt{2}})\}\Big).
\end{equation*}
We shall derive formulas for the derivative of functions $H\mapsto \mathscr M_H(T,a)$ and $H\mapsto \mathscr P_H(T,a)$ evaluated at $H=\tfrac{1}{2}$. Following Eq.~\eqref{def:MH_PH_derivatives}, in what follows let
\begin{align*}
\mathscr M'_{1/2}(T,a) := \lim_{H\to1/2} \frac{\mathscr M_H(T,a) - \mathscr M_{1/2}(T,a)}{H-\tfrac{1}{2}}, \quad \mathscr P'_{1/2}(T,a) := \lim_{H\to1/2} \frac{\mathscr P_H(T,a) - \mathscr P_{1/2}(T,a)}{H-\tfrac{1}{2}}.
\end{align*}
\begin{theorem}\label{thm:supremum_derivative_limit}
If $a\in\mathbb{R}$ and $T>0$ or $a>0$ and $T=\infty$, then
\begin{equation}\label{eq:supremum_derivative_limit_integral}
\mathscr M'_{1/2}(T,a) = \int_0^T\int_0^\infty\Big(y(1+\log(t)) + at\log(t) - I(t,y)\Big)p(t,y;T,a) {\rm d}y{\rm d}t.
\end{equation}
\end{theorem}
\begin{theorem}\label{thm:supremum_derivative_pickands_limit}
If $a\in\mathbb{R}$ and $T>0$ or $a>1$ and $T=\infty$, then
\begin{equation}\label{eq:supremum_derivative_pickands_limit_integral}
\mathscr P'_{1/2}(T,a) = \int_0^T\int_0^\infty\sqrt{2}\Big(y(1+\log(t)) - \tfrac{a}{\sqrt{2}}\,t\log(t) - I(t,y)\Big)e^{\sqrt{2}y}p(t,y;T,\tfrac{a}{\sqrt{2}}) {\rm d}y{\rm d}t.
\end{equation}
\end{theorem}
In order to gain a better intuitive understanding of the main results, we provide an outline of the proof of Theorem~\ref{thm:supremum_derivative_limit} below;
the proof of Theorem~\ref{thm:supremum_derivative_pickands_limit} will be similar. Full proofs of these theorems are given in Section~\ref{s:proofs_main_theorems}.
{\it Outline of the proof of Theorem~\ref{thm:supremum_derivative_limit}.} The main part of the proof is to show that
\begin{align*}
\frac{\partial}{\partial H} \Big[\mathbb{E}\, Y_H(\tau_{H}(T,a),a)\Big]\Big\vert_{H=1/2} = \mathbb{E} \left[\frac{\partial}{\partial H}Y_H(\tau_{1/2}(T,a),a)\Big\vert_{H=1/2}\right],
\end{align*}
where the expression on the left is, by definition, equal to $\mathscr M'_{1/2}(T,a)$. In other words, we may swap the order of taking the expected value and differentiation in the definition of $\mathscr M'_{1/2}(T,a)$, and swap $\tau_H(T,a)$ with $\tau_{1/2}(T,a)$ above. The derivative $\frac{\partial}{\partial H}Y_H(t,a)$ is understood pointwise, for every fixed $t\in\mathbb{R}_+$. In the proofs we will need an $H$-calculus, which is formally introduced and worked out in Section~\ref{s:MvN}. Here we need only the first derivative
$$
X_H^{(1)}(t) = \int_{-\infty}^0 \big[\log(t-s)(t-s)^{H-\tfrac{1}{2}} - \log(-s)(-s)^{H-\tfrac{1}{2}}\big]{\rm d}B(s) + \int_0^t \log(t-s)(t-s)^{H-\tfrac{1}{2}}{\rm d}B(s).$$
In $H$-calculus, the Leibniz formula is valid and the $H$-derivative at $H=1/2$ of the fBm $\frac{\partial}{\partial H}B_H(t)$ is $X(t)+X^{(1)}(t)$. This is derived later in Section~\ref{s:MvN}, see \eqref{eq:Bn_linear_combination}.
As soon as the equation above is established, we find that
\begin{equation*}
\frac{\partial}{\partial H}Y_H(\tau_{1/2}(T,a),a)\Big\vert_{H=1/2} = X(\tau) + X^{(1)}(\tau),
\end{equation*}
where $\tau := \tau_{1/2}(T,a)$. Finally, $\mathscr M'_{1/2}(T,a)$ is equal to the expectation of the expression above
and it can be expressed as the definite integral in Eq.~\eqref{eq:supremum_derivative_limit_integral}
using PWZ representations of $X$ and $X^{(1)}$ and the fact that distribution of
Brownian motion conditioned on its supremum and time of supremum is known; see
Section~\ref{s:preliminaries} for more information. \hfill $\Box$\\
We note that the derivatives $\mathscr M_{1/2}'(T,a)$ and $\mathscr P_{1/2}'(T,a)$ in \eqref{eq:supremum_derivative_limit_integral}
and \eqref{eq:supremum_derivative_pickands_limit_integral} are expressed as definite integrals.
Thus, they can be computed numerically for any drift $a$ and time horizon $T$,
which satisfy the assumptions of Theorem~\ref{thm:supremum_derivative_limit} and
Theorem~\ref{thm:supremum_derivative_pickands_limit} respectively.
In addition, we were able to calculate these derivatives explicitly in special cases; see the corollary below.
\begin{corollary}\label{cor:explicit_expressions} It holds that
\begin{itemize}
\item[(i)] if $T>0$, then $\displaystyle \mathscr M'_{1/2}(T,0) = \sqrt{\frac{2T}{\pi}} \cdot (\log(T)-2)$,
\item[(ii)] if $a>0$, then $\displaystyle \mathscr M'_{1/2}(\infty,a) = -\frac{1}{a}\left(\gamma_{{\rm E}} +\log(2a^2)\right)$,
\item[(iii)] if $a > 1$, then $\displaystyle \mathscr P'_{1/2}(\infty,a) = -\frac{2a}{a-1} \Big( 1 + (a-2)\log\big(\tfrac{a-1}{a}\big)\Big)$.
\end{itemize}
\end{corollary}
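These closed forms can be cross-checked against the integral representation of Theorem~\ref{thm:supremum_derivative_limit}. The crude Python sketch below (our own; it reuses the hypothetical helpers \texttt{p\_inf} and \texttt{I} from the sketches in Section~\ref{s:preliminaries}, truncates the domain and uses a coarse rectangle rule, so it is slow and accurate only to a couple of digits) checks item (ii):
\begin{verbatim}
import numpy as np

def M_prime_inf(a, tmax=20.0, ymax=8.0, n=30):
    # grid evaluation of the integral in Theorem 1 with T = infinity
    ts = np.linspace(1e-3, tmax, n)
    ys = np.linspace(1e-3, ymax, n)
    dt, dy = ts[1] - ts[0], ys[1] - ys[0]
    total = 0.0
    for t in ts:
        for y in ys:
            total += (y*(1 + np.log(t)) + a*t*np.log(t) - I(t, y)) \
                     * p_inf(t, y, a) * dt * dy
    return total

a = 1.0
print(M_prime_inf(a))                            # approx. closed form below
print(-(np.euler_gamma + np.log(2 * a**2)) / a)  # = -1.2703...
\end{verbatim}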
The calculations leading to Corollary~\ref{cor:explicit_expressions} are deferred to Appendix~\ref{appendix:calculations}. From Corollary~\ref{cor:explicit_expressions}(iii), it straightforwardly follows that for any $a > 1$, the function $H \mapsto \mathscr P_{H}(\infty,a)$ is decreasing in a neighborhood of $H=\frac{1}{2}$
(because $\displaystyle \mathscr P'_{1/2}(\infty,a)<0$).
The following corollary extends this observation to the whole domain $H\in(0,1)$; its proof follows straightforwardly from Proposition~\ref{pit.ine} given in Appendix~\ref{appendix:auxiliary_results}.
\begin{corollary}\label{pit.in}
Suppose that $0<H_1<H_2< 1$.
\begin{itemize}
\item[(i)] If $a\in \mathbb{R}$ and $T>0$, then
\[
\mathscr P_{H_2}(T,a)\le \mathscr P_{H_1}\left(T^{H_2/H_1},a \right).
\]
\item[(ii)] If $a>1$, then
\[
\mathscr P_{H_2}(\infty,a)\le \mathscr P_{H_1}(\infty,a).
\]
\end{itemize}
\end{corollary}
\begin{remark}\label{rem:conjecture}
Using Mathematica software \cite{Mathematica} and applying certain simplifications, we are able to calculate the following limit
\begin{equation}\label{eq:conjecture_ours}
\lim_{T\to\infty} \lim_{H\to1/2} \frac{\mathscr H_H(T)/T - \mathscr H_{1/2}(T)/T}{H-\tfrac{1}{2}} = \lim_{T\to\infty} \frac{\mathscr P_{1/2}'(T,1)}{T} = -2\gamma_{{\rm E}}.
\end{equation}
It is noted that the result above is closely related to the result established in \cite{DelormeEtAl2017Pickands} on the derivative of the Pickands constant at $H=\tfrac{1}{2}$, that is
\begin{equation}\label{eq:conjecture_theirs}
\lim_{H\to1/2} \lim_{T\to\infty} \frac{\mathscr H_H(T)/T - \mathscr H_{1/2}(T)/T}{H-\tfrac{1}{2}} = \lim_{H\to1/2} \frac{\mathscr H_H - \mathscr H_{1/2}}{H-\tfrac{1}{2}} = -2\gamma_{{\rm E}},
\end{equation}
see also Eq.~\eqref{eq:pickands_expansion}. The difference between Eq.~\eqref{eq:conjecture_ours} and Eq.~\eqref{eq:conjecture_theirs} is the order of the limit operations.
\end{remark}
\section{Mandelbrot \& van Ness' fractional Brownian field}\label{s:MvN}
In this section we provide some properties of the Mandelbrot \& van Ness field, which play an important role in the proofs of the results given in Section~\ref{sec:main}.
In contrast to the definition of $B_H(t)$, which we have given at the beginning of Section~\ref{s:introduction},
the definition in \eqref{eq:fbm} provides an additional \textit{coupling}
between fBms with different values of $H$. In fact, we will view the process $\{X_H(t), (H,t)\in(0,1)\times\mathbb{R}_+\}$
as a centered Gaussian field and refer to it as \textit{Mandelbrot \& van Ness' field} (MvN).
Realizations of the MvN field are trajectories (surfaces) $(H,t)\mapsto X_H(t)$;
using the standard rules of ${\mathcal L}^2$ theory of stochastic integrals, we find that
\begin{equation}\label{eq:covariance_mvn}
\begin{split}
\cov(X_{H}(t),X_{H'}(t')) & = \int_{-\infty}^0 \big[(t-s)^{H-\tfrac{1}{2}} - (-s)^{H-\tfrac{1}{2}}\big]\big[(t'-s)^{H'-\tfrac{1}{2}} - (-s)^{H'-\tfrac{1}{2}}\big]{\rm d}s\\
&\quad + \int_0^{t\wedge t'} (t-s)^{H-\tfrac{1}{2}}(t'-s)^{H'-\tfrac{1}{2}}{\rm d}s
\end{split}
\end{equation}
for any $H,H'\in(0,1)$ and $t,t'\in\mathbb{R}_+$. An explicit formula for the covariance function above was found in \cite[Theorem~4.1]{StoevTaqqu06}. While it is a well-known fact that fBm is self-similar, the same holds true for the MvN field, i.e. for any $c>0$ we have
\begin{equation}\label{eq:self_similarity}
\{B_H(ct): (H,t)\in(0,1)\times\mathbb{R}_+\} \overset{\rm d}{=} \{c^H B_H(t): (H,t)\in(0,1)\times\mathbb{R}_+\},
\end{equation}
where `$\overset{\rm d}{=}$' stands for the equality of finite-dimensional distributions, see e.g. \cite[Theorem~2.1(c)]{StoevTaqqu04}; this can also be seen by a direct calculation using \eqref{eq:covariance_mvn}.
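Indeed, the covariance \eqref{eq:covariance_mvn} scales as $c^{H+H'}$ under $(t,t')\mapsto(ct,ct')$, which can also be confirmed numerically; the Python sketch below is our own (the helper name \texttt{cov\_mvn} is ours), and the two printed values agree up to quadrature accuracy.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def cov_mvn(H, Hp, t, tp):
    # covariance (eq:covariance_mvn) of the MvN field, by quadrature;
    # the first integral is written after substituting s -> -s
    f = lambda s: ((t + s)**(H - 0.5) - s**(H - 0.5)) * \
                  ((tp + s)**(Hp - 0.5) - s**(Hp - 0.5))
    i1, _ = quad(f, 0, np.inf)
    g = lambda s: (t - s)**(H - 0.5) * (tp - s)**(Hp - 0.5)
    i2, _ = quad(g, 0, min(t, tp))
    return i1 + i2

H, Hp, t, tp, c = 0.3, 0.7, 1.0, 2.0, 3.0
print(cov_mvn(H, Hp, c*t, c*tp))            # equals the line below
print(c**(H + Hp) * cov_mvn(H, Hp, t, tp))  # scaling by c^(H+H')
\end{verbatim}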
It has been shown that there exists a continuous modification of the MvN field, see \cite[Theorem~4]{peltier1995multifractional}.
We remark that for the purposes of this paper the domain of MvN field is $(H,t)\in(0,1)\times\mathbb{R}_+$, however, we note that it can be extended to $(H,t)\in(0,1)\times\mathbb{R}$.
Next, for any $n\in\mathbb{Z}_+$, where $\mathbb{Z}_+$ is the set of non-negative integers, we define the \textit{$n$th derivative of the MvN field} (with respect to the Hurst parameter) to be the stochastic process $\{X^{(n)}_H(t) : (H,t)\in(0,1)\times\mathbb{R}_+\}$, where
\begin{equation}\label{eq:MvN_derivative_field}
X^{(n)}_H(t) = \int_{-\infty}^0 f^{(n)}_H(t,s){\rm d}B(s) + \int_0^t g^{(n)}_H(t,s){\rm d}B(s),
\end{equation}
for any $(H,t)\in(0,1)\times\mathbb{R}_+$, with $f_H(t,s) := (t-s)^{H-\tfrac{1}{2}} - (-s)^{H-\tfrac{1}{2}}$, $g_H(t,s) := (t-s)^{H-\tfrac{1}{2}}$, and
\begin{equation}\label{def:f_H^{(n)}}
\begin{split}
f_H^{(n)}(t,s) & := \frac{\partial^n}{\partial H^n} f_H(t,s) = \log^n(t-s)(t-s)^{H-\tfrac{1}{2}} - \log^n(-s)(-s)^{H-\tfrac{1}{2}},\\
g_H^{(n)}(t,s) & := \frac{\partial^n}{\partial H^n} g_H(t,s) = \log^n(t-s)(t-s)^{H-\tfrac{1}{2}}.
\end{split}
\end{equation}
We follow the convention that $\frac{\partial^0}{\partial x^0} h(x) := h(x)$ for any function $h(x)$. In particular, for $n=0$, the definition \eqref{eq:MvN_derivative_field} is equivalent to the original MvN field \eqref{eq:def_X}, i.e. $X_H^{(0)} = X_H$. Since the case $H=\tfrac{1}{2}$ will be used particularly often throughout this manuscript, we write $X^{(n)}(\cdot) := X_{1/2}^{(n)}(\cdot)$ for brevity. Definition \eqref{eq:MvN_derivative_field} can be found in \cite{StoevTaqqu05}.
Let us emphasize that all derivatives of the MvN field live on the same probability space, and in fact they are jointly Gaussian.
The covariance between the random variables $X^{(n)}_H(t)$ and $X^{(n')}_{H'}(t')$ can be found in an analogous way to \eqref{eq:covariance_mvn}; namely,
\begin{equation*}
\cov(X^{(n)}_H(t),X^{(n')}_{H'}(t')) = \int_{-\infty}^0f^{(n)}_H(t,s)f^{(n')}_{H'}(t',s){\rm d}s + \int_0^{t\wedge t'}g^{(n)}_H(t,s)g^{(n')}_{H'}(t',s){\rm d}s.
\end{equation*}
Similarly to the previous section, the derivatives of the MvN field also have a PWZ representation. For $n\in\mathbb{Z}_+$ we define the \textit{Paley-Wiener-Zygmund} (PWZ) representation of the MvN field and its derivatives, $\{\tilde X_H^{(n)}(t) : (H,t)\in(0,1)\times\mathbb{R}_+\}$, where
\begin{equation}\label{def:PWZ}
\begin{split}
\tilde X^{(n)}_H(t) & := -\int_{-\infty}^0 \Big[\frac{\partial}{\partial s}f^{(n)}_H(t,s)\Big]B(s){\rm d}s + g_H^{(n)}(t,0)B(t)\\
&\quad + \int_0^t \Big[\frac{\partial}{\partial s} g_H^{(n)}(t,s)\Big]\big(B(t)-B(s)\big){\rm d}s,
\end{split}
\end{equation}
for any $(H,t)\in(0,1)\times\mathbb{R}_+$, where
\begin{equation}\label{def:ders_f_H^{(n)}}
\begin{split}
& \frac{\partial}{\partial s}f^{(n)}_H(t,s) = \frac{\partial}{\partial s}g^{(n)}_H(t,s) - \frac{\partial}{\partial s}g^{(n)}_H(0,s)\\
& \frac{\partial}{\partial s}g^{(n)}_H(t,s) = -n\log^{n-1}(t-s)(t-s)^{H-\tfrac{3}{2}} - (H-\tfrac{1}{2})\log^{n}(t-s)(t-s)^{H-\tfrac{3}{2}}.
\end{split}
\end{equation}
Again, we recognize that when $n=0$, the definition \eqref{def:PWZ} is equivalent to \eqref{def:PWZ_n0},
i.e. $\tilde X^{(0)}_H = \tilde X_H$.
The following proposition justifies calling the PWZ field an equivalent \emph{representation} of the MvN field.
\begin{proposition}\label{prop:continuous_modification}
For all $n\in\mathbb{Z}_+$, the field $\{\tilde X^{(n)}_H(t): (H,t)\in(0,1)\times\mathbb{R}_+\}$ is a continuous modification of $\{X^{(n)}_H(t): (H,t)\in(0,1)\times\mathbb{R}_+\}$.
\end{proposition}
The result in Proposition~\ref{prop:continuous_modification} is a direct consequence of the considerations in Appendix~\ref{appendixA} (Lemma~\ref{lem:PWZ_lemma} in particular) and the proposition below.
\begin{proposition}\label{prop:PWZ_continuous}
For all $n\in\mathbb{Z}_+$,
\begin{align*}
\mathbb{P}\left((H,t) \mapsto \tilde X^{(n)}_H(t) \text{ is a continuous mapping for all } (H,t)\in(0,1)\times\mathbb{R}_+ \right) = 1.
\end{align*}
\end{proposition}
We note that the PWZ representation of fractional Brownian motion is defined in terms of Riemann integrals,
as opposed to the MvN representation, which is defined through ${\mathcal L}^2$ stochastic integrals.
In the special case $n=0$, Proposition~\ref{prop:PWZ_continuous} was proven in \cite{peltier1995multifractional};
weaker versions of Propositions~\ref{prop:PWZ_continuous} and~\ref{prop:nth_derivative_limit_PWZ}
were derived in \cite{StoevTaqqu05}.
The rest of this section is
devoted to showing various fundamental properties of the PWZ field; due to Proposition~\ref{prop:continuous_modification},
these results carry over to the MvN field and its derivatives.
All the proofs are deferred to Appendix~\ref{appendix:proofs}.
The following result justifies calling processes $X^{(n)}_H(t)$ for $n>0$ the \textit{derivatives with respect to the Hurst parameter}.
\begin{proposition}\label{prop:nth_derivative_limit_PWZ}
For all $n\in\mathbb{Z}_+$,
\begin{align*}
\mathbb{P}\left(\lim_{\Delta\to0} \frac{\tilde X^{(n)}_{H+\Delta}(t)-\tilde X^{(n)}_{H}(t)}{\Delta} = \tilde X^{(n+1)}_{H}(t) \text{ for all } (H,t)\in(0,1)\times\mathbb{R}_+\right) = 1.
\end{align*}
\end{proposition}
Notice that thanks to Proposition~\ref{prop:nth_derivative_limit_PWZ}, it makes sense to write $\frac{\partial^n}{\partial H^n} X_H(t) = X_H^{(n)}(t)$.
In the following, for any $k\in\mathbb{N}$, let $a_1,\ldots,a_k\in\mathbb{R}$, $n_1,\ldots,n_k\in\mathbb{Z}_+$, $H_1,\ldots,H_k\in(0,1)$, and define
\begin{equation}\label{def:xi_linear_combination}
\eta(t) := \sum_{i=1}^k a_i \cdot X^{(n_i)}_{H_i}(t).
\end{equation}
\begin{proposition}\label{prop:stationarity_of_linear_combination}
The Gaussian process $\{\eta(t) : t\in\mathbb{R}_+\}$ defined in \eqref{def:xi_linear_combination} is centered and has stationary increments.
\end{proposition}
For $N\in\mathbb{Z}_+$, $H\in(0,1)$ we define the Taylor sum remainder
\begin{equation}\label{def:R_H(t;N)}
R_H(t;N) := X_H(t) - \sum_{n=0}^{N-1}\frac{X^{(n)}(t)}{n!}\cdot (H-\tfrac{1}{2})^{n}.
\end{equation}
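For example, the first two remainders, which appear throughout Section~\ref{s:proofs_main_theorems}, read
\begin{equation*}
R_H(t;1) = X_H(t) - X(t), \qquad R_H(t;2) = X_H(t) - X(t) - (H-\tfrac{1}{2})\,X^{(1)}(t).
\end{equation*}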
Note that $R_H(t;0) = X_H(t)$ and, according to Proposition~\ref{prop:stationarity_of_linear_combination}, $\{R_H(t;N);t\in\mathbb{R}_+\}$ is a centered Gaussian process with stationary increments. Its variance function can also be shown to be H\"{o}lder continuous; see Lemma~\ref{lem:Var(R_H(t,N))} below.
\begin{lemma}\label{lem:Var(R_H(t,N))}
Let $0<\underline H<\overline H<1$ be such that $\tfrac{1}{2}\in[\underline{H}, \overline{H}]$. Then, for any $\varepsilon>0$ and $N\in\mathbb{Z}_+$, there exists a constant $C$ such that
\begin{align*}
\mathbb{E} |R_H(t;N) - R_H(s;N)|^2 \leq C(H-\tfrac{1}{2})^{2N} \cdot \left(|t-s|^{2\underline H-\varepsilon} + |t-s|^{2\overline H+\varepsilon}\right)
\end{align*}
for all $s,t>0$ and $H\in[\underline{H}, \overline{H}]$.
\end{lemma}
\begin{remark}[Extension to normalized MvN field]
Let $B_H(t) = B_H^{(0)}(t) = D(H)\tilde X_H(t)$ and $B^{(n)}_H(t) := \frac{\partial^n}{\partial H^n}B_H(t)$. Noticing that $D(H)$ is a smooth function, for all $n\in\mathbb{Z}_+$ we have
\begin{align*}
\mathbb{P}\left(\lim_{\Delta\to0} \frac{B^{(n)}_{H+\Delta}(t)- B^{(n)}_{H}(t)}{\Delta} = B^{(n+1)}_{H}(t) \text{ for all } (H,t)\in(0,1)\times\mathbb{R}_+\right) = 1.
\end{align*}
Moreover, by a simple application of the Leibniz formula for the $n$th derivative of a product of functions, we find that
\begin{equation}\label{eq:Bn_linear_combination}
B_H^{(n)}(t) = \sum_{k=0}^n \binom{n}{k} D^{(n-k)}(H) \cdot \tilde{X}_H^{(k)}(t).
\end{equation}
The value of $D^{(n)}(H)$ at $H=\tfrac{1}{2}$ for all $n\in\mathbb{Z}_+$ can be found by direct calculation; for example
\begin{equation}\label{eq:values_of_D^{(n)}(1/2)}
D^{(0)}(\tfrac{1}{2}) = 1, \quad D^{(1)}(\tfrac{1}{2}) = 1, \quad D^{(2)}(\tfrac{1}{2}) = -1-\frac{\pi^2}{3}, \quad D^{(3)}(\tfrac{1}{2}) = 3-\pi^2-6\,\zeta(3),
\end{equation}
where $\zeta(\cdot)$ is the Riemann-zeta function.
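Numerically, $D^{(2)}(\tfrac{1}{2}) \approx -4.290$ and $D^{(3)}(\tfrac{1}{2}) \approx -14.082$.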
\end{remark}
\section{Proof of Theorem~\ref{thm:supremum_derivative_limit} and Theorem~\ref{thm:supremum_derivative_pickands_limit}}\label{s:proofs_main_theorems}
Before proving Theorem~\ref{thm:supremum_derivative_limit} and Theorem~\ref{thm:supremum_derivative_pickands_limit}
we need to develop some preliminary results.
Recall the definition of $\tau_H(T,a)$ and $\tau_H^*(T,a)$ from the beginning of Section~\ref{s:functionals_supremum}. In the following proposition we establish that the all-time suprema locations $\tau_H(\infty,a)$ and $\tau^*_H(\infty,a)$ of the MvN field are uniformly bounded in $H$ with probability tending to $1$.
\begin{proposition}\label{prop:tau_uniform_tightness}
Let $0 < \underline H < \overline H < 1$ and $a>0$. Then there exist $C, \gamma>0$ such that
\begin{itemize}
\item[(i)] $\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H]}\tau_H(\infty,a) > T\Big) \leq Ce^{-\gamma T^{2-2\overline H}}$,
\item[(ii)] $\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H]}\tau^*_H(\infty,a) > T\Big) \leq Ce^{-\gamma T^{2\underline H}}$
\end{itemize}
for all $T$ sufficiently large.
\end{proposition}
\begin{proof}
Since the proofs of (i) and (ii) rely on the same idea, we focus only on (ii). Observe that, for $T\ge 1$,
\begin{eqnarray}
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H]}\tau^*_H(\infty,a) > T\Big)
&\le&
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\ge T} B_H(t)-a t^{2H}>0\Big)\nonumber\\
&=&
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\ge T} \frac{B_H(t)}{t^{2H}}>a\Big)\nonumber\\
&\le&
\sum_{k=0}^\infty
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\in [2^kT,2^{k+1}T]} \frac{B_H(t)}{t^{2H}}>a\Big)\nonumber\\
&=&
\sum_{k=0}^\infty
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\in [1,2]} \frac{B_H(2^kTt)}{(2^kTt)^{2H}}>a\Big)\nonumber\\
&=&
\sum_{k=0}^\infty
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\in [1,2]} \frac{2^{Hk} T^H}{2^{2Hk}T^{2H}}\frac{B_H(t)}{t^{2H}}>a\Big)\label{ss}\\
&\le&
\sum_{k=0}^\infty
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\in [1,2]} \frac{B_H(t)}{t^{2H}}>a2^{\underline H k} T^{\underline H}\Big),\nonumber
\end{eqnarray}
where (\ref{ss}) follows from self-similarity of the MvN field, cf.~\eqref{eq:self_similarity}.
By the continuity of the MvN field, in view of the Borell inequality (see, e.g. \cite[Theorem~2.1]{Adl90}),
we know that
\[
\mathbb{E} \left(\sup_{H\in[\underline H,\overline H], t\in [1,2]} \frac{B_H(t)}{t^{2H}}\right)<\infty
\]
and for sufficiently large $T$, using that
$\sup_{H\in[\underline H,\overline H], t\in [1,2]}\var\left( \frac{B_H(t)}{t^{2H}}\right)=1$,
\[
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\in [1,2]} \frac{B_H(t)}{t^{2H}}>a2^{\underline H k} T^{\underline H}\Big)
\le
2\exp\left( -\frac{ a^2 2^{2\underline H k} T^{2\underline H}}{8} \right).
\]
Thus, there exists $C>0$ such that for sufficiently large $T$
\begin{eqnarray*}
\sum_{k=0}^\infty
\mathbb{P}\Big(\sup_{H\in[\underline H,\overline H], t\in [1,2]} \frac{B_H(t)}{t^{2H}}>a2^{\underline H k} T^{\underline H}\Big)
\le C \exp\left( -\frac{ a^2}{8} T^{2\underline H} \right).
\end{eqnarray*}
This completes the proof.
\end{proof}
\begin{proposition}\label{prop:tau_continuity}
If $a\in\mathbb{R}$ and $T>0$ or $a>0$ and $T=\infty$, then
\begin{itemize}
\item[(i)] $\lim_{H\to H'} \tau_H(T,a) = \tau_{H'}(T,a)$, a.s.
\item[(ii)] $\lim_{H\to H'} \tau^*_H(T,a) = \tau_{H'}^*(T,a)$, a.s.
\end{itemize}
for any $H' \in(0,1)$.
\end{proposition}
\begin{proof}
In view of Proposition~\ref{prop:PWZ_continuous}, the random bivariate function $(H,t) \mapsto X_H(t)$ is continuous almost surely; this implies that, for any fixed $a\in\mathbb{R}$, the functions $(H,t) \mapsto Y_H(t;a)$ and $(H,t) \mapsto Y^*_H(t;a)$ are continuous as well. Consider first the case $a\in\mathbb{R}$, $T<\infty$. In this case, (i) and (ii) follow from the fact that, as $H\to H'$, $Y_H(t;a)\to Y_{H'}(t;a)$ and $Y^*_H(t;a)\to Y^*_{H'}(t;a)$ uniformly on $t\in[0,T]$, so the argmax functionals must also converge, see e.g. \cite[Lemma~2.9]{Seijo2011A}.
We now show that (i) holds also when $a>0$ and $T=\infty$. Let $\mathcal A\subset(0,1)$ be any compact interval containing $H'$ and for $n\in\mathbb{N}$ let $A_n := \{\sup_{H\in\mathcal A} \tau_H(\infty,a) < n\}$. Then $\cup_n A_n = \{\sup_{H\in\mathcal A} \tau_H(\infty,a) < \infty\}$, which, according to Proposition~\ref{prop:tau_uniform_tightness}, is a set of full measure. We thus can write
\begin{align*}
\mathbb{P}(\lim_{H\to H'} \tau_H(\infty,a) = \tau_{H'}(\infty,a)) & = \mathbb{P}\big( \lim_{H\to H'} \tau_H(\infty,a) = \tau_{H'}(\infty,a); \cup_{n=1}^\infty A_n\big) \\
& = \mathbb{P}\left(\bigcup_{n=1}^\infty \left\{\lim_{H\to H'} \tau_H(\infty,a) = \tau_{H'}(\infty,a); A_n\right\}\right) \\
& = \lim_{n\to\infty}\mathbb{P}\left(\lim_{H\to H'} \tau_H(\infty,a) = \tau_{H'}(\infty,a); A_n\right),
\end{align*}
where in the last line we used the continuity property of probability measures for increasing sets. Finally, we notice that on the event $A_n$ we have $\tau_H(\infty,a) = \tau_H(n,a)$ for all $H\in\mathcal A$, thus
\begin{align*}
\mathbb{P}\left(\lim_{H\to H'} \tau_H(\infty,a) = \tau_{H'}(\infty,a); A_n\right) & = \mathbb{P}\left(\lim_{H\to H'} \tau_H(n,a) = \tau_{H'}(n,a); A_n\right) = \mathbb{P}(A_n),
\end{align*}
because we have already established that $\tau_H(T,a) \to \tau_{H'}(T,a)$ a.s. for any fixed $T<\infty$. This concludes the proof of (i) because $\mathbb{P}(A_n) \to 1$, as $n\to\infty$. The proof of (ii) is analogous.
\end{proof}
\begin{corollary}\label{coro:J3}
Let $0 < \underline H < \overline H < 1$, $N\in\mathbb{Z}_+$. If $a\in\mathbb{R}$ and $T>0$ or $a>0$ and $T=\infty$, then for any $n\in\mathbb{N}$,
\begin{itemize}
\item[(i)] $\displaystyle\sup_{H\in[\underline{H}, \overline{H}]}\mathbb{E} \left|\frac{R_H(\tau_H(T,a); N)}{(H-\tfrac{1}{2})^{N/2}}\right|^n < \infty$, and
\item[(ii)] $\displaystyle\sup_{H\in[\underline{H}, \overline{H}]}\mathbb{E} \left|\frac{R_H(\tau^*_H(T,a); N)}{(H-\tfrac{1}{2})^{N/2}}\right|^n < \infty$.
\end{itemize}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{coro:J3}]
From Lemma~\ref{lem:Var(R_H(t,N))} we know that for every $\varepsilon>0$ there exists $C>0$ such that
\begin{align*}
\var R_H(t;N) \leq C (H-\tfrac{1}{2})^{2N}\cdot t^{2(\underline H - \varepsilon)} \quad \text{for all } t\in[0,1] \text{ and } H\in[\underline{H}, \overline{H}],
\end{align*}
where $\varepsilon>0$ is taken small enough so that $\underline H - \varepsilon > 0$. Now, combining Lemma~\ref{lem:Var(R_H(t,N))} with Lemma~\ref{lem:sup_moments}, we find that for every $\varepsilon>0$ and $n\in\mathbb{N}$ there exists a constant $C'>0$ such that
\begin{align*}
\mathbb{E} \sup_{t\in[0,1]} |R_H(t;N)|^n \leq C' (H-\tfrac{1}{2})^{nN}\big(1 + \mu^{n-1}(\underline H-\varepsilon) + \mu^n(\underline H-\varepsilon)\big) \quad \text{for all } H\in[\underline{H}, \overline{H}],
\end{align*}
where $\mu(\underline H-\varepsilon) < \infty$. Finally, applying these two bounds and the result in Proposition~\ref{prop:tau_uniform_tightness}(i) to Lemma~\ref{lem:moments_sup_random_time} we find that there exists $C''>0$ such that
\begin{align*}
\mathbb{E}|R_H(\tau_H(T,a);N)|^n \leq C''(H-\tfrac{1}{2})^{nN/2} \quad \text{for all } H\in[\underline{H}, \overline{H}].
\end{align*}
The same reasoning applies to $\tau_H^*$, which completes the proof.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:supremum_derivative_limit}}
In the following, for brevity, let $Y_H(t) := Y_H(t;a)$, $\tau_H := \tau_H(T,a)$, and $Y(t) := Y_{1/2}(t)$, $\tau := \tau_{1/2}$. We have
\begin{align*}
Y_H(\tau_H) - Y(\tau) = (Y_H(\tau_H) - Y_H(\tau)) + (Y_H(\tau)-Y(\tau)).
\end{align*}
We split the proof into three parts. In the first part of the proof we show that
\begin{equation}\label{eq:lem1:conv_0}
\lim_{H\to1/2} \mathbb{E} \left(\frac{Y_H(\tau_H) - Y_H(\tau)}{H-\tfrac{1}{2}}\right) = 0.
\end{equation}
In the second part of the proof we show that
\begin{equation}\label{eq:lem1:conv_EB_prim}
\lim_{H\to1/2} \mathbb{E} \left(\frac{Y_H(\tau) - Y(\tau)}{H-\tfrac{1}{2}}\right) = \mathbb{E}\left(\frac{\partial}{\partial H}Y_H(\tau)\Big|_{H=1/2}\right).
\end{equation}
Finally, in the last part of the proof we show that the claim in Theorem~\ref{thm:supremum_derivative_limit} holds.
{\bf Proof that Eq.~\eqref{eq:lem1:conv_0} holds.}
By definition, $Y_H(t) = D(H)X_H(t)-at$, so for any $t>0$, we have
\begin{align*}
Y_H(t) = at(D(H)-1) + D(H)Y(t) + D(H)X^{(1)}(t)(H-\tfrac{1}{2}) + D(H)R_H(t;2),
\end{align*}
where $R_H(t;2) := X_H(t)-\big[X(t)+X^{(1)}(t)(H-\tfrac{1}{2})\big]$; see \eqref{def:R_H(t;N)}. Furthermore,
\begin{align*}
Y_H(\tau_H)-Y_H(\tau) & = a(D(H)-1)(\tau_H-\tau) + D(H)\big[Y(\tau_H)-Y(\tau)\big] \\
& + D(H)(H-\tfrac{1}{2})\big[X^{(1)}(\tau_H)-X^{(1)}(\tau)\big] + D(H)\big[R_H(\tau_H;2)-R_H(\tau;2)\big].
\end{align*}
Since $Y(\tau) - Y(\tau_H) \geq 0$ and $Y_H(\tau_H)-Y_H(\tau)\geq 0$, then
\begin{eqnarray}
\nonumber\lefteqn{0 \leq \frac{Y_H(\tau_H)-Y_H(\tau)}{H-\tfrac{1}{2}} \leq a(\tau_H-\tau) \cdot \frac{D(H)-1}{H-\tfrac{1}{2}} + D(H)\big[X^{(1)}(\tau_H)-X^{(1)}(\tau)\big]} \\
\label{eq:J1J2J3}&& + D(H)\cdot\frac{R_H(\tau_H;2)-R_H(\tau;2)}{H-\tfrac{1}{2}} =: J_1(H) + J_2(H) + J_3(H).
\end{eqnarray}
So we need to show that $\mathbb{E} J_i(H) \to 0$, as $H\to\tfrac{1}{2}$, for $i\in\{1,2,3\}$. According to Proposition~\ref{prop:tau_continuity}(i) we have $J_1(H)\to0$ a.s. In view of the de la Vallée-Poussin theorem, in order to show $\mathbb{E} J_1(H)\to0$, it suffices to show that $\sup_{H\in[1/3,2/3]}\mathbb{E} J_1^2(H) <\infty$. According to Proposition~\ref{prop:tau_uniform_tightness}, for a compact neighborhood of $H=\tfrac{1}{2}$, for example $H\in[\tfrac{1}{3},\tfrac{2}{3}]$, there exist positive constants $C,\gamma,\beta$ such that $\mathbb{P}(\tau_H >T) \leq C e^{-\gamma T^\beta}$, so for all $H\in[1/3,2/3]$:
\begin{align*}
\mathbb{E}\tau_H^2 = \int_0^\infty 2t\,\mathbb{P}(\tau_H>t){\rm d}t \leq 2C\int_0^\infty te^{-\gamma t^\beta} {\rm d}t.
\end{align*}
Since the right-hand side is finite and independent of $H$, we conclude that $\sup_{H\in[1/3,2/3]} \mathbb{E} \tau_H^2 < \infty$ and, consequently, $\mathbb{E} J_1(H) \to 0$.
Now, according to Proposition~\ref{prop:PWZ_continuous} and Proposition~\ref{prop:tau_continuity}(i) we have $J_2(H)\to0$ a.s. Moreover, combining Proposition~\ref{prop:tau_uniform_tightness}(i) with Lemma~\ref{lem:moments_sup_random_time}, and the fact that all moments of the supremum $\sup_{t\in[0,1]}X^{(1)}(t)$ exist, we find that the second moments $\mathbb{E}[X^{(1)}(\tau_H)]^2$ and $\mathbb{E}[X^{(1)}(\tau)]^2$ are uniformly bounded for all $H$ close enough to $\tfrac{1}{2}$; again, the de la Vallée-Poussin theorem implies that $\mathbb{E} J_2(H)\to0$.
Finally, we observe that
\begin{align*}
\mathbb{E} |J_3(H)| \leq \frac{\mathbb{E}|R_H(\tau_H;2)|}{H-\tfrac{1}{2}}+\frac{\mathbb{E}|R_H(\tau;2)|}{H-\tfrac{1}{2}}.
\end{align*}
Due to Corollary~\ref{coro:J3}, both terms above tend to $0$, as $H\to\tfrac{1}{2}$, therefore $\mathbb{E} J_3(H) \to 0$.
{\bf Proof that Eq.~\eqref{eq:lem1:conv_EB_prim} holds.} By virtue of Proposition~\ref{prop:nth_derivative_limit_PWZ} and Proposition~\ref{prop:tau_continuity}(i) combined, it is clear that
\begin{equation*}
\frac{Y_H(\tau) - Y(\tau)}{H-\tfrac{1}{2}} \to \frac{\partial}{\partial H} Y_H(\tau)\Big|_{H=1/2} \text{ a.s.}
\end{equation*}
Moreover, we recognize that $Y_H(\tau) - Y(\tau) = D(H)R_H(\tau;1) + (D(H)-1)X(\tau)$, see the definition in \eqref{def:R_H(t;N)}, so, since $D$ is smooth and the second moment of $X(\tau)$ is finite, in view of the de la Vallée-Poussin theorem, in order to show that \eqref{eq:lem1:conv_EB_prim} holds, it is enough to show that
\begin{align*}
\sup_{H\in[1/3,2/3]}\,\mathbb{E}\left(\frac{R_H(\tau;1)}{H-\tfrac{1}{2}}\right)^2 < \infty,
\end{align*}
which follows from Corollary~\ref{coro:J3}(i) after writing $R_H(\tau;1) = (H-\tfrac{1}{2})X^{(1)}(\tau) + R_H(\tau;2)$.
{\bf Proof that Eq.~\eqref{eq:supremum_derivative_limit_integral} holds.} Notice that for any $t\in\mathbb{R}_+$ it holds that
\begin{equation*}
\frac{\partial}{\partial H}Y_H(t) = \frac{\partial}{\partial H} (B_H(t)-at) = B_H^{(1)}(t).
\end{equation*}
Combining the results in \eqref{eq:lem1:conv_0} and \eqref{eq:lem1:conv_EB_prim}, the Leibniz formula derived in \eqref{eq:Bn_linear_combination}, and the fact that $D'(1/2)=1$ (cf.~\eqref{eq:values_of_D^{(n)}(1/2)}) we obtain
\begin{equation}\label{eq:supremum_derivative_limit}
\mathscr M'_{1/2}(T,a) = \mathbb{E} \left[X(\tau) + X^{(1)}(\tau)\right].
\end{equation}
Now, using the PWZ representation for $X^{(1)}$ in \eqref{def:PWZ} we find that
\begin{equation*}
X^{(1)}(\tau) = \int_{-\infty}^0\left((\tau-s)^{-1}-(-s)^{-1}\right)B(s){\rm d}s + \log(\tau) B(\tau) - \int_0^\tau \frac{B(\tau)-B(s)}{\tau-s}{\rm d}s.
\end{equation*}
Since $\tau$ is independent of $\{B(s):s<0\}$, the expected value of the first integral is $0$, and
\begin{align*}
\mathbb{E} X^{(1)}(\tau) & = \mathbb{E} \left[\log(\tau) B(\tau) - \int_0^\tau \frac{B(\tau)-B(s)}{\tau-s}{\rm d}s \right] \\
& = \mathbb{E} \left[\log(\tau) (B(\tau)-a\tau) + a\tau\log(\tau) - \int_0^\tau \frac{(B(\tau)-a\tau)-(B(s)-as)}{\tau-s}{\rm d}s - a\tau \right] \\
& = \mathbb{E} \left[\log(\tau) Y(\tau) + a\tau(\log(\tau)-1) - \int_0^\tau \frac{Y(\tau)-Y(s)}{\tau-s}{\rm d}s\right].
\end{align*}
Now, we recognize that
\begin{align*}
\mathbb{E} \left(\int_0^\tau \frac{Y(\tau)-Y(s)}{\tau-s}{\rm d}s\right) & = \mathbb{E}\left(\mathbb{E}\left(\int_0^\tau \frac{Y(t)-Y(s)}{t-s}{\rm d}s\,\Big\vert\, \tau = t, Y(\tau)=y \right)\right)\\
& = \mathbb{E}\left( I(\tau,Y(\tau))\right),
\end{align*}
with $I(t,y)$ defined in \eqref{def:I}. Since $\mathbb{E} X(\tau) = \mathbb{E}(Y(\tau)+a\tau)$, then from \eqref{eq:supremum_derivative_limit} we obtain
\begin{align*}
\mathscr M'_{1/2}(T,a) = \mathbb{E}\Big[Y(\tau)(1+\log(\tau)) + a\tau\log(\tau) - I(\tau,Y(\tau))\Big].
\end{align*}
The claim now follows because $(\tau,Y(\tau))$ has joint density $p(t,y;T,a)$,
cf. Eq.~\eqref{eq:density:T}.
\hfill $\Box$
\subsection{Proof of Theorem~\ref{thm:supremum_derivative_pickands_limit}}
In the following, for brevity let $Z_H(t) := \sqrt{2}B_H(t) - at^{2H}$, $\tau^*_H := \tau^*_H(T,\frac{a}{\sqrt{2}})$, so that $\sup_{t\in[0,T]} Z_H(t) = Z_H(\tau^*_H)$. Additionally we denote $Z(t) := Z_{1/2}(t)$, $\tau^* := \tau^*_{1/2}$. Notice that
\begin{align*}
e^{Z_H(\tau^*_H)} - e^{Z(\tau^*)} = \left(e^{Z_H(\tau^*_H)} - e^{Z_H(\tau^*)}\right) + \left(e^{Z_H(\tau^*)}-e^{Z(\tau^*)}\right).
\end{align*}
We split the proof into three parts. In the first part of the proof we show that
\begin{equation}\label{eq:lem1:conv_0_pickands}
\lim_{H\to1/2} \mathbb{E} \left(\frac{e^{Z_H(\tau^*_H)} - e^{Z_H(\tau^*)}}{H-\tfrac{1}{2}}\right) = 0.
\end{equation}
In the second part of the proof we show that
\begin{equation}\label{eq:lem1:conv_pickands_prim}
\lim_{H\to1/2} \mathbb{E} \left(\frac{e^{Z_H(\tau^*)} - e^{Z(\tau^*)}}{H-\tfrac{1}{2}}\right) = \mathbb{E} \left(\frac{\partial}{\partial H} e^{Z_H(\tau^*)}\Big|_{H=1/2}\right).
\end{equation}
Finally, in the last part of the proof we show that the claim of Theorem~\ref{thm:supremum_derivative_pickands_limit} holds.
{\bf Proof that Eq.~\eqref{eq:lem1:conv_0_pickands} holds.}
Since $Z_H(\tau^*_H) - Z_H(\tau^*) \geq 0$, using the mean value theorem we find that
\begin{equation}\label{eq:mvt_exp}
\mathbb{E}\left[e^{Z_H(\tau^*_H)} - e^{Z_H(\tau^*)}\right] \leq \mathbb{E}\left[\Big(Z_H(\tau^*_H)-Z_H(\tau^*)\Big) \cdot e^{Z_H(\tau^*_H)}\right],
\end{equation}
so it suffices to show that the bound above converges to $0$.
By definition $Z_H(t) = \sqrt{2}D(H)X_H(t)-at^{2H}$, so for any $t>0$, we have
\begin{align*}
Z_H(t) = a(D(H)t-t^{2H}) + D(H)Z(t) + \sqrt{2}D(H)\left(X^{(1)}(t)(H-\tfrac{1}{2}) + R_H(t;2)\right),
\end{align*}
where $R_H(t;2) := X_H(t)-\big[X(t)+X^{(1)}(t)(H-\tfrac{1}{2})\big]$ was defined in \eqref{def:R_H(t;N)}. Furthermore,
\begin{align*}
Z_H(\tau^*_H)-Z_H(\tau^*) & = a\left[D(H)(\tau_H^*-\tau^*)-\left((\tau_H^*)^{2H}-(\tau^*)^{2H}\right)\right] + D(H)\big[Z(\tau^*_H)-Z(\tau^*)\big] \\
& + \sqrt{2}D(H)(H-\tfrac{1}{2})\big[X^{(1)}(\tau^*_H)-X^{(1)}(\tau^*)\big] + \sqrt{2}D(H)\big[R_H(\tau^*_H;2)-R_H(\tau^*;2)\big].
\end{align*}
Now, for $H\in(0,1)$, $t\in\mathbb{R}_+$ we define
\begin{align*}
U_H(t) := D(H) \cdot \frac{t - t^{2H}}{H-\tfrac{1}{2}} + t^{2H}\cdot\frac{D(H)-1}{H-\tfrac{1}{2}}.
\end{align*}
Since $Z(\tau^*)-Z(\tau_H^*)\geq 0$ and $Z_H(\tau_H^*)-Z_H(\tau^*)\geq 0$, then
\begin{align*}
0 \leq \frac{Z_H(\tau_H^*)-Z_H(\tau^*)}{H-\tfrac{1}{2}} & \leq aJ_1(H) + \sqrt{2}D(H)\Big(J_2(H)+J_3(H)\Big),
\end{align*}
where
\begin{align*}
J_1(H) := U_H(\tau_H^*)-U_H(\tau^*), \quad J_2(H) := X^{(1)}(\tau^*_H)-X^{(1)}(\tau^*), \quad J_3(H) := \frac{R_H(\tau^*_H;2)-R_H(\tau^*;2)}{H-\tfrac{1}{2}}.
\end{align*}
The proofs that $\mathbb{E} J_i(H)\to0$, $i\in\{2,3\}$, as $H\to\tfrac{1}{2}$ are analogous to the corresponding statements in the proof of Theorem~\ref{thm:supremum_derivative_limit}, cf.~Eq.~\eqref{eq:J1J2J3}. Thus, we show only that $\mathbb{E} J_1(H)\to0$, as $H\to\tfrac{1}{2}$. We have
\begin{align*}
\lim_{H\to1/2} U_H(t) = -2t\log(t) + t =: U(t)
\end{align*}
and the convergence is uniform for any compact subset of $\mathbb{R}_+$. Moreover,
\begin{align*}
|U_H(\tau_H^*) - U_H(\tau^*)| \leq |U_H(\tau_H^*) - U(\tau_H^*)| + |U(\tau_H^*)-U(\tau^*)| + |U_H(\tau^*)-U(\tau^*)|,
\end{align*}
where each of the terms of the sum above converges to $0$ due to the uniform convergence $U_H(t)\to U(t)$, the continuity of $U$, and the fact that $\tau_H^*\to\tau^*$ a.s. (see Proposition~\ref{prop:tau_continuity}(ii)), so we have $J_1(H)\to 0$ a.s. Using the mean value theorem we find that, with $\underline H := \min\{\tfrac{1}{2},H\}$, $\overline H := \max\{\tfrac{1}{2},H\}$,
\begin{equation}\label{eq:bound_with_mvt}
\left|\frac{t - t^{2H}}{H-\tfrac{1}{2}}\right| \leq \sup_{\tilde H\in[\underline{H}, \overline{H}]}|2\log(t)t^{2\tilde H}| \leq 2|\log(t)|(1+t^{2\overline H}).
\end{equation}
Now, we have that
\begin{equation*}
U_H^2(t) \leq 2D^2(H)\left(\frac{t - t^{2H}}{H-\tfrac{1}{2}}\right)^2 + 2t^{4H}\left(\frac{D(H)-1}{H-\tfrac{1}{2}}\right)^2,
\end{equation*}
so using the bound in \eqref{eq:bound_with_mvt} and Proposition~\ref{prop:tau_uniform_tightness}(ii), we find that the second moments $\mathbb{E} [U_H(\tau_H^*)]^2$ and $\mathbb{E} [U_H(\tau^*)]^2$ are uniformly bounded for all $H$ close enough to $\tfrac{1}{2}$. In view of the de la Vallée-Poussin theorem, this observation combined with the fact that $J_1(H)\to0$ a.s. implies that $\mathbb{E} J_1(H)\to0$.
{\bf Proof that Eq.~\eqref{eq:lem1:conv_pickands_prim} holds.} By virtue of Proposition~\ref{prop:nth_derivative_limit_PWZ} and Proposition~\ref{prop:tau_continuity}(ii) combined, it is clear that
\begin{align*}
\frac{e^{Z_H(\tau^*)} - e^{Z(\tau^*)}}{H-\tfrac{1}{2}} \to \frac{\partial}{\partial H} e^{Z_H(\tau^*)}\Big|_{H=1/2} \quad \text{a.s.}
\end{align*}
In view of the de la Vallée-Poussin theorem, it suffices to show that, for some $\varepsilon>0$, the absolute moment of order $1+\varepsilon$ of the pre-limit above is bounded for all $H$ close enough to $1/2$. Using the mean value theorem as in \eqref{eq:mvt_exp} and applying H\"{o}lder's inequality afterwards, we find that
\begin{align*}
\mathbb{E}\left|\frac{e^{Z_H(\tau^*)} - e^{Z(\tau^*)}}{H-\tfrac{1}{2}}\right|^{1+\varepsilon} \leq \sup_{\tilde H\in[\underline{H}, \overline{H}]} \mathbb{E}\bigg(\frac{|Z_H(\tau^*)-Z(\tau^*)|}{|H-\tfrac{1}{2}|}\cdot \exp\{Z_{\tilde H}(\tau_{\tilde H}^*)\}\bigg)^{1+\varepsilon} \\
\leq \left(\mathbb{E}\left|\frac{Z_H(\tau^*)-Z(\tau^*)}{H-\tfrac{1}{2}}\right|^{(1+\varepsilon)q}\right)^{1/q} \cdot \left(\sup_{\tilde H\in[\underline{H}, \overline{H}]} \mathbb{E}\,\exp\{(1+\varepsilon)pZ_{\tilde H}(\tau_{\tilde H}^*)\}\right)^{1/p},
\end{align*}
where $p,q>1$, and $1/p + 1/q = 1$. Now, according to Proposition~\ref{pit.ine}, if we take $\varepsilon>0, p>1$ small enough to satisfy $(1+\varepsilon)p < a$, then the second term remains bounded, as $H\to1/2$. For the boundedness of the first term, note that $Z_H(\tau^*) - Z(\tau^*) = \sqrt{2}\big[D(H)R_H(\tau^*;1) + (D(H)-1)X(\tau^*)\big] - a\big((\tau^*)^{2H}-\tau^*\big)$, so that, in view of the bound \eqref{eq:bound_with_mvt}, the smoothness of $D$, and the decomposition $R_H(\tau^*;1) = (H-\tfrac{1}{2})X^{(1)}(\tau^*) + R_H(\tau^*;2)$, we obtain
\begin{align*}
\sup_{H\in[\underline{H}, \overline{H}]}\mathbb{E}\left|\frac{Z_H(\tau^*)-Z(\tau^*)}{H-\tfrac{1}{2}}\right|^{(1+\varepsilon)q} < \infty
\end{align*}
for arbitrarily large $q>1$, due to Corollary~\ref{coro:J3} and the moment bounds established in the first part of the proof.
{\bf Proof that Eq.~\eqref{eq:supremum_derivative_pickands_limit_integral} holds.} Notice that for any $t\in\mathbb{R}_+$ it holds that
\begin{align*}
\frac{\partial}{\partial H} e^{Z_H(t)} & = \sqrt{2}\left[\frac{\partial}{\partial H} \Big(B_H(t)-\tfrac{a}{\sqrt{2}}\,t^{2H}\Big)\right]e^{Z_H(t)} \\
& = \sqrt{2}\left(B^{(1)}_H(t) - \frac{2a}{\sqrt{2}}\cdot t^{2H}\log(t)\right)\exp\left(\sqrt{2}B_H(t)-at^{2H}\right).
\end{align*}
Combining the results in \eqref{eq:lem1:conv_0_pickands} and \eqref{eq:lem1:conv_pickands_prim}, the Leibniz formula derived in \eqref{eq:Bn_linear_combination}, and the fact that $D'(\tfrac{1}{2})=1$, cf.~\eqref{eq:values_of_D^{(n)}(1/2)}, we obtain
\begin{equation*}
\mathscr P'_{1/2}(T,a) = \mathbb{E} \left[\sqrt{2}\left(X(\tau^*) + X^{(1)}(\tau^*) - \frac{2a}{\sqrt{2}} \cdot \tau^*\log(\tau^*)\right)\exp\left(\sqrt{2}X(\tau^*)-a\tau^*\right)\right].
\end{equation*}
The rest of the proof is analogous to the third part of the proof of Theorem~\ref{thm:supremum_derivative_limit}.
\hfill $\Box$
Monte Carlo methods are considered the gold standard for dose calculation in radiotherapy treatment planning due to their accuracy \citep{paganettiRangeUncertaintiesProton2012, wengVectorizedMonteCarlo2003}. However, the accuracy of a simulated compared to a delivered dose is not only determined by the chosen dose engine, but also compromised by treatment uncertainties in water-equivalent path length (WEPL), patient set-up and anatomy. Especially in proton and carbon-ion therapy, the high dose localization in the Bragg-peak usually does not allow for uncertainty quantification and mitigation using approximations known for photon therapy, such as the static dose cloud \citep{lomaxIntensityModulatedProton2008, lomaxIntensityModulatedProton2008a}.
Consequently, particle therapy demands personalized robustness analyses and mitigation. Such techniques may be based on explicit propagation of input uncertainties using probabilistic methods and statistical analysis \citep{bangertAnalyticalProbabilisticModeling2013,wahlEfficiencyAnalyticalSamplingbased2017,wahlAnalyticalProbabilisticModeling2020,kraanDoseUncertaintiesIMPT2013,parkStatisticalAssessmentProton2013,perkoFastAccurateSensitivity2016} or worst-case estimates \citep{mcgowanDefiningRobustnessProtocols2015, casiraghiAdvantagesLimitationsWorst2013, loweIncorporatingEffectFractionation2016}. Most of these methods then further translate to robust and probabilistic optimization to extend the conventional, generic margin approach to uncertainty mitigation \citep{sobottaRobustOptimizationBased2010, liuRobustOptimizationIntensity2012,fredrikssonCharacterizationRobustRadiation2012,unkelbachRobustRadiotherapyPlanning2018}.
The additional computational effort of robustness analyses and robust optimization techniques, however, clashes with the long computation times of Monte Carlo dose calculation. The use of faster, less accurate deterministic pencil-beam dose calculation algorithms instead is not always feasible, because their accuracy is low in particularly heterogeneous anatomies like lung \citep{taylorPencilBeamAlgorithms2017}, which at the same time show high sensitivity to uncertainties in range and set-up.
More efficient uncertainty quantification approaches for Monte Carlo methods, developed for example by the radiative transport community \citep[e.g.][]{poetteGPCintrusiveMonteCarloScheme2018,huStochasticGalerkinMethod2016}, often do not demonstrate an application to realistic patient data, and it is not clear how well the results transfer. Moreover, many of the more sophisticated methods are intrusive, which limits their applicability when proprietary MC simulation engines are used.
In this paper, we introduce a simple, minimally-intrusive method for uncertainty quantification in Monte Carlo dose computations. It is based on (re-)weighting a single set of MC simulated particle histories. In contrast to the conventional approach of simulating different scenarios separately, our method significantly reduces the required computational effort. The weighting step, which can be represented as multiplications of a weight vector with a history dose matrix, replaces simulations of different dose scenarios. The method enables uncertainty propagation during the simulation, making it possible to estimate the dose uncertainty induced by range and setup errors from nominal dose calculations. We demonstrate the application of this method to specifically approximate expected value and variance of dose, given a respective uncertainty model for set-up and range errors, which includes the choice of different beam and pencil beam correlation scenarios.
The remainder of this paper is organized as follows: In section \ref{sec:methods}, we introduce basic definitions and notation, derive a direct computation of the expected value before introducing the concept of importance (re-)weighting for set-up and range uncertainty models. Section \ref{sec:results} then compares estimated expected doses and corresponding standard deviations to reference computations based on scenario sampling. Discussion and Conclusion follow in sections \ref{sec:discussion} and \ref{sec:conclusion}, respectively.
\section{Materials and methods}
\label{sec:methods}
\subsection{The Monte Carlo method for dose computation}
\label{sec:MCmethod}
First, we briefly recapitulate the basic principles of the Monte Carlo method for radiotherapy. This serves the purpose of establishing notation and parameters used to introduce our method and simplifying the illustration of later adaptations. For a more detailed description we refer to other sources, such as \citet{paganettiRangeUncertaintiesProton2012,fippelMonteCarloDose2004, maMonteCarloDose2002, bielajewMonteCarloModeling1994b, mackieApplicationsMonteCarlo1990}, among many others.
The Monte Carlo method is a numerical integration technique, based on random sampling. When used for dose calculations, a set of particles is created with properties including position, momentum and energy, which evolve dynamically over the course of a simulation. The initial values of these properties constitute the random input parameters of the MC simulation and are sampled from a known probability distribution function. On this basis, the trajectories of each primary particle and its secondaries are simulated and the deposited dose is aggregated, by sampling interactions such as scattering and energy loss according to physical laws and material properties. While this appears to be an intuitive simulation of the actual physical process, it is essentially a statistical method to solve the linear Boltzmann transport equation and therefore compute the expected value of a model with random input.
Let $\boldsymbol{\xi}$ be the vector of random input parameters of the dose simulation. $\Phi_0(\boldsymbol{\xi})$ is the joint density of these parameters and is assumed to be known. For our purposes, which will not interfere with the simulation itself, we assume that the trajectory of a primary particle is given by the "black box" simulation engine, yielding the dose deposited in voxel $i$ within an individual particle's history $h_i(\boldsymbol{\xi})$.
The nominal dose $d_i$ in voxel $i$ can now be estimated with the expected value $\mathbb{E}_{\Phi_0}$ of all histories via the sample mean
\begin{equation}
\label{eq:nomDose}
d_i = \mathbb{E}_{\Phi_0}[h_{i}(\boldsymbol{\xi})] \approx \frac{1}{H}\sum_{p=1}^H h_{i}(\boldsymbol{\Xi}_p) \;,
\end{equation}
where $H$ is the sample size (number of computed primary particle histories) and $\boldsymbol{\Xi}_p$, $p=1,...,H$ are realizations of primary particle properties.
Here we omit the dependence on random factors within the simulation, such as particle scattering, as well as their probability distribution. Particle histories $h_{i}(\boldsymbol{\Xi}_p)$, for input realizations $\boldsymbol{\Xi}_p$, implicitly also include realizations of these random parameters. For a large number of histories, their effect on the dose estimates can however be assumed to be constant.
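To make the scoring step concrete: the estimator in (\ref{eq:nomDose}) is simply a column-wise average of a history dose matrix with entries $h_{i}(\boldsymbol{\Xi}_p)$. The following Python sketch illustrates this with a synthetic stand-in for the scored histories; the matrix \texttt{h} and its contents are hypothetical, since in practice it is filled by the simulation engine (cf.\ \ref{sec:implementation}):
\begin{verbatim}
import numpy as np

# Hypothetical history dose matrix: h[p, i] is the dose deposited
# in voxel i by primary-particle history p (scored during the run).
rng = np.random.default_rng(0)
H, n_voxels = 10_000, 500
h = rng.exponential(scale=1e-3, size=(H, n_voxels))

# Nominal dose per voxel: sample mean over all histories.
d_nominal = h.mean(axis=0)
\end{verbatim}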
\subsection{Beam model}
\label{sec:beamModel}
The initial state of each particle is represented by a point in the seven-dimensional phase space, which encompasses the particle position $\boldsymbol{r}=(r_x,r_y,r_z)$, momentum $\boldsymbol{p}=(p_x, p_y, p_z)$ and energy $E$. We assume a Gaussian emittance model, i.e. the parameters within each pencil beam are multivariate normal distributed with $\Phi_0^b(\boldsymbol{\xi})=\Phi_0^b(\boldsymbol{r},\boldsymbol{\varphi},E)=\mathcal{N}(\boldsymbol{\mu}_{\xi}^b,\boldsymbol{\Sigma}_{\xi}^b)$, for pencil beams $b=1,..,B$. Here, $\boldsymbol{\varphi}=(\varphi_x,\varphi_y)=(\frac{dp_x}{dp_z},\frac{dp_y}{dp_z})$ describes the transverse divergence of the momentum direction from the axial beam direction.
The joint density over all pencil beams is then defined by a Gaussian mixture model
\begin{equation}
\Phi_0(\boldsymbol{\xi})=\sum_{b=1}^B w_b \Phi_0^b(\boldsymbol{\xi})\;,
\end{equation} where $w_b$ are the pencil beam weights.
To introduce our method, we will initially assume a simplified phase space where $\varphi_x = \varphi_y = 0$. Results including a distribution in the momentum direction can be found in the Appendix.
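As an illustration of this emittance model in the simplified phase space, primaries can be generated by first selecting a pencil beam with probability proportional to $w_b$ and then sampling the lateral position from the selected Gaussian. A minimal Python sketch, with all beam parameters hypothetical:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
B, H = 3, 100_000
w = np.array([0.5, 0.3, 0.2])                  # normalized beam weights
mu = np.array([[0., 0.], [5., 0.], [0., 5.]])  # lateral means [mm]
sigma = 3.0                                    # isotropic beam width [mm]

beam = rng.choice(B, size=H, p=w)              # mixture component per primary
r = mu[beam] + sigma * rng.standard_normal((H, 2))
\end{verbatim}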
\subsection{Uncertainties}
\label{sec:uncertainties}
Among the most important sources of uncertainty in proton therapy are errors in the patient set-up $\boldsymbol{\delta}_r=(\delta_{r_x},\delta_{r_y},\delta_{r_z}
)$ and the proton range $\delta_{\rho}$ \citep[comp.][]{parkStatisticalAssessmentProton2013, liuRobustOptimizationIntensity2012,perkoFastAccurateSensitivity2016,lomaxIntensityModulatedProton2008, lomaxIntensityModulatedProton2008a}. While these errors are random variables with, in principle, unknown probability distributions, we follow the common approach of assuming normally distributed errors \citep{wieserImpactGaussianUncertainty2020,unkelbachAccountingRangeUncertainties2007,perkoFastAccurateSensitivity2016, bangertAnalyticalProbabilisticModeling2013, fredrikssonMinimaxOptimizationHandling2011}.
Set-up errors directly affect the primary particle positions in an additive way, such that the actual position $\boldsymbol{r}_{\delta}$ of a primary particle under uncertainty is given by its position $\boldsymbol{r}$ according to the emittance model, plus the error $\boldsymbol{\delta}_r$.
Uncertainties in the particle range are caused by a variety of factors, ranging from the conversion of Hounsfield units to stopping powers and imaging artifacts, over changes in the patient geometry to biological effects and inaccuracies in physics models \citep{unkelbachAccountingRangeUncertainties2007,paganettiRangeUncertaintiesProton2012, lomaxIntensityModulatedProton2008,mcgowanTreatmentPlanningOptimisation2013}. Here, we focus on calculational uncertainties, such as conversion errors, and model these by scaling the complete tissue density with the random factor $\delta_{\rho}$ \citep[comp.][]{lomaxIntensityModulatedProton2008, sourisTechnicalNoteMonte2019,malyapaEvaluationRobustnessSetup2016}. Since the density is assumed to be deterministic, the error is not directly linked to a random input parameter. In section \ref{sec:rangeError}, we however present an approximation which models range errors using the initial energy distribution.
Sampling-based uncertainty quantification approaches, similar to \citet{parkStatisticalAssessmentProton2013} or \citet{kraanDoseUncertaintiesIMPT2013}, rely on repeated dose calculations for different realizations $\boldsymbol{\Delta}_k, k=1,...,K$ of the error vector. For an individual error scenario $\boldsymbol{\Delta}_k$, the dose is computed as
\begin{equation}
\label{eq:errorScenarioDose}
d^{\Delta_k}_i = \mathbb{E}_{\Phi(\boldsymbol{\xi},\boldsymbol{\Delta}_k)}[h_{i}(\boldsymbol{\xi},\boldsymbol{\Delta}_k)] \approx \frac{1}{H}\sum_{p=1}^H h_{i}(\boldsymbol{\Xi}_p) \;,\;\; \boldsymbol{\Xi}_p \sim \Phi(\boldsymbol{\xi},\boldsymbol{\Delta}_k)
\end{equation}
In the case of set-up uncertainties, $\Phi(\boldsymbol{\xi},\boldsymbol{\Delta}_k)$, for example, corresponds to the nominal parameter density $\Phi_0(\boldsymbol{\xi})$ with all particle positions shifted by $\boldsymbol{\Delta}_{r;k}$. Due to its accuracy, this procedure is later used to obtain reference values to validate our results. It is however extremely computationally expensive, since it requires numerous runs of the complete Monte Carlo dose simulation.
\subsection{Direct computation of the expected value}
\label{sec:directExpDose}
When the distribution $\Psi(\boldsymbol{\xi}_{\delta})$ of the initial parameters under uncertainty can be explicitly defined, it is possible to compute the expected dose directly by replacing the nominal parameter distribution $\Phi_0$ with $\Psi$ in the Monte Carlo dose simulation as follows:
\begin{equation}
\label{eq:expectedDose}
E(d_i) = \mathbb{E}_{\Psi(\boldsymbol{\xi}_{\delta})}[h_{i}(\boldsymbol{\xi}_{\delta})] \approx \frac{1}{H}\sum_{p=1}^H h_{i}(\boldsymbol{\Xi}_p) \;,\;\; \boldsymbol{\Xi}_p \sim \Psi(\boldsymbol{\xi}_{\delta})
\end{equation}
For example when the error is additive, i.e.
\begin{equation}
\boldsymbol{\xi}_{\delta}=\boldsymbol{\xi} + \boldsymbol{\delta}
\end{equation}
and $\boldsymbol{\xi}\sim\mathcal{N}(\boldsymbol{\mu}_{\xi},\boldsymbol{\Sigma}_{\xi})$, as well as $\boldsymbol{\delta}\sim\mathcal{N}(\boldsymbol{\mu}_{\delta},\boldsymbol{\Sigma}_{\delta})$, the distribution of $\boldsymbol{\xi}_{\delta}$ is the convolution $\Psi=\mathcal{N}(\boldsymbol{\mu}_{\xi}+\boldsymbol{\mu}_{\delta},\boldsymbol{\Sigma}_{\xi}+\boldsymbol{\Sigma}_{\delta})$. For $\boldsymbol{\mu}_{\delta}=0$, this is just a wider Gaussian distribution.
\subsection{Importance (re-)weighting}
\label{sec:importanceRew}
We now consider the dose deposited by histories $h(\boldsymbol{\xi},\boldsymbol{\delta})$, which are a function of the random input parameters $\boldsymbol{\xi}\sim \Phi_0(\boldsymbol{\xi})$ and the random error vector $\boldsymbol{\delta}\sim p_{\delta}$. In the following we focus on computing estimates for the dose expected value and standard deviation; the method can, however, be analogously applied to the computation of worst-case scenarios.
We propose a replacement of the dose calculations for different error scenarios by a more efficient weighting of particle histories $h$. For this, we adopt the concept of importance sampling \citep{kahnRandomSamplingMonte1950, hastingsMonteCarloSampling1970}. Instead of sampling primary particles from $\Phi(\boldsymbol{\xi},\boldsymbol{\Delta}_k)$ for different error scenarios, we sample from a different density function, e.g.\ the nominal parameter distribution $\Phi_0(\boldsymbol{\xi})$. Then, the dose for all scenarios can be estimated using histories from the nominal dose calculation:
\begin{eqnarray}
\label{eq:shiftedDoseIR}
d^{\Delta_k}_i &= \mathbb{E}_{\Phi(\boldsymbol{\xi},\boldsymbol{\Delta}_k)}[h_{i}(\boldsymbol{\xi},\boldsymbol{\Delta}_k)]\nonumber\\
&\approx \frac{1}{H}\sum_{p=1}^H h_{i}(\boldsymbol{\Xi}_p) \frac{\Phi(\boldsymbol{\Xi}_p,\boldsymbol{\Delta}_k)}{\Phi_0(\boldsymbol{\Xi}_p)} \;,\;\; \boldsymbol{\Xi}_p \sim \Phi_0(\boldsymbol{\xi})
\end{eqnarray}
Thus, scenario computation reduces to a scoring problem. Dose expectation and variance can now be computed through the sample mean and variance over the obtained scenarios:
\begin{equation}
\label{eq:expDoseIR}
E[d_i] \approx \frac{1}{K}\sum_{k=1}^K d^{\Delta_k}_{i}\;,
\end{equation}
\begin{equation}
\label{eq:varianceIR}
Var(d_i) \approx \frac{1}{K-1}\sum_{k=1}^K (d^{\Delta_k}_{i}- E[d_i])^2\;.
\end{equation}
When it is possible to compute the expected value directly as discussed in \ref{sec:directExpDose}, only one (re-)weighting step is necessary:
\begin{equation}
\label{eq:expDoseIRDirect}
E[d_i] \approx \frac{1}{H}\sum_{p=1}^H h_{i}(\boldsymbol{\Xi}_p) \frac{\Psi(\boldsymbol{\Xi}_p)}{\Phi_0(\boldsymbol{\Xi}_p)} \;,\;\; \boldsymbol{\Xi}_p \sim \Phi_0(\boldsymbol{\xi})
\end{equation}
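A minimal one-dimensional Python sketch of the complete (re-)weighting workflow, i.e.\ of (\ref{eq:shiftedDoseIR})--(\ref{eq:expDoseIRDirect}), is given below. The analytic toy response replacing the scored histories is hypothetical; in practice, \texttt{h} is the stored history dose matrix:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
H, K = 50_000, 100
sig_xi, sig_del = 3.0, 3.0      # nominal spread and error magnitude [mm]

# Histories sampled from the nominal density Phi_0 = N(0, sig_xi^2);
# h[p, i] is a toy surrogate for the dose scored by history p in voxel i.
xi = rng.normal(0.0, sig_xi, size=H)
voxels = np.linspace(-15.0, 15.0, 61)
h = np.exp(-0.5 * ((voxels[None, :] - xi[:, None]) / 2.0) ** 2)

phi0 = norm.pdf(xi, 0.0, sig_xi)
deltas = rng.normal(0.0, sig_del, size=K)   # sampled error scenarios

# One importance-weight vector per scenario replaces a re-simulation.
d_scen = np.empty((K, voxels.size))
for k, dk in enumerate(deltas):
    w = norm.pdf(xi, dk, sig_xi) / phi0
    d_scen[k] = (w[:, None] * h).mean(axis=0)

d_exp = d_scen.mean(axis=0)                 # expected dose
d_std = d_scen.std(axis=0, ddof=1)          # dose standard deviation

# Direct expected dose via a single (re-)weighting with Psi:
w_psi = norm.pdf(xi, 0.0, np.hypot(sig_xi, sig_del)) / phi0
d_exp_direct = (w_psi[:, None] * h).mean(axis=0)
\end{verbatim}
Here, each scenario dose costs one weighted average of the history dose matrix instead of a full Monte Carlo run.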
\subsection{Modeling set-up uncertainties}
\label{sec:setupUncert}
Set-up uncertainties correspond to a shift of the patient position or, equivalently, of the positions of primary particles relative to the patient. While errors occur in three-dimensional space, shifts along the beam axis do not affect the dose distribution. In the Gaussian model, set-up errors can hence be assumed to follow a bivariate normal distribution for each pencil beam $b=1,..,B$:
\begin{equation}
\boldsymbol{\delta}_r^b=(\delta^b_{r_x},\delta^b_{r_y}) \sim \mathcal{N}(\boldsymbol{\mu}_{\delta_r}^b,\boldsymbol{\Sigma}_{\delta_r}^b) \;,
\end{equation} with $\boldsymbol{\mu}_{\delta_r}^b \in \mathbb{R}^2$ and $\boldsymbol{\Sigma}_{\delta_r}^b \in \mathbb{R}^{2\times 2}$.
Particles are initialized in a 2D plane; thus the primary particle positions follow a bivariate Gaussian mixture (see \ref{sec:beamModel})
\begin{equation}
\label{eq:nomDistSetup}
\Phi_{0;r}(\boldsymbol{r})= \sum_{b=1}^B w_b \Phi^b_{0;r}(\boldsymbol{r}) \,,\;\; \Phi^b_{0;r}=\mathcal{N}(\boldsymbol{r};\boldsymbol{\mu}_r^b,\boldsymbol{\Sigma}_r^b) \; .
\end{equation}
Here $\boldsymbol{\mu}_r^b$ is the mean lateral position of initial particles in pencil beam $b$ in beam's eye view, i.e.\ in the 2D plane perpendicular to the central beam axis. Then, according to \ref{sec:uncertainties}, the initial position $\boldsymbol{r}_\delta$ of a particle under uncertainty is determined by
\begin{equation}
\boldsymbol{r}_\delta=\boldsymbol{r}+\boldsymbol{\delta}_r\;
\end{equation}
and $\boldsymbol{r}_\delta$ is distributed with the convolution function
\begin{equation}
\label{eq:expDistSetup}
\Psi_r = \sum_{b=1}^B w_b \Psi_r^b \,,\;\;\Psi_r^b=\mathcal{N}(\boldsymbol{\mu}_r^b+\boldsymbol{\mu}_{\delta_r}^b,\boldsymbol{\Sigma}_r^b + \boldsymbol{\Sigma}_{\delta_r}^b) \;.
\end{equation}
An individual error realization $\boldsymbol{\Delta}_{r;k}\sim p_{\delta_r}$ then formally just corresponds to a shift of the original primary positions, which now follow the distribution
\begin{equation}
\label{eq:shiftedDistErrorScen}
\Phi_r(\boldsymbol{r},\boldsymbol{\Delta}_k) = \sum_{b=1}^B w_b \cdot \mathcal{N}(\boldsymbol{r};\boldsymbol{\mu}_r^b+\boldsymbol{\Delta}^b_{r;k},\boldsymbol{\Sigma}_r^b) \;,
\end{equation}
corresponding to the nominal distribution shifted by $\boldsymbol{\Delta}_{r;k}$.
The above distributions can be directly used with (\ref{eq:shiftedDoseIR}), (\ref{eq:varianceIR}) and (\ref{eq:expDoseIRDirect}) to obtain the expected dose and variance for set-up uncertainties.
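In the mixture case, the importance weights in (\ref{eq:shiftedDoseIR}) require evaluating (\ref{eq:nomDistSetup}) and (\ref{eq:shiftedDistErrorScen}) at the recorded primary positions. A Python sketch, with hypothetical beam parameters and one common shift per scenario (i.e.\ fully correlated pencil beams):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def mixture_pdf(r, w, mu, cov):
    # sum_b w_b * N(r; mu_b, cov_b), evaluated at positions r (n x 2)
    return sum(wb * multivariate_normal.pdf(r, mb, cb)
               for wb, mb, cb in zip(w, mu, cov))

w = np.array([0.5, 0.3, 0.2])
mu = np.array([[0., 0.], [5., 0.], [0., 5.]])
cov = [np.eye(2) * 3.0**2 for _ in range(3)]

rng = np.random.default_rng(3)
beam = rng.choice(3, size=1000, p=w)
r = mu[beam] + rng.multivariate_normal([0.0, 0.0], cov[0], size=1000)

delta_k = np.array([1.5, -2.0])   # one sampled set-up shift [mm]
weights = mixture_pdf(r, w, mu + delta_k, cov) / mixture_pdf(r, w, mu, cov)
\end{verbatim}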
\subsection{Modeling range uncertainties}
\label{sec:rangeError}
The proposed approach could be analogously applied to any type of uncertainty directly affecting input parameters of the simulation, which have an a-priori probability distribution. Range uncertainties, however, modify the density values, which are deterministic and can thus not be directly modeled within the proposed framework.
To still approximate our quantities of interest, we exploit that the largest dose uncertainty is induced near the range of a beam \citep{bortfeldAnalyticalApproximationBragg1997}, although the uncertain density variation affects the whole trajectory. Range can be expressed in terms of the initial energy of particles, using the Bragg-Kleemann rule
\begin{equation}
\label{eq:braggKleemann}
R=\alpha \cdot E_0^{p} \;,
\end{equation}
where $R$ is the range, $E_0$ is the initial energy and $\alpha$ and $p$ are application-specific parameters. For the case of the slow-down of therapeutic protons in water, values of $\alpha=0.022$ mm/MeV$^p$ and $p=1.77$ can be chosen \citep{ulmerTheoreticalMethodsCalculation2011}.
The initial energy spectrum of a scanned pencil beam at the exit of the nozzle can be approximately represented by a Gaussian \citep{bortfeldAnalyticalApproximationBragg1997,kimstrandBeamSourceModel2007,tourovskyMonteCarloDose2005,soukupPencilBeamAlgorithm2005}. We can use this to model range uncertainties through random variations of the initial energy \citep[compare treatment of range straggling in][]{pedroniExperimentalCharacterizationPhysical2005}.
Let us assume that range uncertainties are normally distributed, i.e. $\delta_{\rho} \sim \mathcal{N}(0,\sigma_R^2)$ \citep[comp.][]{lomaxIntensityModulatedProton2008, yangComprehensiveAnalysisProton2012}. With a Taylor approximation (order 1 for the mean and 2 for the variance) around the mean $E[R]$, we can determine the parameters $\mu_{E_0},\sigma^2_{E_0}$ of the energy distribution induced by range uncertainties.
\begin{eqnarray}
E_0&=&\big(\tfrac{1}{\alpha}R\big)^{\frac{1}{p}}=:g(R) \\
\mu_{E_0}&=&E[g(R)] \approx g(E[R]) = \big(\tfrac{1}{\alpha}E[R]\big)^{\frac{1}{p}} \\
\sigma^2_{E_0}&=&Var(g(R)) \approx g'(E[R])^2\, Var(R) \nonumber\\
&=& \left( \frac{1}{p \alpha}\big(E[R]\tfrac{1}{\alpha}\big)^{\frac{1}{p}-1} \right)^2\sigma^2_R
\end{eqnarray}
Thus the randomness in range is approximated through an energy distribution $E_0 \sim \mathcal{N}(\mu_{E_0},\sigma^2_{E_0})$ and the expected dose and variance can be computed by (re-)weighting histories analogously to \ref{sec:setupUncert}. This can again be extended to multiple pencil beams using Gaussian mixtures.
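A small sketch of this range-to-energy mapping; the example range of \SI{150}{\milli\meter} is hypothetical:
\begin{verbatim}
import numpy as np

alpha, p = 0.022, 1.77   # Bragg-Kleemann parameters, protons in water

def energy_params_from_range(mean_R, sigma_R):
    # Taylor approximation of E0 ~ N(mu_E0, sigma_E0^2) induced by
    # R ~ N(mean_R, sigma_R^2), following the derivation above.
    mu_E0 = (mean_R / alpha) ** (1.0 / p)
    dE_dR = (1.0 / (p * alpha)) * (mean_R / alpha) ** (1.0 / p - 1.0)
    return mu_E0, abs(dE_dR) * sigma_R

# Example: nominal range of 150 mm with a 3 % range uncertainty.
mu_E0, sigma_E0 = energy_params_from_range(150.0, 0.03 * 150.0)
\end{verbatim}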
Note that, again, the expected value can be directly computed from simulations with an energy spectrum convolved with the Gaussian uncertainty kernel. Alternatively to the nominal energy distribution, this convolved distribution can also be used to obtain the required histories. In this case, the nominal distribution is replaced by the convolved distribution in (\ref{eq:shiftedDoseIR})-(\ref{eq:expDoseIRDirect}).
\subsection{Correlation Models}
\label{sec:corrModels}
In the previous sections, the distributions of different types of phase space parameters were considered independently. Note, however, that the derived distributions are all marginals of the joint multivariate Gaussian mixture spanning the complete phase space and all pencil beams (see also \ref{sec:beamModel}).
Similarly, the univariate normal distributions of errors of different types and in different pencil beams can be connected using a joint multivariate Gaussian distribution. This framework in principle allows for the definition of arbitrary correlation models for uncertainties between pencil beams. For the dose variance computation, these correlations can be easily implemented using the covariance matrix of this joint distribution, since the samples for the weighted scenarios are directly drawn from the respective multivariate normal distribution. The expected dose is constant under varying correlation assumptions.
Using the example of set-up uncertainties, if we define the errors in each beam by
$\boldsymbol{\delta}_r = (\delta^1_{r},...,\delta^B_{r})^T$, the multivariate Gaussian $\mathcal{N}(\boldsymbol{\mu},\boldsymbol{C})$ would be parametrized with
\begin{equation*}
\boldsymbol{\mu} =\left(\begin{array}{c} \boldsymbol{\mu}_{\delta_r}^1 \\ \boldsymbol{\mu}_{\delta_r}^2 \\ \vdots \\ \boldsymbol{\mu}_{\delta_r}^B \end{array}\right),
\end{equation*}
\begin{equation}
\boldsymbol{C} = \left( \begin{array}{ccccc}
\boldsymbol{\Sigma}_{\delta_r}^1 & \begin{array}{cc}
\rho_{xx}^{12} & \rho_{xy}^{12} \\
\rho_{yx}^{12} & \rho_{yy}^{12}\\
\end{array} & \cdots & \begin{array}{cc}
\rho_{xx}^{1B} & \rho_{xy}^{1B}\\
\rho_{yx}^{1B} & \rho_{yy}^{1B}\\
\end{array}\\
\begin{array}{cc}
\rho_{xx}^{21} & \rho_{xy}^{21}\\
\rho_{yx}^{21} & \rho_{yy}^{21}
\end{array}& \boldsymbol{\Sigma}_{\delta_r}^2 & &\\
\vdots & &\ddots & \vdots\\
\begin{array}{cc}\rho_{xx}^{B1} & \rho_{xy}^{B1}\\
\rho_{yx}^{B1} & \rho_{yy}^{B1}\\
\end{array} & \cdots & &\boldsymbol{\Sigma}_{\delta_r}^B \\\end{array} \right) \;,
\end{equation}
where $\rho_{xy}^{ab}$ is the covariance between set-up errors in the x-direction in pencil beam $a$ and errors in the y-direction in pencil beam $b$.
A few simple examples of correlation models are shown in figure \ref{fig:corrModels}; more can be found in the literature \citep{bangertAnalyticalProbabilisticModeling2013, pflugfelderWorstCaseOptimization2008, unkelbachReducingSensitivityIMPT2009}.
\begin{figure}[ht!]
\begin{tabular}{c c c c c}
\multicolumn{5}{l}{\includegraphics[width=0.8\textwidth]{figures_eps/figure_1.pdf}}\\[-0.5ex]
\includegraphics[width=0.17\textwidth]{figures_eps/figure_1a.pdf}&\includegraphics[width=0.17\textwidth]{figures_eps/figure_1b.pdf}&\includegraphics[width=0.17\textwidth]{figures_eps/figure_1c.pdf}&\includegraphics[width=0.17\textwidth]{figures_eps/figure_1d.pdf}&\includegraphics[width=0.17\textwidth]{figures_eps/figure_1e.pdf} \\
& & & & \\[-1.5ex]
(a)&(b)&(c)&(d)&(e)\\
\end{tabular}
\caption{Adapted from \citet{wahlAnalyticalModelsProbabilistic2018a}. Covariance matrices for different correlation assumptions. Rows and columns of the matrices correspond to the individual pencil beams, beam and ray separators indicate sections of pencil beams with the same irradiation angle and lateral position, respectively. (a) No correlation between pencil beams, (b) correlation of energy levels within one beam, (c) ray-wise correlation, all pencil beams with the same lateral position, i.e.\ hitting the same material are fully correlated (d) beam-wise correlation, pencil beams with the same irradiation angle are fully correlated and (e) errors in all pencil beams are fully correlated.}
\label{fig:corrModels}
\end{figure}
In case the correlation matrix is singular (perfect correlation between some pencil beams), the dimension of the uncertain vector can be reduced and one joint error can be sampled for the respective perfectly correlated pencil beams.
More complex correlation models are possible.
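To illustrate, the following Python sketch assembles such a block covariance matrix for a single scalar inter-beam correlation coefficient, which is a simplification of the general model above, and draws joint error scenarios from it:
\begin{verbatim}
import numpy as np

def setup_covariance(B, sigma=3.0, rho=1.0):
    # 2B x 2B covariance of (x, y) set-up errors for B pencil beams;
    # rho = 0: uncorrelated pencil beams, rho = 1: full correlation.
    C = np.zeros((2 * B, 2 * B))
    for a in range(B):
        for b in range(B):
            c = sigma**2 * (1.0 if a == b else rho)
            C[2 * a, 2 * b] = c          # x-x covariance
            C[2 * a + 1, 2 * b + 1] = c  # y-y covariance
    return C

C = setup_covariance(B=4, rho=1.0)
# numpy's SVD-based sampler also handles the singular case rho = 1.
rng = np.random.default_rng(4)
deltas = rng.multivariate_normal(np.zeros(2 * 4), C, size=100)
\end{verbatim}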
\subsection{Implementation}
\label{sec:implementation}
For the proof-of-concept in this work, the weighting method was implemented as a post-processing routine in Matlab. Radiation plans were generated with matRad \citep{wieserDevelopmentOpensourceDose2017b} and exported to the Monte Carlo simulation engine TOPAS \citep{perlTOPASInnovativeProton2012} for dose calculations. The required particle histories $h(\boldsymbol{\xi})$ are stored during the simulation using a custom extension for TOPAS.
All reference computations rely on Monte Carlo dose calculations with TOPAS as well.
To reduce the number of required realizations, both for the reference computation and the (re-)weighting steps, a quasi-Monte Carlo approach was used to sample the random parameters \citep[see e.g.][]{caflischMonteCarloQuasiMonte1998}.
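For instance, quasi-random error scenarios for the bivariate set-up model can be generated with SciPy's quasi-Monte Carlo module; sample size and seed below are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

# Scrambled Sobol' points mapped to N(0, diag(3^2, 3^2)).
sampler = qmc.MultivariateNormalQMC(mean=np.zeros(2),
                                    cov=np.eye(2) * 3.0**2, seed=5)
deltas = sampler.random(128)   # 128 quasi-random set-up error scenarios
\end{verbatim}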
\subsection{Investigated patients and uncertainties}
In the following, we consider range and set-up uncertainties, as well as a combination of both. For the set-up uncertainties, we assume a symmetric, bivariate normal distribution with zero mean (no systematic errors) and a standard deviation of \SI{3}{\milli\meter} \citep[comp.][]{wahlAnalyticalModelsProbabilistic2018a, perkoFastAccurateSensitivity2016}. For range uncertainties, in the reference computations we scale the density with a normally distributed factor, where the mean is equal to the nominal density and the standard deviation is \SI{3}{\percent} \citep[as recommended in][]{lomaxIntensityModulatedProton2008a, yangComprehensiveAnalysisProton2012}. The corresponding parameters of the energy distribution, used to approximate range errors in the importance (re-)weighting estimate, are determined based on this distribution as detailed in \ref{sec:rangeError}. Table \ref{table:overviewPatientsUnc} provides an overview of which uncertainty models were computed for which patient, as well as the irradiation angles used for different patients.
\begin{table}[htb!]
\centering
\caption{Overview of uncertainties investigated for each patient/test case. Beam irradiation angles are given as (couch angle, gantry angle) in degrees and error values refer to the standard deviations of the corresponding normal distributions.}
\begin{tabular}{l l l l l l }
\br
& &Patient & Water phantom & Prostate & Liver \\
&&Angles& (\ang{0}, \ang{0}) & (\ang{0}, \ang{90}/\ang{270}) &(\ang{0}, \ang{315}) \\
Correlation & Type & &&&\\
\mr
\multirow{ 3}{*}{Full}& Set-up && \SI{3}{\milli\meter}& \SI{3}{\milli\meter}& \SI{3}{\milli\meter}\\
&Range &&\SI{3}{\percent}&-&\SI{3}{\percent}\\
&Both& & \SI{3}{\milli\meter}/\SI{3}{\percent}&-& \SI{3}{\milli\meter}/\SI{3}{\percent}\\
Beam& Set-up& &-& \SI{3}{\milli\meter}&-\\
Ray& Set-up& &-& \SI{3}{\milli\meter}&-\\
Energy& Both &&-& \SI{3}{\milli\meter}/\SI{3}{\percent}&-\\
None& Set-up &&-& \SI{3}{\milli\meter}& -\\
\br
\end{tabular}
\label{table:overviewPatientsUnc}
\end{table}
The number of histories and pencil beams for each considered patient, as well as the number of error scenarios computed for the importance (re-)weighting estimates, can be found in table \ref{table:patientStatistics}. Note that the number of histories per pencil beam was determined based on the (non-normalized) weights from the optimized radiation plan (see \ref{sec:implementation}), with around $10^5$ histories computed for the pencil beams with the highest weights.
\begin{table}[htb!]
\centering
\caption{Overview of simulated plans and error scenarios per patient.}
\lineup
\begin{tabular}{l l l l l }
\br
Patient & Water phantom & Liver & \multicolumn{2}{c}{Prostate} \\
\mr
Irradiation angles &\0\0 (\ang{0}, \ang{0}) & \0(\ang{0}, \ang{315}) &\0\0(\ang{0}, \ang{90}) &\0(\ang{0}, \ang{270}) \\
Number of pencil beams & \,\0\0\0\0\0147 & \0\0\0\0\01\,378 & \0\0\0\0\01\,375 & \0\0\0\0\01\,383 \\
Number of histories &2\,566\,453 &13\,528\,430 & 16\,992\,193 & 16\,748\,034\\
Number of error scenarios & \,\0\0\0\0\0100 & \,\0\0\0\0\0\0100 & \multicolumn{2}{c}{100} \\
\br
\end{tabular}
\label{table:patientStatistics}
\end{table}
\subsection{Evaluation criteria}
To compare our results to the respective reference computations, we plot two-dimensional slices of the dose cubes as well as a difference map and employ a global three-dimensional $\gamma$-analysis. For the difference maps, we compute
\begin{equation}
\text{diff}_i(d^\text{ref}_i,d_i^\text{est}) = d_i^\text{ref} - d_i^\text{est} \; ,
\end{equation}
for each voxel $i$ in the reference result $d^\text{ref}$ and (re-)weighting estimate $d^\text{est}$. For the $\gamma$-analysis, we use the matRad implementation based on \citet{lowTechniqueQuantitativeEvaluation1998}, with a distance to agreement of $\SI{3}{\milli\meter}$ and a dose difference criterion of $\SI{3}{\percent}$.
\section{Results}
\label{sec:results}
In the following we present results for the cases given in table \ref{table:overviewPatientsUnc}. Unless specified otherwise, results were computed on the basis of histories from nominal dose calculations, i.e. with phase space parameters sampled from $\Phi_0$ (see \ref{sec:beamModel}). The references computed for the nominal and expected dose stem from Monte Carlo dose calculations with the respective phase space distributions $\Phi_0$ and $\Psi$ (see \ref{sec:directExpDose}); the reference standard deviation is derived from numerous such Monte Carlo simulations for different error scenarios sampled from the joint error distribution. Therefore, the importance (re-)weighting estimate for the nominal dose only differs from the reference by round-off errors introduced in post-processing, as can be seen in figure \ref{fig:WB_Setup}. It is omitted thereafter, as is the reference.
\subsection{Set-up errors}
Figure \ref{fig:WB_Setup} displays the nominal dose, expected value and standard deviation estimates for a water phantom, computed using the (re-)weighting approach, in comparison to the respective references. While we see some minor deviations in the difference maps for the expected dose and standard deviation, they do not appear systematic.
The distance-to-agreement analysis using the $\gamma$-criterion supports this quantitatively (table \ref{table:gammaWaterBox}), with $\gamma^{\SI{3}{\milli\meter}}_{\SI{3}{\percent}}$-pass rates of \SI{100}{\percent} for the nominal and expected dose and \SI{99.97}{\percent} for the standard deviation.
Figure \ref{fig:Setup_LiverProstate} demonstrates that this transfers to the more complex patient cases. With overall $\gamma^{\SI{3}{\milli\meter}}_{\SI{3}{\percent}}$-pass rates of \SI{99.99}{\percent} (prostate patient) and \SI{100}{\percent} (liver patient), the standard deviation agrees with the reference computations as well as the expected value does, which reaches \SI{100}{\percent} and \SI{99.99}{\percent}, respectively (tables \ref{table:gammaProstate} and \ref{table:gammaLiver}).
\begin{table}[h]
\centering
\caption{$\gamma^{\SI{3}{\milli\meter}}_{\SI{3}{\percent}}$-pass rates in volumes of interest (VOI) of the water phantom.}
\lineup
\begin{tabular}{l l l l }
\br
Water Phantom & Nominal dose & Expected value & Standard deviation\\
\mr
Error type & \multicolumn{3}{c}{Set-up} \\
\mr
\textbf{Overall} &100 &100&\099.97\\
Body &100 &100 & \099.97\\
Target &100 & 100 & 100 \\
\br
\end{tabular}
\label{table:gammaWaterBox}
\end{table}
\begin{figure}[H]
\centering
\begin{tabular}{c c c c}
& \textbf{Estimate}\hspace*{0.5cm} & \textbf{Reference}\hspace*{0.5cm}&\textbf{Difference}\hspace*{0.5cm} \\
$\boldsymbol{d}$ & \raisebox{-.5\height}{\includegraphics[width=0.25\textwidth]{figures_eps/figure_2a.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.25\textwidth]{figures_eps/figure_2b.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.255\textwidth]{figures_eps/figure_2c.pdf}} \\
$E[\boldsymbol{d}]$&\raisebox{-.5\height}{\includegraphics[width=0.25\textwidth]{figures_eps/figure_2d.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.25\textwidth]{figures_eps/figure_2e.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.255\textwidth]{figures_eps/figure_2f.pdf}}\\
$\boldsymbol{\sigma(d)}$&\raisebox{-.5\height}{\includegraphics[width=0.25\textwidth]{figures_eps/figure_2g.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.25\textwidth]{figures_eps/figure_2h.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.255\textwidth]{figures_eps/figure_2i.pdf}}\\
\end{tabular}
\caption{Nominal dose $\boldsymbol{d}$, expected dose $E[\boldsymbol{d}]$ and standard deviation $\boldsymbol{\sigma(d)}$ w.r.t.\ set-up uncertainties with $\SI{3}{\milli\meter}$ standard deviation for a spread out Bragg peak in a water phantom. The left column shows the estimate computed with the proposed (re-)weighting approach, the middle column the respective reference and the right column the difference between both simulations.}
\label{fig:WB_Setup}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c@{\hspace{-0.1ex}} c @{\hspace{-0.1ex}} c c@{\hspace{-0.1ex}} c}
& \textbf{Estimate}\hspace*{0.5cm} & \textbf{Difference}\hspace*{0.5cm}& \textbf{Estimate}\hspace*{0.5cm} & \textbf{Difference}\hspace*{0.5cm} \\
$E[\boldsymbol{d}]$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_3aa.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_3ab.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_3ba.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_3bb.pdf}}\\
$\boldsymbol{\sigma(d)}$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_3ac.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_3ad.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_3bc.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_3bd.pdf}}\\
& & & & \\[-1.5ex]
& \multicolumn{2}{c}{(a) Prostate}& \multicolumn{2}{c}{(b) Liver} \\
\end{tabular}
\caption{Expected dose $E[\boldsymbol{d}]$ and standard deviation $\boldsymbol{\sigma(d)}$ w.r.t.\ set-up uncertainties with $\SI{3}{\milli\meter}$ standard deviation for (a) a prostate patient (couch angle \ang{0}, gantry angles \ang{90} and \ang{270}) and (b) a liver patient (couch angle \ang{0}, gantry angle \ang{315}). The left columns show the estimates computed with the proposed (re-)weighting approach and the right columns the difference to the corresponding references.}
\label{fig:Setup_LiverProstate}
\end{figure}
\begin{table}[h]
\centering
\caption{$\gamma^{\SI{3}{\milli\meter}}_{\SI{3}{\percent}}$-pass rates in volumes of interest (VOI) of the prostate patient.}
\label{table:gammaProstate}
\lineup
\begin{tabular}{l l l }
\br
Prostate& Expected value & Standard deviation \\
\mr
Error type & \multicolumn{2}{c}{Set-up} \\
\mr
\textbf{Overall} &100&\099.99\\
Rectum &100&100\\
Penile bulb &100&100\\
Lymph nodes &100&100\\
Rt femoral head &100&100\\
Prostate bed &100&100\\
PTV 68 & 100&100\\
PTV 56 & 100&\099.99\\
Bladder & 100&\099.99\\
Body & 100&\099.99\\
Lt femoral head &100 &100\\
\br
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{$\gamma^{\SI{3}{\milli\meter}}_{\SI{3}{\percent}}$-pass rates in volumes of interest (VOI) of the liver patient (initial particles sampled from $\Phi_0$).}
\label{table:gammaLiver}
\lineup
\begin{tabular}{l l l l l l l}
\br
Liver & \multicolumn{3}{l}{Expected value} &\multicolumn{3}{l}{Standard deviation}\\
\mr
Error type & Set-up & Range & Both & Set-up & Range & Both\\
\mr
\textbf{Overall} &\099.99&100&\099.50 &100 & \091.69&\093.12\\
GTV &100&100&\097.59&100&\099.01&\087.10\\
Liver &100&100&\099.75&\099.99&\092.78&\084.04\\
Heart &100&100&\096.67&100&\091.26&\098.84\\
CTV &100&100&\098.90&100&\093.01&\090.99\\
Contour &100&100&\098.69&100&\091.48&\090.75\\
PTV &100&100&\099.28&\099.99&\083.60&\090.69\\
\br
\end{tabular}
\end{table}
\subsection{Range errors}
\label{sec:resultsRangeError}
In contrast to the set-up errors, for which dose estimates can also be shown to be mathematically accurate, range errors can only be modeled through an approximation introduced in \ref{sec:rangeError}. Figure \ref{fig:WB_Range} displays results for range errors as well as the combination of range and set-up errors in the water phantom.
The difference maps for both the expected value and the standard deviation show that the deviations are, as expected, higher when range errors are included. We observe a systematic bias primarily at the distal edge, where our method seems to consistently underestimate the variance. The standard deviation estimate using our importance weighting method also exhibits strong local artifacts, as evident in the difference maps (compare figure \ref{fig:WB_Range}). This indicates too little statistical mass, i.e., too few computed particle trajectories, in the original simulation. For more extreme error realizations, relatively high weights are assigned to a small number of particles, thereby amplifying individual realizations and their errors. Such artifacts are especially likely to appear when the beam energy spread in the original simulation (here $\SI{1}{\percent}$) is small compared to the range error of $\SI{3}{\percent}$. In order to prevent this, one could either compute a larger number of particle histories in the simulation or sample the particles from a different distribution which has more density mass in its outer regions or tails (compare \ref{sec:directExpDose}).
To support this explanation for the appearance of the artifacts, we recomputed the estimates using the (re-)weighting method based on a direct computation of the expected value, which can be calculated using the convolution $\Psi$ of the Gaussian error kernel with the nominal phase space parameter distribution (see \ref{sec:directExpDose}). Figure \ref{fig:WB_Range} shows that this alleviates the discrepancy from the references, causing the artifacts to disappear and also reducing the overall amount of deviation displayed in the difference maps.
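For illustration, a minimal sketch of the (re-)weighting step follows, assuming each stored history carries its sampled phase-space parameters and its per-voxel dose contributions. All names and the normalization convention are illustrative assumptions, not our Matlab implementation; the effective sample size offers a simple diagnostic for the weight degeneracy discussed above.
\begin{verbatim}
import numpy as np

def scenario_dose(dose_hist, x, p0, q_s):
    """Dose for one error scenario by re-weighting stored histories.

    dose_hist : (n_hist, n_voxel) per-history dose contributions
    x         : (n_hist,) phase-space parameters sampled from density p0
    p0, q_s   : objects with a .pdf() method (e.g. frozen SciPy dists)
    """
    w = q_s.pdf(x) / p0.pdf(x)     # importance weights
    # histories enter the score with weight w instead of 1; we assume
    # dose_hist is normalized such that the nominal dose is the mean score
    return (w @ dose_hist) / len(x)

def uq_estimates(dose_hist, x, p0, scenarios):
    """Expected dose and standard deviation over sampled error scenarios."""
    doses = np.stack([scenario_dose(dose_hist, x, p0, q) for q in scenarios])
    return doses.mean(axis=0), doses.std(axis=0, ddof=1)

def effective_sample_size(w):
    # a small value flags weight degeneracy: few histories dominate the
    # estimate, producing the local artifacts discussed above
    return w.sum() ** 2 / (w ** 2).sum()
\end{verbatim}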
\begin{figure}[H]
\centering
\begin{tabular}{c@{\hspace{-0.1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}}}
& \textbf{Estimate ($\Phi_0$)}\hspace*{0.25cm} & \textbf{Difference}\hspace*{0.25cm}& \textbf{Estimate ($\Psi$)}\hspace*{0.25cm} & \textbf{Difference}\hspace*{0.25cm} \\
$E[\boldsymbol{d}]$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4aa.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4ab.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4ac.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4ad.pdf}}\\
$\boldsymbol{\sigma(d)}$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4ae.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4af.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4ag.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4ah.pdf}}\\
& & & &\\[-1.5ex]
& \multicolumn{4}{c}{(a) Range errors}\\
& \textbf{Estimate ($\Phi_0$)}\hspace*{0.5cm} & \textbf{Difference}\hspace*{0.5cm}& \textbf{Estimate ($\Psi$)}\hspace*{0.5cm} & \textbf{Difference}\hspace*{0.5cm} \\
$E[\boldsymbol{d}]$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4ba.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4bb.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4bc.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4bd.pdf}}\\
$\boldsymbol{\sigma(d)}$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4be.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4bf.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_4bg.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_4bh.pdf}}\\
& & & &\\[-1.5ex]
& \multicolumn{4}{c}{(b) Range and set-up errors}\\
\end{tabular}
\caption{Expected dose $E[\boldsymbol{d}]$ and standard deviation $\boldsymbol{\sigma(d)}$ w.r.t.\ (a) range uncertainties with a $\SI{3}{\percent}$ standard error and (b) both range uncertainties as well as set-up errors with $\SI{3}{\milli\meter}$ standard deviation for a spread out Bragg peak in a water phantom. The left columns show the estimate computed with the proposed (re-)weighting approach, reconstructed either from the nominal distribution $\Phi_0$ or its convolution $\Psi$ with the error kernel. The right columns show the difference to the corresponding references.}
\label{fig:WB_Range}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c@{\hspace{-0.1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}}}
& \textbf{Estimate ($\Phi_0$)}\hspace*{0.25cm} & \textbf{Difference}\hspace*{0.25cm}& \textbf{Estimate ($\Psi$)}\hspace*{0.25cm} & \textbf{Difference}\hspace*{0.25cm} \\
$E[\boldsymbol{d}]$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5aa.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5ab.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5ac.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5ad.pdf}}\\
$\boldsymbol{\sigma(d)}$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5ae.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5af.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5ag.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5ah.pdf}}\\
& & & &\\[-1.5ex]
& \multicolumn{4}{c}{(a) Range errors}\\
& \textbf{Estimate ($\Phi_0$)}\hspace*{0.5cm} & \textbf{Difference}\hspace*{0.5cm}& \textbf{Estimate ($\Psi$)}\hspace*{0.5cm} & \textbf{Difference}\hspace*{0.5cm} \\
$E[\boldsymbol{d}]$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5ba.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5bb.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5bc.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5bd.pdf}}\\
$\boldsymbol{\sigma(d)}$&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5be.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5bf.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.22\textwidth]{figures_eps/figure_5bg.pdf}}&\raisebox{-.5\height}{\includegraphics[width=0.225\textwidth]{figures_eps/figure_5bh.pdf}}\\
& & & &\\[-1.5ex]
& \multicolumn{4}{c}{(b) Range and set-up errors}\\
\end{tabular}
\caption{Expected dose $E[\boldsymbol{d}]$ and standard deviation $\boldsymbol{\sigma(d)}$ w.r.t.\ (a) range uncertainties with a $\SI{3}{\percent}$ standard error and (b) both range uncertainties as well as set-up errors with $\SI{3}{\milli\meter}$ standard deviation in a liver patient (couch angle \ang{0}, gantry angle \ang{315}). The left columns show the estimate computed with the proposed (re-)weighting approach, reconstructed either from the nominal distribution $\Phi_0$ or its convolution $\Psi$ with the error kernel. The right columns show the difference to the corresponding references.}
\label{fig:Liver_Range}
\end{figure}
We can thereby conclude that the irregularities in the solution can be attributed to a lack of statistical support in certain areas. In contrast, parts of the systematic differences remain and are thus most likely a result of the model approximations.
Figure \ref{fig:Liver_Range} validates these observations for a liver patient. The difference maps for estimates computed based on the expected distribution $\Psi$ have less severe artifacts and systematic deviations. For both set-up and range errors, the $\gamma^{\SI{3}{\milli\meter}}_{\SI{3}{\percent}}$-pass rate also increases from $\SI{93.12}{\percent}$ to $\SI{98.19}{\percent}$ (table \ref{table:gammaLiver}). However, we do not observe such an increase in the case of only range uncertainties.
Note, however, that using $\Psi$ to sample the initial particles yields an expected dose estimate that is exact up to machine precision, but a nominal dose estimate that now deviates from a standard Monte Carlo reference computation by the order of magnitude previously observed for the expected dose.
\subsection{Correlation models}
So far, we have only shown results for the case of fully correlated pencil beams, meaning one global shift of the patient position or one global scaling factor for the beam range. One of the advantages of the proposed method is, however, its high flexibility in changing the uncertainty model. In figure \ref{fig:Prostate_CorrModels} we therefore present standard deviation estimates for four examples of different error correlation models discussed in section \ref{sec:corrModels}.
\begin{figure}[H]
\centering
\begin{tabular}{c@{\hspace{1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}} c@{\hspace{1ex}}}
\includegraphics[width=0.24\textwidth]{figures_eps/figure_6a.pdf} & \includegraphics[width=0.24\textwidth]{figures_eps/figure_6b.pdf} & \includegraphics[width=0.24\textwidth]{figures_eps/figure_6c.pdf}& \includegraphics[width=0.24\textwidth]{figures_eps/figure_6d.pdf} \\
& & & \\[-1.5ex]
(a)&(b) &(c)&(d) \\
\end{tabular}
\caption{Standard deviation of dose in a prostate patient for (a) no correlation (b) correlation between pencil beams in the same energy level (c) ray-wise correlation between pencil beams and (d) correlation between pencil beams with the same irradiation angle, w.r.t.\ set-up errors and in case (c) also range errors.}
\label{fig:Prostate_CorrModels}
\end{figure}
The results indicate that different correlation assumptions have a crucial impact on the standard deviation of dose distributions. While it is in principle possible to define arbitrary correlations within the proposed framework, estimates can be prone to artifacts due to a lack of statistical information, especially for the ray-wise correlation model. When sampling error realizations independently for smaller beam components, the reconstruction depends solely on the particle histories associated with these components. For rays with small weights, only very few histories are computed; we therefore observe artifacts similar to those encountered in the range uncertainty computations above (section \ref{sec:resultsRangeError}).
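As an illustration of this flexibility, the following minimal sketch builds block-structured covariance matrices for correlated set-up errors, assuming a simple group label per pencil beam; the data layout and names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def block_covariance(groups, sigma_mm=3.0):
    """Covariance for pencil-beam shifts: full correlation within a group."""
    groups = np.asarray(groups)
    corr = (groups[:, None] == groups[None, :]).astype(float)
    return sigma_mm ** 2 * corr

rng = np.random.default_rng(0)
groups = [0, 0, 1, 1, 2]   # e.g. ray index of each pencil beam ('Ray' model);
                           # one common label -> 'Full', all distinct -> 'None'
cov = block_covariance(groups)
shifts = rng.multivariate_normal(np.zeros(len(groups)), cov, size=100)
\end{verbatim}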
\section{Accuracy and Convergence}
\label{sec:convergence}
Mathematically, it can be shown that the expected and nominal dose estimates are unbiased. This also holds for the doses corresponding to each individual error realization. While this does not generally apply to the variance, our results indicate that the bias does not have a significant impact on the quality of the estimates.
\begin{figure}[H]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{figures_eps/figure_7a.pdf}
\caption*{(a)}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{figures_eps/figure_7b.pdf}
\caption*{(b)}
\end{subfigure}
\caption{Mean square error (MSE) of the dose standard deviation estimate for the water phantom, computed using a (quasi) Monte Carlo method and the importance (re-)weighting approach and compared for the error convergence (a) per number of samples and (b) per corresponding computation time.}
\label{fig:convergencePlots}
\end{figure}
For quicker convergence we used quasi-random numbers throughout all comparisons, both for the reference computation and the importance (re-)weighting approach. Note that the combination of importance sampling with quasi-Monte Carlo methods has been shown to be not only possible but advantageous, preserving the convergence properties of quasi-Monte Carlo \citep{hormannQuasiImportanceSampling2005, oktenErrorReductionTechniques1999,caflischMonteCarloQuasiMonte1998,schurerAdaptiveQuasiMonteCarlo2004}. Since the procedure mimics a (quasi-)Monte Carlo method for uncertainty quantification, where the repeated simulation runs are replaced by (re-)weighting steps, the convergence of the variance per computed error realization is identical. However, due to the lower cost of the (re-)weighting steps, the convergence per unit time is much faster (see figure \ref{fig:convergencePlots}).
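For illustration, a minimal sketch of drawing such quasi-random error scenarios via a scrambled Sobol sequence mapped to the Gaussian set-up error model (here using SciPy); the sequence type, dimensionality and sample size are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, qmc

sobol = qmc.Sobol(d=3, scramble=True, seed=0)  # three set-up shift components
u = sobol.random(128)                          # low-discrepancy points in (0, 1)^3
shifts_mm = norm.ppf(u) * 3.0                  # map to N(0, (3 mm)^2) per component
\end{verbatim}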
\begin{table}[htb!]
\centering
\caption{CPU time comparison for the reference vs. (re-)weighting approach applied to different patients and computed on the same machine. All values are given in seconds. Note that the times for 100 realizations include the initialization times, while the time for a single realization only refers to the dose computation time.}
\lineup
\begin{tabular}{l l l l}
\br
& & Reference& (Re-)weighting \\
\mr
\multirow{3}{*}{Water phantom}& Initialization & \,\,\0\0\0\0\0\0\02.35& \,\0\0\0\0\061.53 \\
& One realization &\,\,\0\0\02\,331.30 & \,\0\0\0\0\028.51\\
& 100 realizations & \,\0233\,126.93& \,\0\02\,912.53\\
&&&\\[-2ex]
\multirow{3}{*}{Liver} & Initialization &\,\,\0\0\0\0\0\0\02.44 &\,\0\02\,038.75 \\
& One realization &\,\,\0\039\,066.44 &\,\0\01\,198.74 \\
& 100 realizations &3\,906\,650.90 &121\,912.75 \\
&&&\\[-2ex]
\multirow{3}{*}{ Prostate} & Initialization & \,\,\0\0\0\0\0\0\04.26& \,\0\04\,867.75\\
& One realization & \,\,\0\058\,762.40 & \,\0\02\,479.07\\
& 100 realizations & 5\,876\,253.86& 252\,774.75 \\
\br
\end{tabular}
\label{table:CPUtimes}
\end{table}
For run-time comparisons, the reference computations using TOPAS and the (re-)weighting approach, implemented as post-processing in Matlab, were run on the same virtual machine\footnote{Virtual machine with 64 CPUs at 1.995\,GHz and 200\,GB RAM.}. We observe CPU times reduced by factors of 80, 32 and 23 for the water phantom, the liver patient and the prostate patient, respectively (see table \ref{table:CPUtimes}).
\section{Discussion}
\label{sec:discussion}
In this paper, we introduce an efficient approach for uncertainty quantification in Monte Carlo dose calculations using history (re-)weighting. We demonstrate how particle histories from one simulation can be scored to construct estimates for error scenarios, the expected dose and the standard deviation, for set-up and range errors in intensity modulated proton therapy. As demonstration examples, Gaussian range and set-up uncertainties with $\SI{3}{\percent}$ and $\SI{3}{\mm}$ standard deviation, respectively, were considered for a water phantom, a liver patient and a prostate patient.
For set-up uncertainties, we observed good agreement of at least $\SI{99.99}{\percent}$ in the $\gamma^{\SI{3}{\mm}}_{\SI{3}{\percent}}$-criterion for all quantities of interest. Range error propagation could be approximated by transforming the assumed range uncertainty into an energy uncertainty via the range-energy relationship. The error caused by this model approximation appears to be relatively minor for the expected dose. The standard deviation estimates are, however, sensitive to the number of histories and to whether the nominal Gaussian pencil beam parameterization or the convolved distribution is used. Differences and visible artifacts in the standard deviation estimate can be partly eliminated by simulating the initial phase-space parameters using the convolved beam parameterization. While some systematic deviations remain, the order of magnitude as well as the shape and extent of the dose standard deviation are sufficiently well represented. However, this causes a reduction in accuracy for the nominal dose. Thus, improving the accuracy of the estimate for one quantity of interest comes at a cost for the accuracy of another, and it remains up to the user and use case to either retain accuracy in the nominal dose computation or trade it against better accuracy of the uncertainty estimate.
We also demonstrate the use of different pencil beam correlation models within the framework. It is clear that the choice of correlation model has a significant impact on the standard deviation estimate. It is therefore particularly convenient that the (re-)weighting method allows the definition of in principle arbitrary correlation matrices to plug into the underlying multivariate Gaussian error model. These could possibly be extended to simulate interplay effects or other dynamic influences in the context of 4D treatment planning. Since the applied correlation models are not only experimental but also difficult and time-consuming to evaluate in scenario sampling, we did not quantitatively compare them to reference computations. Further studies could explore whether they agree with other methods computing such correlations based on an analytical probabilistic dose engine \citep{bangertAnalyticalProbabilisticModeling2013,wahlEfficiencyAnalyticalSamplingbased2017,wieserImpactGaussianUncertainty2020}.
Compared to the reference scenario estimates, which rely on repeatedly performing full MC dose calculations, the CPU time for standard deviation estimates could be reduced by more than an order of magnitude using our method in combination with a quasi-MC approach. This is achieved by reducing the cost of repeated expensive simulations to that of scoring based on matrix-vector multiplications. It has to be noted, however, that the time reduction depends largely on the proportion of computational overhead of the initialization and simulation steps in the MC engine. Therefore, the speed-up factor varies strongly between different test cases and, most likely, implementations. But even then, our method holds two performance advantages: First, it can directly compute the expected dose by using the convolved phase space parameterization (\ref{sec:directExpDose}) in \emph{one} standard simulation. Second, multiple uncertainty models with different correlation patterns and magnitudes can be reconstructed from the same set of histories. This could for example be used to investigate the impact of fractionation effects, using the framework proposed by \citet{wahlAnalyticalIncorporationFractionation2018}, or to consider a number of (worst-case) scenarios besides the expected dose and standard deviation.
Further, we argue that computational performance can be improved through a more efficient implementation and better parallelization. Also, combining our approach with other efficient uncertainty quantification approaches that rely on scenario computations could lead to run-time improvements. For instance, a polynomial chaos expansion as introduced in \citet{perkoFastAccurateSensitivity2016} could be adjusted such that the evaluations are computed by (re-)weighting histories instead of the usual dose calculations.
When computing the estimates in post-processing, the regular Monte Carlo dose calculation is not perceptibly slowed down by storing particle histories for later reconstructions. The possible additional run time is smaller than the variation between two runs of the same simulation and is thereby barely detectable. An implementation as on-the-fly scoring is, however, also possible and might outperform post-processing in terms of overall run time for a single uncertainty model.
Last but not least, the method is not inherently limited to the discussed application in proton therapy; a calculation of uncertainty estimates using the (re-)weighting approach would also be feasible for other treatment modalities such as carbon ions, and potentially also photons. In its current description, it is however limited to uncertainties which can be modeled in terms of variations of phase space parameters with a prior probability distribution. Application to, for example, pre-simulated phase spaces might also be feasible using numerical convolution techniques. Also, a disproportionately high magnitude of uncertainties in relation to this probability distribution can compromise the accuracy of results. Furthermore, it needs to be mentioned that the current computational speed, especially for the standard deviation, might still not be sufficient for optimization purposes, where a full dose influence matrix needs to be computed. Due to the simplicity of the process and the high flexibility in post-processing at virtually no cost for the original simulation, we are confident that the approach has potential for further development and use.
\section{Conclusion}
\label{sec:conclusion}
Dose distributions in intensity modulated proton therapy are known to be sensitive to uncertainties. The computational efforts in estimating such uncertainties become particularly evident when Monte Carlo dose calculation is used. We showed how the concept of importance sampling can be adapted to estimate the expected dose and its variance using histories from only a single Monte Carlo simulation. Set-up uncertainties can be efficiently modeled and exhibit almost exact agreement with reference computations. The inclusion of range uncertainties, by modeling them as an energy uncertainty via the range-energy relationship, yields lower, but for most application purposes sufficient, accuracy. Further, the physical simulation of particles is completely decoupled from uncertainty quantification, thereby allowing for the incorporation of arbitrary correlation assumptions and the comparison of different scenarios, at no additional cost to the nominal dose calculation. Therefore, the presented approach has several benefits over classic non-intrusive methods and is a step towards reconciling efficient uncertainty quantification and, in the future, robust optimization based on Monte Carlo dose calculations.
\ack
The present contribution is supported by the Helmholtz Association under the joint research school HIDSS4Health -- Helmholtz Information and Data Science School for Health.
\section{Introduction}
The clustering of galaxies is often used to study the power-spectrum of the
underlying mass distribution (e.g. Tegmark et al. 2006). Since the data
does not reflect the clustering of mass but rather the clustering of
galaxies, a correction factor termed {\it the galaxy bias} must be applied
to its analysis. This bias factor is a mass (and possibly scale) dependent
property of the galaxy population. Alternatively, assuming that the mass
power-spectrum is known, the masses of the galaxies can be inferred from
their bias by comparing the expected clustering of mass with the observed
galaxy clustering. In either case, if one is trying to determine galaxy
mass using clustering or one is trying to determine the underlying
properties of the mass power-spectra from observations of galaxy
clustering, a theoretical understanding of galaxy bias is required. As a
first estimate, the bias can be calculated from linear theory (Mo \&
White~1996; Sheth, Mo \& Tormen~2001). In addition, through comparison with
N-body simulations, various corrections to the bias have also been
calculated (see Eisenstein et al.~2005 for a summary) to allow more
accurate comparisons with improving data. These contributions to the bias
are physical in the sense that they are due only to the properties of the
dark matter, and so they can be computed (albeit via simulation) from first
principles given an input mass power-spectrum.
Recently it has been suggested that an inhomogeneous reionization can lead
to a modification of the observed clustering of galaxies. Babich \&
Loeb~(2006) calculated the modulation of the number density of the
lowest-mass galaxies that result from reionization induced variation in the
thermal history among different regions of the IGM. Although they have
found that the expected effect on the galaxy power-spectrum is much larger
than the difference between competing models of inflation, their analysis
did not extend to high mass galaxies to which future surveys will be
sensitive. Pritchard, Furlanetto \& Kamionkowski~(2007) considered lower
redshifts and more massive galaxies, but used an ad-hoc ansatz that the
overdensity of galaxies is proportional to the underlying radiation field
and concluded that reionization would leave a redshift dependent imprint on
the galaxy power-spectrum at low redshifts that might interfere with
measurements of the baryonic acoustic peak. These papers did not attempt
to compute the coupling between the mass-to-light ratio of massive galaxies
and the large scale environment. However galaxy surveys produce clustering
statistics for either flux limited surveys, or for volume limited surveys
in a fixed luminosity range. Computation of the effect of reionization on
the mass-to-light ratio of massive galaxies is therefore critical for
comparison with any real survey.
The aim of this paper is to estimate the {\it astrophysical} contribution
to the galaxy bias due to the reionization of the intergalactic medium
(IGM). This contribution is model dependent, requiring knowledge of the
baryonic physics in addition to gravity. The reionization of the IGM is
sensitive to the local large-scale overdensity. In regions that are
overdense, galaxies are over-abundant for two reasons: first because there
is more material per unit volume to make galaxies, and second because
small-scale fluctuations need to be of lower amplitude to form a galaxy
when embedded in a larger-scale overdensity. The first effect will result
in a larger density of ionizing sources. However this larger density will
be compensated by the increased density of gas to be ionized. In addition,
the recombination rate is increased in overdense regions, but this effect
is counteracted by the bias of galaxies in these regions. The process of
reionization also contains several layers of feedback. Radiative feedback
heats the IGM and results in the suppression of low-mass galaxy formation
(Efstathiou, 1992; Thoul \& Weinberg~1996; Quinn et al.~1996; Dijkstra et
al.~2004). This delays the completion of reionization by lowering the local
star formation rate, but here again the effect is counteracted in overdense
regions by the biased formation of massive galaxies. The radiation feedback
may therefore be more important in low-density regions where small galaxies
contribute more significantly to the ionizing flux. Wyithe \& Loeb~(2007)
have modeled the density dependent reionization process using a
semi-analytic model that incorporates the features described above, and so
captures the important physical processes. This model demonstrated that
galaxy bias leads to enhanced reionization in overdense regions, so that
overdense regions are reionized first.
We show that this early reionization leads to an additional bias in the
observed clustering at later epochs in addition to that associated with
enhanced structure formation. We find that the correction to the linear
bias due to reionization could be significantly larger than other
corrections that have been previously considered. Moreover, we show that
the bias correction is larger than the uncertainties in current surveys
over a wide range of redshifts, $1\la z\la 5$.
The outline of the paper is as follows. In \S~\ref{reion} and
\S~\ref{Sbias} we describe the effect of reionization on galaxy formation,
and summarize galaxy bias. We then outline the reasons why we would expect
reionization to yield an additional galaxy bias in \S~\ref{reionbias},
before presenting a model to allow quantitative predictions of the effect
(\S~\ref{model}). We then apply our model to surveys for Ly-break galaxies
(\S~\ref{Lybreak}) and surveys to measure baryonic acoustic oscillations
(\S~\ref{bao}). We then discuss some outstanding issues in
\S~\ref{discussion} before summarizing our conclusions in
\S~\ref{conclusions}. Throughout the paper we adopt the latest set of
cosmological parameters determined by {\it WMAP} (Spergel et al. 2006) for
a flat $\Lambda$CDM universe.
\section{Reionization and observed galaxy formation}
\label{reion}
The dominant effect of reionization on galaxy formation is believed to
involve radiative feedback which heats the IGM following the reionization
of a region, and thus results in the suppression of low-mass galaxy
formation (Efstathiou, 1992; Thoul \& Weinberg~1996; Quinn et al.~1996;
Dijkstra et al.~2004). Standard models of the reionization process assume
a minimum threshold mass for galaxy halos in which cooling and star
formation occur ($M_{\rm cool}$) within neutral regions of the IGM. In
ionized regions the minimum halo mass is limited by the Jeans mass (Barkana
\& Loeb 2001) in an ionized IGM ($M_{\rm ion}$). We assume $M_{\rm cool}$
to correspond to a virial temperature of $10^4$K, representing the hydrogen
cooling threshold, and $M_{\rm ion}$ to correspond to a virial temperature
of $10^5$K, representing the mass below which infall is suppressed from an
IGM in which hydrogen has been ionized (Dijkstra et al.~2004).
Observations suggest that hydrogen was reionized by stars prior to
$z\sim6$ (e.g. White et al.~2003; Fan et al.~2006). However models of
HeIII reionization suggest that it was the rise of quasars (with
harder spectra) that resulted in the overlap of HeIII regions at a
redshift of $z\sim3.5$ (e.g. Wyithe \& Loeb~2003; Sokasian et
al.~2003). This prediction is consistent with observations that show
transmission just blueward of the helium Ly$\alpha$ line at $z\sim3$
(Jacobsen et al 1994; Tytler 1995; Davidsen et al. 1996; Hogan et
al. 1997; Reimers et al. 1997; Heap et al. 2000; Kriss et al. 2001;
Smette et al. 2002). In addition, the double reionization of helium
results in the temperature of the IGM being approximately doubled
(Schaye et al. 2000; Theuns et al.~2002,2002b). Thus we assume the
IGM temperature to change from $T_{\rm IGM}\sim10^4$K to $T_{\rm
IGM}\sim2\times10^4$K between $z\sim4$ and $z\sim3$. Calculation of
the accretion of baryons from an adiabatically expanding IGM into a
dark matter potential well show that the minimum virial temperature
for significant accretion is proportional to the temperature of the
IGM (Barkana \& Loeb~2001). Thus, when helium is reionized at
$z\sim3.5$, the value of $T_{\rm min}$ is doubled from $T_{\rm ion}$
to $2T_{\rm ion}$. When considering helium reionization we assume
sudden heating ($\Delta z\la0.1$), though we note that the period of
heating could be more prolonged.
\section{Galaxy Bias}
\label{Sbias}
Strong clustering of massive galaxies in overdense regions implies
that these sources trace the higher density regions of IGM. The
clustering of galaxies is driven by two effects. The first effect is
the underlying clustering of the density field, which may be expressed
via the mass correlation function between regions of mass $M_1$ and
$M_2$ separated by a comoving distance $R$ (see Scannapieco \&
Barkana~2002 and references therein)
\begin{eqnarray} \nonumber \xi_{\rm
m}(M_1,M_2,R)&=& \frac{1}{2\pi^2}\int dk k^2 P(k) \\
&\times&\frac{\sin(kR)}{kR}W(kR_1)W(kR_2), \end{eqnarray}
where
\begin{equation} R_{1,2} = \left(\frac{3 M_{\rm 1,2}}{4\pi\rho_{\rm
m}}\right)^{1/3},
\end{equation}
$W$ is the window function (top-hat
in real space), $P(k)$ the power spectrum and $\rho_{\rm m}$ is the
cosmic mass density. The dark-matter halo correlation function for
halos of mass $M$ is obtained from the product of the mass correlation
function $\xi_{\rm m}(M,M,R)$ and the square of the ratio between the
variances of the halo and mass distributions. This ratio, $b$, is
defined as the halo bias. This bias has been discussed extensively
in the literature (e.g. Mo \& White~1996; Sheth, Mo \& Tormen~2001).
Here we briefly describe a likelihood-based interpretation which allows
the effects of reionization to be included in a natural way.
To see the origin of bias due to enhanced galaxy formation in overdense regions, consider the likelihood (which is proportional
to the local number density of galaxies) of observing a galaxy at a random location. Given a large scale overdensity $\delta$ of comoving radius $R$,
the likelihood of observing a galaxy may be estimated from the
Sheth-Tormen~(2002) mass function as
\begin{equation}
\label{LH}
\mathcal{L}_{\rm g}(\delta) = \frac{(1+\delta)\nu(1+\nu^{-2p}) e^{-a\nu^2/2}}{\bar{\nu}(1+\bar{\nu}^{-2p})e^{-a\bar{\nu}^2/2}},
\end{equation}
where $\nu=(\delta_{\rm c}-\delta)/[\sigma(R)]$, $\bar{\nu} = \delta_{\rm
c}/[\sigma(R)]$ and $\delta_{\rm c}\approx 1.69$ is the critical linear
overdensity for collapse to a bound object. Here $\sigma(R)$ is the
variance of the density field smoothed with a top-hat window on a
scale $R$ at redshift $z$, and $a=0.707$ and $p=0.3$ are constants. Note
that here as elsewhere in this paper we work with over-densities and
variances computed at the redshift of interest (i.e. not extrapolated to
$z=0$). Equation~(\ref{LH}) is simply the ratio of the number density of
halos in a region of overdensity $\delta$ to the number density of halos
in the background universe. This ratio has been used to derive the bias for
small values of $\delta$ (Mo \& White~1996; Sheth, Mo \& Tormen~2001). For
example, in the Press-Schechter~(1974) formalism we write
\begin{eqnarray}
\label{biasPS}
\nonumber \mathcal{L}_{\rm g}(\delta) &=&
(1+\delta)\left[\frac{dn}{dM}(\bar{\nu}) +
\frac{d^2n}{dMd\nu}(\nu)\frac{d\nu}{d\delta}\delta\right]\left[\frac{dn}{dM}(\bar{\nu})\right]^{-1}
\\&\sim&
1+\delta\left(1+\frac{\nu^2-1}{\sigma(M)\nu}\right)\equiv1+\delta b_{\rm g},
\end{eqnarray}
where $(dn/dM)(\bar{\nu})$ and $(dn/dM)(\nu)$ are the average and perturbed mass functions, and $b_{\rm g}$ is defined as the bias factor.
The observed overdensity of galaxies is
$\delta_{\rm gal}=4/3\times b_{\rm g}(M,z)\delta$, where $b_{\rm g}(M, z)$ is the galaxy
bias, and the pre-factor of 4/3 arises from a spherical average over the
infall peculiar velocities (Kaiser 1987). The value of bias $b_{\rm g}$ for a halo
mass $M$ may be better approximated using the Press-Schechter formalism (Mo \&
White~1996), modified to include non-spherical collapse (Sheth, Mo \&
Tormen~2001)
\begin{eqnarray}
\label{bias}
\nonumber
b_{\rm g}(M,z) = 1 &+& \frac{1}{\delta_{\rm c}}\left[\nu^{\prime2}+b\nu^{\prime2(1-c)}\right.\\
&&\hspace{10mm}-\left.\frac{\nu^{\prime2c}/\sqrt{a}}{\nu^{\prime2c}+b(1-c)(1-c/2)}\right],
\end{eqnarray}
where $\nu\equiv {\delta_{\rm c}}/{\sigma(M)}$,
$\nu^\prime\equiv\sqrt{a}\nu$, $a=0.707$, $b=0.5$ and $c=0.6$. Here $\sigma(M)$ is the variance of the density field smoothed on a mass scale $M$ at redshift $z$. This
expression yields an accurate approximation to the halo bias determined
from N-body simulations (Sheth, Mo \& Tormen~2001). Note that in linear theory the bias (equations~\ref{biasPS} and \ref{bias}) is a function of halo mass, but not of overdensity or scale.
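As an illustration, equation~(\ref{bias}) can be evaluated numerically as in the following minimal sketch, assuming a user-supplied function returning $\sigma(M)$ at the redshift of interest; the function name is an illustrative assumption.
\begin{verbatim}
import numpy as np

DELTA_C = 1.69                 # critical linear overdensity for collapse
A, B, C = 0.707, 0.5, 0.6

def bias_smt(sigma_M):
    """Sheth, Mo & Tormen (2001) halo bias, given sigma(M) at redshift z."""
    nu = DELTA_C / sigma_M
    nup2 = A * nu ** 2         # nu'^2 = a nu^2
    return 1.0 + (nup2 + B * nup2 ** (1 - C)
                  - (nup2 ** C / np.sqrt(A))
                  / (nup2 ** C + B * (1 - C) * (1 - C / 2))) / DELTA_C
\end{verbatim}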
\section{Reionization induced galaxy bias in observed galaxy samples}
\label{reionbias}
We next introduce an additional galaxy bias due to reionization. In
addition to color selection criteria, clustering surveys typically
consider galaxies that are either selected to be above a minimum flux
threshold (e.g. Adelberger et al.~2005) or to lie in a particular absolute
magnitude range (e.g. Eisenstein et al.~2005). Suppose that reionization
caused an overdensity dependent change in the flux per unit halo mass by a
factor $\mu$. In either of the selection scenarios mentioned above, this
effect will result in the host halos of survey galaxies being smaller in
that region by an average factor $\mu$. Following the previous formalism,
we find the likelihood for observing a galaxy which is subject to a
decrease in mass-to-light ratio of $\mu$ in a region of overdensity
$\delta$,
\begin{eqnarray}
\nonumber \mathcal{L}_{\rm reion}(\delta) &=&
\left[(1+\delta)\frac{dn}{dM}(\nu_\mu)\right]\left[(1+\delta)\frac{dn}{dM}(\nu)\right]^{-1}
\\&\equiv&1+\delta b_{\rm reion}(\delta),
\end{eqnarray}
where $(dn/dM)(\nu_\mu)$ is the perturbed mass function evaluated at
$M/\mu$, and $b_{\rm reion}(\delta)$ is defined to be the bias factor due
to reionization and which could be a function of $\delta$. Note that since
surveys are magnitude limited or measured in logarithmic bins of
luminosity, there is no factor of $\mu^{-1}$ as would be required when
discussing the number-counts per unit luminosity\footnote{There is also no
factor of $\mu^{-1}$ to account for depletion as would be appropriate if
the enhancement in flux were due to gravitational lensing.}.
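For illustration, one possible numerical reading of this definition is sketched below, assuming a user-supplied $\sigma(M)$ at the redshift of interest; the normalization of each perturbed mass function by its mean-universe counterpart, as well as all names, are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

A_ST, P_ST, DELTA_C = 0.707, 0.3, 1.69

def f_st(nu):
    # Sheth-Tormen factor appearing in the likelihood of equation (3)
    return nu * (1 + nu ** (-2 * P_ST)) * np.exp(-A_ST * nu ** 2 / 2)

def b_reion(M, mu, delta, sigma_of_M):
    """Reionization bias from the ratio of perturbed mass functions.

    Likelihood of observing a halo of mass M/mu versus M in a region of
    overdensity delta, each normalized to the mean universe (assumption).
    """
    nu_d = lambda m: (DELTA_C - delta) / sigma_of_M(m)
    nu_bg = lambda m: DELTA_C / sigma_of_M(m)
    L = ((f_st(nu_d(M / mu)) / f_st(nu_bg(M / mu)))
         / (f_st(nu_d(M)) / f_st(nu_bg(M))))
    return (L - 1.0) / delta   # from L_reion = 1 + delta * b_reion
\end{verbatim}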
We may then write an expression for the likelihood of observing a galaxy
that includes both the bias due to enhanced formation in overdense regions,
and a possible effect of reionization
\begin{eqnarray}
\nonumber
\label{rbias}
\mathcal{L}(\delta) &=& \mathcal{L}_{\rm g}(\delta)\times\mathcal{L}_{\rm reion}(\delta)\\
\nonumber
&=&(1+b_{\rm g}\delta)\times(1+b_{\rm reion}\delta) \\
&\sim& 1+[b_{\rm g}+b_{\rm reion}(\delta)]\delta.
\end{eqnarray}
In the second equality we have parameterized the effect of variance in the
reionization redshift as an additive contribution to the galaxy bias, and
have then noted (in the third equality) that we are working in a regime where
$b\times\delta\ll1$. In the next section we will develop a model that will
allow us to estimate the magnitude of this effect.
\section{Patchy Reionization and Galaxy Bias}
\label{model}
In this section we describe a model for the effect of reionization on galaxy bias,
and show that reionization increases the bias in the observed
overdensity of galaxies relative to the underlying density field. In later sections
we will use this model to make
qualitative predictions for the impact of reionization on observed
clustering in a range of galaxy samples. Our intention is not to
produce a detailed model in order to make quantitative predictions or
comparisons with the data. Such a model would require detailed
numerical simulations, and would in any case require a number of
uncertain astrophysical assumptions. However our model is adequate for
the purposes of assessing the importance of reionization in clustering
measurements, and for making qualitative predictions about its
dependence on quantities such as survey redshift and luminosity.
\subsection{Reionization redshift and large scale overdensity}
Large-scale inhomogeneity in the cosmic density field
leads to structure-formation that is enhanced in overdense regions and
delayed in under-dense regions. Thus, overlap of ionized regions and hence
heating of the IGM would have occurred at different times in different
regions due to the cosmic scatter in the process of structure
formation within finite spatial volumes (Barkana \& Loeb~2004). The
reionization of hydrogen would have been completed within a region of
comoving radius $R$ when the fraction of mass incorporated into collapsed
objects in that region attained a certain critical value, corresponding to
a threshold number of ionizing photons emitted per baryon.
The ionization state of a region is governed by the
enclosed ionizing luminosity, by its overdensity, and by dense pockets of
neutral gas that are self shielding to ionizing radiation. There is an
offset (Barkana \& Loeb~2004) $\delta z$ between the redshift when a region
of mean overdensity $\delta$ achieves this critical collapsed fraction,
and the redshift ${\bar z}$ when the universe achieves the same collapsed
fraction on average. This offset may be computed (Barkana \& Loeb~2004)
from the expression for the collapsed fraction (Bond et al.~1991) $F_{\rm
col}$ within a region of overdensity $\delta$ on a comoving scale $R$,
\begin{equation}
F_{\rm col}(M_{\rm min})=\mbox{erfc}\left[\frac{\delta_{\rm
c}-\delta}{\sqrt{2[\sigma_{\rm R_{\rm min}}^2-\sigma_{\rm
R}^2]}}\right],
\end{equation}
yielding
\begin{equation}
\label{scatter1}
\frac{\delta
z}{(1+\bar{z})}=\frac{\delta}{\delta_{\rm
c}}-\left[1-\sqrt{1-\frac{\sigma_{\rm R}^2}{\sigma_{\rm R_{\rm
min}}^2}}\right],
\end{equation}
where $\sigma_{\rm R}$ and $\sigma_{R_{\rm min}}$ are the variances in the
power-spectrum at $z$ on comoving scales corresponding to the region of
interest and to the minimum galaxy mass $M_{\rm min}$, respectively. On
large scales equation~(\ref{scatter1}) reduces to
\begin{equation}
\label{scatter}
\delta z \approx (1+z)\frac{\delta}{\delta_{\rm c}} \; .
\end{equation}
The offset in the ionization redshift of a region depends on its linear
overdensity, $\delta$. As a result, the distribution of offsets, and
therefore the scatter in the reionization redshift may be obtained directly
from the power spectrum of primordial inhomogeneities (Wyithe \&
Loeb~2004). As can be seen from equation~(\ref{scatter}), larger regions
have a smaller scatter due to their smaller cosmic variance. Note that
equation~(\ref{scatter}) is independent of the critical value of the
collapsed fraction required for reionization. We also note that since at
high redshift the variance of the linear density field increases
approximately in proportion to $(1+z)^{-1}$, the typical delay in {\em
redshift} is almost independent of cosmic time (in addition to not being a
function of collapsed fraction).
Following the reionization of hydrogen, doubly ionized helium remained in
the pre-overlap phase. At this time, the mean-free-path of HeIII ionizing
photons was therefore limited to be smaller than the size of the HeIII
regions. As is the case for hydrogen, the ionization state of these regions
was therefore dependent on the local source population. If it is true that
quasars were responsible for the reionization of helium, then these are
much rarer sources than the galaxies responsible for the reionization of
hydrogen. As a result there would be large fluctuations in the HeIII
reionization redshift due to Poisson fluctuations in the number of sources
and variations in the opacity of the IGM (Reimers et al. 2006). These
fluctuations would not be simply related to the local large scale
overdensity. On the other hand, the arguments regarding the fluctuations in
the redshift of hydrogen reionization due to enhanced structure formation
in overdense regions must also apply to the reionization of helium, and
these will be present in addition to the Poisson noise. As already
mentioned, the delay in reionization due to an overdensity $\delta$ is not
a function of cosmic time. Thus we see from equation~(\ref{scatter}) that
since the delay is also independent of collapsed fraction (which we expect
to be different for hydrogen and helium reionization), the delay ($\delta
z$) in the redshift of HeIII reionization for a particular value of the
comoving overdensity $\delta$ is equal to the delay for the reionization
of hydrogen. As a result, in an overdense region both hydrogen and helium
would be reionized early by the same offset in redshift. The large-scale
variations in the reionization redshifts of hydrogen and helium lead to
different accretion histories for galaxies, which in turn lead to different
star-formation histories, and thus to a change in the luminosity of a galaxy
of given total stellar mass due to the different age distribution of its
stellar population.
Before proceeding, we draw attention to the approximation of sudden
reionization in which the reionization of a volume on a scale $R$ occurs at
a redshift $z$. Of course some regions within that volume will have been
reionized earlier. However our point is that, on average, a region of IGM will be
reionized earlier by $\Delta z$ within a volume of scale $R$. A critical
component of our model is the assumption that the average variation in the
redshift at which the gas that ultimately makes the progenitor galaxies was
reionized is also equal to $\Delta z$.
Weinmann et al.~(2007) have recently employed numerical simulations of
reionization to compute whether a galaxy observed at the present time
formed in a region of IGM prior to it being reionized, or whether it formed
in a region that had already been reionized. In their work the time of
formation of a galaxy refers to the identification of the earliest
progenitor of the local galaxy above the resolution limit of the
simulation. These authors find that more massive galaxies had progenitors
that formed in neutral regions while less massive galaxies formed in
ionized regions. They also conclude that there is no correlation between
the reionization history of field galaxies and their environment or large
scale clustering (however see discussion below). While very useful in
understanding the relation between the early formation histories of
galaxies and the reionization process, these findings are not directly
applicable to our discussion, which aims to calculate the average effect of
reionization on all the progenitors of a low redshift galaxy rather than on
its earliest progenitor.
But interestingly, the numerical results presented by Weinmann et
al.~(2007) show consistency with the quantitative expectations of our
simple model. Their Figure~6 shows the relation between the reionization
redshift for a massive galaxy and the local overdensity of massive galaxies
within 10 comoving Mpc. The simulations predict a large scatter of $\pm1$
redshift unit about a mean relation that varies by $\Delta z\sim0.8$
between present-day galaxy overdensities of $-1$ and 1 on 10 comoving Mpc
(cMpc) scales, with overdense environments reionizing at higher
redshift. While the scatter is large as expected, the statistical uncertainty
of the measured mean is significantly below $\Delta z=0.8$ due to the large
sample size of model galaxies. This implies that Weinmann et al.~(2007) do
indeed infer a relation between large scale environment and the mean
reionization redshift, but that the variation in the mean relation is not
significant with respect to the scatter among individual galaxies. We can
compare the mean relation of Weinmann et al.~(2007) for the reionization
redshift of the earliest progenitor from simulations with expectations from
our simple model for patchy reionization. On a scale of 10 cMpc,
the variance in the density field at $z=0$ is
$\sigma(10\mbox{cMpc})\sim0.8$. Since the bias is around $b\sim1.1$ for the
$5\times10^{12}M_\odot$ galaxies from which the simulated relation was
calculated, we expect 1-sigma fluctuations in the overdensity of galaxies
at $z=0$ on 10 cMpc scales to be $\pm1$. Thus the numerical
simulations of Weinmann et al.~(2007) predict fluctuations in the
reionization redshift around a mean of $z\sim8.5$ (predicted by their
model) of $\delta z\sim0.4$ for the earliest progenitor of massive local
galaxies. Our simple model predicts the fluctuation in the reionization
redshift of the IGM with an overdensity of $\delta\sim0.8$ to be $\delta
z\sim (1+z)\times(\delta D(z))/\delta_{\rm c}\sim0.6$, where $D$ is the
growth factor (from equation~\ref{scatter}). This number is similar to the
typical fluctuations in the reionization redshift of the earliest
progenitor in the simulations of Weinmann et al.~(2007). Weinmann et
al.~(2007) also argue that the way in which a galaxy is reionized (either
externally or internally) is not sensitive to the local overdensity of
galaxies. Thus numerical simulations predict that the process of
reionization is similar in overdense and underdense regions, but that
reionization is accelerated in overdense regions. These findings from
numerical simulation support the basic assumptions of our simple model.
\subsection{Model of Reionization induced bias}
To develop our model we consider a galaxy residing in a halo of mass
$M$ at $z\ll z_{\rm reion}$. This galaxy has accreted its mass via a
merger tree, which we generate using the method described in
Volonteri, Haardt \& Madau~(2003). We describe this tree as having a
number $N_{\rm halo}(z_j)$ of halos of mass $M_i(z_j)$ at redshift
$z_j$, where the number of redshift steps is $N_z$, with values of
redshift that increase from the redshift of the primary halo in the
tree so that $z_0=z$. These halos grow in mass due to mergers of progenitor
halos, and due to accretion (which, in the Press-Schechter formalism,
is the sum of mergers with halos below the resolution limit of the
merger tree).
First consider halos above the minimum mass for star formation ($M_{\rm min}$, which is $M_{\rm cool}$ in neutral regions and $M_{\rm ion}$ in ionized regions).
At each redshift step, a fraction of the baryonic mass gained by these halos through accretion is turned into stars, thus
\begin{eqnarray}
\nonumber
\Delta M_{\star,i}(z_j) &=& f_\star\frac{\Omega_{\rm b}}{\Omega_{\rm m}}\left(M_i(z_j)-M_i(z_{j+1})\right)\hspace{2mm}\mbox{for}\hspace{2mm}M>M_{\rm min}\\
\Delta M_{\star,i}(z_j) &=& 0 \hspace{5mm}\mbox{otherwise},
\end{eqnarray}
where $f_\star$ is the star formation efficiency. We choose $f_\star=0.3$ throughout this paper, though our conclusions are not sensitive to this choice.
In addition, we assume that whenever a progenitor halo $i$ at redshift $z_j$ in the merger tree crosses the minimum mass for star formation through the merger of two sub-units $M_{i,1}(z_{j+1})$ and $M_{i,2}(z_{j+1})$, stellar mass is added in the amount
\begin{equation}
\Delta M_{\star,i}(z_j) = \left(f_\star\frac{\Omega_{\rm b}}{\Omega_{\rm m}}M_i(z_j)\right) - \left(M_{\star,i,1}(z_{j+1}) +M_{\star,i,2}(z_{j+1})\right),
\end{equation}
where $M_{\star,i,1}(z_{j+1})$ and $M_{\star,i,2}(z_{j+1})$ are the stellar mass content of the progenitors prior to the merger.
Similarly, whenever a progenitor halo $i$ at redshift $z_j$ in the merger tree crosses the minimum mass for star formation through accretion, stellar mass is added in the amount
\begin{equation}
\Delta M_{\star,i}(z_j) = f_\star\frac{\Omega_{\rm b}}{\Omega_{\rm m}}M_i(z_j) - M_{\star,i}(z_{j+1}),
\end{equation}
where $M_{\star,i}(z_{j+1})$ is the stellar mass content of the halo at the previous redshift step.
The subtraction of the second term is necessary in each of the latter cases because the minimum mass in a region increases suddenly at the local reionization epoch.
The total stellar mass added at each step is the sum of these three contributions. We may then construct a stellar-mass accretion history
\begin{equation}
\frac{d(\Delta M_\star)}{dz}(z_j) \sim \frac{1}{z_j-z_{j-1}}\sum_{i=1}^{N_{\rm halo}(z_j)}\Delta M_{\star,i}(z_j).
\end{equation}
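A minimal sketch of these star-formation rules for a single halo over one redshift step follows, assuming the merger tree supplies the halo masses and the local minimum mass; the data layout, the assumed baryon fraction and all names are illustrative.
\begin{verbatim}
F_STAR = 0.3        # star formation efficiency adopted in the text
FB = 0.17           # assumed baryon fraction Omega_b / Omega_m

def stellar_mass_added(M_now, M_prev, Mstar_prev, M_min):
    """Stellar mass formed by one halo in one redshift step.

    M_now, M_prev : halo mass at this and the previous (higher-z) step
    Mstar_prev    : stellar mass already in place (summed over progenitors)
    M_min         : local minimum mass (M_cool before, M_ion after the
                    local reionization redshift)
    """
    if M_now < M_min:
        return 0.0
    if M_prev < M_min:
        # halo crosses the threshold (by merger or accretion): top up to
        # the target, subtracting stars formed earlier in its progenitors
        return max(F_STAR * FB * M_now - Mstar_prev, 0.0)
    # already above threshold: stars form only from newly accreted baryons
    return F_STAR * FB * (M_now - M_prev)
\end{verbatim}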
Our scheme neglects any star formation that may occur in recycled gas
following a major merger. However, star formation in recycled gas at
low redshift occurs, by definition, in halos already above the minimum
threshold for star formation, and so should not be sensitive to the
local redshift of reionization.
temperature for star formation, rather than the more gradual
transition described by the filtering mass, which takes account of the
formation time-scale for a collapsing halo and the full thermal
history of the IGM. In the hierarchical build-up of a halo, the merger
of collapsed progenitors, combined with accretion of mass onto these
progenitors can be identified with the formation of the final halo
within the spherical collapse model and Press-Schechter formalism. The
sudden transition of virial temperature is therefore the appropriate
choice for our model since the star formation is calculated to
coincide with the formation of a halo during a merger tree. The
fraction of mass in the halo at redshift $z$ that was already
collapsed prior to reionization is therefore explicitly accounted for
in our model.
Before proceeding we note that the calculation of the merger tree is independent of the large-scale overdensity $\delta$.
We demonstrate this explicitly in Appendix~\ref{app1}, for a
cosmology that ignores the cosmological constant (as is appropriate at
high redshift). Thus, to estimate the additional contribution to the bias
that is due to changes in the star formation history
associated with the reionization variable redshift, we may compute one merger tree
within the mean background cosmology, and then change only the
reionization redshift to account for the overdensity of the region in
which the parent halo formed. The total bias for the observation of
this galaxy is then the reionization induced bias computed from this
merger tree, plus the usual galaxy bias. This independence of the
merger tree on $\delta$ greatly simplifies calculation of the
dependence of the apparent brightness of a galaxy within a halo of
mass $M$ on the large-scale overdensity.
We next compute the spectrum of stellar light that results from this star
formation history using the stellar population model of Leitherer et
al.~(1999). We assume a 1/20th solar metallicity population with a
Scalo~(1998) mass-function and begin with the time dependent spectrum for a
burst of star formation\footnote{Model spectra of star forming galaxies
obtained from http://www.stsci.edu/science/starburst99/.}. This yields the
emitted energy per unit time per unit frequency per solar mass of stars
$d^2\epsilon_\nu/dtdM(t_0-t_j)$ at a time $t_0-t_j$ following the burst,
where $t_0$ and $t_j$ are the ages of the universe at the redshift of the
primary halo ($z_0$) and $z_j$ respectively. The flux (erg/s/cm$^2$/Hz)
from the galaxy at $z$ then follows from the sum over the starbursts
associated with each star formation episode. We find
\begin{equation}
\label{fnu}
f_\nu = \sum_{j=0}^{N_z} \Delta M_\star(z_j) \frac{d^2\epsilon_\nu}{dtdM}(t_0-t_j) \frac{1}{4\pi D_{\rm L}(z_0)^2}(1+z_0),
\end{equation}
where $D_{\rm L}$ is the luminosity distance at $z$. We note that our scheme neglects enrichment of gas prior to star formation, so that all star-bursts are assumed to have the same metallicity. Since we are not computing the contribution from star formation in recycled gas, we do not expect this assumption to have a large influence on our results. Moreover, the UV spectra of galaxies are very sensitive to the dust content of a galaxy, and therefore also to the metallicity. However, the reionization bias depends on a ratio of fluxes, in which the effect of dust on a single spectrum cancels.
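For reference, the sum in equation~(\ref{fnu}) translates directly into code. The sketch below is a schematic Python implementation under stated assumptions (the starburst spectrum is passed in as a callable, e.g.\ an interpolation of the Starburst99 tables; the function name and interface are ours):
\begin{verbatim}
import numpy as np

def observed_flux(nu, dMstar, t, t0, z0, d2eps_dtdM, D_L):
    """Observed flux f_nu as a sum over starbursts (schematic).

    nu         : array of rest-frame frequencies
    dMstar[j]  : stellar mass formed at redshift step j
    t[j]       : age of the universe at step j
    t0         : age of the universe at the primary redshift z0
    d2eps_dtdM : callable (nu, age) -> emitted energy per unit time,
                 frequency and solar mass (e.g. Starburst99 tables)
    D_L        : luminosity distance to z0
    """
    f = np.zeros_like(nu, dtype=float)
    for dM, tj in zip(dMstar, t):
        f += dM * d2eps_dtdM(nu, t0 - tj)
    return f * (1.0 + z0) / (4.0 * np.pi * D_L**2)
\end{verbatim}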
We can then use equation~(\ref{fnu}) to compute the spectra for two star-formation histories that correspond to the mean universe and to an overdensity $\delta$, with reionization redshifts separated by $\delta z$. These spectra in turn allow us to determine the ratio of fluxes and hence to compute a value of $\mu(\delta)$. As described above, in order to compute the contribution of reionization to galaxy bias we make this comparison
directly, using the same merger tree with different values of the
reionization redshift to obtain different star formation histories.
Equation~(\ref{fnu}) implies that the apparent flux of a galaxy will
be sensitive to its star-formation history, which will in turn be
sensitive to the redshift of reionization. We would now like to
calculate the typical flux change induced by the effect of a large
scale overdensity on the reionization redshift. In order to achieve
this we must determine the change in flux as a function of scale, and
at a range of over-densities. As shown above, the delay in the
reionization redshift is proportional to the variance of the
power-spectrum on the scale of interest. Since the variance decreases
towards large scales, we find that the fluctuations in reionization
redshift should also be smaller on large scales. In order to
investigate the scale dependence of the bias, we could therefore
compute the difference in star formation histories corresponding to
different delays in reionization over a number of spatial
scales. However we find that for individual merger trees, the value of
the apparent magnitude change (i.e. the logarithm of the
observed flux) is approximately proportional to the delay in
reionization. Thus we can estimate the change in magnitude due to reionization using a first order expansion in $\delta z$
\begin{eqnarray}
\label{deltaf}
\nonumber
2.5\log_{10}{\mu} &=& 2.5\frac{d\log_{10}[f_\nu(z_{\rm ol})]}{dz_{\rm ol}}\delta z\\
&\sim& 2.5\log_{10}\left[\frac{f_\nu(z_{\rm ol})}{f_\nu(z_{\rm ol}+\Delta z)}\right]\frac{\delta z}{\Delta z},
\end{eqnarray}
where $z_{\rm ol}$ is the overlap (reionization) redshift, which we assume
to be $z=6$ throughout this paper (e.g. White et al.~2003; Fan et
al.~2006), and $\Delta z=0.25$ is the separation in overlap redshifts of
the two star formation histories computed for each merger tree. Thus the
magnitude change can be computed for a single length scale $R$ and
overdensity $\delta$, and then translated to other length scales and
overdensities in proportion to the variance. We employ this
approximation because it greatly simplifies our calculations.
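In code this rescaling is a one-line operation; the following Python sketch (our notation) evaluates equation~(\ref{deltaf}) for a desired delay $\delta z$, given model fluxes computed for overlap redshifts $z_{\rm ol}$ and $z_{\rm ol}+\Delta z$:
\begin{verbatim}
import numpy as np

def delta_mag(f_early, f_late, delta_z, Delta_z=0.25):
    """Magnitude change 2.5 log10(mu) for a reionization delay
    delta_z, linearly rescaled from two model fluxes computed for
    overlap redshifts z_ol and z_ol + Delta_z (eq. deltaf)."""
    return 2.5 * np.log10(f_early / f_late) * (delta_z / Delta_z)
\end{verbatim}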
Given a scale $R$ and variance $\sigma(R)$ we can now estimate the contribution of reionization to galaxy bias. For each merger tree $k$, we compute the bias averaged over likelihoods at each $\delta$ in the density field
\begin{equation}
b_{{\rm reion},k} = \frac{1}{\sqrt{2\pi}\sigma(R)}\int d\delta \left[\frac{1-\mathcal{L}(\delta)}{\delta}\right]\exp\left(-\frac{\delta^2}{2\sigma(R)^2}\right).
\end{equation}
To get the average bias for the galaxy population, we then average the bias evaluated using $N_{\rm trees}$ different merger trees,
\begin{equation}
b_{\rm reion} = \frac{1}{N_{\rm trees}}\sum_{k=1}^{N_{\rm trees}}b_{{\rm reion},k}.
\end{equation}
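Numerically, the Gaussian-weighted integral and the average over trees can be evaluated as follows. This Python sketch is schematic: the factor $\mathcal{L}(\delta)$ is supplied as a callable, and the grid half-width of $5\sigma$ and the number of nodes are illustrative assumptions:
\begin{verbatim}
import numpy as np

def bias_one_tree(L_of_delta, sigma_R, n=4001, nsig=5.0):
    """Gaussian-weighted average of [1 - L(delta)]/delta for one
    merger tree (schematic). L_of_delta returns the factor
    calligraphic-L evaluated at overdensity delta."""
    d = np.linspace(-nsig * sigma_R, nsig * sigma_R, n)
    d = d[np.abs(d) > 1e-12]        # drop delta = 0 (removable point)
    w = np.exp(-0.5 * (d / sigma_R)**2) / (np.sqrt(2 * np.pi) * sigma_R)
    vals = np.array([(1.0 - L_of_delta(x)) / x for x in d])
    return np.trapz(vals * w, d)

def bias_reion(L_per_tree, sigma_R):
    """Average the single-tree biases over the merger-tree sample."""
    return np.mean([bias_one_tree(L, sigma_R) for L in L_per_tree])
\end{verbatim}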
In the remainder of this paper, we use the above model to estimate the contribution of reionization to galaxy bias for several existing and planned galaxy surveys.
\section{Ly-break Galaxies}
\label{Lybreak}
\begin{figure*}
\includegraphics[width=13cm]{fig1.eps}
\caption{ The effect of reionization on the star formation histories of galaxies. These examples correspond to typical Ly-break galaxies at $z=3$ in the survey of Steidel et al. (2003). \textit{Upper Left:} The star formation rate summing over all halos that end up as part of the galaxy in a $10^{12}M_\odot$ halo at $z=3$. The solid and dotted lines refer to histories where overlap occurred at $z=6.125$ and $z=5.875$ respectively. \textit{Central Left:} The difference in star formation rate for the two histories. \textit{Lower Left:} The difference in cumulative stellar mass for the two histories. \textit{Upper Right:} Rest frame luminosity of ten example Ly-break galaxies at $z=3$. \textit{Central Right:} Observed flux (including Ly$\alpha$ absorption) for the ten example galaxies. \textit{Lower Right:} The magnitude change induced by a delay in reionization of 0.25 units of redshift. }
\label{fig1}
\end{figure*}
\begin{figure*}
\includegraphics[width=13cm]{fig2.eps}
\caption{ Examples of clustering bias in Ly-break galaxies induced by
reionization. \textit{Upper Left:} The primary color selection (Steidel et
al.~2003) for LBGs at $z\sim1.7$ (solid points), $z\sim2.3$ (open points)
and $z\sim3$ (crosses). \textit{Central Left:} The apparent magnitudes and
colors of the model galaxies. \textit{Lower Left:} The $U_n-G$ color as a
function of the change in $\mathcal{R}$-band magnitude induced by
reionization. \textit{Upper Right:} The bias introduced by reionization in
cases where helium reionization at $z\sim3.5$ is considered in addition to
hydrogen reionization (open squares) and where it is not (solid squares). The bias
was computed assuming a flux evaluated at a rest-frame wavelength of
$1350$\,\AA\ within a $400$\,\AA\ window. The galaxy bias is shown by the
solid line for comparison. The
error bars represent the statistical noise in the simulations due to the
finite number of merger trees. \textit{Central Right:} The ratio of the
component of bias introduced through reionization to the usual galaxy
bias. \textit{Lower Right:} The factor by which the mass will be
overestimated in clustering analyses where reionization is not
considered. }
\label{fig2}
\end{figure*}
As a first application of our model we construct mock spectra
corresponding to Ly-break galaxies (Steidel et al. 2003) at
$z=3$. Examples of the model star formation history and resulting
galaxy spectra are shown in Figure~\ref{fig1} assuming a halo mass of
$M=10^{12}M_\odot$ (Adelberger et al.~2005). Here we include only
hydrogen reionization as the mechanism that introduces fluctuations in
the star-formation history.
In the upper left panel of Figure~\ref{fig1} we show an example of the star
formation rate summed over all halos that end up as part of the galaxy in a
$10^{12}M_\odot$ halo at $z=3$. Here the solid and dotted lines refer to
histories where the overlap of ionized regions occurred at $z=6.125$ and
$z=5.875$ respectively. The effect of the reionization redshift on these
star formation histories is more easily seen in the central left panel
where we plot the difference in star formation rate between the two
histories. Figure~\ref{fig1} shows that early reionization initially
results in a deficit of star formation. This deficit is then made up
continually until $z=3$. By $z=3$ the total stellar mass is the same for
both histories ($M_\star=f_\star\Omega_{\rm b}/\Omega_{\rm m}M$) as
required for consistency of the model. This behavior is also demonstrated in the lower left panel of
Figure~\ref{fig1} where we plot the difference in cumulative stellar mass,
summing over all halos that end up as part of the primary galaxy at $z=3$.
Figure~\ref{fig1} also shows examples of the resulting galaxy
spectrum. In the upper right panel we show the rest frame luminosity
for ten example $10^{12}M_\odot$ galaxies at $z=3$. In each case
reionization was at $z=6.125$. The differences between these spectra
arise due to the slightly different age distribution of the stellar
populations that result from the stochastic buildup of mass in the
merger tree. In the central right panel of Figure~\ref{fig1} we show
the corresponding observed flux [including mean Ly$\alpha$ absorption, see e.g. Fan et al.~(2006)] for
the same ten example Ly-break galaxies at $z=3$. The distinctive
Ly-break near 4000\AA~ is clearly visible in these spectra. Finally,
in the lower right hand panel we show the fractional change
($\delta_{\rm flux}$) in the observed flux that is induced by a delay
in reionization of $\Delta z=0.25$ units of redshift. This flux
change, which is related to the parameter $\mu$ through
equation~(\ref{deltaf}) corresponds to differences in the
star-formation histories that are comparable to the example shown in
the left hand panels of Figure~\ref{fig1}. The fluctuations are at the
level of a few tenths to a few percent. We investigate the bias that
arises from the resulting values of $\mu$ in \S~\ref{ss_reionbias}.
\subsection{The colors of simulated Ly-break galaxies}
Our aim in this paper is to evaluate the importance of reionization
with respect to galaxy bias in measurements of galaxy clustering. To
this end we have constructed model star-formation histories that
include the effect of reionization, and computed the effect of
reionization on the corresponding model galaxy spectra. In order for
our results to be applicable to surveys of real galaxies, we must, at
a minimum, demonstrate that our model produces realistic spectra with
colors that would see the model galaxies selected into the survey of
interest. Therefore, before describing our results for the
reionization induced bias we demonstrate that our model galaxies have
colors and magnitudes that correspond to those of real Ly-break galaxies (LBG).
In the upper left panel of Figure~\ref{fig2} we show the position of
100 model Ly-break galaxies within the primary color selection\footnote{To estimate the colors of Ly-break galaxies we assume top-hat filters of central wavelength $\lambda_0$ and width $\Delta \lambda$ to approximate the filter set used in Steidel et al.~(2003). We use $AB$-magnitudes throughout this paper. The filters have $(\lambda_0,\Delta\lambda)= (3550,600)$ for the $U_n$-band; $(\lambda_0,\Delta\lambda)=(4780,1100)$ for the $G$-band; $(\lambda_0,\Delta\lambda)=(6830,1250)$ for the $\mathcal{R}$-band; $(\lambda_0,\Delta\lambda)=(8100,1650)$ for the $I$-band.}
(Steidel et al.~2003) for LBGs at $z\sim1.7$ (solid points),
$z\sim2.3$ (open points) and $z\sim3$ (crosses). The galaxies at
$z\sim3$ are well separated from those at lower redshift due to the
Ly-break moving to a wavelength beyond the $U_n$-band. Our model
galaxies at $z\sim1.7$ and 2.3 have similar colors that are close to
the selection cutoff. This is consistent with the observed galaxies,
which have overlapping redshift distributions when selected via this
criteria (Adelberger et al.~2005). For their clustering analysis
Adelberger et al.~(2005) restricted themselves to objects with
$23.5<\mathcal{R}<25.5$. In the central left panel we show the position of our
model galaxies in a color-magnitude diagram. The apparent magnitudes
of these model galaxies are consistent with the observed
population. Thus our model produces Ly-break galaxies with both the
correct colors, and the correct luminosity. Finally we check that
reionization induced changes in the observed flux are not sensitive to
the observed galaxy color. In the lower left panel of
Figure~\ref{fig2} we show the $U_n-G$ color as a function of the change
in $\mathcal{R}$ magnitude induced by reionization. We find no systematic trend
of the flux variation with galaxy color.
\subsection{Reionization induced bias for Ly-break galaxies}
\label{ss_reionbias}
\begin{figure*}
\includegraphics[width=13cm]{fig3.eps}
\caption{ Examples of clustering bias in Ly-break galaxies induced by reionization as a function of the halo mass. \textit{Left Hand Panels:} The bias introduced by reionization in cases where helium reionization at $z\sim3.5$ is considered in addition to hydrogen reionization (open squares) and where it is not (solid squares). The bias was computed assuming a flux evaluated at a rest-frame wavelength of $1350$\,\AA\ within a $400$\,\AA\ window. The galaxy bias is shown by the solid line for comparison. The error bars represent the statistical noise in the simulations due to the finite number of merger trees. \textit{Right Hand Panels:} The factor by which the mass will be overestimated in clustering analyses where reionization is not considered. Results are shown for two halo masses, $M=10^{11}M_\odot$ and $M=10^{13}M_\odot$. In each case the corresponding results for $M=10^{12}M_\odot$, as presented in Figure~\ref{fig2}, are shown for comparison (light lines).}
\label{fig3}
\end{figure*}
We now present an estimate for the reionization induced bias in the
sample of Ly-break galaxies. We show results at a scale of $R=10$
comoving Mpc, which is a factor of $\sim3$ larger than the
clustering length at $z\sim3$ (Adelberger et al.~2005). We evaluate
the bias for fluxes measured at a rest frame wavelength of 1350\AA, and
within a 400\AA~ wide band (this choice allows us to compare the predicted bias over the redshift range $1.7<z<5.5$). In the upper right panel of
Figure~\ref{fig2} we show the bias introduced by reionization in cases
where hydrogen reionization alone is considered (solid squares), as
well as cases where helium reionization at $z\sim3.5$ is considered in
addition to hydrogen reionization (open squares). Also shown for
comparison is the galaxy bias due to enhanced structure formation
(solid line). In this figure the error bars represent the statistical
noise in the simulations due to the finite number of merger trees. In
order to better see the relative contributions of enhanced structure
formation and reionization induced galaxy bias, in the central right
panel we show the ratio of the component of bias introduced through
reionization to the usual galaxy bias. Reionization represents a
10-20\% correction to the galaxy bias in Ly-break galaxy samples at
$1.7\la z\la3$. This correction corresponds to a predicted amplitude for
the galaxy correlation function that can be 50\% larger than the prediction
obtained when reionization is neglected. Thus reionization
provides a correction to the clustering amplitude that is
in excess of the observational error for the existing
Ly-break galaxy samples at $1.7\la z\la3$.
One of the primary uses for measurements of clustering in a galaxy
sample is the estimation of host halo mass. This mass estimate is made
by measuring the bias, which is then interpreted theoretically in
terms of host mass. However the results summarized in Figure~\ref{fig2} suggest that existing estimates of the galaxy bias could be
systematically in error, at a level significantly larger than the
observational error, due to the neglect of the effect of
reionization. This in turn implies that estimates of the host masses
in galaxy samples are also systematically in error. To evaluate the
importance of this systematic error, we estimate the ratio of the
inferred host masses with and without the inclusion of reionization,
yielding
\begin{equation}
\ln(M_{\rm reion+gal})-\ln(M_{\rm gal}) \approx \frac{d\ln(M)}{db} [(b_{\rm
g}+b_{\rm reion})-b_{\rm g}],
\end{equation}
or
\begin{equation}
\frac{M_{\rm reion+gal}}{M_{\rm gal}} \approx \exp{\left[b_{\rm
reion}\frac{d\ln M}{db}\right]},
\end{equation}
where $dM/db$ is evaluated via equation~(\ref{bias}). The factor by
which the mass will be overestimated in clustering analyses where
reionization is not considered is plotted in the lower right hand
panel of Figure~\ref{fig2}. We find that masses in existing Ly-break
galaxy surveys (Adelberger et al.~2005) have been overestimated by
factors of between 1.5 and 2.
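In practice this correction is a one-line evaluation; a schematic Python version (argument names ours, with $d\ln M/db$ taken from the adopted bias--mass relation, equation~\ref{bias}) reads:
\begin{verbatim}
import numpy as np

def mass_overestimate(b_reion, dlnM_db):
    """Factor M_{reion+gal}/M_{gal} by which the halo mass is
    overestimated when reionization induced bias is ignored."""
    return np.exp(b_reion * dlnM_db)

# Illustrative numbers only (assumed): b_reion ~ 0.5 with
# d ln M / d b ~ 1 gives an overestimate of exp(0.5) ~ 1.6,
# of the order of the factors quoted above.
\end{verbatim}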
In addition to showing results evaluated at redshifts corresponding to
the Ly-break galaxy sample, we also show results for hypothetical galaxy
samples at $4\leq z\leq5.5$. At these redshifts helium is not doubly
reionized, and so all modifications to the star formation history are
due to the reionization of hydrogen at $z\sim6$. This is in contrast
to samples at $z\la3$ where helium reionization is
complete. Figure~\ref{fig2} demonstrates that we expect to
see a significant jump in the amplitude of the clustering for galaxy
samples of fixed absolute magnitude following the double reionization
of helium at $z\sim3.5$.
Up to this point we have presented results for $R=10$ comoving Mpc. We do
not explicitly show results corresponding to other length scales in this
paper for the following reason. On the scales of interest ($\sim3-100$
cMpc), the variance is approximately a power-law with $R$, while
the mass function is approximately a power-law with $\log(M)$. It turns out
that these power-laws approximately cancel, leaving the bias induced by
reionization almost independent of scale. This independence is a
coincidence. A different slope of the primordial power-spectrum would have
led to a scale dependent bias. However the conclusion that the bias is not
scale dependent should be treated with caution for two reasons. First, we
are unable to rule out scale dependence at the level of a few percent at
the numerical accuracy of our simulations. Second, as discussed in
\S~\ref{scaldependance}, the scale-independence might be broken by
additional astrophysical effects. Thus, future observations aiming to
measure clustering at the percent level over a large range of spatial
scales will need to carefully account for this possibility.
\subsection{Mass dependence of the reionization bias}
\begin{figure}
\includegraphics[width=8.5cm]{fig4.eps}
\caption{ Examples of how reionization will affect the observed colors of Ly-break galaxies. The figure shows the change in $U_n-G$ color for LBGs at $z\sim1.7$ (solid points), $z\sim2.3$ (open points) and $z\sim3$ (crosses), due to a fluctuation in the reionization redshift of $\delta z=0.25$. }
\label{fig4}
\end{figure}
Thus far our discussion of Ly-break galaxies has assumed a halo mass
of $10^{12}M_\odot$, corresponding to observed Ly-break galaxies. In
this section we describe the dependence of the predicted reionization
induced galaxy bias on the halo mass. In Figure~\ref{fig3} we show
examples of the clustering bias in Ly-break galaxies induced by
reionization for halo masses of $M=10^{11}M_\odot$ and
$10^{13}M_\odot$. In the left hand panels we show the bias introduced
by reionization in cases where helium reionization at $z\sim3.5$ is
considered in addition to hydrogen reionization (open squares)
and where it is not (solid squares). As before the bias was
computed assuming a flux evaluated at a rest-frame wavelength of
$1350$\,\AA\ within a $400$\,\AA\ window. The usual galaxy bias is shown by
the solid line for
comparison. In addition, in each case the corresponding results for
$M=10^{12}M_\odot$, as presented in Figure~\ref{fig2}, are shown for
comparison (light lines).
We find that the contribution to the bias due to reionization is
fairly insensitive to the halo mass. To understand this we note that
although we would expect the larger halos to have begun forming
earlier, and so to have their star formation histories less affected by the reionization of the IGM, this is offset by the steeper
mass function of massive halos. On the other hand the galaxy bias due
to enhanced structure formation in overdense regions is quite
sensitive to the halo mass, and so we find that the fractional contribution to the galaxy bias is smaller for more
massive systems. As a result, the systematic error introduced into the
estimate of halo mass from clustering amplitude is less serious for
more massive systems. In the right hand panels of Figure~\ref{fig3}
we show the factors by which the mass will be overestimated in
clustering analyses where reionization is not considered. While halos
with masses near $M\sim10^{11}M_\odot$ would have their masses
overestimated by a factor that could be larger than 3, the systematic
error for very massive systems of $M\sim10^{13}M_\odot$ would be only at
the level of tens of percent. This implies that the reionization bias will become
more important as future surveys begin to discover populations of less
massive galaxies at high redshift.
\subsection{Reionization and the observed colors of Ly-break galaxies}
The reionization induced bias should be sensitive to the selection
band. In the case of Ly-break galaxies, we would therefore expect that
the clustering amplitude would be sensitive to the band in which the
flux selection was performed. Equivalently, overdense regions would
be expected to have a slightly bluer population of
galaxies.
In Figure~\ref{fig4} we show the change in $U_n-G$ color for LBGs at
$z\sim1.7$ (solid points), $z\sim2.3$ (open points) and $z\sim3$ (crosses),
due to a fluctuation in the reionization redshift of $\delta
z=0.25$. Galaxies in overdense regions have systematically bluer colors
due to their younger stellar populations. The example shown for Ly-break
galaxies has fluctuations in $\Delta(U_n-G)$ color at the $\sim0.01-0.02$
level given a fluctuation in the redshift of overlap amounting to $\delta
z=0.25$ redshift units. On a scale of 10 comoving Mpc, the fluctuation in
the overlap redshift around a mean of $z=6$ is $\langle (\delta
z)^2\rangle^{1/2}\sim0.6$ (from equation~\ref{scatter}). Hence the expected
color variation between overdense and underdense regions would be
$\Delta(U_n-G)\sim0.03-0.05$ magnitudes. This expected correlation between
galaxy color and overdensity would be evidence for the reionization induced
galaxy bias, and could be used to calibrate its effect empirically.
This systematic variation in color is much smaller than the range of
colors in the observed samples. However high redshift samples are
selected to be redder than a certain limit. In practice one would
therefore have to be careful that the systematically bluer colors did
not bias the sample {\em against} finding galaxies in overdense
regions. At the redshifts of LBGs, the shift of the Ly-break with
redshift primarily affects the $U_n-G$ color. On the other hand
reddening affects both the $U_n-G$ and $G-\mathcal{R}$ colors. As a
result, LBGs are selected to lie above a line with positive gradient
in the $(U_n-G)-(G-\mathcal{R})$ color-color space. We note that
like reddening, the reionization induced color change will be in both
bands, and will therefore transform the position of the galaxy in
color-color space in a direction parallel to the selection criteria
for LBGs. As a result, we do not expect the reionization induced color change to introduce a bias through the survey selection criteria.
\section{Surveys for Baryonic Oscillations}
\begin{figure*}
\includegraphics[width=13cm]{fig5.eps}
\caption{ The effect of reionization on the star formation histories of
galaxies. These examples correspond to typical SDSS Luminous Red Galaxies
(LRG) at $z=0.3$. \textit{Upper and Lower Left:} The primary color
selection (Eisenstein et al.~2001) for LRGs, together with the locations of
our model galaxies. \textit{Upper Right:} Observed flux (including
Ly$\alpha$ absorption) for ten example LRGs at $z=0.3$.
\textit{Lower Right:} The magnitude change induced by a delay in
reionization of 0.25 units of redshift. The corrections to the bias due to
reionization are quoted in the upper right panel. We compute this bias
using the rest-frame $r$-magnitude.}
\label{fig5}
\end{figure*}
\label{bao}
We next apply our model to surveys that aim to measure baryonic acoustic oscillations in the clustering of
galaxies at $z<1$. These surveys
require exquisite accuracy of the clustering amplitude, and so the effect of reionization on galaxy bias could be particularly important. We consider two surveys, the existing SDSS Luminous Red Galaxy
survey, and the planned WiggleZ survey.
\subsection{Luminous Red Galaxies}
First we discuss the effect of reionization on the star formation histories
of SDSS Luminous Red Galaxies (LRG) at $z=0.3$ (Eisenstein et al.~2001). By
selection, LRGs are old galaxies with passively evolving stellar
populations and no recent star formation. Thus, in order to model LRGs at
$z\sim0.3$ we arbitrarily shut off star formation in the galaxies at
$z=1$. The spectrum of the galaxy is not sensitive to the exact choice of
redshift where star formation is curtailed, provided that the population of
massive stars from the most recent star-burst have already died. However a
cutoff in star formation is necessary if the models are to reproduce the
correct colors of the observed sample.
The upper and lower left hand panels of Figure~\ref{fig5} show the primary
color selection\footnote{To estimate the colors of LRGs in this paper, we assume top-hat filters of central wavelength $\lambda_0$ and width $\Delta \lambda$ to approximate the Sloan Digital Sky Survey filter set. The filters have $(\lambda_0,\Delta\lambda)= (3543,564)$ for the $u$-band; $(\lambda_0,\Delta\lambda)=(4770,1388)$ for the $g$-band; $(\lambda_0,\Delta\lambda)=(6231,1372)$ for the $r$-band; $(\lambda_0,\Delta\lambda)=(7625,3524)$ for the $i$-band.}
(Eisenstein et al.~2001) for LRGs at $z\sim0.3$. The model
produces galaxies with the correct colors and observed flux, as is
illustrated by the magnitudes and colors of the 100 modeled galaxies which
are also plotted in the left hand panels of Figure~\ref{fig5}. In the upper
right hand panel of Figure~\ref{fig5} we show examples of observed flux
(including Ly$\alpha$ absorption) for ten model LRGs at $z=0.3$. These
spectra show less variation than those of the Ly-break galaxies discussed
in the previous section. This lack of variation is a feature of the LRG
sample. Due to the lack of star formation in these galaxies, the spectra do
not exhibit a sharp Ly-break. In the lower right panel of Figure~\ref{fig5}
we show the fractional change in flux induced by a delay in reionization of
0.25 units of redshift. We see that reionization has a very small effect on
the observed flux of these galaxies. The resulting value of bias is quoted
in the upper right panel. LRGs are selected to lie within a range of
rest-frame absolute $r$-magnitudes, and we therefore calculate the bias at
the rest-frame $r$-magnitude. Reionization will decrease the bias by $\sim
0.1$\% in the LRG sample, and therefore the clustering amplitude by $\sim
0.2$\%. Also shown is the fractional systematic error in the derived host
mass ($\sim 1$\%).
Eisenstein et al.~(2005) summarize the various corrections to the linear
bias that have been previously considered when interpreting clustering
data, including those due to non-linear gravity, and coupling of
gravitational modes. On scales larger than $\sim40$ comoving Mpc, the sum
of previously considered corrections drops below the 1\% level. For this
reason among others, the correlation function of galaxies is considered to
be a very clean tracer of the underlying large-scale mass-distribution, and
in particular a perfect sample with which to investigate the baryonic
oscillations in the matter power-spectrum (Eisenstein et al.~2005). It is
therefore important to note that while the correction to the galaxy bias
due to reionization predicted by our models is at a very low level for the
LRG sample, it may nevertheless be comparable to the largest correction to
linear theory yet described on the scales relevant to baryonic oscillation
experiments. On the other hand, our model predicts no dependence of the
reionization induced bias on scale. As a result it is very unlikely that
the details of the reionization will adversely affect attempts to use the
measurements of baryonic acoustic oscillations as a cosmic standard ruler
(Blake \& Glazebrook~2003). We return to this point in \S~\ref{scaldependance}.
\subsection{Blue Star Forming Galaxies}
\begin{figure*}
\includegraphics[width=13cm]{fig6.eps}
\caption{ The effect of reionization on the star formation histories of galaxies that will be selected by the WiggleZ Survey (Glazebrook et al.~2007) at $z=0.8$. \textit{Upper and Lower Left:} The primary color selection (Glazebrook et al.~2007) for WiggleZ star forming galaxies at $z\sim0.8$, together with the points for our model galaxies.
\textit{Upper Right:} Observed flux (including Ly$\alpha$ absorption) for ten example WiggleZ-selected galaxies at $z=0.8$. \textit{Lower Right:} The magnitude change induced by a delay in reionization of 0.25 units of redshift. The corrections to the bias due to reionization are quoted in the upper right panel. This bias is computed at the observed $r$-band wavelength.}
\label{fig6}
\end{figure*}
For our second example we consider the effect of reionization on the star
formation histories of galaxies that will be selected by the WiggleZ Survey
(Glazebrook et al.~2007) at $z=0.8$. Unlike the SDSS LRG sample considered
in the previous section, galaxies in the WiggleZ survey will be selected as
being star forming via the Ly-break using observations in the near and far
UV in addition to optical colors. In modeling these galaxies we therefore
do not impose a cutoff in the star formation prior to the observed
redshift.
The upper and lower left hand panels of Figure~\ref{fig6} show the primary
color selection\footnote{To estimate the UV colors in this paper, we assume
top-hat filters of central wavelength $\lambda_0$ and width $\Delta
\lambda$ to approximate the Galex filter set. The filters assumed have
$(\lambda_0,\Delta\lambda)= (1550,400)$ for the $FUV$-band; and
$(\lambda_0,\Delta\lambda)=(2500,1000)$ for the $NUV$-band.} (Glazebrook et
al.~2007) for WiggleZ star forming galaxies at $z\sim0.8$. As with the
previous examples the model produces galaxies with the correct optical and
UV colors as well as the correct observed fluxes. In the upper right panel
we show the observed spectra (including Ly$\alpha$ absorption) for ten
examples of WiggleZ selected galaxies at $z=0.8$. In the lower right hand
panel of Figure~\ref{fig6} we show the fractional change in galaxy flux
induced by a delay in reionization of 0.25 units of redshift. Unlike the
LRG sample, the active star forming nature of the WiggleZ sample will mean
that (like the Ly-break galaxies at higher redshift) patchy reionization is
expected to significantly affect the observed clustering. The value of bias
due to reionization, the relative correction to the galaxy bias from
reionization, and the fractional change in the host mass inferred from the
clustering amplitude are quoted in the upper right panel of
Figure~\ref{fig6}. We quote results based both on models that include only
hydrogen reionization, and on models that also include the additional
heating of the IGM due to double reionization of helium. The bias was
calculated at the observed $r$-band wavelength to be consistent with the
luminosity selection of the sample. Reionization will increase the bias by
$\sim 5$\% where helium reionization is included, and therefore the
clustering amplitude by $\sim 10$\%. The mass estimate would be inferred
incorrectly by a factor of as much as 25\% where reionization is ignored.
One of the underlying premises motivating galaxy surveys to measure the
baryonic acoustic oscillations is that the galaxies provide a nearly
perfect {\em geometrical} estimate of the distance, free from any
astrophysical complexities. However we have demonstrated that in the case
of star forming galaxies at moderate redshifts, the astrophysical effect of
reionization may enter the clustering statistics at the $\sim 5-10$\%
level. This level is significantly larger than the precision necessary for
measurement of the baryonic acoustic oscillations. Moreover, it is
important to note that the correction of $\sim5\%$ to the galaxy bias due
to reionization is the largest correction to linear theory yet described on
the scales relevant to baryonic oscillation experiments (Eisenstein et
al.~2005). On the other hand, as mentioned earlier, our simple model
predicts that the bias due to reionization is, like the linear bias due to
enhanced formation in overdense regions, independent of scale. Thus in an
analysis that ignores reionization, the host mass would be misidentified,
but because the correction to the linear bias is not scale-dependent, the
unknown details of the reionization history may not compromise the
measurement of the baryonic acoustic peak.
\section{Discussion}
\label{discussion}
Before concluding we discuss several issues which arise from our
results and which will provide interesting areas for future research.
\subsection{Implications for the evolution of clustering in galaxy
samples}
The observed spatial correlation function of galaxies can be used to
estimate the mass of the host dark-matter halo population through
comparison with theoretical calculations. Having determined this mass, the
evolution in the clustering of these galaxies can then also be computed and
compared with the clustering properties of different populations at later
times, with the aim of piecing together the evolution of the galaxy
population. Moreover having estimated the host halo mass, the predicted
number density of hosts can be compared with the observed number density of
objects in order to obtain the fraction of halos containing a galaxy of the
selected type at any one time. By comparing the inferred mass of LBGs from
clustering data to the observed number counts, Adelberger et al.~(2005)
concluded that star formation in LBGs has a duty-cycle approaching unity.
This conclusion is consistent with our star formation model in which nearly
all model galaxies satisfy the LBG color selection criteria.
In this sub-section we consider the interpretation of LBG clustering
evolution in light of the additional contribution to the observed galaxy
bias from reionization. The spatial correlation function of dark matter
halos as a function of radius $r$ can be written in terms of the
correlation function of dark-matter and the halo bias $b$ as
\begin{equation}
\xi_{\rm h}(r) = \xi_{\rm m}(r)\,b^2(M).
\end{equation}
In practice this correlation function can be approximated using the
parameterization
\begin{equation}
\label{approx}
\xi_{\rm h} \approx \left(\frac{r}{r_0}\right)^{-\gamma},
\end{equation}
where $r_0$ is defined as the clustering length, and $\gamma\sim1.5$
describes the observed clustering of galaxies. More biased samples have
larger clustering lengths.
We have argued that reionization will increase the observed value of the
bias, by causing galaxies in overdense regions to have lower mass-to-light
ratios due to their younger stellar populations. Thus we also expect
reionization to increase the observed clustering length of a sample of
galaxies at fixed halo mass. As a result, neglect of reionization leads to
overestimation of the true clustering length for host dark-matter halos.
For small values of $b_{\rm reion}/b_{\rm gal}$, equation~(\ref{approx})
may be used to estimate the contribution to the observed clustering length
($\Delta r_{0,{\rm reion}}$) that results from reionization induced bias using the expression
\begin{eqnarray}
\label{deltaR}
\nonumber
\Delta r_{0,{\rm reion}} &\approx & r_0\frac{2}{\gamma}\frac{b_{\rm
reion}}{b_{\rm gal}}\approx 1.25 r_0\frac{b_{\rm reion}}{b_{\rm gal}}\\
&\approx&1.4\left(\frac{r_0}{5.71}\right)\left(\frac{b_{\rm reion}/b_{\rm
gal}}{0.2}\right),
\end{eqnarray}
where the units of length-scales in the latter equality are comoving Mpc.
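As a worked example, inserting the values measured for LBGs at $z\sim3$ (quoted below), $r_0=6.4$ comoving Mpc and $b_{\rm reion}/b_{\rm gal}\approx0.2$, gives
\begin{equation*}
\Delta r_{0,{\rm reion}} \approx 1.25\times6.4\times0.2 \approx 1.6 \hspace{2mm}\mbox{comoving Mpc},
\end{equation*}
in line with the $\sim1.5$ comoving Mpc correction adopted in the discussion below.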
The clustering evolution of LBGs was discussed by Adelberger et
al.~(2005). They measure the clustering length of LBGs at $z\sim1.7$,
$z\sim2.3$ and $z\sim3$, obtaining $r_0=5.7$, $r_0=6.0$ and $r_0=6.4$
comoving Mpc respectively, corresponding to halo masses of
$10^{12.1\pm0.2}M_\odot$, $10^{12\pm0.3}M_\odot$ and
$10^{11.5\pm0.3}M_\odot$. Using simulations, Adelberger et al.~(2005)
calculated the clustering lengths that these galaxies should have at lower
redshifts of $z\sim1$ and $z\sim0.2$, and then compared these evolved
clustering lengths to clustering studies of various populations of galaxies
from other surveys. In particular, Adelberger et al.~(2005) compared the
evolved clustering length for LBGs to galaxies in the DEEP Survey (Coil et
al.~2004), and in the Sloan Digital Sky Survey (Budavari et al.~2003).
Adelberger et al.~(2005) find that the LBG clustering length should evolve
to a value that is consistent with redder elliptical galaxies ($r_0\approx
9.4$ comoving Mpc at both $z=1$ and $z=0.2$), but which is larger than the
clustering length for both the whole DEEP galaxy sample at $z\sim1$
($r_0\approx 4.6$ comoving Mpc) and the blue Sloan Digital Sky Survey
(SDSS) galaxies at $z=0.2$ ($r_0=6.4$ comoving Mpc).
Based on these results Adelberger et al.~(2005) argued that the descendants
of LBGs will have clustering strengths that are significantly in excess of
typical galaxies in optical magnitude-limited surveys at low redshift, and
therefore that LBGs must have stopped forming stars before
$z\sim1$. However the results of this paper show that the clustering length
at $z\sim3$ has been overestimated by $\Delta r_{0,{\rm reion}}\sim1.5$
comoving Mpc. Since the reionization induced bias decreases in influence
towards low redshift, and is small below $z=1$ (see following sections) we
conclude that, after accounting for the reionization induced bias, the
clustering of the hosts of LBGs may well be comparable to the blue
population of galaxies at $z<1$. Indeed, as shown in Figure~13 of
Adelberger et al.~(2005), the value of $\Delta r_{0,{\rm reion}}$ computed
for LBGs at $z\sim3$ is comparable to the difference in the clustering
length of normal ellipticals and normal blue galaxies in the Sloan Digital
Sky Survey at $z=0.2$. Thus the effect of reionization on the observed
clustering of galaxies should be accounted for in studies that aim to link
galaxies at a range of epochs through the evolution of their clustering
properties.
\subsection{Helium reionization and scale dependent bias }
\label{scaldependance}
The previous section (\S~\ref{bao}) ended with the positive suggestion that
reionization will not impact measurement of the baryonic acoustic peak in
samples of moderate redshift star-forming galaxies, due to the independence
of the reionization induced bias on scale. Before concluding this paper, we
describe an additional astrophysical situation which may compromise this
favorable conclusion.
Our simple model predicts the bias introduced through reionization to
be independent of scale. However this model ignores several
astrophysical effects that could introduce additional fluctuations in
temperature within the IGM, and hence also introduce additional
dependencies of the star formation history on the large-scale
overdensity. These additional fluctuations might include a scale
dependence that is different to that of cosmic variance, and could
therefore introduce a scale dependent component of reionization
induced galaxy bias.
For example, consider the epoch after HeIII overlap. At that time the
mean-free-path for HeIII ionizing photons is limited by the abundance and
cross-section of Ly-limit systems, since the diffuse HeII has
previously been ionized. During this epoch, heating of the IGM will be
sourced by recombinations of HeIII ions. Now the recombination time is
an order of magnitude longer than the Hubble time at the mean IGM
density. However in the overdense regions containing filaments and
sheets the recombination time would be shorter, and could approach a
Hubble time in high density regions. As a result, while regions of low
overdensity would cool adiabatically by the cosmic expansion, heating
due to photo-ionization of HeII could be substantial in the overdense
regions. This effect would introduce temperature fluctuations inside
overdense regions of the IGM on scales larger than the
mean-free-path.
Thus, it is possible that the ionizing photon mean-free-path introduces a
length scale below which reionization induced bias is independent of scale,
but above which the reionization induced bias is scale dependent. At
$z\sim3$ the ionizing photon mean-free-path is $\sim 100$ comoving Mpc
(e.g. Bolton \& Haehnelt~2006). This scale is uncomfortably close to the
scale of the baryonic acoustic peak, implying that careful account will
need to be taken of reionization induced bias in galaxy surveys that select
star forming galaxies. A proper analysis of this possibility would require
full numerical modeling and is beyond the scope of the present paper.
\subsection{Improvements to the model}
In the future, our simple model could be improved in several ways. Our
model predicts that galaxies in regions that are reionized earlier form a
larger fraction of their stellar mass at later times, implying that these
galaxies form a greater fraction of their stellar mass in more massive
halos. We have assumed that the star formation efficiency is independent of
halo mass. However if high redshift galaxies are subject to mass-dependent
feedback effects (such as supernova feedback), then the star formation
history would be altered. The presence of feedback in low mass halos would
result in a larger fraction of the final stellar mass being formed after
reionization, and hence in an increase in the sensitivity of the final
mass-to-light ratio to the local reionization redshift. In addition, one
could incorporate metal enrichment of the star formation. Stars forming in
galaxies within overdense regions where reionization occurs early have
their star formation, and hence their metal enrichment, delayed. As we have
discussed, the resulting stellar populations are therefore observed to be
younger at a low redshift $z$. Since there is a delay in the enrichment of
the IGM following a burst of star formation, the younger stellar
populations will have slightly lower metallicity. Thus the metallicity of
populations observed at $z$ should be slightly dependent on the
reionization history. We expect that the lower metallicity in the younger
populations would tend to reduce the magnitude of the reionization induced
bias, since more highly enriched populations are bluer, and have lower
mass-to-light ratios (though this would be a second order effect).
On the other hand, since we have not included metal
enrichment, our model also underestimates the variation between the UV
fluxes of stellar populations with different ages, and hence also
underestimates the contribution of reionization to the galaxy bias. In
addition to metallicities, one might also attempt to include the effects of
dust, which leads to larger extinction in younger galaxies (Shapley et
al. 2001). This would redden the spectra of galaxies in regions that were
reionized at earlier times.
\section{Conclusions}
\label{conclusions}
We have developed a model to estimate the effect of reionization on the
clustering properties of galaxy samples at intermediate redshifts. Current
models of the reionization of the intergalactic medium predict that
overdense regions will be reionized early due to the presence of galaxy
bias. The IGM in these regions is heated through the absorption of the
ionizing radiation. The heating leads to an increased Jeans mass, and so
reionization suppresses the formation of low-mass galaxies. The suppression
of low mass galaxy formation in turn delays the build up of stellar mass in
the progenitors of massive low redshift galaxies. As a result of this
delayed buildup, the stellar populations observed in galaxies at later
times are on average slightly younger in overdense large-scale regions of
the Universe. Stellar populations fade as they age and so the resulting age
difference would lead to a lower mass-to-light ratio for galaxies in
overdense regions. In volume limited surveys, such as those now being
employed for large scale clustering studies, a fixed observed flux
threshold therefore contains lower mass galaxies (on average) in overdense
regions with a corresponding increase in the galaxy number density.
We have parameterized the reionization induced increase of the observed galaxy
density in overdense regions in analogy with the traditional galaxy
bias. Our modeling uses merger trees combined with a stellar synthesis
code. We have used this model to demonstrate that reionization can have a
significant and detectable effect on the clustering properties of galaxy
samples that are selected based on their star-formation activity.
In existing samples of Ly-break galaxies, the bias correction for
reionization is at the level of 10-20\%, leading to correction factors
between 1.5--2 in the mass inferred from clustering amplitudes. This effect
is present in existing samples of Ly-break galaxies at $1\la z\la3$
(Steidel et al.~2003), and provides a systematic correction to existing
analyses that is in excess of the statistical errors (Adelberger et
al.~2005). For example the reionization induced bias qualitatively changes
the conclusion of Adelberger et al.~(2005) that Ly-break galaxies stop
forming stars at $z\ga1$, and evolve into red elliptical galaxies by
$z\sim2$. Rather, allowing for reionization induced bias implies that
Ly-break galaxies could evolve into the blue populations observed at low
redshift with clustering lengths that are smaller than the massive red
galaxy population.
The reionization of helium, and the associated additional heating of the
IGM may lead to a sharp increase in the amplitude of the correlation
function of $\sim50\%$ for galaxies at fixed luminosity in the redshift
range $3\la z\la 4$. Our model predicts that the reionization introduced
bias is approximately independent of scale. However we are unable to rule
out scale dependence at the level of a few percent due to the limited
numerical accuracy of our calculations. Further astrophysical complexities
not addressed in our model could alter this conclusion. Future experiments
aimed at measuring galaxy clustering at a precision level of a few percent
over a large range of spatial scales (with a goal of constraining the
initial conditions from inflation or the nature of dark matter and dark
energy), will need to carefully account for this possibility.
We find that the contribution to the bias due to reionization is fairly
insensitive to halo mass. This is in contrast to the galaxy bias from
enhanced structure formation in overdense regions which is a function of
halo mass. Hence the fractional contribution to the galaxy bias by
reionization is smaller for more massive systems, and as a result the
systematic error introduced into the estimate of halo mass from clustering
amplitude is less serious for more massive systems. We find that while the
reionization bias is already affecting clustering studies of Ly-break
galaxies, it will become even more important as future surveys begin to
discover populations of less massive high redshift galaxies.
The reionization induced bias should be sensitive to the type of selected
galaxies. In the case of Ly-break galaxies, we would therefore expect that
the clustering amplitude would depend on the band in which the flux
selection was performed. Equivalently, overdense regions would
be expected to have a slightly bluer population of galaxies. For Ly-break
galaxies our model predicts the systematic offset in $U_n-G$ color to be
$\sim0.03-0.05$ magnitudes. Note that this offset refers to the correlation
between color and large scale overdensity within a color-selected sample,
and not to the galaxy population on average. The average galaxy population
could show a different behavior. For example, galaxies in overdense regions
are observed at low redshift to be redder because they formed earlier, or
because their cold gas was heated by mergers or stripped by the hot IGM in
clusters. A correlation between galaxy color and overdensity within a
Ly-break galaxy sample would be evidence for the reionization induced
galaxy bias, and could be used to calibrate its effect empirically.
Finally, we considered the importance of reionization induced bias for
current and upcoming surveys attempting to detect baryonic acoustic
oscillations. We find that the contribution to the bias from reionization
is very small in surveys of old stellar population galaxies at $z<1$, with
corrections of $\la 1\%$. We also find that reionization should not impact
on measurement of the baryonic acoustic peak in samples of moderate
redshift star-forming galaxies, due to the independence of the reionization
induced bias on scale. However it is possible that the mean-free-path of
ionizing photons introduces a length scale below which reionization induced
bias is independent of scale, but above which the reionization induced bias
is scale dependent. The scale of the mean-free-path is uncomfortably close
to the scale of the baryonic acoustic peak, implying that careful account
will need to be taken of reionization induced bias in galaxy surveys that
select star forming galaxies.
{\bf Acknowledgments} We thank Dan Stark for helpful comments on an early
draft of this paper. This work was supported by the Australian Research
Council (JSBW) and Harvard University grants (AL). JSBW acknowledges the
hospitality of the Institute of Astronomy at Cambridge University where
part of this work was undertaken.
\section{Introduction}
Multidimensional distributions whose marginal distributions are uniform are called copulas. Among them, the copula that satisfies given expectation constraints and is closest to the independent distribution in the sense of Kullback--Leibler (KL) divergence is called the minimum information copula. Minimum information copulas are used in, e.g., financial models \cite{CHATRABGOUN2018266} and flood models \cite{DANESHKHAH2016469}. It is known that the density function of a two-dimensional minimum information copula is written in the following form \cite{Bedford}:
\begin{align*}
&c_\theta(x,y)=\exp \left(\sum_{i=1}^k\theta_i h_i(x,y) +a_\theta(x)+b_\theta(y) \right),
\end{align*}
where $h_i(x,y)$ ($1\leq i\leq k$) are functions describing dependence between $x$ and $y$, and $\theta=(\theta_i)$ is a parameter.
The functions $a_\theta(x)$ and $b_\theta(y)$ are called the normalizing functions and are determined by the marginal conditions of copulas (see Section~\ref{section:min-info} for details).
The normalizing functions play a similar role to the normalizing constant of exponential families. However, since the marginal condition involves a set of integral equations, it is generally more difficult to deal with the normalizing functions than the normalizing constants.
When performing estimation, a function that measures the goodness of fit of the model, called the score function, is often used.
Different scores have different properties; by using a score whose properties are consistent with the intended use, estimation accuracy can be improved and the amount of computation can be reduced.
For example, scores for estimation that are robust to noise have been proposed by \cite{BHHJ} and \cite{FUJISAWA20082053} among others.
The Hyv\"{a}rinen score \cite{hyvarinen2005estimation} has been proposed as a score without calculating the normalizing constant. A class of scoring rules with the same property are investigated by \cite{parry2012}.
In this paper, we propose a scoring rule for minimum information copulas that can be calculated without the normalizing functions.
The score, which we call the conditional Kullback--Leibler score, uses a conditional likelihood on pairs of observations.
The score is shown to be strictly proper and therefore asymptotically consistent. Furthermore, the score is convex with respect to the parameters and can be easily optimized by the gradient methods.
The structure of this paper is as follows. Section~2 introduces copulas and minimum information copulas. Section~3 defines scores and explains their commonly used properties: propriety, locality, and homogeneity. In Section~4, we introduce generalized homogeneity and multi-point locality, which are key properties for dealing with the normalizing functions. Then, we propose the conditional Kullback--Leibler score
for minimum information copulas that satisfies generalized homogeneity and two-point locality. In Section~5, we confirm the asymptotic consistency through numerical experiments. Finally, future issues and prospects are discussed in Section~6.
For simplicity, we only discuss the two-dimensional case, but we believe the results can be extended to three or more dimensions.
\section{Minimum information copulas} \label{section:min-info}
\subsection{Copulas}
A copula is a joint distribution function $C$ on the unit square with uniform marginals (e.g.\ \cite{Nels06}).
In this paper, we only deal with absolutely continuous copulas, in which case the definition is stated by density functions as follows.
\begin{definition}[Copula densities]
A two-dimensional copula density is a function $c:[0,1]^2 \to [0,\infty)$ that satisfies the following two properties:
\begin{align}
\int_0^1 c(x,y)\mathrm{d}y=1,\ \ x\in[0,1]
\label{eq:marginal-1}
\end{align}
and
\begin{align}
\int_0^1 c(x,y)\mathrm{d}x=1,\ \ y\in[0,1].
\label{eq:marginal-2}
\end{align}
\end{definition}
From Sklar's theorem, a joint distribution with arbitrary marginals can be constructed from a copula.
Therefore, we can separate statistical modeling into the marginal part and the copula part.
\begin{theorem}[Sklar's theorem]
Let $h$ be a joint density function on $\mathbb{R}^2$ with marginal density functions $f$ and $g$. Then there exists a copula density $c$ such that for all $(x,y)\in\mathbb{R}^2$,
\begin{align*}
h(x,y)=c(F(x),G(y))f(x)g(y),
\end{align*}
where $F(x)=\int_{-\infty}^x f(\xi)\mathrm{d}\xi$ and $G(y)=\int_{-\infty}^y g(\eta)\mathrm{d}\eta$.
\end{theorem}
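For illustration, Sklar's construction is straightforward to implement. The following Python sketch builds a bivariate density from the Farlie--Gumbel--Morgenstern copula and unit-rate exponential marginals; the choice of copula and marginals here is an arbitrary example of ours, not one used later in the paper:
\begin{verbatim}
import numpy as np

def fgm_copula_density(u, v, theta=0.5):
    """Farlie-Gumbel-Morgenstern copula density (|theta| <= 1)."""
    return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def joint_density(x, y, theta=0.5, lam=1.0):
    """h(x,y) = c(F(x), G(y)) f(x) g(y) with exponential marginals."""
    F = 1.0 - np.exp(-lam * x)   # marginal distribution functions
    G = 1.0 - np.exp(-lam * y)
    f = lam * np.exp(-lam * x)   # marginal densities
    g = lam * np.exp(-lam * y)
    return fgm_copula_density(F, G, theta) * f * g
\end{verbatim}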
\subsection{Minimum information copulas}
Consider statistical modeling of a phenomenon with multivariate data.
Suppose that some constraints, such as mean and variance, are given as prior information.
If there is little prior information about the distribution, the model that satisfies the constraints cannot be uniquely determined.
In this case, which model should be adopted?
One way to think of it is to adopt a neutral model that assumes as little dependence as possible between the random variables beyond what is included in the prior information.
In other words, we adopt the model that is closest to the distribution in which the random variables are independent.
The Kullback--Leibler (KL) divergence \cite{10.1214/aoms/1177729694} is often used to measure the distance between distributions.
\begin{definition}[Kullback--Leibler divergence]
\label{KL-D}
Let $g_1$ and $g_2$ be density functions. Then the Kullback--Leibler (KL) divergence is defined as follows:
\begin{align*}
D_{\rm{KL}}(g_1||g_2)=\int g_1(x,y)\log \left(\frac{g_1(x,y)}{g_2(x,y)} \right) \mathrm{d}x\mathrm{d}y.
\end{align*}
\end{definition}
The copula that is closest to the independent copula in terms of the KL divergence is called the minimum information copula.
\begin{definition}[Minimum information copulas]
Let $h_1(x,y),\ldots,h_k(x,y)$ be given functions and $\alpha_1,\ldots,\alpha_k\in\mathbb{R}$ be given numbers.
Let $\pi(x,y)=1$ be the independent copula density.
Then, the copula density $c$ that minimizes
\begin{align*}
D_{\rm{KL}}(c||\pi)
= \iint c(x,y)\log c(x,y)\mathrm{d}x\mathrm{d}y
\end{align*}
subject to
\begin{align*}
\iint c(x,y)h_i(x,y)\mathrm{d}x\mathrm{d}y=\alpha_i \ \ (1\leq i \leq k)
\end{align*}
is called the minimum information copula density.
\end{definition}
The minimum information copula is characterized by the following theorem. See also \cite{Borwein_et_al1994} for details on the uniqueness and existence problem.
\begin{theorem}[\cite{Bedford}, Theorem 2 \& Theorem 3]
\label{min inf copula}
The minimum information copula density is unique if it exists, and expressed in the following form:
\begin{align*}
&c(x,y)=\exp \left(\sum_{i=1}^k\theta_i h_i(x,y) +a(x)+b(y) \right)
\end{align*}
with some $\theta_i$, $a(x)$ and $b(y)$. The functions $a(x)$ and $b(y)$ are unique except for arbitrariness of the additive constants.
\end{theorem}
From the theorem, we can redefine the minimum information copula density by
\begin{align}
&c_\theta(x,y)=\exp \left(\sum_{i=1}^k\theta_i h_i(x,y) +a_\theta(x)+b_\theta(y) \right)
\label{eq:min-info}
\end{align}
together with the marginal conditions (\ref{eq:marginal-1}) and (\ref{eq:marginal-2}).
Here, the parameter of interest is $\theta=(\theta_i)_{i=1}^k$. The functions $a_\theta(x)$ and $b_\theta(y)$ are called the normalizing functions.
For identifiability of $c_\theta(x,y)$, we suppose that $h_1(x,y),\ldots,h_k(x,y)$ are linearly independent modulo additive functions, which means that an identity
\[
\sum_{i=1}^k \Theta_i h_i(x,y) + A(x) + B(y) = 0
\]
for some $\Theta_i$, $A(x)$ and $B(y)$ implies $\Theta_1=\cdots=\Theta_k=0$.
\section{Scoring rules} \label{section:scoring}
\subsection{Scores}
A score is a function used to measure the difference between a model and the true distribution.
We only consider probability density functions on $[0,1]^2$.
Denote the set of probability density functions on $[0,1]^2$ by $\mathcal{P}$.
\begin{definition}[Scores]
A score $S$ is a real-valued function of $(x,y,q)\in[0,1]^2\times \mathcal{P}$. The expected value
\begin{align*}
S(p,q)=\iint S(x,y,q) p(x,y) \mathrm{d}x \mathrm{d}y
\end{align*}
with respect to $p\in\mathcal{P}$ is called the expected score.
\end{definition}
In this paper, the score function $S(x, y, q)$ and the expected score $S(p, q)$ use the same symbol $S$ and are both called scores for convenience.
One property that is necessary for a score to measure the difference between the true distribution and the model is called propriety.
\begin{definition}[Propriety]
A score $S$ is said to be proper if
$S(p,q) \geq S(p,p)$
for all density functions $p,q\in\mathcal{P}$.
\end{definition}
In general, when we say score, we often refer to the proper score. A divergence can be defined from a proper score by
\begin{align*}
D(p||q)=S(p,q)-S(p,p),
\end{align*}
which behaves like a distance between $p$ and $q$.
If $p$ is fixed, minimizing the divergence and minimizing the score are equivalent.
Consider a statistical model $\{q_\theta\}\subset\mathcal{P}$ indexed by a parameter $\theta$.
If $n$ data points $(x_i , y_i )$ are observed and the empirical distribution is denoted by $\hat{p}$, the estimation can be performed as follows:
\begin{align}
\hat{\theta}
= \mathop{\rm arg~min}\limits_{\theta} S(\hat{p},q_{\theta})=\mathop{\rm arg~min}\limits_{\theta} \frac{1}{n}\sum^{n}_{i=1}S(x_i,y_i,q_{\theta}).
\label{eq:estimator}
\end{align}
There are many different types of scores, but one of the simplest is the local score.
\begin{definition}[Locality \cite{parry2012}]
A score $S$ is said to be local (in strict sense) if there exists a function $s:[0,1]^2\times[0,\infty)\to\mathbb{R}$ such that
\[
S(x,y,q)=s(x,y,q(x,y)).
\]
Furthermore, for $l\geq 0$, a score $S$ is said to be $l$-local if $S(x,y,q)$ is represented by the derivatives of $q$ at $(x,y)$ of order at most $l$.
\end{definition}
Local scores are easy to calculate whenever $q(x,y)$ is explicitly expressed because the score can be obtained only from the information at that point, without integrating over the neighborhood or referring to other points.
\begin{example}[\rm{KL} score] \label{example:KL}
The score
\begin{align*}
S(x,y,q)=-\log q(x,y)
\end{align*}
is called the KL score because the divergence induced from it is the KL divergence.
The KL score is 0-local and proper.
In fact, it is known that the \rm{KL} score is essentially the only score that is 0-local and proper \cite{10.1214/aos/1176344689}.
\end{example}
\subsection{Homogeneity}
There are several computational advantages to using homogeneous scores for estimation.
\begin{definition}[Homogeneity]
A score $S$ is said to be homogeneous if it satisfies $S(x,y,\lambda q)=S(x,y,q)$ for any constant $\lambda>0$.
\end{definition}
If a homogeneous score is used for estimation, computation of the normalizing constant is not necessary.
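To see why, note the following one-line check (our notation): for a model in exponential-family form $q_\theta=e^{f_\theta}/Z(\theta)$ with normalizing constant $Z(\theta)$, homogeneity with $\lambda=Z(\theta)$ gives
\begin{align*}
S(x,y,q_\theta)=S(x,y,Z(\theta)\,q_\theta)=S(x,y,e^{f_\theta}),
\end{align*}
so neither the score nor its minimizer involves $Z(\theta)$.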
We have introduced the properties of scores: propriety, locality and homogeneity.
Based on these definitions, we consider two examples of scores.
\begin{example2}[Example~\ref{example:KL} continued]
The KL score is not homogeneous. Indeed, the KL score satisfies
\[
S(x,y,\lambda q)=S(x,y,q)-\log \lambda.
\]
Therefore, when the \rm{KL}-score is used for estimation, computation of the normalizing constant is necessary.
\end{example2}
\begin{example}[Hyv\"{a}rinen score \cite{hyvarinen2005estimation}]
A score
\begin{align*}
S(x,y,q)&=\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right) \log q(x,y)\\
&\ \ +\frac{1}{2}\left(\frac{\partial}{\partial x} \log q(x,y)\right)^2 +\frac{1}{2}\left(\frac{\partial}{\partial y} \log q(x,y)\right)^2
\end{align*}
is called the \rm{Hyv\"{a}rinen} score.
The \rm{Hyv\"{a}rinen} score is 2-local, homogeneous and proper. Therefore, the normalizing constant is not necessary for estimation.
However, the Hyv\"{a}rinen score is not useful for estimation of the minimum information copulas because it does not remove the normalizing functions.
\end{example}
\section{The proposed score}
\subsection{General homogeneity}
As mentioned in the last example, the normalizing functions $a_\theta(x)$ and $b_\theta(y)$ in the density function (\ref{eq:min-info}) do not vanish even if a homogeneous score is applied.
For this reason, the following property is introduced.
\begin{definition}[General homogeneity]
A score $S$ is said to be generally homogeneous if it satisfies that
\begin{align*}
&S(x,y,\lambda_1q)= S(x,y,q),\\
&S(x,y,\lambda_2q)= S(x,y,q)
\end{align*}
for any positive functions $\lambda_1(x)$ and $\lambda_2(y)$,
where $\lambda_1q$ and $\lambda_2q$ are defined by $(\lambda_1q)(x,y)=\lambda_1(x)q(x,y)$ and $(\lambda_2q)(x,y)=\lambda_2(y)q(x,y)$, respectively.
\end{definition}
\begin{example}
It is easy to see that a score
\[
S(x,y,q)=-\frac{\partial^2}{\partial x\partial y}\log q(x,y)
\]
is generally homogeneous and 2-local. However, the score is not proper. To see this, let $q(x,y)$ be the Gaussian density (over $\mathbb{R}^2$).
Then $S(x,y,q)$ is a constant depending only on the correlation parameter of $q$, so $S(p,q)$ can take any real value independently of $p$.
\end{example}
If $S$ is generally homogeneous and $c_\theta(x,y)$ is the minimum information copula density in (\ref{eq:min-info}), we have
\[
S(x,y,c_\theta(x,y))=S(x,y,e^{\sum_i\theta_ih_i(x,y)}),
\]
which does not require computation of the normalizing functions.
Hence, our problem is reduced to finding a generally homogeneous ($l$-)local proper score.
However, after some trials based on symbolic computation in line with \cite{parry2012}, the authors realized that such a score may not exist.
\subsection{Multi-point scores and their locality}
Instead of finding a generally homogeneous local proper score, we try to relax the required properties.
General homogeneity and propriety are necessary for estimation of the minimum information copulas.
Therefore, we reconsider locality. For this purpose, the concept of multi-point scores is introduced.
\begin{definition}[Multi-point score]
Let $m\geq 1$. An $m$-point score is a function
\[
S(\bm{x},\bm{y},q)
= S(x^1,y^1,\ldots,x^m,y^m,q)
\]
of $\bm{x}=(x^1,\cdots,x^m)\in[0,1]^m$, $\bm{y}=(y^1,\cdots,y^m)\in[0,1]^m$ and $q\in\mathcal{P}$. The expected score of $S(\bm{x},\bm{y},q)$ is defined as
\begin{align*}
S(p,q)&=\int S(\bm{x},\bm{y},q)p(\bm{x},\bm{y})\mathrm{d}\bm{x}\mathrm{d}\bm{y},
\end{align*}
where $p(\bm{x},\bm{y})= \prod_{i=1}^m p(x^i,y^i)$. In other words, the expectation is taken with respect to independent samples from $p$.
A score is called a multi-point score if it is an $m$-point score for some $m$.
\end{definition}
The definitions of propriety and general homogeneity for multi-point scores are straightforward. Locality is defined as follows.
\begin{definition}[Multi-point locality]
A multi-point score $S$ is said to be local if there exists a function $s:[0,1]^m\times [0,1]^m\times [0,\infty)^{m\times m}\to\mathbb{R}$ such that
\[
S(\bm{x},\bm{y},q) = s(\bm{x},\bm{y}, \bm{q}),
\]
where $\bm{q}=(q(x^i,y^j))_{i,j=1}^m$.
\end{definition}
Locality defined in Section~\ref{section:scoring} meant that for a given model, the score at a single point is evaluated using only the information at that point. On the other hand, multi-point locality means that the score at $m$ points is evaluated using all the information at the $m^2$ points $\{(x^i,y^j)\}_{i,j=1}^m$.
\subsection{The conditional Kullback--Leibler score}
In the following, we construct a generally homogeneous $2$-point local proper score, which is applicable to estimation of minimum information copulas.
Since it is known that the \rm{KL} score is essentially the only $1$-point local proper score \cite{10.1214/aos/1176344689}, and it does not have general homogeneity, it is reasonable to construct $2$-point local proper scores.
\begin{definition}[The conditional Kullback--Leibler score]
Define a 2-point local score by
\begin{align*}
S(x^1,y^1,x^2,y^2,q)=-\log \left( \frac{q^{11} q^{22}}{q^{11}q^{22}+q^{12} q^{21}} \right),
\end{align*}
where $q^{ij}=q(x^i,y^j)$.
We call it the conditional Kullback--Leibler score.
\end{definition}
This score looks a little strange at first glance, but it can actually be seen as a kind of conditional \rm{KL} score as follows.
Consider that there are two systems that are exactly the same and independent of each other.
Data $(x^1,y^1)$ are obtained from System~1, and data $(x^2,y^2)$ are obtained from System~2.
Suppose that, by mistake, we were able to record the numerical values of the data, but forgot the correspondence between $x$ and $y$.
Then, there are two possible cases: $(x^1,y^1),(x^2,y^2)$ or $(x^1,y^2),(x^2,y^1)$.
Under the condition that only the numerical value of the data is known, the conditional probability that the data is the pair $(x^1,y^1),(x^2,y^2)$ is
\begin{align*}
\frac{q^{11} q^{22}}{q^{11}q^{22}+q^{12} q^{21}}.
\end{align*}
Therefore,
the score
\begin{align*}
S=-\log \left( \frac{q^{11} q^{22}}{q^{11}q^{22}+q^{12} q^{21}} \right)
\end{align*}
will come naturally as the KL score for the conditional probability.
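As a concrete illustration, the following minimal Python sketch (the function names and the choice $h(x,y)=xy$ are ours) evaluates the score for one pair of observations; by general homogeneity, the unnormalized factor $e^{\theta h(x,y)}$ alone may be passed as the density:
\begin{verbatim}
import math

def ckl_score(density, x1, y1, x2, y2):
    # S(x^1,y^1,x^2,y^2,q) = -log( q11 q22 / (q11 q22 + q12 q21) )
    q11 = density(x1, y1)
    q22 = density(x2, y2)
    q12 = density(x1, y2)
    q21 = density(x2, y1)
    return -math.log(q11 * q22 / (q11 * q22 + q12 * q21))

# Example with the unnormalized part exp(theta*h(x,y)), h(x,y) = x*y:
theta = 2.0
q = lambda x, y: math.exp(theta * x * y)
print(ckl_score(q, 0.9, 0.8, 0.1, 0.2))
\end{verbatim}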
Now we state our main result.
\begin{theorem} \label{theorem:main}
The conditional KL score is generally homogeneous and proper.
\end{theorem}
\begin{proof}[Proof]
General homogeneity is straightforward.
We prove the propriety as follows:
\begin{align*}
&S(p,p)-S(p,q)\\
=&\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}[S(x^1,y^1,x^2,y^2,p)-S(x^1,y^1,x^2,y^2,q)]p^{11}p^{22}\mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2\\
=&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} [S(x^1,y^1,x^2,y^2,p)-S(x^1,y^1,x^2,y^2,q)]p^{11}p^{22}\mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
+&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} [S(x^1,y^2,x^2,y^1,p)-S(x^1,y^2,x^2,y^1,q)]p^{12}p^{21}\mathrm{d}x^1 \mathrm{d}y^2 \mathrm{d}x^2 \mathrm{d}y^1 \\
=&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \log \left(\frac{q^{11}q^{22}}{q^{11}q^{22}+q^{12}q^{21}}\right)-\log \left( \frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}} \right) \right]p^{11}p^{22}\mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
+&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \log \left(\frac{q^{12}q^{21}}{q^{11}q^{22}+q^{12}q^{21}}\right)-\log \left( \frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}} \right) \right]p^{12}p^{21}\mathrm{d}x^1 \mathrm{d}y^2 \mathrm{d}x^2 \mathrm{d}y^1 \\
=&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \log \frac{\frac{q^{11}q^{22}}{q^{11}q^{22}+q^{12}q^{21}}}{\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}}} \right]\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}} (p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
+&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \log \frac{\frac{q^{12}q^{21}}{q^{11}q^{22}+q^{12}q^{21}}}{\frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}}} \right]\frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}} (p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^2 \mathrm{d}x^2 \mathrm{d}y^1.
\end{align*}
Using the inequality $\log x \leq x-1$, we obtain
\begin{align*}
&S(p,p)-S(p,q) \\
\leq&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \frac{\frac{q^{11}q^{22}}{q^{11}q^{22}+q^{12}q^{21}}}{\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}}}-1 \right]\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}} (p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
+&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \frac{\frac{q^{12}q^{21}}{q^{11}q^{22}+q^{12}q^{21}}}{\frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}}}-1 \right]\frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}} (p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^2 \mathrm{d}x^2 \mathrm{d}y^1 \\
=&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ \frac{q^{11}q^{22}}{q^{11}q^{22}+q^{12}q^{21}}-\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}} \right] (p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
+&\frac{1}{2} \int_{0}^{1} \cdots \int_{0}^{1} \left[ {\frac{q^{12}q^{21}}{q^{11}q^{22}+q^{12}q^{21}}}-{\frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}}} \right](p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^2 \mathrm{d}x^2 \mathrm{d}y^1 \\
=&\frac{1}{2}\int_{0}^{1} \cdots \int_{0}^{1} \left[ \frac{q^{11}q^{22}}{q^{11}q^{22}+q^{12}q^{21}}-\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}}+{\frac{q^{12}q^{21}}{q^{11}q^{22}+q^{12}q^{21}}}-{\frac{p^{12}p^{21}}{p^{11}p^{22}+p^{12}p^{21}}} \right]\\
&\hspace{8cm} \times (p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
=&\frac{1}{2}\int_{0}^{1} \cdots \int_{0}^{1} \left[ 1-1 \right](p^{11}p^{22}+p^{12}p^{21}) \mathrm{d}x^1 \mathrm{d}y^1 \mathrm{d}x^2 \mathrm{d}y^2 \\
=&0.
\end{align*}
Therefore,
\begin{align*}
S(p,p)\leq S(p,q).
\end{align*}
As a result, the score $S$ is proper. Moreover, the equality condition is
\begin{align}
\frac{q^{11}q^{22}}{q^{11}q^{22}+q^{12}q^{21}}=\frac{p^{11}p^{22}}{p^{11}p^{22}+p^{12}p^{21}}
\label{eq:equality-condition}
\end{align}
for all $(x^1,y^1,x^2,y^2)$.
\end{proof}
From the above, we have constructed a generally homogeneous $2$-point local proper score for general density functions. In fact, if the target is restricted to minimum information copulas, this score has even stronger properties.
\begin{definition}[Strict propriety]
Let $\mathcal{M}\subset\mathcal{P}$ be a given class of probability density functions. A proper score $S$ is said to be strictly proper relative to $\mathcal{M}$ if the equality $S(p,q) = S(p,p)$ for $p,q\in\mathcal{M}$ implies $p=q$.
\end{definition}
\begin{theorem} \label{theorem:strictly-proper}
Let $\mathcal{M}$ be a minimum information copula model.
Then the conditional KL score is strictly proper relative to $\mathcal{M}$.
\end{theorem}
\begin{proof}[Proof]
Let $p,q\in\mathcal{M}$ and suppose that $S(p,q)=S(p,p)$.
Then we have (\ref{eq:equality-condition}) in the proof of Theorem~\ref{theorem:main},
which is equivalent to
\begin{align*}
\frac{q^{11}q^{22}}{q^{12}q^{21}}=\frac{p^{11}p^{22}}{p^{12}p^{21}},
\end{align*}
that is,
\begin{align*}
\frac{q(x^1,y^1)q(x^2,y^2)}{q(x^1,y^2)q(x^2,y^1)}=\frac{p(x^1,y^1)p(x^2,y^2)}{p(x^1,y^2)p(x^2,y^1)}.
\end{align*}
By fixing $(x^1,y^1)$ to an arbitrary point, we obtain a relation
\[
q(x^2,y^2) = p(x^2,y^2)\exp(a(x^2)+b(y^2)),
\]
where $a(x^2)=\log(p^{11}q^{21})-\log(q^{11}p^{21})$ and $b(y^2)=\log q^{12}-\log p^{12}$.
Since $p$ and $q$ are assumed to be the minimum information copula densities,
Theorem~$\ref{min inf copula}$ implies
\begin{align*}
p(x^2,y^2)=q(x^2,y^2).
\end{align*}
Therefore, the score is strictly proper relative to $\mathcal{M}$.
\end{proof}
\subsection{Properties of the estimator}
We define an estimator based on the proposed score and briefly describe its properties.
For the conditional KL score, we first separate the given data randomly into $N=\lfloor n/2\rfloor$ groups as
\[
\{(x_i^1,y_i^1,x_i^2,y_i^2)\}_{i=1}^N.
\]
Then, based on the empirical score
\[
\hat{S}(\theta) =\frac{1}{N}\sum_{i=1}^{N}S(x^1_i,y^1_i,x^2_i,y^2_i,q),
\]
the estimator is defined by
\begin{align*}
\hat\theta = \mathop{\rm arg~min}\limits_{\theta}\hat{S}(\theta).
\end{align*}
Recall the following theorem on consistency and asymptotic normality of estimators based on strictly proper scores. We omit regularity conditions to make the ideas clearer.
\begin{theorem}[\cite{doi:10.1198/016214506000001437} and Theorem~5.23 of \cite{van2000asymptotic}]
\label{strictly_proper}
Let $\theta_0$ be the true parameter and suppose that $(x_1,y_1),\cdots,(x_n,y_n)$ are independent and identically distributed. Let $S$ be a strictly proper (1-point) score.
Then, the estimator $\hat\theta$ defined by (\ref{eq:estimator}) converges almost surely to $\theta_0$ as $n\to\infty$.
Furthermore, under regularity conditions, the asymptotic normality holds:
\[
\sqrt{n}(\hat\theta-\theta_0) \xrightarrow{\rm d} N(0,J^{-1}VJ^{-1}),
\]
where $J=E[\nabla_\theta\nabla_\theta^\top S]$ and $V=E[(\nabla_\theta S)(\nabla_\theta S)^\top]$.
\end{theorem}
Since the conditional KL score is strictly proper as proved in Theorem~\ref{theorem:strictly-proper}, we obtain the following corollary.
\begin{corollary} \label{corollary:asymptotic}
The estimator based on the conditional KL score is consistent and asymptotically normal.
\end{corollary}
Next we point out that the estimation is a convex optimization problem.
Here, we use the vector notation $\bm\theta=(\theta_i)_{i=1}^k$ for convenience.
\begin{theorem}
Consider a minimum information copula model
\[
q(x,y;\bm{\theta})=\exp\left( \bm{\theta}^{T} \bm{h}(x,y)+a(x)+b(y) \right).
\]
Suppose that $\bm{H}_1,\ldots,\bm{H}_N\in\mathbb{R}^k$ span $\mathbb{R}^k$, where
\[
\bm{H}_i=\bm{h}_i^{12}+\bm{h}_i^{21}-\bm{h}_i^{22}-\bm{h}_i^{11},
\quad \bm{h}_i^{\alpha\beta}=\bm{h}(x_i^\alpha,y_i^\beta).
\]
Then, the empirical score $\hat{S}(\bm\theta)$ based on the conditional KL score
is strictly convex with respect to $\bm{\theta}$.
\end{theorem}
\begin{proof}[Proof]
It is easy to see
\begin{align*}
\hat{S}
&= \frac{1}{N}\sum_{i=1}^{N} \left\{-\log\left(
\frac{q_i^{11}q_i^{22}}{q_i^{11}q_i^{22}+q_i^{12}q_i^{21}}
\right)\right\}
\\
&= \frac{1}{N}\sum_{i=1}^{N} \log \left( \mathrm{e}^{\bm{\theta}^{T} \bm{H}_i}+1 \right).
\end{align*}
Then, the gradient vector is
\begin{align}
\nabla_{\bm\theta} \hat{S}
&=\frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{e}^{\bm{\theta}^{T} \bm{H}_i}}{\mathrm{e}^{\bm{\theta}^{T} \bm{H}_i}+1} \bm{H}_i \nonumber \\
&=\frac{1}{N}\sum_{i=1}^{N}\frac{\bm{H}_i}{\mathrm{e}^{-\bm{\theta}^{T} \bm{H}_i}+1} \label{partial_theta_S}.
\end{align}
The Hessian matrix
\begin{align*}
\frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{e}^{-\bm{\theta}^{T} \bm{H}_i}}{(\mathrm{e}^{-\bm{\theta}^{T} \bm{H}_i}+1)^2}\bm{H}_i \bm{H}_i^T,
\end{align*}
is positive definite under the assumption.
Therefore, the score is strictly convex with respect to $\bm\theta$.
\end{proof}
The estimation can thus be carried out easily by the gradient method.
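A minimal Python sketch of this gradient descent, assuming a single feature ($k=1$); the default choice $h(x,y)=xy$, the step size, and the stopping rule are placeholders of ours:
\begin{verbatim}
import numpy as np

def estimate_theta(x1, y1, x2, y2, h=lambda x, y: x * y,
                   lr=0.1, tol=1e-8, max_iter=100000):
    # x1, y1, x2, y2: numpy arrays holding the N pairs of observations
    # H_i = h_i^{12} + h_i^{21} - h_i^{11} - h_i^{22}
    H = h(x1, y2) + h(x2, y1) - h(x1, y1) - h(x2, y2)
    theta = 0.0
    for _ in range(max_iter):
        grad = np.mean(H / (np.exp(-theta * H) + 1.0))  # the gradient above
        if abs(grad) < tol:
            break
        theta -= lr * grad
    return theta
\end{verbatim}
Since the empirical score is convex, simple gradient descent with a small step size suffices.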
\section{Numerical experiments}
\subsection{Experiments of Gaussian copulas}
\label{gaussian_copula_exp}
\subsubsection{Setting}
\label{setting}
The normalizing functions of a minimum information copula cannot be obtained in general.
In this subsection, we consider the Gaussian copula, which is one of the few examples of minimum information copulas for which the normalizing functions can be obtained in a closed form.
Another reason for experimenting with a Gaussian copula is that the data can be easily generated by a variable transformation of the $2$-dimensional normal distribution, and the maximum likelihood estimation (MLE) can be compared with the proposed method.
In Subsection \ref{general numerical experiments}, we conduct numerical experiments for general minimum information copulas for which normalizing functions cannot be obtained.
The Gaussian copula is a multidimensional normal distribution with its marginal distributions converted to uniform distributions.
The density function of the $2$-dimensional Gaussian copula is
\begin{align*}
\frac{1}{2 \pi (1-\rho^2)^{\frac{1}{2}}\phi(\xi)\phi(\eta)}\exp \left. \left( - \frac{1}{2(1-\rho^2)} (\xi^2-2 \rho \xi \eta + \eta^2) \right) \right|_{\xi=\Phi^{-1}(x),\eta=\Phi^{-1}(y)}
\end{align*}
for $(x,y)\in(0,1)^2$, where $\phi$ and $\Phi$ are the density function and cumulative distribution function of the $1$-dimensional standard normal distribution. The parameter $\rho$ is the correlation coefficient of the Gaussian variables $\xi$ and $\eta$.
As pointed out by \cite{jansen1997maximum}, the Gaussian copula is in fact a minimum information copula with
\begin{align*}
\theta=\frac{\rho}{1-\rho^2}
\end{align*}
and
\begin{align*}
&h(x,y)=\Phi^{-1}(x)\Phi^{-1}(y).
\end{align*}
The normalizing functions are
\begin{align*}
&\exp\{a(x)\}=\frac{1}{\sqrt{2\pi} (1-\rho^2)^{\frac{1}{4}}\phi(\xi)}\exp \left. \left( -\frac{\xi^2}{2(1-\rho^2)} \right) \right|_{\xi=\Phi^{-1}(x)}
\end{align*}
and $\exp\{b(y)\}=\exp\{a(y)\}$.
The parameters $\theta$ and $\rho$ have one-to-one correspondence as
\[
\rho=\frac{2\theta}{1+\sqrt{1+4\theta^2}}.
\]
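A quick numerical check of this correspondence (a sketch; the function names are ours):
\begin{verbatim}
import math

def theta_from_rho(rho):
    return rho / (1.0 - rho ** 2)

def rho_from_theta(theta):
    return 2.0 * theta / (1.0 + math.sqrt(1.0 + 4.0 * theta ** 2))

rho = 0.7
assert abs(rho_from_theta(theta_from_rho(rho)) - rho) < 1e-12
print(theta_from_rho(rho))  # 1.372549...
\end{verbatim}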
Thus, the density function of the Gaussian copula can be expressed relatively simply. In the following, the procedure of the numerical experiment is explained in detail.
\begin{setting}
\begin{enumerate}
\item First, we generate $2N$ data of the $2$-dimensional normal distribution with mean vector $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and covariance matrix $\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$.
Consider the first $N$ data $\begin{bmatrix} \xi_i^1 \\ \eta_i^1 \end{bmatrix}$ as obtained from system $1$ and the other $N$ data $\begin{bmatrix} \xi_i^2 \\ \eta_i^2 \end{bmatrix}$ as obtained from system $2$.
\item
Take the variable transformation
\begin{align}
\label{transform}
\begin{bmatrix}
x_i^j \\
y_i^j
\end{bmatrix}
=\begin{bmatrix}
\Phi(\xi_i^j)\\
\Phi(\eta_i^j)
\end{bmatrix}
=\begin{bmatrix}
\frac{1}{2}\left(1+\text{erf}\left( \frac{\xi_i^j}{\sqrt{2}} \right)\right) \\
\frac{1}{2}\left(1+\text{erf}\left( \frac{\eta_i^j}{\sqrt{2}} \right)\right)
\end{bmatrix}
\end{align}
($i=1,\ldots,N;j=1,2$)
in order to obtain $2N$ samples of the Gaussian copula, where $\text{erf}(x)=(2/\sqrt{\pi})\int_{0}^{x} \text{e}^{-t^2} \mathrm{d}t$ is the error function.
\item
Set the initial value $\theta=\theta_0$ and the step size $\mathrm{d}\theta$ appropriately.
\item Calculate the gradient of the empirical score
\begin{align*}
\hat{S}:=\frac{1}{N}\sum_{i=1}^{N}S(x_i^1,y_i^1,x_i^2,y_i^2,q)=-\frac{1}{N}\sum_{i=1}^{N}\log \left( \frac{q^{11}_{i}q^{22}_{i}}{q^{11}_{i}q^{22}_{i}+q^{12}_{i}q^{21}_{i}} \right).
\end{align*}
Substitute the sample $(x_i^j,y_i^j)$ $(i=1,\ldots,N;\,j=1,2)$ into $(\ref{partial_theta_S})$:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}\theta}\hat{S}=\frac{1}{N}\sum_{i=1}^{N}\frac{{H}_i}{\mathrm{e}^{-\theta {H}_i}+1}.
\end{align*}
\item
If $|\mathrm{d}\hat{S}/\mathrm{d}\theta|$ is sufficiently small, output $\theta$ as the estimator $\hat\theta$. Otherwise, $\theta\leftarrow \theta-(\mathrm{d}\hat{S}/\mathrm{d}\theta)\mathrm{d}\theta$ and go to Step 4.
\end{enumerate}
\end{setting}
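A minimal Python sketch of steps 1--2 (the seed and array layout are our choices); the resulting arrays can be fed to the gradient-descent sketch of the previous section with $h(x,y)=\Phi^{-1}(x)\Phi^{-1}(y)$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, rho = 2000, 0.7
cov = [[1.0, rho], [rho, 1.0]]
xi = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=2 * N)
u = norm.cdf(xi)                 # componentwise Phi, as in (transform)
x1, y1 = u[:N, 0], u[:N, 1]      # "system 1"
x2, y2 = u[N:, 0], u[N:, 1]      # "system 2"
# e.g. theta_hat = estimate_theta(x1, y1, x2, y2,
#          h=lambda x, y: norm.ppf(x) * norm.ppf(y))
\end{verbatim}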
The experiments for the MLE use the empirical score of the \rm{KL}-score
\begin{align*}
\hat{S}_{\rm{KL}}=-\frac{1}{N}\sum_{i=1}^{N}\log{\left(q^{11}_{i}q^{22}_{i}\right)}
\end{align*}
instead of the conditional KL score in Step $4$.
The other steps are the same.
\subsubsection{Results}
\label{result}
We describe the results of numerical calculations according to the experimental procedure in Subsection $\ref{setting}$.
First, $4000$ sample points of the $2$-dimensional normal distribution with mean vector $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and covariance matrix $\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$ with $\rho=0.7$ were obtained.
Then, the variable transformation $(\ref{transform})$ was applied in order to obtain samples of the Gaussian copula with parameter $\theta=\frac{\rho}{1-\rho^2}=1.372549$.
As the number of data grows, the estimation error converges to $0$; to see the rate of convergence, the estimation error was calculated for varying numbers of data.
Using $N=40,50,\cdots,1990,2000$ pairs of data, we substituted the data into the proposed score and obtained the optimal solution $\hat{\theta}$ as the estimator.
The estimation error was calculated as the absolute value of the difference from the true parameter $\theta=1.372549$.
In addition, maximum likelihood estimation was performed using the same data with \rm{KL}-scores.
The above experiment was repeated up to $100$ times and the estimation errors were averaged.
The estimation results are shown in Figure~\ref{Gaussian_copula_error_MLE_CKL}.
The results are also plotted on the logarithmic graph of both axes and linearly fitted in Figure~\ref{Gaussian_copula_log_error_MLE_CKL}.
Linear fitting is the least-squares fitting of $a,b$ of $y=x^a \exp(b)$ to the data.
The red line is the estimation error of the maximum likelihood estimation (\rm{KL}-score) and the green is the estimation error of the proposal score (\rm{CKL}-score).
The blue line is the estimation error fitting of the maximum likelihood estimation (\rm{KL}-score), where the coefficients of the fitting are
\begin{align*}
&a=-0.517919\\
&b=0.445423.
\end{align*}
The purple line is the estimation error fitting of the proposed score (\rm{CKL}-score), where
\begin{align*}
&a=-0.49438\\
&b=0.97278.
\end{align*}
The speed of convergence of the error is roughly $\frac{1}{\sqrt{N}}$ in the number $N$ of data, since the fitted exponents $a$ of the blue and purple lines are close to $-0.5$.
This result is consistent with Corollary~\ref{corollary:asymptotic}.
The maximum likelihood estimation has better results with respect to error convergence than the proposed method, as expected.
However, when estimating parameters with minimum information copulas, the maximum likelihood estimation is only possible with the Gaussian copulas used in this experiment.
In most other cases, the maximum likelihood estimation cannot be performed because the normalizing functions cannot be obtained.
In such cases, the greatest strength of the proposed score is that it still allows estimation with comparable accuracy.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm,height=6.5cm]{Gaussian_copula_error_MLE_CKL.eps}
\caption{The estimation result of Gaussian copulas. The mean absolute error (vertical axis) with respect to the number of data (horizontal axis) is shown. The red line is the KL score (MLE) and the green line is the CKL score (proposed).}
\label{Gaussian_copula_error_MLE_CKL}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm,height=6.5cm]{Gaussian_copula_log_error_MLE_CKL.eps}
\caption{The estimation result of Gaussian copulas. The mean absolute error (vertical axis) with respect to the number of data (horizontal axis) is shown in logarithmic scale. The red line is the KL score (MLE), green line is the CKL score (proposed), and the blue and purple lines are the linear fitting of the two lines.}
\label{Gaussian_copula_log_error_MLE_CKL}
\end{figure}
\subsection{Experiments of general minimum information copulas}
\label{general numerical experiments}
\subsubsection{Setting}
Section \ref{gaussian_copula_exp} described the numerical experimental results of the Gaussian copulas. In this section, we present numerical experiments on general minimum information copulas. Since the normalizing functions of the minimum information copula are not generally obtainable, exact sampling is difficult. Thus, the generation of data itself is problematic before parameter estimation. In this paper, we generate data using the approximate sampling method proposed by \cite{Sei&Yano}. The sampling procedure is described below.
\begin{setting}[The approximate sampling method]
\begin{enumerate}
\item First, we decide the function $h(x,y)$ and parameter $\theta \in \mathbb{R}$. Once these are determined, the minimum information copula is uniquely determined, although it cannot be written explicitly.
\item Then, generate a sufficiently large number of independent and identically distributed (i.i.d.) data $\begin{bmatrix} x_i \\ y_i \end{bmatrix}$ $(i=1,\ldots,2N)$ from the uniform distribution on $[0,1]^2$.
\item Choose $i,j$ that satisfy $1\leq i <j \leq 2N$ randomly and flip them with probability
\begin{align*}
\rho
&=\frac{q(x_i,y_j)q(x_j,y_i)}{q(x_i,y_i)q(x_j,y_j)+q(x_i,y_j)q(x_j,y_i)}\\
&=\frac{\exp\{ \theta(h(x_i,y_j)+h(x_j,y_i)) \}}{\exp\{ \theta(h(x_i,y_i)+h(x_j,y_j)) \}+\exp\{ \theta(h(x_i,y_j)+h(x_j,y_i)) \}}.
\end{align*}
In other words, change the pair $\begin{bmatrix} x_i \\ y_i \end{bmatrix}$ and $\begin{bmatrix} x_j \\ y_j \end{bmatrix}$ to the pair $\begin{bmatrix} x_i \\ y_j \end{bmatrix}$ and $\begin{bmatrix} x_j \\ y_i \end{bmatrix}$.
\item Repeat step $3$ enough times. Then, the $2N$ data are approximately an i.i.d.\ sample from the minimum information copula decided in step $1$.
\item We consider the first $N$ data $\begin{bmatrix} x_i^1 \\ y_i^1 \end{bmatrix}$ as obtained from system $1$ and the other $N$ data $\begin{bmatrix} x_i^2 \\ y_i^2 \end{bmatrix}$ as obtained from system $2$.
\end{enumerate}
\end{setting}
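A minimal Python sketch of this sampler (the function names, the seed, and the number of flip proposals are our tuning choices):
\begin{verbatim}
import numpy as np

def approximate_sample(h, theta, two_N, n_steps, rng):
    x = rng.uniform(size=two_N)
    y = rng.uniform(size=two_N)
    for _ in range(n_steps):
        i, j = rng.choice(two_N, size=2, replace=False)
        keep = np.exp(theta * (h(x[i], y[i]) + h(x[j], y[j])))
        flip = np.exp(theta * (h(x[i], y[j]) + h(x[j], y[i])))
        if rng.uniform() < flip / (keep + flip):   # flip with probability rho
            y[i], y[j] = y[j], y[i]
    return x, y

rng = np.random.default_rng(0)
x, y = approximate_sample(lambda a, b: a * b, theta=5.0,
                          two_N=4000, n_steps=200000, rng=rng)
\end{verbatim}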
Now we have an approximate sample from the given minimum information copula and can perform parameter estimation using the proposed score.
Once the initial value $\theta_0$ and the step size $\mathrm{d}\theta$ are determined appropriately, the rest is done in the same way as in steps $4$ and $5$ of Subsection~\ref{setting}. The above numerical experiments are repeated up to $100$ times from sampling to estimation, and the average of the absolute errors is obtained.
\subsubsection{Result}
First, to roughly check the behavior of the approximate sampling, we conduct the experiment with Gaussian copulas again.
The exact and approximate sampling were used, respectively, to obtain a sample of Gaussian copulas with the same parameters $\theta=\frac{\rho}{1-\rho^2}=1.372549$ as in Subsection~\ref{result}.
Using $N=40,50,\cdots,1990,2000$ pairs of data, we substituted the data into the proposed score and obtained the optimal solution $\hat{\theta}$ as the estimator.
The estimation error was calculated as the absolute value of the difference from the true parameter $\theta=1.372549$.
In addition, maximum likelihood estimation was performed using the same data with \rm{KL}-scores.
The above experiment was repeated up to $100$ times and the estimation errors were averaged.
The results of estimating data with exact and approximate sampling are shown in
Figure~\ref{Gaussian_copula_log_error}, where both axes are in logarithmic scale.
The red line shows the estimation error for the exact sampling and the maximum likelihood estimation (\rm{KL}-score), and the green line is the estimation error for the exact sampling and the proposed score (\rm{CKL}-score).
The blue line is the estimation error of the approximate sampling and the maximum likelihood estimation (\rm{KL}-score), and the purple line is the estimation error of the approximate sampling and proposed score (\rm{CKL}-score).
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm,height=6.5cm]{Gaussian_copula_log_error.eps}
\caption{The estimation result of Gaussian copulas with exact and approximate sampling. The mean absolute error (vertical axis) with respect to the number of data (horizontal axis) is shown in logarithmic scale. The four lines mean the exact sampling $+$ KL score (red), the exact sampling $+$ CKL score (green), the approximate sampling $+$ KL score (blue) and the approximate sampling $+$ CKL score (purple).}
\label{Gaussian_copula_log_error}
\end{figure}
The result of linear fitting for the four lines is summarized in Table~\ref{table:linear-fitting-Gaussian}.
The figure and table confirm that the approximate sampling seems to generate data for the Gaussian copulas successfully.
\begin{table}[htbp]
\caption{The coefficients of linear fitting for the four lines in Figure~\ref{Gaussian_copula_log_error}.}
\label{table:linear-fitting-Gaussian}
\begin{center}
\begin{tabular}{cc|cc}
score& sampling & $a$& $b$ \\
\hline
KL & exact &$-0.517919$& 0.445423\\
CKL & exact & $-0.49438$& 0.97278\\
KL & approximate& $-0.492425$& 0.37061\\
CKL & approximate& $-0.561812$& 1.52274
\end{tabular}
\end{center}
\end{table}
Since the rough behavior of approximate sampling has been confirmed, we will discuss the results of experiments on sampling and parameter estimation of general minimum information copulas, which was the original purpose of this paper.
We generated $N=2000$ pairs of samples of the minimum information copula with parameters $\theta=5.0, 10.0$ and functions $h(x,y)=xy, x^2y$. We used $N=20,30,40,\cdots,1990,2000$ pairs of them, substituted the data into the proposed score, and estimated the optimal solution as $\hat{\theta}$. The estimation error was calculated as the absolute value of the difference from the true parameter $\theta$.
The above experiment was repeated up to $100$ times and the average of the estimation error was taken.
The estimation results with the proposed score for the general minimum information copulas are shown in
Figure~\ref{general_copula_log_error}, where both axes are in logarithmic scale.
The red line is the estimation error for $\theta=5.0,h(x,y)=xy$ and the green line is the estimation error for $\theta=5.0,h(x,y)=x^2y$.
The blue line is the estimation error for $\theta=10.0,h(x,y)=xy$ and the purple line is the estimation error for $\theta=10.0,h(x,y)=x^2y$.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm,height=7cm]{general_copula_log_error.eps}
\caption{The estimation result of general minimum information copulas. The mean absolute error (vertical axis) with respect to the number of data (horizontal axis) is shown in logarithmic scale. The used parameters are
$\theta=5.0,h(x,y)=xy$ (red),
$\theta=5.0,h(x,y)=x^2y$ (green),
$\theta=10.0,h(x,y)=xy$ (blue) and
$\theta=10.0,h(x,y)=x^2y$ (purple).
}
\label{general_copula_log_error}
\end{figure}
The result of linear fitting for the four lines is summarized in Table~\ref{table:linear-fitting-general}.
This confirms that estimation with the proposed method is possible.
\begin{table}[htbp]
\caption{The coefficients of linear fitting for the four lines in Figure~\ref{general_copula_log_error}.}
\label{table:linear-fitting-general}
\begin{center}
\begin{tabular}{cc|cc}
$\theta$& $h(x,y)$ & $a$& $b$ \\
\hline
5.0 & $xy$ & $-0.563269$& 3.02252\\
5.0 & $x^2y$ & $-0.572157$& 2.98715\\
10.0& $xy$& $-0.548056$& 3.30813\\
10.0& $x^2y$& $-0.581245$& 3.52886
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
In this paper, we propose a generally homogeneous $2$-point local strictly proper score for minimum information copulas.
The greatest strength of this score is that it can be calculated without normalizing functions, which are difficult to compute for minimum information copulas.
The estimator based on the score is asymptotically consistent, and numerical experiments have confirmed that the behavior is consistent with the theory.
Future work includes theoretical computation of asymptotic variance.
In addition, in this paper, we considered the data
\begin{align*}
(x_1,y_1),\cdots,(x_n,y_n)
\end{align*}
as $N=\lfloor n/2\rfloor$ pairs
\begin{align*}
(x^1_1,y^1_1,x^2_1,y^2_1),\cdots,(x^1_N,y^1_N,x^2_N,y^2_N).
\end{align*}
However, there are $n(n-1)/2$ pairs in the data: $(x_i,y_i,x_j,y_j)$ for $1\leq i<j\leq n$.
In terms of extracting the full information of the data, it would be better to consider the empirical score of all combinations
\begin{align*}
&\hat{S}=-\frac{2}{n(n-1)}\sum_{1\leq i < j\leq n}\log\left( \frac{q_{ii}q_{jj}}{q_{ii}q_{jj}+q_{ij}q_{ji}} \right),\quad q_{ij}:=q(x_i,y_j).
\end{align*}
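A minimal Python sketch of this all-pairs score (assuming a vectorized feature $h$, with the placeholder $h(x,y)=xy$ and a scalar $\theta$), using the same simplification as before, $-\log(q_{ii}q_{jj}/(q_{ii}q_{jj}+q_{ij}q_{ji}))=\log(1+e^{\theta H_{ij}})$ with $H_{ij}=h_{ij}+h_{ji}-h_{ii}-h_{jj}$:
\begin{verbatim}
import numpy as np

def all_pairs_score(x, y, theta, h=lambda a, b: a * b):
    # x, y: numpy arrays of the n observations
    n = len(x)
    hx = h(x[:, None], y[None, :])       # hx[i, j] = h(x_i, y_j)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            H = hx[i, j] + hx[j, i] - hx[i, i] - hx[j, j]
            total += np.log1p(np.exp(theta * H))
    return total / (n * (n - 1) / 2)
\end{verbatim}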
Comparison of the accuracy and computational speed of these two approaches is quite interesting.
The theory of U-statistics (e.g.\ Chapter 12 of \cite{van2000asymptotic}) may be useful for analysis of the modified score.
Furthermore, other than the proposed score, scores with general homogeneity, propriety, and multi-point locality should also be discussed.
In this paper, the score was based on the form of the KL score, but it is not yet known whether a score with similar properties can be created by mimicking the form of the \rm{Hyv\"{a}rinen} score, for example. It is also possible that scores with similar properties can be created using a form that does not take logarithms.
Thus, the authors believe that there is still a way to create such a score.
By knowing the entire set of scores with these properties, we can see if there are scores among them that are even more accurate or that satisfy the properties we want to add.
\section*{Acknowledgments}
We would like to thank Keisuke Yano for helpful comments.
\input{main.bbl}
\end{document}
\section{Introduction}
The notion of an infinite permutation was introduced in \cite{flaas}, where the periodic properties and low complexity of permutations were investigated.
Similarly to the definition of subword complexity of infinite words, we can introduce the notion of the factor complexity of a permutation as the number of distinct subpermutations of a given length.
The notion of a permutation generated by an infinite non-periodic word was introduced in \cite{mak}.
In \cite{sturm} Makarov calculated the factor complexity of permutations generated by a well-known family of Sturmian words.
In \cite{widmer} Widmer calculated the factor complexity of the permutation generated by the Thue-Morse word.
In this paper we find a formula for the factor complexity of permutations generated by the fixed points of binary uniform morphisms from a certain class. Since the Thue-Morse word belongs to this class, we obtain an alternative way to compute the factor complexity of the Thue-Morse permutation.
In Section 2 we introduce basic definitions to be used below.
In Section 3, we introduce the class $Q$ of morphisms for which the main theorem of this paper is stated in Section 9.
In Sections 4--8 we state some auxiliary assertions needed to prove the main theorem.
In Section 10 we give an alternative proof of the formula for the factor complexity of the permutation generated by the Thue-Morse word.
\section{Basic definitions}
Let $\Sigma$ be a finite alphabet. Everywhere below we will use only the two-letter alphabet $\Sigma=\{0,1\}$.
A right infinite word over the alphabet $\Sigma$ is a word of the form $\omega=\omega_{1}\omega_{2}\omega_{3}\ldots$, where each $\omega_{i}\in{\Sigma}$. A (finite) word $u$ is called a subword of a (finite or infinite) word $v$ if $v=s_{1}us_2$ for some words $s_1$ and $s_2$ which may be empty. The set of all finite subwords of the word $\omega$ is denoted by $F(\omega)$.
For the word $\omega$ we define the binary real number $R_{\omega}(i)=0.\omega_{i}\omega_{i+1}\ldots=\sum_{k\geq{0}}{\omega_{i+k}2^{-(k+1)}}$.
The mapping $h:\Sigma^{\ast}\longrightarrow{\Sigma^{\ast}}$ is called a morphism if $h(xy)=h(x)h(y)$ for any words $x,y\in{\Sigma^{\ast}}$.
We say that $\omega$ is a {\em fixed point} of a morphism $\varphi$ if $\varphi(\omega)=\omega$.
Clearly, every morphism is uniquely determined by the images of symbols, which we call {\em blocks}.
A morphism is called {\em uniform} if its blocks are of the same length.
We say that a morphism $\varphi:\Sigma^{\ast}\longrightarrow{\Sigma^{\ast}}$ is {\em marked} if its blocks are of the form $\varphi(a_i)=b_ixc_i$, where $x$ is an arbitrary word, $b_i$ and $c_i$ are symbols of the alphabet $\Sigma$, and all $b_i$ (as well as all $c_i$) are distinct.
In what follows, we will consider only uniform marked morphisms with blocks of length $l$.
An {\em interpretation} of a word $u\in\Sigma^*$ under the morphism $\varphi$ is a triple $s=\langle{v,i,j}\rangle$, where $v=v_{1}\ldots{v_{k}}$ is a word over the alphabet $\Sigma$, $i$ and $j$ are nonnegative integers such that $0\leq{i}<|{\varphi(v_1)}|$ and $0\leq{j}<|{\varphi(v_k)}|$, and the word obtained from $\varphi(v)$ by erasing $i$ symbols on the left and $j$ symbols on the right is $u$.
Moreover, if $v$ is a subword of $\omega$, then $s$ is called an interpretation on $\omega$.
The word $v$ is called an {\em ancestor} of the word $u$.
In what follows we shall consider only interpretations of subwords of the word $\omega$.
We say that $(u_1,u_2)$ is a {\em synchronization point} of $u\in{F(\omega)}$ if $u=u_1u_2$ and $\forall{v_1, v_2}\in{\Sigma^{*}},\forall{s}\in{F(\omega)}$ $\exists{s_1,s_2}\in{F(\omega)}$ such that $[v_{1}uv_{2}=\varphi(s)\Rightarrow(s=s_{1}s_{2},v_{1}u_{1}=\varphi(s_1),u_{2}v_{2}=\varphi(s_2))]$. A fixed point $\omega=\varphi(\omega)$ of the morphism $\varphi$ is called {\em circular} (\cite{frid}), if any subword $v$ of word $\omega$ of length at least $L_{\omega}$ contains at least one point of synchronization.
For uniform morphisms, this means the uniqueness of the partition of the word $v$ into blocks.
In \cite{frid} it was proved that the nonperiodic fixed points of uniform binary morphisms with $\omega_1=0$ are circular, except for the case when $\varphi(1)=1^n$.
An {\em occurrence} of word $u\in\Sigma^*$ in the word $\omega$ is a pair $(u,m+1)$ such that $u=\omega_{m+1}\omega_{m+2}\ldots{\omega_{m+n}}$.
It is easy to see that a word can have many different occurrences.
Since the fixed point $\omega$ is circular, the interpretations of all occurrences of the word $u$ are the same and equal to $\langle{v,i,j}\rangle$.
An occurrence $(v,p+1)$ of a word $v$ of length $k$ is called the {\em ancestor of the occurrence} $(u,m+1)$ of the word $u$ if $m=pl+i$ and $0\leq{i}<l$.
It is easy to see that for $|u|\geq{L_{\omega}}$ the word $u$ has exactly one ancestor, and any occurrence $(u,m+1)$ of the word $u$ also has exactly one ancestor.
Let $|u|\geq{L_{\omega}}$.
A sequence $u_{0},u_1,\ldots{,u_m}$ of subwords of word $\omega$ is called a {\em chain of ancestors} of the word $u$, if $u_{i+1}$ is the ancestor of $u_i$ for any $0\leq{i}\leq{m-1}$ and $u_0=u$.
The chain of ancestors of word $u$ will be denoted as $u\rightarrow{u_1}\rightarrow{\ldots}\rightarrow{u_m}$.
We say that $u$ is a {\em descendant} of $v$ if $v$ belongs to a chain of ancestors of $u$.
Now we introduce the main object of this paper.
Let $\omega$ be a right infinite nonperiodic word over the alphabet $\Sigma$.
We define the {\em infinite permutation} generated by the word $\omega$ as the ordered triple $\delta=\langle\mathbb{N},{<}_{\delta},<\rangle$, where ${<}_{\delta}$ and $<$ are linear orders on $\mathbb{N}$.
The order ${<}_{\delta}$ is defined as follows: $i{<}_{\delta}j$ if and only if $R_{\omega}(i)<R_{\omega}(j)$, and $<$ is the natural order on $\mathbb{N}$.
Since $\omega$ is a non-periodic word, all $R_{\omega}(i)$ are distinct, and the definition above is correct.
We define a function $\gamma:\mathbb{R}^{2}\rightarrow\{<,>\}$ which returns the order relation between two distinct real numbers.
We say that a permutation $\pi=\pi_{1}\ldots{\pi_n}\in{S_n}$ is a {\em subpermutation} of length $n$ of an infinite permutation $\delta$ if $\gamma(\pi_s,\pi_t)=\gamma(R(i+s),R(i+t))$ for $1\leq{s}<{t}\leq{n}$ and for a fixed positive integer $i$.
We define the set $Perm(n)=\{\pi(i,n+i-1)|i\geq{1}\}$, where $\pi(i,n+i-1)=\pi_{i}\ldots{\pi_{n+i-1}}$ is the subpermutation induced by the sequence $R(i),\ldots,R(n+i-1)$ (in the sense that $\gamma(\pi_{i+s_1},\pi_{i+s_2})=\gamma(R(i+s_1),R(i+s_2))$ for $0\leq{s_{1}}<s_{2}\leq{n-1}$).
Now we define the {\em permutation complexity} of the word $\omega$ (or equivalently, the factor complexity of the permutation $\delta_{\omega}$) as $\lambda(n)=|Perm(n)|$.
We say that an occurrence $(u,m+1)$ of the word $u$ {\em generates} a permutation $\pi$ if $\pi$ is induced by a sequence of numbers $R(m+1)\ldots{R(m+n)}$.
A subword $u$ of the word $\omega$ {\em generates} the permutation $\pi$ if there is an occurrence $(u,m+1)$ of this word which generates $\pi$.
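As a brute-force illustration of these definitions (not the method developed below), the following Python sketch approximates $\lambda(n)$ for small $n$ on a long prefix of the Thue-Morse word: comparing $R_{\omega}(i)$ with $R_{\omega}(j)$ amounts to comparing the suffixes $\omega_i\omega_{i+1}\ldots$ lexicographically, and the truncation length \texttt{tail} is a heuristic of ours:
\begin{verbatim}
def thue_morse_prefix(k):
    w = "0"
    for _ in range(k):          # apply 0 -> 01, 1 -> 10, k times
        w = w.replace("0", "a").replace("1", "10").replace("a", "01")
    return w

def permutation_complexity(w, n, tail=64):
    patterns = set()
    for i in range(len(w) - n - tail):
        keys = [w[i + s : i + s + tail] for s in range(n)]
        # the pattern of R(i+1),...,R(i+n), encoded as a rank tuple
        patterns.add(tuple(sorted(range(n), key=keys.__getitem__)))
    return len(patterns)

w = thue_morse_prefix(14)       # a prefix of 2**14 symbols
print([permutation_complexity(w, n) for n in range(2, 8)])
\end{verbatim}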
\section{Morphisms considered}
We say that a uniform marked binary morphism $\varphi$ with blocks of length $l$ belongs to the class $Q$ if one of the following conditions is fulfilled: either $\varphi(0)=X=01^{n}0x1,\varphi(1)=Y=10^{m}1y0$, where ${n,m}\in\mathbb{N}$, each of the words $1^{n}$ and $0^{m}$ occurs in the blocks of the morphism exactly once, and the word $X$ (respectively, $Y$) does not end with $1^{n-1}$ (respectively, $0^{m-1}$); or $\varphi(0)=01^{n},\varphi(1)=10^{n}$, where $n=l-1$.
It is easy to see that the fixed point $\omega=\lim\limits_{n \to\infty}{\varphi^{n}(0)}$ of any morphism $\varphi$ which belongs to the class $Q$ is circular because $\varphi(1)\neq{1^n}$ for any $n$.
Everywhere below the word $\varphi(0)$ will be called the block of the first type, and $\varphi(1)$ is called the block of the second type.
\textbf{Example.} The morphism $\varphi(0)=011101,\varphi(1)=100010$ belongs to $Q$, whereas the morphism $\varphi(0)=01011,\varphi(1)=10000$ does not belong to $Q$.
Consider a fixed point $\omega=\varphi(\omega)$ of a morphism $\varphi\in{Q}$. Then $\omega$ is divided into blocks, which are the images of its symbols. Such a partition is called {\em correct}.
\begin{lemma}
Let $\omega$ be a fixed point of the morphism $\varphi$, where $\varphi\in{Q}$. Then the following statements are true:
\begin{enumerate}
\item Let $\omega_{i}=\omega_{j}=0$ and $i\equiv 1 \bmod l$, $j\not\equiv 1 \bmod l$. Then $R_{\omega}(i)>R_{\omega}(j)$.
\item Let $\omega_{i}=\omega_{j}=1$ and $i\equiv 1 \bmod l$, $j\not\equiv 1 \bmod l$. Then $R_{\omega}(i)<R_{\omega}(j)$.
\end{enumerate}
\end{lemma}
\begin{lemma}
Let $\omega$ be a fixed point of the morphism $\varphi$, where $\varphi\in{Q}$. Let $\omega_{i}=\omega_{j}$, where $i\equiv{i'}\pmod{l}$, $j\equiv{j'}\pmod{l}$ and $0\leq{i',j'}\leq{l-1}$, where $i'$ and $j'$ are fixed. If $i'\neq{j'}$, or if $\omega_i$ and $\omega_j$ lie in blocks of different types in the correct partition of $\omega$ into blocks, then the relation ${\gamma(R_{\omega}(i),R_{\omega}(j))}$ is uniquely defined by $i'$, $j'$ and the types of the respective blocks.
\end{lemma}
\begin{lemma}
Let $\omega_{i}=\omega_j$ and $R_{\omega}(i)<R_{\omega}(j)$. Then the inequality $R((i-1)l+r)<R((j-1)l+r)$ holds for all $1\leq{r}\leq{l}$.
\end{lemma}
\section{Equivalence of permutations}
In this section we introduce the concept of equivalent permutations. Let $z=z_{1}z_{2}\ldots z_{k}$ be a permutation belonging to $S_k$.
An {\em element} of the permutation $z$ is the number $z_i$, where $1\leq{i}\leq{k}$.
We will say that two permutations $x=x_{1}x_{2}\ldots x_{k}$ and $y=y_{1}y_{2}\ldots y_{k}$ are {\em equivalent} if they differ only in the relation between their extreme elements, i.e., $\gamma(x_{1},x_{k})\neq{\gamma(y_{1},y_{k})}$, but $\gamma(x_{i},x_{j})={\gamma(y_{i},y_{j})}$ for all other pairs $i,j$. We will denote this equivalence by $x\sim{y}$.
\begin{lemma}
Let $x=x_{1}\ldots{x_{k}}$ be a finite permutation. Then a permutation $y$ such that $x\sim{y}$ exists if and only if $|x_{1}-x_{k}|=1$.
\end{lemma}
\section{Bad, narrow and wide words}
For an arbitrary subword $v$ of the word $\omega$ we define the sets $M_{v}$ and $N_{v}$, where $N_{v}$ is the set of all pairs of equivalent permutations generated by $v$, and $M_v$ is the set of the remaining permutations generated by $v$; we write $m_{v}=|M_{v}|$ and $n_{v}=|N_{v}|$.
A word $u$ will be called {\em bad} if the set $N_u$ is not empty, i.e., if $u$ generates at least one pair of equivalent permutations.
Let $u$ be a subword of word $\omega$ of length $|u|=n$. The number of permutations generated by $u$ is denoted by $f(u)$.
\begin{lemma}
Let $u$ be a word of length $|u|=n\geq{L_{\omega}}$, and $u'$ be the ancestor of $u$. Then the following statements are true:
\begin{enumerate}
\item $f(u)\leq{f(u')}$,
\item If $N_{u'}={\emptyset}$, then $N_{u}=\emptyset$ and $f(u)=f(u')$.
\end{enumerate}
\end{lemma}
\begin{lemma}
Let $u$ be a bad word and $|u|=n\geq{L_{\omega}}$, and $u'$ be the ancestor of $u$. Then $u'$ is a bad word and $f(u)=f(u'),m_{u}=m_{u'},n_{u}=n_{u'}$.
\end{lemma}
It is worth noting that from the proof of Lemma 6 it follows that if $u$ is a bad word of length $|u|=n\geq{L_{\omega}}$, then $|u|\equiv 1\pmod{l}$.
The set of all words of length less than $L_\omega$, having descendants of length at least $L_\omega$ is denoted by $A$.
The set of bad words of length $n$, whose chain of ancestors is $u\rightarrow{u_1}\rightarrow{u_2}\rightarrow{\ldots}\rightarrow{u_{m}=a}$, where $m\in{\mathbb{N}}$ ($m$ is not fixed) and $a\in{A}$, is denoted by $F_{a}^{bad}(n)$. The cardinality of the set $F_{a}^{bad}(n)$ is denoted by $C_{a}^{bad}(n)$.
\begin{lemma}
Let $u\in{F_{a}^{bad}}(n)$, where $n\geq{L_{\omega}}$. Then $f(u)=m_{a}+2n_{a}$.
\end{lemma}
A word $u$ will be called {\em narrow} if its chain of ancestors is $u\rightarrow{\ldots}\rightarrow{u_{k-1}}\rightarrow{u_k}\rightarrow{\ldots}\rightarrow{u_{m}=a}$, where $a\in{A}$, $u_{k}$ is the first bad word in the chain of ancestors, and for the interpretation $\langle{u_{k},i,j}\rangle$ of the word $u_{k-1}$ we have $i+1>l-j$.
\begin{lemma}
Let $u$ be a narrow word with $|u|=n\geq{L_{\omega}}$, $u'$ be an ancestor of $u$, and $u'$ be a bad word. Then $n_{u}=0$ and $f(u)=m_{u'}+n_{u'}$.
\end{lemma}
The set of narrow words of length $n$ whose chain of ancestors is $u\rightarrow{u_1}\rightarrow{u_2}\rightarrow{\ldots}\rightarrow{u_{m}=a}$, where $m\in{\mathbb{N}}$ ($m$ is not fixed), is denoted by $F_{a}^{nar}(n)$. The cardinality of the set $F_{a}^{nar}(n)$ is denoted by $C_{a}^{nar}(n)$.
\begin{lemma}
Let $u\in{F_{a}^{nar}}(n)$, where $|u|=n\geq{L_{\omega}}$. Then $f(u)=m_{a}+n_{a}$.
\end{lemma}
A word $u$ will be called {\em wide} if its chain of ancestors is $u\rightarrow{\ldots}\rightarrow{u_{k-1}}\rightarrow{u_k}\rightarrow{\ldots}\rightarrow{u_{m}=a}$, where $a\in{A}$, $u_{k}$ is the first bad word in the chain of ancestors, and for the interpretation $\langle{u_{k},i,j}\rangle$ of the word $u_{k-1}$ we have $i+1<l-j$.
\begin{lemma}
Let $u$ be a wide word with $|u|=n\geq{L_{\omega}}$, $u'$ be an ancestor of $u$, and $u'$ be a bad word. Then $n_{u}=0$ and $f(u)=m_{u'}+2n_{u'}$.
\end{lemma}
The set of wide words of length $n$ whose chain of ancestors is $u\rightarrow{u_1}\rightarrow{u_2}\rightarrow{\ldots}\rightarrow{u_{m}=a}$, where $m\in{\mathbb{N}}$ ($m$ is not fixed), is denoted by $F_{a}^{wide}(n)$. The cardinality of the set $F_{a}^{wide}(n)$ is denoted by $C_{a}^{wide}(n)$.
\begin{lemma}
Let $u\in{F_{a}^{wide}}(n)$, where $|u|=n\geq{L_{\omega}}$. Then $f(u)=m_{a}+2n_{a}$.
\end{lemma}
\section{Algorithm for finding $f(u)$ }
Suppose that $u$ is a subword of $\omega$ and $|u|=n$.
In this section, we calculate $\sum_{|u|=n}f(u)$.
The set of all subwords of length $n$ of the word $\omega$, whose chain of ancestors is $u\rightarrow{u_1}\rightarrow{u_2}\rightarrow{\ldots}\rightarrow{u_{m}=z}$, where $m\in{\mathbb{N}}$ ($m$ is not fixed), is denoted by $F_{z}(n)$.
The cardinality of the set $F_{z}(n)$ is denoted by $C_{z}(n)$.
We consider the set $A$ introduced in the previous section.
Let $A=A_1\cup{A_2}$ be a partition of the set $A$, where $A_1$ is the set of bad words belonging to the set $A$, and $A_2$ is the set of the remaining words of $A$. Thus, for a word $u$ there are two possibilities:
1) $a\in{A_2}$. In this case, Lemma 5 implies that $f(u)=f(a)=m_{a}$;
2) $a\in{A_1}$. In this case, there are two subcases: if $u$ is a bad or a wide word, due to Lemmas 7 and 11 we obtain $f(u)=f(a)=m_{a}+2n_{a}$;
if $u$ is a narrow word, then by Lemma 9 we obtain $f(u)=m_{a}+n_{a}$.
\begin{theorem}
$\sum_{|u|=n}f(u)=\sum_{a_1\in{A_1}}[C_{a_1}^{nar}(n)(m_{a_1}+n_{a_1})+(C_{a_1}^{bad}(n)+C_{a_1}^{wide}(n))(m_{a_1}+2n_{a_1})]+\sum_{a_2\in{A_2}}C_{a_2}(n)m_{a_2}$.
\end{theorem}
Now for the calculation of $\sum_{|u|=n}f(u)$ it remains to compute $C_{a}^{nar}(n)$, $C_{a}^{wide}(n)$, $C_{a}^{bad}(n)$, $C_{a}(n)$. Let $n=xl+r$, where $0\leq{r}\leq{l-1}$. It is easy to see that for $xl+r\geq{L_\omega}$ the following recurrence relations hold:
\begin{enumerate}
\item $C_{a}^{bad}(xl+1)=l{C_{a}^{bad}(x+1)}$. We note that the remark to Lemma 6 implies that $C_{a}^{bad}(xl+r)=0$ for $r\neq{1}$.
\item $C_{a}^{nar}(xl+r)=(r-1)C_{a}^{nar}(x+2)+(r-1)C_{a}^{bad}(x+2)+(l-r+1)C_{a}^{nar}(x+1)$ for $r\geq{1}$.
$C_{a}^{nar}(xl)=(l-1)C_{a}^{nar}(x+1)+(l-1)C_{a}^{bad}(x+1)+C_{a}^{nar}(x)$.
\item $C_{a}^{wide}(xl+r)=(r-1)C_{a}^{wide}(x+2)+(l-r+1)C_{a}^{wide}(x+1)+\\(l-r+1)C_{a}^{bad}(x+1)$ for $r\geq{2}$.
$C_{a}^{wide}(xl+1)=l{C_{a}^{wide}(x+1)}$.
$C_{a}^{wide}(xl)=(l-1)C_{a}^{wide}(x+1)+C_{a}^{wide}(x)+C_{a}^{bad}(x)$.
\item $C_{a}(xl+r)=(r-1)C_{a}(x+2)+(l-r+1)C_{a}(x+1)$ for $r\geq{1}$.
$C_{a}(xl)=(l-1)C_{a}(x+1)+C_{a}(x)$.
\end{enumerate}
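These recurrences can be evaluated by memoized recursion once the base values for lengths below $L_{\omega}$ have been tabulated by direct inspection of the word. A minimal Python sketch (the constants and the empty base tables below are hypothetical placeholders, shown for $C_{a}^{bad}$ and $C_{a}^{nar}$ only; the remaining quantities are handled in the same way):
\begin{verbatim}
from functools import lru_cache

l = 6           # block length of the morphism (hypothetical)
L_omega = 20    # circularity bound (hypothetical)
base_bad = {}   # n -> C_a^{bad}(n) for n < L_omega, filled by inspection
base_nar = {}   # n -> C_a^{nar}(n) for n < L_omega

@lru_cache(maxsize=None)
def c_bad(n):
    if n < L_omega:
        return base_bad.get(n, 0)
    x, r = divmod(n, l)
    return l * c_bad(x + 1) if r == 1 else 0

@lru_cache(maxsize=None)
def c_nar(n):
    if n < L_omega:
        return base_nar.get(n, 0)
    x, r = divmod(n, l)
    if r >= 1:
        return ((r - 1) * (c_nar(x + 2) + c_bad(x + 2))
                + (l - r + 1) * c_nar(x + 1))
    return (l - 1) * (c_nar(x + 1) + c_bad(x + 1)) + c_nar(x)
\end{verbatim}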
\section{Special words}
Recall that a subword $v$ of the word $\omega$ is called special if $v0$ and $v1$ are also subwords of $\omega$.
Note that the unique interpretation of any special word $v$ of length at least $L_w$ is equal to $\langle{v',i,0}\rangle$.
Indeed, if $j>0$, then $v$ is uniquely complemented to the right to a full block, and thus only one of the words $v0$ and $v1$ is a subword of $\omega$.
\begin{lemma}
Let $(v0,m_1)$ and $(v1,m_2)$ be some occurrences of words $v0$ and $v1$, where $v$ is a special word with the ancestor $v'$.
Let $(v'0,m'_{1})$ and $(v'1,m'_{2})$ be the ancestors of occurrences of $(v0,m_1)$ and $(v1,m_2)$, where $(v'0,m'_{1})$ and $(v'1,m'_{2})$ generate the same permutation.
Then $(v0,m_1)$ and $(v1,m_2)$ also generate the same permutation.
\end{lemma}
\begin{lemma}
Let $(v0,m_1)$ and $(v1,m_2)$ be some occurrences of words $v0$ and $v1$, where $v$ is a special word with the ancestor $v'$.
Let $(v'0,m'_{1})$ and $(v'1,m'_{2})$ be the ancestors of occurrences of $(v0,m_1)$ and $(v1,m_2)$, where $(v'0,m'_{1})$ and $(v'1,m'_{2})$ generate different non-equivalent permutations.
Then $(v0,m_1)$ and $(v1,m_2)$ also generate different non-equivalent permutations.
\end{lemma}
\begin{lemma}
Let $(v0,m_1)$ and $(v1,m_2)$ be some occurrences of words $v0$ and $v1$, where $v$ is a special word with the ancestor $v'$.
Let $(v'0,m'_{1})$ and $(v'1,m'_{2})$ be the ancestors of occurrences of $(v0,m_1)$ and $(v1,m_2)$.
Then the following statements are true:
1)If $(v'0,m'_{1})$ and $(v'1,m'_{2})$ generate equivalent permutations and $|v|=l|v'|$, then $(v0,m_1)$ and $(v1,m_2)$ also generate equivalent permutations.
2)If $(v0,m_1)$ and $(v1,m_2)$ generate equivalent permutations, then $(v'0,m'_{1})$ and $(v'1,m'_{2})$ also generate equivalent permutations and $|v|=l|v'|$.
3)If $(v'0,m'_{1})$ and $(v'1,m'_{2})$ generate equivalent permutations and $|v|<l|v'|$, then $(v0,m_1)$ and $(v1,m_2)$ generate the same permutation.
\end{lemma}
We will consider special word $v$ of length $n-1$.
Let $v$, without loss of generality, start with $0$.
Then $v1$ cannot generate equivalent permutations.
The number of permutations of the set $M_{v0}$ which also belong to $M_{v1}$ is denoted by $k_v$.
The number of permutations of the set $M_{v0}$ each of which is equivalent to some permutation of the set $M_{v1}$ is denoted by $t_v$.
The number of pairs of permutations of the set $M_{v0}$ such that one permutation of the pair is equivalent to, and the other is equal to, some permutation of the set $M_{v1}$ is denoted by $r_v$.
\begin{lemma}
Let $v$ be a special word with the ancestor $v'$, and $|v|<l|v'|$. Then $k_{v}=k_{v'}+t_{v'}+r_{v'}$, $t_{v}=0$ and $r_{v}=0$.
\end{lemma}
\begin{lemma}
Let $v$ be a special word with the ancestor $v'$, and $|v|=l|v'|$. Then $k_{v}=k_{v'}$, $t_{v}=t_{v'}$ and $r_{v}=r_{v'}$.
\end{lemma}
\begin{lemma}
Let $v$ be a special word with the ancestor $v'$, and $t_{v'}=r_{v'}=0$. Then $k_{v}=k_{v'}$ and $t_{v}=r_{v}=0$.
\end{lemma}
\section{Algorithm for finding $g(v)$}
Suppose $v$ is a special word and $|v|=n-1$.
The set of all the special words of length $n$ is denoted by $B(n)$.
The number of common permutations generated by some occurrences of words $v0$ and $v1$ is denoted by $g(v)$.
In this section, we calculate $\sum_{|v|=n-1}g(v)$.
The set of all special subwords of length $n$ of the word $\omega$, whose chain of ancestors is $v\rightarrow{v_1}\rightarrow{v_2}\rightarrow{\ldots}\rightarrow{v_{m}=z}$, is denoted by $B_{z}(n)$.
The cardinality of the set $B_{z}(n)$ is denoted by $S_{z}(n)$.
It is clear that the chain of ancestors of word $v$ consists of special words.
The set of all special words of length less than $L_\omega$ with descendants of length greater than $L_\omega$ is denoted by $B$.
\begin{lemma}
Let $xl+r\geq{L_\omega}$, $0\leq{r}\leq{l-1}$, and $b$ be a special word. Then the following recurrence relations hold:
1)$S_{b}(xl+r)=S_{b}(x+1)$ if $r>0$.
2)$S_{b}(xl)=S_{b}(x)$ if $r=0$.
3)$S_{b}(l^{k}|b|)=1$ if $k\geq{1}$.
\end{lemma}
Now we calculate $g(v)$.
\begin{lemma}
Let $v\in{B_{b}(n-1)}$. Then the following statements are true:
1)If $n\neq{l^{k}|b|+1}$ for any positive integer $k$, then $g(v)=k_{b}+t_{b}+r_{b}$.
2)If $n=l^{k}|b|+1$ for some positive integer $k$, then $g(v)=k_{b}+r_{b}$.
\end{lemma}
Let us introduce the function $\delta(n,b)$: if $n=l^{s}|b|+1$ for some positive integer $s$, then $\delta(n,b)=0$, otherwise $\delta(n,b)=1$.
\begin{theorem}
$\sum_{v\in{B(n-1)}}g(v)=\sum_{b\in{B}}[S_{b}(n-1)(k_{b}+t_{b}+r_{b})\delta(n,b)+(k_{b}+r_{b})(1-\delta(n,b))]$.
\end{theorem}
\section{The main theorem}
In this section we state the main theorem of this article.
For this purpose we use the following Lemma (Lemma 1 from $[3]$).
\begin{lemma}
Let $u=u_1\ldots{u_n}$ and $v=v_1\ldots{v_n}$ be two subwords of word $\omega$ and $u_{i}\neq{v_{i}}$ for some $1\leq{i}\leq{n-1}$. Then $u$ and $v$ do not generate the same permutations.
\end{lemma}
We can now prove the main theorem of this paper:
\begin{theorem}
Let $\omega$ be a fixed point of the morphism $\varphi$, where ${\varphi}\in{Q}$.
Then the permutation complexity of $\omega$ is calculated as follows:
$\lambda(n)=\sum_{a_1\in{A_1}}[C_{a_1}^{nar}(n)(m_{a_1}+n_{a_1})+(C_{a_1}^{bad}(n)+C_{a_1}^{wide}(n))(m_{a_1}+2n_{a_1})]+\sum_{a_2\in{A_2}}C_{a_2}(n)m_{a_2}-\sum_{b\in{B}}[S_{b}(n-1)(k_{b}+t_{b}+r_{b})\delta(n,b)+(k_{b}+r_{b})(1-\delta(n,b))]$.
\end{theorem}
\section{Permutation complexity of the Thue-Morse Word}
In \cite{widmer} Widmer calculated the factor complexity of the permutation generated by the Thue-Morse word.
In this section, we present an alternative proof of the formula for permutation complexity of the Thue-Morse word.
Let $n=2^{k}+b$, where $0<b\leq{2^k}$.
Note that the length $L_\omega$ of the synchronization of the Thue-Morse word is $4$.
It is also easy to understand that $A_1=\{010,101\}$ and $A_2=\{00,01,10,11,001,011,100,110\}$ (for example, $010$ generates two equivalent permutations $132$ and $231$ due to occurrences of words $\omega_{11}\omega_{12}\omega_{13}$ and $\omega_{4}\omega_{5}\omega_{6}$, respectively). Thus, we obtain:
$\sum_{|u|=n}f(u)=\sum_{a_{1}\in{A_1}}[C_{a_1}^{nar}(n)(m_{a_1}+n_{a_1})+(C_{a_1}^{bad}(n)+C_{a_1}^{wide}(n))(m_{a_1}+2n_{a_1})]+\sum_{a_{2}\in{A_2}}C_{a_2}(n)m_{a_2}=
\sum_{a_{1}\in{A_1}}[C_{a_1}^{nar}(n)+2(C_{a_1}^{bad}(n)+C_{a_1}^{wide}(n))]+\sum_{a_{2}\in{A_2}}C_{a_2}(n)=
C_{010}^{bad}(n)+
C_{010}^{wide}(n)+C_{101}^{bad}(n)+C_{101}^{wide}(n)+C(n)$.
It is clear that for $C_{a}^{bad}(n)$ and $C_{a}^{wide}(n)$ the following recurrence relations hold:
$C_{a}^{bad}(2n+1)=2C_{a}^{bad}(n+1)$,
$C_{a}^{wide}(2n+1)=2C_{a}^{wide}(n+1)$, $C_{a}^{wide}(2n)=C_{a}^{wide}(n+1)+C_{a}^{wide}(n)+C_{a}^{bad}(n)$.
Hence it is easy to see that $C_{010}^{bad}(2^{k}+1)=C_{101}^{bad}(2^{k}+1)=2^{k-1}$ for $k>0$; for other $n$ we have
$C_{010}^{bad}(n)=C_{101}^{bad}(n)=0$.
Let us prove by induction on $n$ that for $2^{k}+2\leq{n}<3\cdot{2}^{k-1}$ ($k>2$) the relation $C_{010}^{wide}(n)=C_{101}^{wide}(n)=2^{k-1}-b+1$ holds.
The base cases $n=10$ and $n=11$ follow from the relations $C_{010}^{wide}(10)=C_{010}^{wide}(6)+C_{010}^{wide}(5)+C_{010}^{bad}(5)=1+0+2=3=2^{3-1}-2+1$ and $C_{010}^{wide}(11)=2C_{010}^{wide}(6)=2=2^{3-1}-3+1$.
Let us prove the induction step. If $n=2^{k}+2b'$, then by the induction hypothesis we have $C_{010}^{wide}(2^{k-1}+b')=C_{101}^{wide}(2^{k-1}+b')=2^{k-2}-b'+1$ and $C_{010}^{wide}(2^{k-1}+b'+1)=C_{101}^{wide}(2^{k-1}+b'+1)=2^{k-2}-b'$. Hence we obtain $C_{010}^{wide}(n)=C_{101}^{wide}(n)=C_{010}^{wide}(2^{k-1}+b')+C_{010}^{wide}(2^{k-1}+b'+1)+C_{010}^{bad}(2^{k-1}+b')=2^{k-1}-2b'+1$. If $n=2^{k}+2b'+1$, then by the induction hypothesis we have $C_{010}^{wide}(2^{k-1}+b'+1)=C_{101}^{wide}(2^{k-1}+b'+1)=2^{k-2}-b'$. Hence we obtain $C_{010}^{wide}(n)=C_{101}^{wide}(n)=2C_{010}^{wide}(2^{k-1}+b'+1)=2(2^{k-2}-b')=2^{k-1}-2b'$. The induction step is proved.
Similarly, we prove by induction on $n$ that for $3\cdot2^{k-1}+1\leq{n}\leq{{2}^{k+1}+1}$ the relation $C_{010}^{wide}(n)=C_{101}^{wide}(n)=0$ holds. The base cases $n=7$ and $n=8$ follow from the relations $C_{010}^{wide}(8)=C_{101}^{wide}(8)=C_{010}^{wide}(5)+C_{010}^{wide}(4)+C_{010}^{bad}(4)=0+0+0=0$ and $C_{010}^{wide}(7)=C_{101}^{wide}(7)=2C_{010}^{wide}(4)=0$. Let us prove the induction step. If $n=2^{k}+2b'$, then by the induction hypothesis we have $C_{010}^{wide}(2^{k-1}+b')=C_{101}^{wide}(2^{k-1}+b')=0$ and $C_{010}^{wide}(2^{k-1}+b'+1)=C_{101}^{wide}(2^{k-1}+b'+1)=0$. Hence we obtain $C_{010}^{wide}(n)=C_{101}^{wide}(n)=C_{010}^{wide}(2^{k-1}+b')+C_{010}^{wide}(2^{k-1}+b'+1)+C_{010}^{bad}(2^{k-1}+b')=0+0+0=0$. If $n=2^{k}+2b'+1$, then by the induction hypothesis we have $C_{010}^{wide}(2^{k-1}+b'+1)=C_{101}^{wide}(2^{k-1}+b'+1)=0$. Hence we obtain $C_{010}^{wide}(n)=C_{101}^{wide}(n)=2C_{010}^{wide}(2^{k-1}+b'+1)=2\cdot{0}=0$. The induction step is proved.
Thus we have proved the following:
1) If $2^{k}+2\leq{n}<3\cdot{2}^{k-1}$, then $\sum_{|u|=n}f(u)=2(2^{k-1}-b+1)+4(2^{k}+b-1)-2^{k}=2^{k+2}+2b-2$.
2) If $3\cdot2^{k-1}+1\leq{n}<{2}^{k+1}+1$, then $\sum_{|u|=n}f(u)=2(n-1)+2^{k+1}=2^{k+2}+2b-2$.
Therefore the equality $\sum_{|u|=n}f(u)=2(n-1)+2^{k+1}=2^{k+2}+2b-2$ holds for all $n\geq{6}$.
It is clear that for $S_{b}(n)$ the following recurrence relations hold:
$S_{b}(2n+1)=S_{b}(n+1)$ and $S_{b}(2n)=S_{b}(n)$.
Taking into account that $S_{01}(3)=S_{10}(3)=1$ and $S_{01}(4)=S_{10}(4)=1$, it is easy to prove by induction that for $n\geq{3}$ the equality $S_{01}(n)=S_{10}(n)=1$ holds.
It is also easy to see that $B=\{01,10,010,101\}$.
In addition, the word $010$ generates two equivalent permutations $132$ and $231$, while the word $011$ generates only a single permutation $132$.
Hence $k_{01}=t_{01}=0$ and $r_{01}=1$. Similarly $k_{10}=t_{10}=0$ and $r_{10}=1$.
The words $0101$ and $0100$ generate different non-equivalent permutations $1324$ and $3421$.
Hence $k_{010}=t_{010}=0$ and $r_{010}=0$. Similarly $k_{101}=t_{101}=0$ and $r_{101}=0$. Therefore for $n\neq{2^k}+1$ by Theorem 2 the following relation holds:
$\sum_{v\in{B(n-1)}}g(v)=\sum_{b\in{B}}[S_{b}(n-1)(k_{b}+t_{b}+r_{b})\delta(n,b)+(k_{b}+r_{b})(1-\delta(n,b))]=
S_{01}(n-1)(k_{01}+t_{01}+r_{01})+S_{10}(n-1)(k_{10}+t_{10}+r_{10})=1+1=2$.
For $n=2^{k}+1$ by Theorem 2 the following relation holds:
$\sum_{v\in{B(n-1)}}g(v)=\sum_{b\in{B}}[S_{b}(n-1)(k_{b}+t_{b}+r_{b})\delta(n,b)+(k_{b}+r_{b})(1-\delta(n,b))]=
k_{01}+r_{01}+k_{10}+r_{10}=2$.
Thus the formula for the permutation complexity of the Thue-Morse word is
$\lambda(n)=\sum_{|u|=n}f(u)-\sum_{v\in{B(n-1)}}g(v)=2^{k+2}+2b-2-2=2(2^{k+1}+b-2)$ for $n=2^{k}+b$, where $0<b\leq{2^k}$.
\section{Acknowledgements}
I am grateful to A. E. Frid and S. V. Avgustinovich for helpful and stimulating discussions.
\bibliographystyle{eptcs}
\section{Introduction}
Patterns of the Standard Model (SM) fermion masses suggest undiscovered new physics in the SM Yukawa sector. Among various approaches to interpret the SM fermion mass spectra, the Koide formula~\cite{Koide:1982si, Koide:1983a, Koide:1983b}
\begin{equation}
K = \frac{m_e + m_\mu + m_\tau}{\left ( \sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau} \right )^2}
= \frac{2}{3} \label{eq:1-01}
\end{equation}
exhibits a great consistency with the current experimental data~\cite{Patrignani:2016xqp, Tanabashi:2018oca, Zyla:2020zbs}. From the data of charged lepton masses
\begin{align}
m_e &= 0.5109989461 \pm 0.0000000031 \ \text{MeV} / c^2,\\
m_\mu &= 105.6583745 \pm 0.0000024 \ \text{MeV} / c^2,\\
m_\tau &= 1776.86 \pm 0.12 \ \text{MeV} / c^2,
\end{align}
with their 1-$\sigma$ errors, the Koide's character $K$ is calculated to be
\begin{equation}
K = 0.6666605 \pm 0.0000068
= \frac{2}{3} \times (0.999991 \pm 0.000010),
\end{equation}
which agrees with Eq.~\eqref{eq:1-01} within $10^{-5}$ precision and within its 1-$\sigma$ error. In addition, when quantum electrodynamics (QED) radiative corrections to charged lepton masses are taken into account, the value of $K$ deviates from $2 / 3$ by about $10^{- 3}$~\cite{Li:2006et, Xing:2006vk, Xing:2019vks}, which is $10^2$ times larger than the present experimental error. To fix this deviation, the SM flavor symmetry is gauged, and the gauge interaction induces another radiative correction which may cancel the QED correction~\cite{Sumino:2008hu, Sumino:2008hy, Sumino:2009bt, Koide:2012kn, Koide:2014doa, Koide:2016bte, Koide:2016qeq}. So the Koide formula Eq.~\eqref{eq:1-01} holds for both the pole masses and the running masses of charged leptons. The Koide formula can be geometrically interpreted as the angle $\theta = \pi / 4$ between the two vectors $\left ( \sqrt{m_e}, \sqrt{m_\mu}, \sqrt{m_\tau} \right )$ and $(1, 1, 1)$ in three dimensions~\cite{Foot:1994yn}. Empirical extensions of the Koide formula to masses of quarks and neutrinos are also conjectured in the literature~\cite{Esposito:1995bw, Li:2005rp, Rivero:2005vj, Gerard:2005ad, Guo:2007rn, Rodejohann:2011jj, Kartavtsev:2011jt, Cao:2012un, Zenczykowski:2012fg, Zenczykowski:2013bb, Gao:2015xnv, Huang:2016ocs, Huang:2016sht}.
Early proposals to explain the physical origin of the Koide formula include radiative mass generation models, Froggatt-Nielsen type models~\cite{Froggatt:1978nt} and seesaw-type models, with assumptions of discrete flavor symmetries, democratic mixing or tribimaximal mixing~\cite{Koide:1989ds, Koide:1989jq, Koide:1992vs, Koide:1993da, Koide:1994wu, Koide:1995ie, Koide:1995xk, Koide:1999mx, Koide:2000zi, Koide:2005nv, Koide:2005za, Koide:2005ep, Mohapatra:2006gs, Koide:2006dn, Ma:2006ht, Koide:2006vs, Koide:2007kw, Koide:2007sr, Koide:2007eu, Koide:2010vu}. A more recent development is known as the Yukawaon model~\cite{Haba:2008wr, Koide:2008eq, Koide:2008ey, Koide:2008tr, Koide:2010zz}, in which the SM Yukawa coupling constant for charged leptons is induced by the vacuum expectation value (VEV) of a 9-component Hermitian matrix-valued scalar field $Y^i_j$, with flavor indices $i, j = 1, 2, 3$. The nonet field $Y$ beyond SM, named the Yukawaon, can be set up in a $\mathbf{3} \otimes \mathbf{3}^* = \mathbf{8} \oplus \mathbf{1}$ representation of the $\mathrm{SU(3)}$ flavor symmetry,%
\footnote{The earliest usage of the term ``Yukawaon'' appears in~\cite{Koide:2008qm} referring to a sextet field in $\mathbf{3} \odot \mathbf{3} = \mathbf{5} \oplus \mathbf{1}$ of the $\mathrm{SO(3)}$ flavor symmetry, and in~\cite{Koide:2008tr} referring to a nonet field in $\mathbf{3} \otimes \mathbf{3}^* = \mathbf{8} \oplus \mathbf{1}$ of the $\mathrm{SU(3)}$ flavor symmetry. For simplicity, here we only discuss nonet Yukawaons.}
which makes charged leptons anomaly free in the $\mathrm{SU(3)}$~\cite{Koide:2013eca}. Yukawaon models for up quarks and down quarks can also be built in a similar way by introducing another two Yukawaons. The SM Yukawa coupling terms are then replaced by dimension-five effective operators involving these Yukawaons. We introduce three sectors of Yukawaons $Y^{(a)}$ with sector indices $a = e, u, d$, and arrange
\begin{equation}
\mathcal{L}^{(5)}_{\text{Yukawaon}} = - \frac{y_0}{\Lambda}
\left ( \bar{l}_{L i} Y^{(e) i}_j H e^j_R
+ \bar{q}_{L i} Y^{(u) i}_j \tilde{H} u^j_R
+ \bar{q}_{L i} Y^{(d) i}_j H d^j_R
\right )
+ \text{c.c.}, \label{eq:1-02}
\end{equation}
for the left-handed leptons $l^i_L = (\nu^i_L, e^i_L)^{\mathrm{T}}$, the right-handed charged leptons $e^i_R$, the left-handed quarks $q^i_L = (u^i_L, d^i_L)^{\mathrm{T}}$, the right-handed up quarks $u^i_R$, the right-handed down quarks $d^i_R$, the Higgs field $H = (H^+, H^0)^{\mathrm{T}}$ and its charge conjugation $\tilde{H} = \epsilon H^*$, with a dimensionless coefficient $y_0$ and an energy scale $\Lambda \gg m_W$. The $\mathrm{SU}(3)$ flavor symmetries can be extended to $\mathrm{U}(3)$'s which include $\mathrm{U}(1)$'s corresponding to the conserved lepton or baryon numbers, and Eq.~\eqref{eq:1-02} is also invariant under the $\mathrm{U}(3)$'s by setting all $\mathrm{U}(1)$ charges of Yukawaons to $0$. Below the scale $\Lambda$, the Yukawaons acquire non-zero VEV's in a quadratic form
\begin{equation}
\left \langle Y^{(a)} \right \rangle
\propto
\left \langle \Phi^{(a)} \right \rangle \left \langle \Phi^{(a)} \right \rangle,
\end{equation}
where the new $\mathrm{SU}(3)$ flavor nonet fields $\Phi^{(a)}$ are named the ur-Yukawaons. Such VEV relations can be obtained from the F-flatness condition with a superpotential, e.g.~\cite{Koide:2008tr},
\begin{align}
W_{\text{Yukawaon}} &= W_0 + W^{(\Phi)}(\Phi^{(a)}, \phi_i), \label{eq:1-03}\\
W_0 &= W^{(e)}_0 + W^{(u)}_0 + W^{(d)}_0,\\
W^{(a)}_0 &= \lambda^{(a)}_A \operatorname{Tr} \left [ \Phi^{(a)} \Phi^{(a)} A^{(a)} \right ]
+ \mu^{(a)}_A \operatorname{Tr} \left [ Y^{(a)} A^{(a)} \right ],
\end{align}
where another set of nonet fields $A^{(a)}$ is introduced. The $W^{(\Phi)}$ part of Eq.~\eqref{eq:1-03} may contain more chiral superfields $\phi_i$ to help the stabilization of $\Phi^{(a)}$ at their VEV's. At an SUSY vacuum satisfying the F-flatness condition, we have
\begin{equation}
\left \langle Y^{(a)} \right \rangle = - \frac{\lambda^{(a)}_A}{\mu^{(a)}_A}
\left \langle \Phi^{(a)} \right \rangle
\left \langle \Phi^{(a)} \right \rangle, \quad
\left \langle A^{(a)} \right \rangle = 0, \quad
\left \langle \partial_{\Phi^{(a)}} W^{(\Phi)} \right \rangle = 0, \quad
\left \langle \partial_{\phi_i} W^{(\Phi)} \right \rangle = 0. \label{eq:1-04}
\end{equation}
Non-zero VEV's of $\Phi$'s and $Y$'s spontaneously break the $\mathrm{SU}(3)$ flavor symmetry in each sector, and Eq.~\eqref{eq:1-02} becomes the SM Yukawa coupling terms with the effective Yukawa coupling coefficients
\begin{equation}
y^{(a) i}_j = \frac{y_0}{\Lambda} \left \langle Y^{(a) i}_j \right \rangle
= - \frac{y_0 \lambda^{(a)}_A}{\Lambda \mu^{(a)}_A}
\left \langle \Phi^{(a) i}_k \right \rangle
\left \langle \Phi^{(a) k}_j \right \rangle.
\end{equation}
Then the VEV of the Higgs doublet $\langle H \rangle = (0, v / \sqrt{2})^{\mathrm{T}}$ gives the SM fermion mass matrices
\begin{equation}
M^{(a) i}_j = \frac{v}{\sqrt{2}} y^{(a) i}_j
= - \frac{y_0 \lambda^{(a)}_A v}{\sqrt{2} \Lambda \mu^{(a)}_A}
\left \langle \Phi^{(a) i}_k \right \rangle
\left \langle \Phi^{(a) k}_j \right \rangle. \label{eq:1-05}
\end{equation}
For charged leptons, it is known from experiments with high precision that the mass matrix is diagonal, i.e.\ $M^{(e)} = \operatorname{diag} (m_e, m_\mu, m_\tau)$. The above effective mechanism gives the Koide's character
\begin{equation}
K = \frac{m_e + m_\mu + m_\tau}{\left ( \sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau} \right )^2}
= \frac{[\Phi \Phi]}{[\Phi]^2}. \label{eq:1-06}
\end{equation}
For convenience, we use the notation $[X] \equiv \operatorname{Tr}[X]$ where $X$ can be any monomial of nonet fields, and write $\Phi$ for $\langle \Phi^{(e)} \rangle = \left \langle \Phi \right \rangle$ as long as it does not cause any ambiguity. For up quarks and down quarks, their masses are given by the singular values of the mass matrices. Although the quark mass matrices are not diagonal, the experimental data of quark masses and the Cabibbo-Kobayashi-Maskawa (CKM) matrix can always be fitted with the Yukawa coupling matrices being Hermitian. So the Yukawaons and ur-Yukawaons in up quark and down quark sectors can also be assumed to be Hermitian matrix-valued scalar fields, which can be diagonalized by unitary similarity transformations. We still have the similar mass formulas expressed by the VEV's of $\Phi$'s:
\begin{align}
K_{\text{up quarks}} &= \frac{m_u + m_c + m_t}{\left ( \sqrt{m_u} + \sqrt{m_c} + \sqrt{m_t} \right )^2}
= \frac{\left [ \Phi^{(u)} \Phi^{(u)} \right]}{\left [ \Phi^{(u)} \right ]^2}, \label{eq:1-07}\\
K_{\text{down quarks}} &= \frac{m_d + m_s + m_b}{\left ( \sqrt{m_d} + \sqrt{m_s} + \sqrt{m_b} \right )^2}
= \frac{\left [ \Phi^{(d)} \Phi^{(d)} \right]}{\left [ \Phi^{(d)} \right ]^2}. \label{eq:1-08}
\end{align}
With a properly arranged superpotential, the VEV's of $\Phi$'s give the values of $K$'s fitting the experimental data. Following this routine, different models have been built to reproduce the Koide formula Eq.~\eqref{eq:1-01}, its analogues for quark masses and neutrino masses, and the fermion mixing matrices~\cite{Koide:2008qm, Koide:2008sj, Koide:2008he, Koide:2008kw, Koide:2009zz, Koide:2010np, Koide:2010hp, Nishiura:2010rt, Koide:2012zz, Koide:2011wj, Koide:2012fw, Koide:2012ji, Koide:2013ie, Koide:2014nxa, Koide:2014oxa, Koide:2015ura, Koide:2015hya, Koide:2015ype, Koide:2017lan, Koide:2018fsj}.
The usage of flavor nonets to explain the Koide formula can also be traced back to some early works on seesaw-type models~\cite{Koide:1989jq, Koide:1992vs, Koide:1994wu, Koide:1995ie, Koide:1995xk, Koide:2007sr, Koide:2010vu}, in which a flavor nonet scalar $\Phi^{(e)} = \Phi$, a flavor singlet scalar $S^{(e)} = S$, and new heavy fermions $L^{(l)}_L = L_L = (N_L, F_L)^{\mathrm{T}}$ and $F^{(e)}_R = F_R$ are introduced for the charged lepton sector. Lepton masses are generated from the VEV's of $\Phi$, $S$ and the SM Higgs field $H$. From the dimension-five effective operators
\begin{equation}
\mathcal{L}^{(5)}_{\text{seesaw}} = - \frac{y_0}{\Lambda}
\left ( \bar{l}_{L i} \Phi^i_j H F^j_R
+ \bar{L}_{L i} \Phi^i_j H e^j_R
+ \bar{L}_{L i} S H F^i_R
\right )
+ \text{c.c.}, \label{eq:1-09}
\end{equation}
we obtain the see-saw type mass terms
\begin{equation}
\begin{split}
\mathcal{L}_{\text{seesaw}} &= - \bar{e}_L m_L F_R
- \bar{F}_L m_R e_R
- \bar{F}_L M_F F_R
+ \text{c.c.}\\
&= - \begin{pmatrix}
\bar{e}_{L i} & \bar{F}_{L j}
\end{pmatrix}
\begin{pmatrix}
0 & m^i_{L l}\\
m^j_{R k} & M_F \delta^j_l
\end{pmatrix}
\begin{pmatrix}
e^k_R\\
F^l_R
\end{pmatrix}
+ \text{c.c.},
\end{split} \label{eq:1-10}
\end{equation}
with effective mass parameters
\begin{equation}
m^i_{L j} = m^i_{R j}
= \frac{y_0 v}{\sqrt{2} \Lambda} \Phi^i_j, \quad
M_F = \frac{y_0 v}{\sqrt{2} \Lambda} S.
\end{equation}
Mass eigenvalues are found by a singular value decomposition (SVD) of the mass matrix in Eq.~\eqref{eq:1-10}. Assuming that the VEV of $S$ is much larger than the VEV of any component of $\Phi$, or $M_F \gg \lVert m_L \rVert = \lVert m_R \rVert$, the singular values are approximately identical to the magnitudes of the eigenvalues of the mass matrix. The heavy mass eigenstates are almost aligned to $F_L$ and $F_R$, with their mass $M \approx M_F$. The light mass eigenstates, which correspond to the SM charged leptons, have the seesaw mass matrix
\begin{equation}
M^{(e) i}_j \approx m^i_{L k} M_F^{-1} m^k_{R j}
= \frac{y_0 v}{\sqrt{2} \Lambda S} \Phi^i_k \Phi^k_j.
\end{equation}
Similarly to Eq.~\eqref{eq:1-05} in the Yukawaon model, here $M^{(e)}$ is also quadratic in $\Phi$. Thus the same expression of the Koide's character Eq.~\eqref{eq:1-06} is obtained. Expressions Eq.~\eqref{eq:1-07} for up quarks and Eq.~\eqref{eq:1-08} for down quarks can also be derived in a similar seesaw-type setup. With a properly arranged scalar potential $V(\Phi)$, the VEV of $\Phi$ gives the value of $K$ fitting the experimental data. Such a model is also referred to as the scalar potential model, with emphasis on the scalar potential $V(\Phi)$ instead of the seesaw mechanism which leads to the mass matrix quadratic in $\Phi$.
However, in previous versions of scalar potential models or Yukawaon models, the scalar potential $V$ or the superpotential $W$ is incomplete, i.e., it does not include all possible terms respecting $\mathrm{SU}(3)$. The missing $\mathrm{SU}(3)$-invariant terms in $V$ or $W$ must be unnaturally fine-tuned to have zero coefficients. A recent approach~\cite{Koide:2017lrf} considers all $\mathrm{SU}(3)$-invariant terms in the scalar potential. Besides the original Koide formula, a second formula on charged lepton masses is proposed. But some vital mistakes in the derivation actually invalidate the result, although the second formula can be successfully derived in Yukawaon models with a suitable superpotential $W$~\cite{Koide:2009hn, Koide:2018gdm}. As we are going to show, the complete scalar potential instead leads to a modified version of the Koide formula.
In this work, we take the scalar potential $V$ to include all $\mathrm{SU}(3)$-invariant terms up to quartic. A $\mathbb{Z}_2$ symmetry is imposed to eliminate linear and cubic terms, and a Higgs-like quadratic term is assumed to generate the non-zero VEV of $\Phi$. The equations for a stationary point lead to a modified version of the Koide formula. The modified formula is then reproduced in a Yukawaon model. The nonet $\Phi$ is promoted to a nonet chiral superfield, and two additional chiral superfields are introduced. The superpotential $W$ includes all $\mathrm{SU}(3)$-invariant terms up to cubic. An R-symmetry is imposed to further restrict the form of $W$, and a small R-symmetry breaking term is introduced to generate the non-zero VEV of $\Phi$. The F-flatness condition for a SUSY vacuum leads to the same VEV relations and the same modified formula as the result from the scalar potential model. In both models, the Koide's character $K$ in the modified formula is modified by two effective parameters. The modified range of $K$ covers all possible values of $K$ for charged leptons, up quarks and down quarks. It offers a natural interpretation of SM fermion mass spectra.
The rest part of this paper is arranged as following. In Section \ref{sec:2}, we derive the modified version of the Koide formula from a scalar potential with all $\mathrm{SU}(3)$-invariant terms. In Section \ref{sec:3}, from a superpotential constructed with the $\mathrm{SU}(3)$ flavor symmetry and an R-symmetry, we derived the VEV relations and the modified formula in the Yukawaon model, which is in agreement with the result from the scalar potential model. In Section \ref{sec:4}, we make some concluding remarks on the possible implications from the modified formula.
\section{The modified formula from a scalar potential model} \label{sec:2}
Following the previous approach~\cite{Koide:2017lrf}, we consider the scalar potential $V$ with all $\mathrm{SU}(3)$-invariant terms of the nonet scalar field $\Phi$ presented as the octet $\Phi_8$ and the singlet $[\Phi]$. A renormalizable $V$ contains terms only up to quartic. Besides the $\mathrm{SU}(3)$ flavor symmetry, we impose a $\mathbb{Z}_2$ symmetry under which both $\Phi_8$ and $[\Phi]$ are odd. Thus only quadratic and quartic terms are allowed in $V$. For quartic terms, we have
\begin{equation}
V_1 = a_0 [\Phi_8 \Phi_8 \Phi_8 \Phi_8]
+ a_{02} [\Phi_8 \Phi_8] [\Phi_8 \Phi_8]
+ a_1 [\Phi_8 \Phi_8 \Phi_8] [\Phi]
+ a_2 [\Phi_8 \Phi_8] [\Phi]^2
+ a_4 [\Phi]^4.
\end{equation}
Note that the dimension-five operators in Eq.~\eqref{eq:1-09} are also $\mathbb{Z}_2$-invariant by letting the Higgs field $H$ to be odd and all other SM fields to be neutral. For quadratic terms, we assume that they combine to a Higgs-like negative mass-square term to generate the non-zero VEV of $\Phi$:
\begin{equation}
V_2 = - \mu^2 [\Phi \Phi]
= - \mu^2 [\Phi_8 \Phi_8]
- \frac{1}{3} \mu^2 [\Phi]^2.
\end{equation}
Both $V_1$ and $V_2$ combine into the full scalar potential
\begin{equation}
V = V_1 + V_2.
\end{equation}
The octet and the singlet appear in the Clebsch-Gorden series of $\mathrm{SU}(3)$ representations $\mathbf{3} \otimes \mathbf{3}^* = \mathbf{8} \oplus \mathbf{1}$. They combine to the nonet scalar field
\begin{equation}
\Phi = \Phi_8 + \frac{1}{3} [\Phi] \mathbf{I}_{3 \times 3}
= \Phi_8^a t^a + [\Phi] t^0, \label{eq:2-01}
\end{equation}
where we denote $t^0 = \frac{1}{3} \mathbf{I}_{3 \times 3}$. The matrices $t^a$'s, where $a = 1, \dotsc , 8$, are eight generators of the Lie algebra $\mathfrak{su}(3)$. Altogether $t^0$ and $t^a$'s give nine generators of $\mathfrak{u}(3) \cong \mathfrak{su}(3) \times \mathfrak{u}(1)$, and the nine real scalar fields $\{ \Phi_8^a, [\Phi] \}$ give the Hermitian matrix-valued $\Phi \in \mathfrak{u}(3)$. Replacing $\Phi_8$ with $\Phi$ and $[\Phi]$, the following identities can be derived:
\begin{align}
{} [\Phi_8 \Phi_8] &= [\Phi \Phi]
- \frac{1}{3} [\Phi]^2,\\
[\Phi_8 \Phi_8 \Phi_8] &= [\Phi \Phi \Phi]
- [\Phi \Phi] [\Phi]
+ \frac{2}{9} [\Phi]^3,\\
[\Phi_8 \Phi_8 \Phi_8 \Phi_8] &= [\Phi \Phi \Phi \Phi]
- \frac{4}{3} [\Phi \Phi \Phi] [\Phi]
+ \frac{2}{3} [\Phi \Phi] [\Phi]^2
- \frac{1}{9} [\Phi]^4.
\end{align}
The scalar potential $V$ is then recast in terms of $[\Phi]$, $[\Phi \Phi]$, $[\Phi \Phi \Phi]$ and $[\Phi \Phi \Phi \Phi]$:
\begin{equation}
\begin{split}
V = \mbox{} & - \mu^2 [\Phi \Phi]
+ a_0 [\Phi \Phi \Phi \Phi]
+ a_{02} [\Phi \Phi] [\Phi \Phi]
+ \left ( - \frac{4}{3} a_0
+ a_1
\right ) [\Phi \Phi \Phi] [\Phi]\\
& + \left ( \frac{2}{3} a_0
- \frac{2}{3} a_{02}
- a_1
+ a_2
\right ) [\Phi \Phi] [\Phi]^2
+ \left ( - \frac{1}{9} a_0
+ \frac{1}{9} a_{02}
+ \frac{2}{9} a_1
- \frac{1}{3} a_2
+ a_4
\right ) [\Phi]^4.
\end{split}
\end{equation}
With proper choices of parameters $a_0$, $a_{02}$, $a_1$, $a_2$, $a_4$ and $\mu$, we expect that non-zero VEV's of $\Phi_8^a$ and $[\Phi]$ can be obtained to give the SM fermion masses through Eq.~\eqref{eq:1-05}.
The vacuum is a stationary point of $V$, i.e., the VEV of the first derivative of $V$ with respect to $\Phi_8^a$ and $[\Phi]$ must vanish:
\begin{equation}
\partial_{\Phi_8^a} V = \partial_{[\Phi]} V = 0. \label{eq:2-02}
\end{equation}
Since $V$ is a polynomial function of $\Phi_8^a$ and $[\Phi]$, it is also a holomorphic function if $\Phi_8^a$ and $[\Phi]$ are viewed as complex variables. Eq.~\eqref{eq:2-02} is then equivalent to the same set of equations complexified, with its solutions restricted to real numbers. Knowing that the complexification of $\mathfrak{u}(3)$ is isomorphic to $\mathfrak{gl}(3, \mathbb{C})$ viewed as a complex Lie algebra, i.e., $\mathfrak{u}(3)_\mathbb{C} \cong \mathfrak{gl}(3, \mathbb{C})$, the linear map from nine complex fields $\Phi_8^a$ and $[\Phi]$ to the complex matrix $\Phi \in \mathfrak{gl}(3, \mathbb{C}) = \mathbb{C}^{3 \times 3}$ is bijective. So $V$ can also be interpreted as a function of the nine independent complex matrix components of $\Phi$, and Eq.~\eqref{eq:2-02} is equivalent to
\begin{equation}
\partial_\Phi V = \begin{pmatrix}
\partial_{\Phi^1_1} V & \partial_{\Phi^1_2} V & \partial_{\Phi^1_3} V\\
\partial_{\Phi^2_1} V & \partial_{\Phi^2_2} V & \partial_{\Phi^2_3} V\\
\partial_{\Phi^3_1} V & \partial_{\Phi^3_2} V & \partial_{\Phi^3_3} V\\
\end{pmatrix}
= 0.
\end{equation}
Applying the identities for matrix derivatives
\begin{equation}
\partial_\Phi [\Phi^n] = n \Phi^{n-1}, \quad
\partial_\Phi [\Phi] = \mathbf{I}_{3 \times 3},
\end{equation}
we have
\begin{equation}
\begin{split}
0 = \partial_\Phi V
= \mbox{} & - 2 \mu^2 \Phi
+ 4 a_0 \Phi \Phi \Phi
+ 4 a_{02} [\Phi \Phi] \Phi
+ \left ( - \frac{4}{3} a_0
+ a_1
\right )
(3 [\Phi] \Phi \Phi
+ [\Phi \Phi \Phi] \mathbf{I}_{3 \times 3}
)\\
& + 2 \left ( \frac{2}{3} a_0
- \frac{2}{3} a_{02}
- a_1
+ a_2
\right )
\left ( [\Phi]^2 \Phi
+ [\Phi \Phi] [\Phi] \mathbf{I}_{3 \times 3}
\right )\\
& + 4 \left ( - \frac{1}{9} a_0
+ \frac{1}{9} a_{02}
+ \frac{2}{9} a_1
- \frac{1}{3} a_2
+ a_4
\right ) [\Phi]^3 \mathbf{I}_{3 \times 3}.
\end{split} \label{eq:2-03}
\end{equation}
Notice that for any $3 \times 3$ complex matrix $\Phi \in \mathbb{C}^{3 \times 3}$, the following identities can be verified by straightforward calculation:
\begin{align}
\Phi \Phi \Phi &= [\Phi] \Phi \Phi
+ \frac{1}{2} \left ( [\Phi \Phi]
- [\Phi]^2
\right ) \Phi
+ \det(\Phi) \mathbf{I}_{3 \times 3},\\
[\Phi \Phi \Phi] &= \frac{3}{2} [\Phi \Phi] [\Phi]
- \frac{1}{2} [\Phi]^3
+ 3 \det(\Phi).
\end{align}
Using these identities, the vacuum equation Eq.~\eqref{eq:2-03} can be rearranged into
\begin{equation}
\begin{split}
0 = \partial_\Phi V
= \mbox{} & 3 a_1 [\Phi] \Phi \Phi
+ 2 \left ( - \mu^2
+ (a_0 + 2 a_{02}) [\Phi \Phi]
+ \left ( - \frac{1}{3} a_0
- \frac{2}{3} a_{02}
- a_1
+ a_2
\right ) [\Phi]^2
\right ) \Phi\\
& + 2 \left ( \left ( - \frac{1}{3} a_0
- \frac{2}{3} a_{02}
- \frac{1}{4} a_1
+ a_2
\right ) [\Phi \Phi] [\Phi] \right.\\
& \hphantom{+ 2 \left ( \vphantom{\frac{1}{3}} \right.}
\left. \mbox{}
+ \left ( \frac{1}{9} a_0
+ \frac{2}{9} a_{02}
+ \frac{7}{36} a_1
- \frac{2}{3} a_2
+ 2 a_4
\right ) [\Phi]^3
+ \frac{3}{2} a_1 \det(\Phi)
\right ) \mathbf{I}_{3 \times 3}.
\end{split} \label{eq:2-04}
\end{equation}
The VEV's of $\Phi \Phi$, $\Phi$ and $\mathbf{I}_{3 \times 3}$ are linearly independent for a non-zero VEV of $\Phi$ without fine-tuning. So their corresponding coefficients in Eq.~\eqref{eq:2-04} must vanish:
\begin{align}
0 &= 3 a_1 [\Phi], \label{eq:2-05}\\
0 &= - \mu^2
+ (a_0 + 2 a_{02}) [\Phi \Phi]
+ \left ( - \frac{1}{3} a_0
- \frac{2}{3} a_{02}
- a_1
+ a_2
\right ) [\Phi]^2, \label{eq:2-06}\\
\begin{split}
0 &= \left ( - \frac{1}{3} a_0
- \frac{2}{3} a_{02}
- \frac{1}{4} a_1
+ a_2
\right ) [\Phi \Phi] [\Phi]\\
& \hphantom{= \mbox{}}
+ \left ( \frac{1}{9} a_0
+ \frac{2}{9} a_{02}
+ \frac{7}{36} a_1
- \frac{2}{3} a_2
+ 2 a_4
\right ) [\Phi]^3
+ \frac{3}{2} a_1 \det(\Phi). \label{eq:2-07}
\end{split}
\end{align}
Assuming $[\Phi]$ gets a non-zero VEV, Eq.~\eqref{eq:2-05} gives
\begin{equation}
a_1 = 0,
\end{equation}
which implies that the term $a_1 [\Phi_8 \Phi_8 \Phi_8] [\Phi]$ must vanish in the scalar potential $V$ in order to get a non-zero VEV of $\Phi$. Then Eq.~\eqref{eq:2-07} becomes
\begin{equation}
0 = \left ( - \frac{1}{3} a_0
- \frac{2}{3} a_{02}
+ a_2
\right ) [\Phi \Phi]
+ \left ( \frac{1}{9} a_0
+ \frac{2}{9} a_{02}
- \frac{2}{3} a_2
+ 2 a_4
\right ) [\Phi]^2,
\end{equation}
which gives a modified version of the Koide formula:
\begin{equation}
K = \frac{[\Phi \Phi]}{[\Phi]^2}
= \frac{2}{3} \times \frac{(a_0 + 2 a_{02}) / 6 - a_2 + 3 a_4}{(a_0 + 2 a_{02}) / 3 - a_2}
= \frac{2}{3} \times \left ( 1 - \frac{a_0 + 2 a_{02} - 18 a_4}{2(a_0 + 2 a_{02} - 3 a_2)} \right ). \label{eq:2-08}
\end{equation}
Note that the term involving $\det(\Phi)$ disappears as $a_1$ becomes zero, thus the derivation of the second formula in~\cite{Koide:2017lrf} is invalid. Finally Eq.~\eqref{eq:2-06} gives
\begin{equation}
\mu^2 = (a_0 + 2 a_{02}) [\Phi \Phi]
- \left ( \frac{1}{3} a_0
+ \frac{2}{3} a_{02}
- a_2
\right ) [\Phi]^2
= \left ( (a_0 + 2 a_{02})
\left( K - \frac{1}{3} \right)
+ a_2
\right ) [\Phi]^2. \label{eq:2-09}
\end{equation}
The coefficient of $[\Phi]^2$ is zero only if parameters are fine-tuned to satisfy $a_2^2 = 2 a_4 (a_0 + 2 a_{02})$. Without such fine-tuning, a non-zero VEV of $\Phi$ requires a non-zero $\mu$.
The modified formula Eq.~\eqref{eq:2-08} contains four free parameters $a_0$, $a_{02}$, $a_2$, and $a_4$. Among them, $a_0$ and $a_{02}$ appear only in the fixed combination $a_0 + 2 a_{02}$. In addition, $K$ only depends on ratios of the parameters. Hence there are only two effective parameters in the modified formula. Although the parameter $a_1$ is allowed by symmetries, it has to be set to zero to generate a non-zero VEV of $[\Phi]$. We may further explore other models which lead to the modified formula with fewer parameters. Such a model is constructed in the following SUSY scenario.
\section{The modified formula from a Yukawaon model} \label{sec:3}
Inspired by previous derivations of the Koide formula from Yukawaon models, such as the work of~\cite{Koide:2008tr}, we consider a superpotential $W(z_i)$ of chiral superfields $\{ z_i \}$ in the Wess-Zumino model. In addition to the $\mathrm{SU}(3)$ flavor symmetry, we impose a $\mathrm{U}(1)$ R-symmetry under which $W$ has R-charge $2$. We introduce two $\mathrm{SU}(3)$ singlet chiral superfields $\phi'_1$ and $\phi'_2$, both with R-charge $1$. The nonet scalar field $\Phi$ in Eq.~\eqref{eq:2-01} is complexified to $\Phi \in \mathfrak{u}(3)_\mathbb{C} \cong \mathfrak{gl}(3, \mathbb{C}) = \mathbb{C}^{3 \times 3}$, and then promoted to a nonet chiral superfield with R-charge $1/2$. The superpotential
\begin{equation}
W_1 = \frac{1}{2}
\begin{pmatrix}
\phi'_1 & \phi'_2
\end{pmatrix}
\begin{pmatrix}
\mu'_{11} & \mu'_{12}\\
\mu'_{12} & \mu'_{22}
\end{pmatrix}
\begin{pmatrix}
\phi'_1\\
\phi'_2
\end{pmatrix}
+ \begin{pmatrix}
\phi'_1 & \phi'_2
\end{pmatrix}
\begin{pmatrix}
b'_{11} & b'_{12}\\
b'_{21} & b'_{22}
\end{pmatrix}
\begin{pmatrix}
[\Phi_8 \Phi_8]\\
[\Phi]^2
\end{pmatrix}
\end{equation}
includes all renormalizable terms respecting the $\mathrm{SU}(3)$ flavor symmetry and the R-symmetry. In addition, we introduce
\begin{equation}
W_2 = \mu_0 [\Phi \Phi]
= \mu_0 [\Phi_8 \Phi_8]
+ \frac{1}{3} \mu_0 [\Phi]^2,
\end{equation}
which slightly breaks the R-symmetry with a small $\mu_0$. This small R-symmetry breaking may be resolved by promoting $\mu_0$ to an R-charge $1$ field, which gets a VEV from an extra sector beyond the scope of our current discussion. Both $W_1$ and $W_2$ combine into the superpotential
\begin{equation}
W(\Phi, \phi'_1, \phi'_2) = W_1 + W_2,
\end{equation}
which contributes to the full superpotential as the $W^{(\Phi)}$ part of Eq.~\eqref{eq:1-03}.
With a redefinition of $\{ \phi'_1, \phi'_2 \}$, it is possible to make the quadratic term in $W_1$ off-diagonal. The coefficient matrix
\begin{equation}
M = \begin{pmatrix}
\mu'_{11} & \mu'_{12} \\
\mu'_{12} & \mu'_{22}
\end{pmatrix}
\end{equation}
is a $2 \times 2$ complex symmetric matrix. It can first be diagonalized by an Autonne-Takagi factorization~\cite{Autonne:1915a, Takagi:1924a}
\begin{equation}
U^\text{T} M U = D
= \begin{pmatrix}
\mu_1 & 0 \\
0 & \mu_2
\end{pmatrix}
\end{equation}
with a unitary matrix $U$. One can then define a transformation matrix
\begin{equation}
P = U D^{- \frac{1}{2}} O C, \quad
\text{where} \
O \in \mathrm{O}(2), \
C = \sqrt{\frac{\mu_3}{2}}
\begin{pmatrix}
1 & 1 \\
i & - i
\end{pmatrix},
\end{equation}
so that
\begin{equation}
P^\text{T} M P = \begin{pmatrix}
0 & \mu_3 \\
\mu_3 & 0
\end{pmatrix}.
\end{equation}
Note that $P$ is not unique because $O$ can be an arbitrary orthogonal matrix. A simple choice is
\begin{equation}
\begin{gathered}
P = \begin{pmatrix}
\mu_\pm & - \mu'_{22}\\
- \mu'_{11} & \mu_\pm
\end{pmatrix}
\begin{pmatrix}
\frac{\mu_3}{2 \Delta} & 0\\
0 & \frac{1}{\mu_\pm}
\end{pmatrix}
= \begin{pmatrix}
\frac{\mu_3 \mu_\pm}{2 \Delta} & \frac{- \mu'_{22}}{\mu_\pm}\\
- \frac{\mu_3 \mu'_{11}}{2 \Delta} & 1
\end{pmatrix},\\
\text{where} \
\Delta = (\mu'_{12})^2 - \mu'_{11} \mu'_{22}, \
\mu_\pm = \mu'_{12} \pm \sqrt{\Delta}.
\end{gathered}
\end{equation}
With the redefinition
\begin{equation}
\begin{pmatrix}
\phi'_1\\
\phi'_2
\end{pmatrix}
= P
\begin{pmatrix}
\phi_1\\
\phi_2
\end{pmatrix}, \quad
\begin{pmatrix}
b'_{11} & b'_{12}\\
b'_{21} & b'_{22}
\end{pmatrix}
= \left ( P^\text{T} \right )^{- 1}
\begin{pmatrix}
b_{11} & b_{12}\\
b_{21} & b_{22}
\end{pmatrix},
\end{equation}
and replacing $\Phi_8$ with $\Phi$ and $[\Phi]$, the superpotential is simplified to
\begin{equation}
W(\Phi, \phi_1, \phi_2) = \mu_0 [\Phi \Phi]
+ \mu_3 \phi_1 \phi_2
+ \begin{pmatrix}
\phi_1 & \phi_2
\end{pmatrix}
\begin{pmatrix}
b_{11} & b_{12}\\
b_{21} & b_{22}
\end{pmatrix}
\begin{pmatrix}
[\Phi \Phi] - \frac{1}{3} [\Phi]^2\\
[\Phi]^2
\end{pmatrix}.
\end{equation}
With proper choices of parameters $b_{11}$, $b_{12}$, $b_{21}$, $b_{22}$, $\mu_0$ and $\mu_3$, we expect that a non-zero Hermitian matrix-valued VEV of $\Phi$ can be obtained to give the SM fermion masses through Eq.~\eqref{eq:1-05}.
We will look for SUSY vacua which satisfy the F-flatness condition. According to the last two subequations of Eq.~\eqref{eq:1-04}, the VEV's of the first derivatives of $W$ with respect to $\Phi$, $\phi_1$ and $\phi_2$ must vanish:
\begin{align}
0 & = \partial_\Phi W
= 2 \mu_0 \Phi
+ 2 (b_{11} \phi_1 + b_{21} \phi_2) \Phi
+ 2 \left ( \left ( b_{12} - \frac{1}{3} b_{11} \right ) \phi_1
+ \left ( b_{22} - \frac{1}{3} b_{21} \right ) \phi_2
\right ) [\Phi] \mathbf{I}_{3 \times 3}, \label{eq:3-01}\\
0 & = \partial_{\phi_1} W
= \mu_3 \phi_2
+ b_{11} [\Phi \Phi]
+ \left ( b_{12} - \frac{1}{3} b_{11} \right ) [\Phi]^2, \label{eq:3-02}\\
0 & = \partial_{\phi_2} W
= \mu_3 \phi_1
+ b_{21} [\Phi \Phi]
+ \left ( b_{22} - \frac{1}{3} b_{21} \right ) [\Phi]^2. \label{eq:3-03}
\end{align}
Eq.~\eqref{eq:3-02} and Eq.~\eqref{eq:3-03} give
\begin{align}
\phi_1 &= - \frac{1}{\mu_3}
\left ( b_{21} [\Phi \Phi]
+ \left ( b_{22} - \frac{1}{3} b_{21} \right ) [\Phi]^2
\right ),\\
\phi_2 &= - \frac{1}{\mu_3}
\left ( b_{11} [\Phi \Phi]
+ \left ( b_{12} - \frac{1}{3} b_{11} \right ) [\Phi]^2
\right ).
\end{align}
Replacing $\phi_1$ and $\phi_2$ with these expressions involving $[\Phi]$ and $[\Phi \Phi]$, Eq.~\eqref{eq:3-01} becomes
\begin{equation}
\begin{split}
0 = \mbox{} & 2
\left ( \mu_0
- \frac{1}{\mu_3}
\left ( 2 b_{11} b_{21} [\Phi \Phi]
+ \left ( b_{11} b_{22}
+ b_{12} b_{21}
- \frac{2}{3} b_{11} b_{21}
\right ) [\Phi]^2
\right )
\right ) \Phi\\
& - \frac{2}{\mu_3}
\left ( \left ( b_{11} b_{22}
+ b_{12} b_{21}
- \frac{2}{3} b_{11} b_{21}
\right ) [\Phi \Phi] \right.\\
& \hphantom{- \frac{2}{\mu_3} \left ( \vphantom{\frac{2}{3}} \right.}
\left. \mbox{}
+ \left ( \frac{2}{9} b_{11} b_{21}
+ 2 b_{12} b_{22}
- \frac{2}{3} b_{11} b_{22}
- \frac{2}{3} b_{12} b_{21}
\right ) [\Phi]^2
\right ) [\Phi] \mathbf{I}_{3 \times 3}. \label{eq:3-04}
\end{split}
\end{equation}
The VEV's of $\Phi$ and $\mathbf{I}_{3 \times 3}$ are linearly independent for a non-zero VEV of $\Phi$ without fine-tuning. So their corresponding coefficients in Eq.~\eqref{eq:3-04} must vanish:
\begin{align}
0 &= \mu_0
- \frac{1}{\mu_3}
\left ( 2 b_{11} b_{21} [\Phi \Phi]
+ \left ( b_{11} b_{22}
+ b_{12} b_{21}
- \frac{2}{3} b_{11} b_{21}
\right ) [\Phi]^2
\right ), \label{eq:3-05}\\
0 &= \left ( b_{11} b_{22}
+ b_{12} b_{21}
- \frac{2}{3} b_{11} b_{21}
\right ) [\Phi \Phi]
+ \left ( \frac{2}{9} b_{11} b_{21}
+ 2 b_{12} b_{22}
- \frac{2}{3} b_{11} b_{22}
- \frac{2}{3} b_{12} b_{21}
\right ) [\Phi]^2. \label{eq:3-06}
\end{align}
With the redefinition of parameters
\begin{equation}
a_{02} = b_{11} b_{21}, \quad
a_2 = b_{11} b_{22} + b_{12} b_{21}, \quad
a_4 = b_{12} b_{22}, \label{eq:3-07}
\end{equation}
Eq.~\eqref{eq:3-06} gives the modified version of the Koide formula:
\begin{equation}
K = \frac{[\Phi \Phi]}{[\Phi]^2}
= \frac{2}{3} \times \frac{a_{02} / 3 - a_2 + 3 a_4}{2 a_{02} / 3 - a_2}
= \frac{2}{3} \times
\left ( 1 - \frac{a_{02} - 9 a_4}{2 a_{02} - 3 a_2} \right ). \label{eq:3-08}
\end{equation}
And Eq.~\eqref{eq:3-05} gives
\begin{equation}
\mu_0 \mu_3 = 2 a_{02} [\Phi \Phi]
- \left ( \frac{2}{3} a_{02} - a_2 \right ) [\Phi]^2
= \left ( 2 a_{02} \left ( K - \frac{1}{3} \right ) + a_2 \right ) [\Phi]^2. \label{eq:3-09}
\end{equation}
The coefficient of $[\Phi]^2$ is zero only if parameters are fine-tuned to satisfy $a_2^2 = 4 a_{02} a_4$. Without such fine-tuning, a non-zero VEV of $\Phi$ requires non-zero $\mu_0$ and $\mu_3$.
The modified formula Eq.~\eqref{eq:3-08} contains three free parameters $a_{02}$, $a_2$, and $a_4$ after the parameter redefinition Eq.~\eqref{eq:3-07}. Since $K$ only depends on ratios of the parameters, there are only two effective parameters in the modified formula. Note that the previous formula Eq.~\eqref{eq:2-08} from the scalar potential model is identical to the formula Eq.~\eqref{eq:3-08} from the Yukawaon model if we make the identification $a_{02} \to a_0 / 2 + a_{02}$ in the latter.
\section{Concluding remarks} \label{sec:4}
In this work, we derived Eq.~\eqref{eq:2-08} or Eq.~\eqref{eq:3-08}, the modified version of the Koide formula, from a flavor nonet scalar field $\Phi$ in either a scalar potential model or a Yukawaon model, with all terms respecting symmetries included in the scalar potential $V(\Phi)$ or the superpotential $W(\Phi)$. In the scalar potential model, a $\mathbb{Z}_2$ symmetry is imposed in addition to the $\mathrm{SU}(3)$ flavor symmetry. Linear and cubic terms in $V$ are then eliminated by the $\mathbb{Z}_2$ symmetry. Quadratic terms are assumed to have a Higgs-like form with a negative mass-square $- \mu^2$, and one coefficient $a_1$ must vanish in order to have a non-zero VEV of $\Phi$. From Eq.~\eqref{eq:2-09} we see that without fine-tuning, the magnitude of the VEV of $\Phi$ is proportional to $\mu$. In the Yukawaon model, an R-symmetry is imposed in addition to the $\mathrm{SU}(3)$ flavor symmetry. The nonet scalar field $\Phi$ is promoted to a nonet chiral superfield, but later assumed to get a non-zero Hermitian matrix-valued VEV\@. The coefficients $a_0$ and $a_1$ in the scalar potential model naturally disappear in the Yukawaon model. From Eq.~\eqref{eq:3-09} we see that without fine-tuning, the magnitude of the VEV of $\Phi$ is proportional to $\sqrt{\mu_0 \mu_3}$. In both models, the modified formula is obtained with only two effective parameters. The fact that $\mu_0$ characterizes R-symmetry breaking in the Yukawaon model indicates that there may be some deep relation between the SM Yukawa coupling terms and R-symmetry breaking dynamics in the hidden sector, which is worth exploring in the future.
As mentioned in~\cite{Koide:2018gdm}, $V_2$ or $W_2$ is introduced as a special combination of two irreducible terms proportional to $[\Phi_8 \Phi_8]$ and $[\Phi]^2$, respectively. In the scalar potential model, the two terms have different renormalization group flows, which can modify the VEV of $\Phi$ and invalidate our derivation of the modified formula. In the Yukawaon model, the superpotential does not get radiative corrections because of the non-renormalization theorem in SUSY. Thus the VEV of $\Phi$ and the modified formula are protected from radiative corrections.
The two effective parameters $a_{02} / a_2$ and $a_4 / a_2$ in the modified formula Eq.~\eqref{eq:3-08} modify the Koide's character from its original value $K = 2 / 3$ when these parameters take non-zero values. Such a modification may serve as an alternative way, compared to previous approaches using gauged flavor symmetries~\cite{Sumino:2008hu, Sumino:2008hy, Sumino:2009bt, Koide:2012kn, Koide:2014doa, Koide:2016bte, Koide:2016qeq}, to cancel the possible QED correction to $K$. The modified value of $K$ can also fit all possible values of $K$ for charged leptons, up quarks and down quarks. For an arbitrary set of fermion masses, the Cauchy-Schwarz inequality and the positiveness of masses lead to
\begin{equation}
\frac{1}{3} \le K
\le 1,
\end{equation}
which corresponds to the parameter range
\begin{equation}
- \frac{1}{2} \le \frac{a_{02} - 9 a_4}{2 a_{02} - 3 a_2}
\le \frac{1}{2}.
\end{equation}
The value $K = 2 / 3$ for charged leptons corresponds to $a_{02} = 9 a_4$, which covers $a_{02} = a_4 = 0$ as a special case. From the data of quark masses~\cite{Zyla:2020zbs}
\begin{align}
m_u &= 2.16^{+ 0.49}_{- 0.26} \ \text{MeV} / c^2,&
m_d &= 4.67^{+ 0.48}_{- 0.17} \ \text{MeV} / c^2,\\
m_c &= 1.27 \pm 0.02 \ \text{GeV} / c^2,&
m_s &= 93^{+ 11}_{- 5} \ \text{MeV} / c^2,\\
m_t &= 172.76 \pm 0.30 \ \text{GeV} / c^2,&
m_b &= 4.18^{+ 0.03}_{- 0.02} \ \text{GeV} / c^2,
\end{align}
with their 1-$\sigma$ errors, $K$ is calculated to be
\begin{align}
K_{\text{up quarks}} &= \frac{m_u + m_c + m_t}{\left ( \sqrt{m_u} + \sqrt{m_c} + \sqrt{m_t} \right )^2}
= 0.8490^{+ 0.0014}_{- 0.0017}
= \frac{2}{3} \times (1.2735^{+ 0.0021}_{- 0.0025}),\\
K_{\text{down quarks}} &= \frac{m_d + m_s + m_b}{\left ( \sqrt{m_d} + \sqrt{m_s} + \sqrt{m_b} \right )^2}
= 0.730^{+ 0.006}_{- 0.011}
= \frac{2}{3} \times (1.095^{+ 0.008}_{- 0.016}).
\end{align}
The mean values of $K$ from experiments can be fitted with $a_2 = 1.885 a_{02} - 10.97 a_4$ for up quarks and $a_2 = 4.168 a_{02} - 31.52 a_4$ for down quarks. We expect that neutrino masses may also be interpreted by considering a seesaw mechanism together with the Yukawaon model, and the experimental verification depends on whether neutrino masses are in the normal or inverted hierarchy. Thus our scalar potential or superpotential constructed from symmetries may provide a natural interpretation of the SM fermion mass spectra. It is still a challenge to explain how the parameters $a_{02}$, $a_2$ and $a_4$ are set to give the correct mass formula for each sector.
\section*{Acknowledgement}
The authors thank Yoshio Koide, Jinmian Li, Bo-Qiang Ma and Zhong-Qi Ma for helpful discussions. This work is supported by the National Natural Science Foundation of China under grant 11305110.
\section{Introduction}
CP violation in the Standard Model (SM) was first measured in the Kaon sector. The CP violating parameter measured in the famous Cronin-Fitch experiment \cite{Christenson:1964fg} is $\varepsilon_K$, which describes the mixing between CP and mass eigenstates of the neutral Kaon system. The parameter $\varepsilon_K$ measures the so-called CP violation through mixing. On the other hand, Kaons can also decay through direct CP violation. This CP violating decay is parametrized by the quantity $\varepsilon'$. The ratio of the two CP violating parameters $\varepsilon'/\varepsilon$, where we suppress $K$ in $\varepsilon_K$, is also accessible experimentally, namely through a comparison of the $K_L\to\pi^+\pi^-$ and $K_L\to\pi^0\pi^0$ decay widths. It has been measured by the NA48 \cite{Batley:2002gn} and KTeV \cite{AlaviHarati:2002ye,
Abouzaid:2010ny} collaborations and leads to an experimental world average of
\hspace{0.1cm}
\begin{equation}\label{eq:SMexp}
(\varepsilon'/\varepsilon)_\text{exp} = (16.6 \pm 2.3) \times 10^{-4}\,.
\end{equation}
\hspace{2cm}
\noindent
The SM estimates for this observable depend on the long-distance (LD) treatment used to compute the $K\to\pi\pi$ hadronic matrix elements. As can be seen from Tab.~\ref{tab:SMpred}, the SM prediction differs among the three types of LD approaches, and consequently there is some controversy over which treatment to use. The results obtained with Lattice QCD (LQCD) inputs as well as the ones in the Dual QCD (DQCD) approach are in good agreement with each other and exhibit about a $2.9\sigma$ deviation from the experimental value in eq.~\eqref{eq:SMexp}. The Chiral Perturbation Theory ($\chi$PT) approach leads to a value consistent with the experimental result, however with large uncertainties. Moreover, the lower end of its error band is consistent with the values obtained using Lattice or DQCD, and therefore the situation is not conclusive.
\noindent
Taking the discrepancy between the SM prediction and the experimental value for granted, it is interesting to study beyond the SM (BSM) effects that could explain such a deviation. In the following section I will review the SM prediction for $\varepsilon'/\varepsilon$ based on the DQCD approach. In Sec.~\ref{sec:BSMME} the computation of the BSM matrix elements relevant for $\varepsilon'/\varepsilon$ is discussed. In Sec.~\ref{sec:Masterform} a master formula for BSM effects in $\varepsilon'/\varepsilon$ is presented and in Sec.~\ref{sec:SMEFT} the relation between $\varepsilon'/\varepsilon$ and the SM effective theory (SMEFT) is discussed, before I summarize in Sec.~\ref{sec:concl}.
\section{$\varepsilon'/\varepsilon$ in the SM}
To describe $\varepsilon'/\varepsilon$ in a model-independent way, we use the effective Hamiltonian of three quark flavours which generates a $\Delta S=1$ transition. It consists of local operators multiplied by their corresponding Wilson coefficients and can be written as follows \cite{Buras:1991jm,Buras:1993dy,Ciuchini:1992tj,Ciuchini:1993vr}:
\begin{align}\label{eq:SMham}
\mathcal{H}_{\Delta S = 1}^{(3)} &
= - \sum_i C_i({\mu}) \, O_i\,.
\end{align}
This Hamiltonian is invariant under the unbroken gauge-group $SU(3)_c\times U(1)_{\rm em}$ and contains all the fields lighter than the charm quark as dynamical degrees of freedom. The minus sign is chosen to be in accord with the SMEFT conventions.
In the SM, the sum in eq.~\eqref{eq:SMham} contains seven four-quark operators consisting of $(V\pm A)$ currents as well as the chromomagnetic operator. The four-quark operators are generated through tree-level and box diagrams containing a $W$ boson and a gluon, as well as from QCD and Electroweak (EW) penguin diagrams. The seven effective operators can be written as linear combinations of the following vector-vector operators:
\begin{table}[tbp]
\centering
\begin{tabular}{cccc}
\toprule
Long-distance & SM prediction &
Group &
Ref.
\\
\midrule
Lattice & $(1.4 \pm 6.9) \times 10^{-4}$ &
RBC-UKQCD & \cite{Blum:2015ywa,Bai:2015nea}\\
& $(1.9 \pm 4.5) \times 10^{-4}$ &
Buras/Gorbahn/Jamin/J\"ager & \cite{Buras:2015yba}\\
& $(1.1 \pm 5.1) \times 10^{-4}$ &
Kitahara/Nierste/Tremper & \cite{Kitahara:2016nld}\\
\midrule
DQCD & $<(6.0\pm 2.4) \times 10^{-4}$ &
Buras/G\'erard & \cite{Buras:2015xba}\\
&$\qquad$ if $B_6<B_8 =B_8 \,(\text{LQCD})$ & &\\
\midrule
$\chi$PT
& $(15 \pm 7) \times 10^{-4}$ &
Gisbert/Pich & \cite{Gisbert:2017vvj}\\
\bottomrule
\end{tabular}
\captionsetup{width=0.9\linewidth}
\caption{SM estimates for $\varepsilon'/\varepsilon$, using different treatments of the long-distance effects.
}\label{tab:SMpred}
\end{table}
\begin{align}\label{eq:vecops}
O_{VAB}^q &
= (\bar s^i \gamma_{\mu} P_A d^i) (\bar q^j \gamma^{\mu} P_B q^j) \,,
&
\widetilde{O}_{VAB}^q &
= (\bar s^i \gamma_{\mu} P_A d^j) (\bar q^j \gamma^{\mu} P_B q^i) \,,
\end{align}
\hspace{2cm}
\noindent
where $P_{A,B}$ $(A,B=L,R)$ denote the chirality projection operators, $i,j$ are colour indices and $q=u,d,s$. The chromomagnetic operator reads:
\begin{align}\label{eq:chromo}
O_{8g} &
= m_s(\bar s \, \sigma^{\mu\nu} T^A P_{L} d) \, G^A_{\mu\nu} \,,
\end{align}
\hspace{2cm}
\noindent
with $\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]$, $\,T^A$ being the $SU(3)_c$ generators and $G^A_{\mu\nu}$ the gluonic field-strength tensor.
Having the Hamiltonian of eq.~\eqref{eq:SMham} at hand allows to compute the $\varepsilon'/\varepsilon$ observable, which is given by:
\begin{align}\label{eq:epspr}
\frac{\varepsilon'}{\varepsilon} &
= -\frac{\omega}{\sqrt{2}|\varepsilon_K|}
\left[ \frac{\text{Im}A_0}{\text{Re}A_0}
- \frac{\text{Im}A_2}{\text{Re}A_2} \right]\,.
\end{align}
\newline
\noindent
Here $\omega = {\text{Re}A_2}/{\text{Re}A_0} \approx 1/22$, reflecting the $\Delta I =1/2$ rule, and $\varepsilon_K$ is the Kaon mixing parameter mentioned before. The expression is therefore determined by the isospin amplitudes $A_{0,2}$ defined by
\begin{align}
A_{0,2} &
= \Big\langle (\pi\pi)_{I=0,2}\, \Big|\; \mathcal{H}_{\Delta S = 1}^{(3)}({\mu})
\;\Big|\, K \Big\rangle \,.
\end{align}
\hspace{2cm}
\noindent
After having fixed the Wilson coefficients of $\mathcal{H}_{\Delta S = 1}^{(3)}$ by performing a matching procedure, the only remaining task is to compute the hadronic matrix elements of the local operators in eq.~\eqref{eq:SMham}. In the following subsection, we will look into this computation by employing the DQCD approach.
\subsection{Long-distance effects in the DQCD approach}
The DQCD approach is based on the large $N_c$ limit, first studied by 't Hooft \cite{'tHooft:1973jz,'tHooft:1974hx} and Witten \cite{Witten:1979kh,Treiman:1986ep} for strong interactions.
To study hadronic weak decays, the following truncated Chiral Lagrangian is used \cite{Bardeen:1986vp,Bardeen:1986uz,Bardeen:1986vz}:
\begin{equation}\label{eq:chiralLag}
\mathcal{L}_{tr}=\frac{F^2}{8}\left[\text{Tr}(D^\mu UD_\mu U^\dagger)+r\text{Tr}(mU^\dagger+\text{h.c.})-\frac{r}{\Lambda^2_\chi}\text{Tr}(mD^2U^\dagger+\text{h.c.})\right] {\,,}
\end{equation}
with the unitary chiral matrix and the octet of lowest-lying pseudoscalars
\begin{equation}
U=\exp(i\sqrt{2}\frac{\Pi}{F}), \qquad
\Pi=\sum_{\alpha=1}^8\lambda_\alpha\pi^\alpha{\,.}
\end{equation}
The Lagrangian depends on the quark mass matrix and the chiral enhancement factor
\begin{equation}
m=\text{diag}(m_u,m_d,m_s)\,, \qquad r=\frac{2m_K^2}{m_s+m_d}\,.
\end{equation}
\hspace{2cm}
\noindent
It contains a hadronic mass scale $\Lambda_\chi$ corresponding to higher resonances.
\noindent
Employing now the large $N_c$ limit, the Lagrangian of eq.~\eqref{eq:chiralLag} can be matched onto the regular QCD Lagrangian containing quark and gluon fields only. In the chiral limit and at order $\mathcal{O}(p^2)$ the quark currents are then given by:
\begin{equation}\label{eq:mesrep}
(\gamma^\mu P_L)^{ba}=i \frac{F^2}{4} (\partial^\mu U U^\dagger)^{ab}, \qquad (P_L)^{ba}=- \frac{F^2}{8} r (U)^{ab}\,, \qquad (\sigma^{\mu\nu}P_L)^{ba} = 0\,,
\end{equation}
\hspace{2cm}
\noindent
for the flavour indices $a,b$. The chirality flipped versions are obtained by the replacement $U\leftrightarrow U^\dag$. These relations allow to express the local operators in terms of the lowest-lying mesons and therefore to compute their corresponding matrix elements. Furthermore, this framework allows to study the renormalization group (RG) evolution of the matrix elements up to a scale of $\mathcal{O}(1\text{GeV})$ until where the theory is valid. This RG evolution is dubbed meson evolution.
\noindent
The DQCD approach was first employed in the context of $K\to\pi\pi$ matrix elements in \cite{Bardeen:1986vp,Bardeen:1986vz,Buras:2014maa}. Its validity is confirmed by results obtained within LQCD. Among them is the correctly predicted hierarchy of the bag factors for the SM operators $Q_6$ and $Q_8$ \cite{Buras:2015xba}
\begin{equation}\label{BG}
B_6^{(1/2)} \leq B_8^{(3/2)} < 1 \, .
\end{equation}
\hspace{2cm}
\noindent
Also the explicit calculations for $B_6^{(1/2)}(m_c),\,B_8^{(3/2)}(m_c)$ are in good agreement with the Lattice results \cite{Bai:2015nea,Blum:2015ywa}. Not only for the SM four-quark operators but also for the matrix element of the chromomagnetic operator of eq.~\eqref{eq:chromo}, DQCD \cite{Buras:2018evv} agrees well with LQCD \cite{Constantinou:2017sgv}. Furthermore, the impact of final state interactions has been analysed within the DQCD approach in \cite{Buras:2016fys} and has been shown to be less important for $\varepsilon'/\varepsilon$ than for the $\Delta I=1/2$ rule, and less important than meson evolution which is responsible for (\ref{BG}).
Finally, with the help of meson evolution, DQCD also allows one to understand
the pattern of the BSM $K^0-\bar K^0$ mixing matrix elements \cite{Buras:2018lgu} obtained by LQCD \cite{Carrasco:2015pra,Jang:2015sla,Boyle:2017ssm}.
More information on DQCD can be found in the original papers and in the reviews in \cite{Buras:2018hze,Buras:2014maa}.
\section{BSM matrix elements for $\varepsilon'/\varepsilon$}\label{sec:BSMME}
Generalizing the SM Hamiltonian by allowing for all possible Lorentz- and gauge invariant operators, one finds that there are 13 additional four-quark operators to be added to $\mathcal{H}_{\Delta S = 1}^{(3)}$. Three of them are vector-vector operators which are independent of the seven operators generated within the SM. They can also be written as linear combinations of the operators in eq.~\eqref{eq:vecops}. The other BSM operators consist of scalar or tensor bilinears and can be written as linear combinations of the following operators:
\begin{align}
O_{SAB}^q &
= (\bar s^i P_A d^i) (\bar q^jP_B q^j) \,,
&
\widetilde{O}_{SAB}^q &
= (\bar s^i P_A d^j) (\bar q^j P_B q^i) \,, \\
O_{TA}^q &
= (\bar s^i \sigma_{\mu\nu} P_A d^i) (\bar q^j\sigma^{\mu\nu}P_A q^j) \,,
&
\widetilde{O}_{TA}^q &
= (\bar s^i \sigma_{\mu\nu} P_A d^j) (\bar q^j\sigma^{\mu\nu}P_A q^i) \,,
\end{align}
\noindent
for $q=u,d,s$. Two equivalent bases for the 13 BSM operators can be found in \cite{Aebischer:2018rrz}.
\noindent
The $K\to\pi\pi$ matrix elements of these BSM operators have been calculated for the first time in \cite{Aebischer:2018rrz}, using the DQCD approach. They were first computed at the factorization scale $\mu_F$, at which the meson representation of eq.~\eqref{eq:mesrep} holds. The factorization scale corresponds to very low momenta of $\mathcal{O}(p^2\approx 0)$. Since the observable $\varepsilon'/\varepsilon$ is usually computed at the charm scale $\mu_c=\mathcal{O}(m_c)$, the running of the matrix elements has to be performed from the factorization scale up to the scale $\mu_c$, via the meson evolution for scales below $1\, {\rm GeV}$ followed by the usual QCD evolution.
The explicit expressions and numerical values of all the matrix elements at the charm scale, as well as further details of the computation, can be found in \cite{Aebischer:2018rrz}. Here, we summarize the results of the analysis only qualitatively. For the different types of BSM operators, one finds for their respective matrix elements at the factorization scale $\mu_F$ and at the charm scale $\mu_c$:
\newline
\begin{itemize}
\item Vector operators: small at $\mu_F$ and at $\mu_c$.
\item Scalar operators: large at $\mu_F$, moderate at $\mu_c$.
\item Tensor operators: zero at $\mu_F$, large at $\mu_c$.
\item Scalar/Tensor operators containing three $s$ quarks: zero at $\mu_F$ and at $\mu_c$.
\end{itemize}
\section{Master formula for BSM effects in $\varepsilon'/\varepsilon$}\label{sec:Masterform}
Knowing the matrix elements for the complete set of local effective operators relevant for $\varepsilon'/\varepsilon$ allows for a model-independent analysis of the BSM effects. In this section we provide the means for such an analysis in the form of a master formula for $\varepsilon'/\varepsilon$ \cite{Aebischer:2018quc}. For this purpose, we split the observable in the following way:
\begin{align}
\frac{\varepsilon'}{\varepsilon} &
= \left(\frac{\varepsilon'}{\varepsilon}\right)_\text{SM}
+ \left(\frac{\varepsilon'}{\varepsilon}\right)_\text{BSM} \,,
\end{align}
and focus on the BSM part. Since many NP scenarios contain heavy degrees of freedom with a mass scale above the EW scale, it is reasonable to provide a master formula evaluated at the EW scale $\mu_W$. Consequently, a NP analysis of a particular model only requires a simple tree-level matching at $\mu_W$. To evaluate eq.~\eqref{eq:epspr} at the EW scale, the RG evolution of the matrix elements from $\mu_c$ up to $\mu_W$ has to be taken into account \cite{Aebischer:2017gaw,Jenkins:2017dyc}. In the running up to the EW scale new operators containing $c$ and $b$ quarks will be generated through QCD and QED mixing, leading to the more general Hamiltonian of five flavours $\mathcal{H}_{\Delta S = 1}^{(5)}$. The master formula will therefore depend on the Wilson coefficients of all such effective operators. Setting the parameter $\varepsilon_K$ as well as $\text{Re}(A_0)$ and $\text{Re}(A_2)$ appearing in eq.~\eqref{eq:epspr} to their experimental values \cite{Cirigliano:2011ny} one finds the following master formula:
\begin{align}
\label{eq:master}
\left(\frac{\varepsilon'}{\varepsilon}\right)_\text{BSM} &
= \sum_i P_i(\mu_W) ~\text{Im}\left[ C_i(\mu_W) - C^\prime_i(\mu_W)\right]
\times (1\,\text{TeV})^2,
\end{align}
with
\begin{align}
\label{eq:master2}
P_i(\mu_W) & = \sum_{j} \sum_{I=0,2} p_{ij}^{(I)}(\mu_W, \mu_c)
\,\left[\frac{\langle O_j (\mu_c) \rangle_I}{\text{GeV}^3}\right].
\end{align}
Here, the $p_{ij}^{(I)}$ contain the evolution from $\mu_c$ to $\mu_W$. The matrix elements $\langle O_j (\mu_c) \rangle_I$ are taken from LQCD \cite{Blum:2015ywa,Bai:2015nea} for the SM operators and from DQCD \cite{Aebischer:2018rrz} for the BSM operators. The crucial objects determining the impact of each Wilson coefficient on $\varepsilon'/\varepsilon$ are the $P_i$ values. These were obtained using the public codes \texttt{wcxf} \cite{Aebischer:2017ugx} for the basis change, \texttt{wilson} \cite{Aebischer:2018bkb} for the RG running and \texttt{flavio} \cite{Straub:2018kue} to compute $\varepsilon'/\varepsilon$ at the EW scale. The $P_i$ values of the full set of operators contained in $\mathcal{H}_{\Delta S = 1}^{(5)}$ can be grouped into five classes $(A-E)$, which are listed in Tab.~\ref{tab:Pis}. The operators either give a direct BSM contribution to $\varepsilon'/\varepsilon$ through their matrix element (ME) or contribute to the observable indirectly through RG mixing. For further details and the explicit values of the $P_i$'s as well as their respective uncertainties we refer to \cite{Aebischer:2018quc}.
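As a simple illustration of how the master formula of eq.~\eqref{eq:master} is evaluated in practice, the following Python sketch sums the contributions of a few operators. The operator keys, the $P_i$ values and the imaginary parts of the Wilson coefficients used here are purely hypothetical placeholders; the actual numbers are tabulated in \cite{Aebischer:2018quc}.
\begin{verbatim}
# Illustrative evaluation of the master formula (hypothetical inputs).
# P_i values at mu_W (dimensionless placeholders).
P = {"VLL_u": 1.2e2, "TLL_u": -3.4e3, "SLR_u": 5.6e2}
# Im[C_i - C_i'] at mu_W in units of TeV^-2 (hypothetical NP input).
imC = {"VLL_u": 1.0e-7, "TLL_u": 2.0e-8, "SLR_u": 0.0}
# (eps'/eps)_BSM = sum_i P_i * Im[C_i - C_i'] * (1 TeV)^2;
# the factor (1 TeV)^2 cancels the TeV^-2 units of the coefficients.
eps_bsm = sum(P[i] * imC[i] for i in P)
print("(eps'/eps)_BSM = %.2e" % eps_bsm)
\end{verbatim}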
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{ccccc}
\hline
Class & Type & $O_i$ & $P_i$ & Impact \\
\hline
A & SM
& $O_{VAB}^{u,d}, \widetilde O_{VAB}^{u,d}, O_{SLR}^{d}$ & can be large & ME \\
&
& $O_{VAB}^{s,c,b}, \widetilde O_{VAB}^{c,b}, O_{SLR}^{s}$ & small & Mixing \\
\hline
B & Chromomagnetic & $O_{8g}$ & small & Mixing \\
& Scalar: $s ,c, b$ & $O_{SLL}^{s,c,b}, \widetilde O_{SLL}^{c,b} $ & small & Mixing \\
& Tensor: $s ,c, b$ & $O_{TLL}^{s,c,b}$ & small & Mixing \\
\hline
C & Scalar: $u$ & $O_{SLL}^{u}, \widetilde O_{SLL}^{u} $ & small & ME \\
& Tensor: $u$ & $O_{TLL}^{u}, \widetilde O_{TLL}^{u} $ & large & ME \\
\hline
D & Scalar: $d$ & $O_{SLL}^{d} $ & small & ME \\
& Tensor: $d$ & $O_{TLL}^{d} $ & large & ME \\
\hline
E & Scalar LR: $u$ & $O_{SLR}^{u}, \widetilde O_{SLR}^{u} $ & can be large & ME \\
\hline
\end{tabular}
\captionsetup{width=0.9\linewidth}
\caption{$P_i$ values of the effective operators relevant for $\varepsilon'/\varepsilon$ at the EW scale, grouped into five classes (A-E). The operators either contribute via their matrix element (ME) or through mixing effects to the observable.}
\label{tab:Pis}
\end{table}
\section{$\varepsilon'/\varepsilon$ meets SMEFT}\label{sec:SMEFT}
Assuming that NP manifests itself at scales much higher than the EW scale, the SMEFT \cite{Buchmuller:1985jz,Grzadkowski:2010es} constitutes a valid low-energy effective theory of such a NP scenario. It is therefore reasonable to adopt the SMEFT as an intermediate theory between any NP model and the SM. This procedure allows one to describe NP effects in a model-independent way. The complete tree-level matching of the SMEFT onto the weak effective theory is performed in \cite{Aebischer:2015fzz,Jenkins:2017jig}, and in \cite{Aebischer:2018csl} all the SMEFT operators relevant for $\varepsilon'/\varepsilon$ have been identified. These are:
\begin{itemize}
\item vector four-quark operators: $\mathcal{O}_{qq}^{(1,3)}, \mathcal{O}_{qu}^{(1,8)},\mathcal{O}_{qd}^{(1,8)}, \mathcal{O}_{ud}^{(1,8)} ,\mathcal{O}_{dd}\,,$
\item scalar four-quark operators: $\mathcal{O}_{quqd}^{(1,8)}\,,$
\item modified W and Z couplings: $\mathcal{O}_{Hq}^{(1,3)},\mathcal{O}_{Hd},\mathcal{O}_{Hud}\,,$
\item chromomagnetic dipole operator: $\mathcal{O}_{dG}$.
\end{itemize}
An effect in $\varepsilon'/\varepsilon$ stemming from SMEFT operators can result in correlations with other observables. This occurs for operators containing a quark doublet after changing from the flavour to the interaction basis, or through flavour dependent RG mixing effects. In \cite{Aebischer:2018csl} correlations of $\varepsilon'/\varepsilon$ to $\Delta S =2$ and $\Delta C=1, 2$ processes, semileptonic Kaon decays, the electroweak $T$ parameter, collider constraints as well as the neutron electric dipole moment (EDM) have been analysed. Furthermore, several tree-level mediator scenarios have been studied, which are summarised in Tab.~\ref{tab:treemed}. Further details on correlations of $\varepsilon'/\varepsilon$ and the observables mentioned here can be found in \cite{Aebischer:2018csl}.
\section{Summary}\label{sec:concl}
The hadronic matrix elements for the BSM operators relevant for $\varepsilon'/\varepsilon$ have been presented for the first time in \cite{Aebischer:2018rrz}. The newly acquired matrix elements made it possible for the first time to derive a master formula for $\varepsilon'/\varepsilon$ that depends on both SM and BSM operators. This master formula is presented in \cite{Aebischer:2018quc} and is already included in several public codes, such as \texttt{flavio} \cite{Straub:2018kue} and \texttt{smelli} \cite{Aebischer:2018iyb}. Based on this master formula, different correlations of $\varepsilon'/\varepsilon$ with other observables have been analysed in the context of the SMEFT in \cite{Aebischer:2018csl}.
\begin{table}[tbp]
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{cccc}
\hline
Mediator & SM Representation & SMEFT & Correlation \\
\hline
$Z'$ & $(1,1)_0$ & $ \mathcal{O}_{qd}^{(1)}$ & $\varepsilon_K$ \\
& & $\mathcal{O}_{qu}^{(1)}$ & $pp\rightarrow jj$ \\
& & $\mathcal{O}_{HD}$ & T parameter \\
\hline
Coloured scalar & $(8,2)_{1/2}$ & $ \mathcal{O}_{qd}^{(1)}$ & $\varepsilon_K$ \\
& & $\mathcal{O}^{(8)}_{quqd}$ & neutron EDM \\
\hline
\end{tabular}
\captionsetup{width=0.9\linewidth}
\caption{Tree-level models that can have a sizable effect on $\varepsilon'/\varepsilon$, together with their correlations with other observables.}
\label{tab:treemed}
\end{table}
\section*{Acknowledgements}
It is a pleasure to thank the organizers of this workshop for inviting me
to this interesting event. In particular I would like to thank my great collaborators: Christoph Bobeth, Andrzej Buras, Jean-Marc
G{\'e}rard and David Straub for a very nice and inspiring collaboration. A special thanks goes to Andrzej Buras for all his support and for giving me the opportunity to give this talk. This research was
supported by the DFG cluster of excellence ``Origin and Structure of the Universe''.
\bibliographystyle{JHEP}
\section{Introduction}\label{sec:introduction}
In recent years, interest in analyzing team sport videos has increased significantly in academia and
industry~\citep{r1, r2, r3, r4, r5, r6, r7}.
This is important for sports broadcasters and teams to understand key events in the game and
extract useful information from the videos. Use cases include identifying participating players, tracking player movement
for game statistics, measuring health and safety indicators, and automatically placing graphic overlays.
For broadcasters and teams that don't have the leeway or the capital to install hardware sensors in player wearables,
a Computer Vision (CV) based solution is the only viable option to automatically understand and generate insights
from games or practice videos. One important task in all sports CV applications is identifying players, specifically
identifying players with their jersey numbers. This task is challenging due to distortion and deformation of player
jerseys based on the player posture, movement and camera angle, rarity of labelled datasets, low-quality videos,
small image size in zoomed out videos, and warped display caused by the player movement.
(see Figure~\ref{fig:wideshot} and ~\ref{fig:playerposture}) \par
Current approaches for jersey number identification consist of two steps: collecting and annotating large
datasets~\citep{r5, r7}, and training large and complex models~\citep{r5, r6, r7}. These approaches include either sequential training of
multiple computer vision models or training one large model, solving for 2 objectives: identifying the jersey number
location (through custom object detection models or training a custom human pose estimation model) and classifying
the jersey number~\citep{r4, r5, r6, r7}. These approaches are tedious, time-consuming, and cost-prohibitive, making them intractable for most sports organizations. \par
In this paper we present a novel approach to detect jersey numbers in a small dataset consisting of practice video
footage from the Seattle Seahawks team. We use a three-step approach to number detection that leverages pretrained
models and novel synthetic datasets. We first identify and crop players in a video frame using a person detection model.
We then utilize a human pose estimation model for localizing jerseys on the detected players using the torso key-points,
obviating the need for annotating bounding boxes for number locations. This results in images that are less than
20x25 px with a high imbalance in jersey numbers (see Figure~\ref{fig:playerposture}). Finally, we test two different learning approaches
for model training - multi-class and multi-label - each yielding an accuracy of 88\%, with an ensemble accuracy of
89\% to identify jersey numbers from cropped player torsos. \par
Additionally, to compensate for the low number of examples in some of the jersey numbers, we propose two novel
synthetic dataset generators — Simple2D and Complex2D. The Simple2D generator creates two-digit number images from
different combinations of fonts and background colors to mimic those of the Seattle Seahawks jerseys. The Complex2D
generator superimposes the Simple2D numbers on random COCO dataset~\citep{r8} images to add more complexity to the background
and make the model training robust. By pretraining our two CNNs on these synthetic datasets, we observe a 9\% increase
in accuracy for the ensemble models pre-trained with synthetic data compared to the baseline models trained with only the Seattle Seahawks numbers. Furthermore, we observe better generalization with low data. \par
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/wideshot.png}
\caption{Example frames from the practice videos demonstrating the challenges to identify jersey numbers in zoomed out videos.}\label{fig:wideshot}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/playerposture.png}
\caption{Cropped players examples showing the player posture, movement and camera angle challenges to
identify jersey numbers.}\label{fig:playerposture}
\end{figure}
\section{Related work}\label{sec:related-work}
\subsection{Synthetic Data Generation}\label{subsec:rw-synthetic-data-generation}
CNN algorithms, which are commonly used in most CV tasks, require large datasets to learn patterns in images.
Collecting and annotating large datasets is a manual, costly and time-consuming task. Several new approaches
including Active Learning~\citep{r9}, Zero or Few-shot learning~\citep{r10} and Synthetic data generation~\citep{r11} have emerged in
recent years to tackle complexities in obtaining a large annotated dataset. Our work focuses primarily on the use
of synthetically generated data. This idea dates back to the 1990's~\citep{r12} and is an active field of research that
alleviates the cost and efforts needed to obtain and manually label real-world data. Nowadays, models (pre)trained
on synthetic datasets have a broad range of utility including feature matching~\citep{r13} autonomous driving~\citep{r14}, robotics
indoor and aerial navigation~\citep{r15}, scene segmentation~\citep{r16} and anonymized image generation in healthcare~\citep{r17}.
The approaches broadly adopt the following process: pre-train with synthetic data before training on real-world
scenes~\citep{r13, r18}, generate composites of synthetic data and real images to create a new one that contains the desired
representation~\citep{r19} or generate realistic datasets using simulation engines like Unity~\citep{r20} or generative models
like GANs~\citep{r21, r22}. There are limitations to each of these regimes but one of the most common pitfalls is
performance deterioration on real-world datasets. Models trained only on synthetic datasets do not generalize to real-world data; this phenomenon is called "domain shift"~\citep{r21}.
\par
In order to reduce the need for annotating large dataset as well as account for the size and imbalance of the
real-world data, we generated two double-digit synthetic datasets - Simple2D and Complex2D with different levels
of complexity as described in Section~\ref{subsubsec:syn-data-gen}. This helps to circumvent the domain shift when only synthetic data is used and improves generalization on real-world data during fine-tuning.
\subsection{Number Identification}\label{subsec:rw-number-identification}
Automatic number identification in sports video has evolved from classical computer vision techniques including
feature extraction using contrast adjustment, edge detection of numbers~\citep{r1, r2, r3} to deep learning-based architectures
that use CNNs for classification~\citep{r4, r5, r6, r7}. A fundamental problem in number identification in sports is the
jersey number distortion due to erratic and continuous player movement. The spatial transformer-based approach
introduced in~\citep{r5} tries to localize and better position the number, so that the classifier has a better chance of
an accurate prediction. The faster-RCNN with pose estimation guidance mechanism~\citep{r6} combines the detection,
classification and key-point estimation tasks in one large network to correct region proposals, reducing the
number of false negative predictions. This approach needed careful labeling of the player bounding-boxes and four
human body key-points, shoulder (right, left), hip (right, left), in addition to the numbers. It also made use of
high-resolution number images (512 px). This approach yields 92\% accuracy for jersey number recognition as a whole
and 94\% on the digit-wise number recognition task. However, meeting its requirements, i.e., labeling the dataset for the three tasks, acquiring high-resolution images and training a large model, might be challenging in real-world cases. Furthermore, the lack of standardized, publicly available (commercial-use) datasets makes it difficult to obtain a benchmark for the number identification task.
\section{Approach}\label{sec:approach}
\subsection{Task Definition}\label{subsec:task-definition}
We define a jersey number as the one or two-digit number printed on the back of a player’s shirt. The jersey number is
used to identify and distinguish players and one number is associated with exactly one player. Our solution takes
cropped images of player’s torsos as input and attempts to classify the jersey number into 101 classes
(0-99 for actual numbers and 100 for unrecognizable images/ jerseys with no numbers).
\subsection{American Football Dataset}\label{subsec:american-football-dataset}
The data used for this work consisted of a collection of 6 practice videos from different angles for training and
additional 4 for testing from the Seattle Seahawks archives. Half of the videos were from the endzone perspective,
that is, the scoring zone between the end line and the goal line. The other half were from the sideline perspective,
the boundary line that separates the play area from the sides. Both cameras were placed on a high altitude to get a
panoramic view for the play and capture the majority of the actions taken by the players. A pitfall for collecting
data using this camera angle is that the size of a player is less than 10\% of the image size when the players are
far away from the camera. In addition, the sideline view has restricted visibility of jersey numbers compared to
end-zone (see Figure~\ref{fig:perspectives}). The videos were recorded in 1280x720 resolution and we sampled frames
from each video at 1, 5 and 10 frames per second (fps) rates. We noticed that images sampled at 5 fps sufficiently
captured all the jersey numbers in a play and we decided to use the same sampling rate throughout our solution.
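A minimal sketch of this sampling step with OpenCV is shown below; the function name and its interface are illustrative assumptions, while the 5 fps rate follows the choice described above.
\begin{verbatim}
import cv2

def sample_frames(video_path, fps_out=5):
    """Yield frames from a video at approximately fps_out frames/second."""
    cap = cv2.VideoCapture(video_path)
    fps_in = cap.get(cv2.CAP_PROP_FPS) or fps_out  # fall back if unknown
    step = max(int(round(fps_in / fps_out)), 1)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield frame
        idx += 1
    cap.release()
\end{verbatim}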
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/perspectives.png}
\caption{Examples of frames obtained from the two different angles from the training videos. Left is the endzone view of the players. Right is the sideline view, which has restricted visibility of jersey numbers compared to the endzone. Within a play, we can find players and observers with/without football jerseys.}\label{fig:perspectives}
\end{figure}
\subsubsection{Jersey number localization}\label{subsec:jersey-number-localization}
To mitigate the need for annotating player location, jersey number bounding boxes and consequently training person and
jersey number detection models, we utilized pretrained models for person detection and pose estimation to localize the
jersey number region. This approach prevents the model from learning spurious correlations with irrelevant features such as the background, helmets or clothing items, confining the learning to the region of interest.
For the number localization we first use a pretrained person detector, Centernet~\citep{r23} model (ResNet50 backbone), to
detect and crop players from an image. Instead of training a custom human key-point estimation head~\citep{r6}, we use a
pretrained pose estimation model, AlphaPose (https://gitee.com/marcy/AlphaPose, with ResNet101 backbone), to identify
four torso key-points
(left and right - hips and shoulders) on the cropped player images from the person detection step (see Figure~\ref{fig:models}).
We use the four key-points to create a bounding box around jersey numbers. To accommodate inaccuracies in key-point
prediction and localization due to complex human poses, we increased the size of torso keypoint area by expanding the
coordinates 60\% outward to better capture jersey numbers. The torso area is then cropped and used as the input for
the number prediction models discussed in Section~\ref{subsubsec:jersey-n-detection}. In previous works, the use of high-resolution images of
players and jersey numbers is very common. However, the American football dataset we used was captured from a bird’s
eye view, where jersey numbers were smaller than 32x32 px. In fact, the average size of the torso crops is 20x25 with
the actual jersey number being even a smaller portion of this area (see Figure~\ref{fig:datasize}).
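The localization step described above can be sketched as follows; the keypoint names and the helper signature are illustrative assumptions, while the 60\% outward expansion follows the procedure described in the text.
\begin{verbatim}
import numpy as np

def crop_torso(player_img, keypoints, expand=0.6):
    """Crop the jersey-number region given the four torso keypoints.

    keypoints: dict mapping keypoint names to (x, y) pixel coordinates
    as returned by the pose estimation model.
    """
    pts = np.array([keypoints[k] for k in ("left_shoulder",
                    "right_shoulder", "left_hip", "right_hip")])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    # expand the box 60% outward to compensate for pose inaccuracies
    dx, dy = expand / 2 * (x1 - x0), expand / 2 * (y1 - y0)
    h, w = player_img.shape[:2]
    x0, y0 = max(int(x0 - dx), 0), max(int(y0 - dy), 0)
    x1, y1 = min(int(x1 + dx), w), min(int(y1 + dy), h)
    return player_img[y0:y1, x0:x1]
\end{verbatim}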
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/datasize.png}
\caption{Distribution of the sizes from person and torso bounding boxes. Note how the great majority of torso sizes is less than 32x32 px.}\label{fig:datasize}
\end{figure}
After player detection and jersey number localization, we generated 9,000 candidate images for number detection.
We labelled the images with Amazon SageMaker GroundTruth and noticed that 6,000 images contained non-players
(trainers, referees, watchers); the pose estimation model for jersey number localization simply identifies human
body key-points and doesn’t differentiate between players and non-players. 3,000 labelled images with severe
imbalance (see Figure~\ref{fig:datadistro}) were usable for the training.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/datadistro.png}
\caption{Distribution of the jersey number labels in training set. Number 3 has 500+ images while numbers 43, 63, 69 and 93 have 10 images or less.}\label{fig:datadistro}
\end{figure}
\subsubsection{Synthetic Data Generation}\label{subsubsec:syn-data-gen}
Typically, a licensed (SVHN~\citep{r25}) or a large custom dataset is used for (pre)training number recognition models.
Since there are no standardized public datasets with permissive licenses, we created two 2-digit synthetic datasets
to pretrain our models. We investigated 2-digit MNIST~\citep{r26}; however, it did not have the pixel color and font variations needed for jersey detection and performed poorly in our tests. Hence, we generated two different synthetic datasets: one with simple two-digit numbers (Simple2D) whose fonts and backgrounds resemble the football dataset, and another with 2-digit synthetic numbers superimposed on COCO~\citep{r8} dataset images (Complex2D) to account for variations in the number background.
The Simple2D dataset was generated by randomly selecting a number from a uniform distribution of 0 to 9 and randomly
scaling it. Color backgrounds (Red, Navy Blue, Green, Yellow, White) and a special font (Freshman) that resembled the team jerseys were used to generate these numbers (see Figure~\ref{fig:synthetic}). One Light, five Medium and five Hard augmentations
(see Table~\ref{tab:data-aug}) were used on each digit to be later permuted and concatenated to obtain 4000 images (100 x 100 px) of
each 2-digit number, from 00 to 99. In the end, this dataset consisted of a total of 400,000 images.
Since the real-world images had more complicated backgrounds, textures and lighting conditions, we decided to synthetically generate another dataset (see Figure~\ref{fig:synthetic}) to increase the robustness and generalization of our pretrained model. The Complex2D dataset was designed to increase background noise by superimposing numbers from Simple2D on
random real-world images from the COCO dataset~\citep{r8}. We generated a total of 400,000 images (4000 per class) with
noisy backgrounds.
Our algorithm is explained in more detail in Algorithms~\ref{alg:number-generation}, \ref{alg:simple2d} and \ref{alg:complex2d}.
\begin{table}[h]
\caption{data augmentations}\label{tab:data-aug}
\centering
\begin{tabular}{p{.1\linewidth} p{0.85\linewidth}}
\toprule
Name & Augmentations \\
\midrule
Light & Gaussian Noise, Optical distortion \\
Medium & Light + Grid distortion \\
Hard & Medium + Shuffling RGB channels, Random Shift-Scale-Rotation \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/synthetic.png}
\caption{Synthetic data generation with Simple2D and Complex2D. Simple2D dataset was generated by creating numbers
in football dataset jersey colors and fonts. Several augmentations (Table~\ref{tab:data-aug}) were applied on these numbers to get
Simple2D dataset. The numbers from this dataset were randomly sampled and randomly placed on COCO dataset images
to form Complex2D dataset}\label{fig:synthetic}
\end{figure}
\begin{algorithm}[hbt!]
\caption{Number generation}\label{alg:number-generation}
\ForAll{n in 0-9}{
select a jersey background and font color uniformly at random from the available combinations\;
choose a font size uniformly at random from $U(a,b)$, where $a, b$ are scale factors of the image size\;
paste single number with chosen font and background color and size \;
}
\end{algorithm}
\begin{algorithm}[hbt!]
\caption{Simple2D}\label{alg:simple2d}
\ForAll{n in 0-99}{
\ForAll{background colors}{
generate 1000 images\;
\eIf{single digit}{
perform light, medium and hard augmentations\;
scale image to 100x100 px\;
}{
perform light, medium and hard augmentations on each digit\;
concatenate digits\;
scale image to 100x100 px\;
}
}
}
randomly sample 4000 images per number across all color combinations \;
\end{algorithm}
\begin{algorithm}[hbt!]
\caption{Complex2D}\label{alg:complex2d}
\ForAll{n in 0-99}{
select a random image from COCO dataset\;
select a random jersey number image\;
super-impose jersey number at a random position in the COCO image\;
rescale image to 100x100 px\;
continue until 4000 images per number are obtained\;
}
\end{algorithm}
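A minimal Python sketch of the superimposition step of Algorithm~\ref{alg:complex2d} using PIL is given below; the function name and the file-path handling are illustrative assumptions.
\begin{verbatim}
import random
from PIL import Image

def make_complex2d(number_img_paths, coco_img_paths, out_size=100):
    """Superimpose a Simple2D number crop on a random COCO image."""
    coco = Image.open(random.choice(coco_img_paths)).convert("RGB")
    num = Image.open(random.choice(number_img_paths)).convert("RGB")
    # place the number at a random position inside the background
    max_x = max(coco.width - num.width, 1)
    max_y = max(coco.height - num.height, 1)
    coco.paste(num, (random.randrange(max_x), random.randrange(max_y)))
    return coco.resize((out_size, out_size))
\end{verbatim}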
\subsubsection{Jersey number detection}\label{subsubsec:jersey-n-detection}
After the number localization step above, two models were sequentially pretrained with the synthetic datasets
(Simple2D to Complex2D) and fine-tuned with the real-world football dataset (see Figure~\ref{fig:models}). The idea of training a
model with increasingly difficult samples is called curriculum learning. This technique has been empirically shown to increase accuracy and speed up convergence~\citep{r27, r28}. One of the challenges of implementing curriculum learning is manually ranking
difficulty in the training set~\citep{r27}. In our case, the synthetic data was generated explicitly in this manner
(simple to complex) and our training regime adopted this order, thus, bypassing this challenge.
Both models used a ResNet50~\citep{r29} architecture with deep residual connections, as backbone and a final layer predicting
classes (jersey numbers). The first model was a multi-class image classifier detecting two-digit numbers with a total of 101 different classes (numbers from 0-99 plus an unrecognizable class). The second model was a multi-class multi-label classifier with 21 classes detecting single digits (10 digits for each of the left and right positions, plus an unrecognizable class).
We define the i-th input feature $X_i$ (cropped image of a player) with the label $y_i$ (0-99 for actual numbers and 100 for
unrecognizable). Our multi-class model was optimized with the following loss function:
\[ L_{mc} = \sum_{i} {L_i} = - \sum_{i} {y_i \log \hat{y}_{mc} (X_i)} \]
where $y_i$ is the true label and $\hat{y}_{mc}$ is calculated as a softmax over scores computed by the multi-class
model as follows:
\[ \hat{y}_{mc} (X_i) = \sigma (\vec{Z}) \]
\[ \sigma (\vec{Z})_k = \frac {e^{Z_k}} {\sum_{j=0}^{100} e^{Z_j}} \]
where $\vec{Z} = (z_0, \ldots, z_{100})$ is the output of the last layer of the multi-class model given $X_i$.
For the multi-label model, the loss function is defined as:
\[ L_{ml} = \sum_{i} {L_i} = - \sum_{i} {y_i \log \hat{y}_{ml} (X_i)} \]
where $y_i$ is the true label and $\hat{y}_{ml}$ is calculated as a sigmoid over scores computed by the multi-label model as follows:
\[ \hat{y}_{ml} (X_i) = \frac {1} {1 + e^{-\vec{Z}}} \]
where $\vec{Z}$ is the output of the last layer of the multi-label model given $X_i$.
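For concreteness, the two objectives can be sketched in PyTorch as follows; this is a minimal illustration with assumed layer wiring and dummy inputs, not the exact training code used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

# Each classifier is a separate ResNet50 in the actual setup.
backbone = models.resnet50()
backbone.fc = nn.Identity()      # expose the 2048-d features

mc_head = nn.Linear(2048, 101)   # numbers 0-99 + unrecognizable
ml_head = nn.Linear(2048, 21)    # 10 left + 10 right digits + unrecognizable

mc_loss = nn.CrossEntropyLoss()  # softmax-based multi-class loss
ml_loss = nn.BCEWithLogitsLoss() # sigmoid-based multi-label loss

x = torch.randn(8, 3, 224, 224)  # a dummy batch of cropped torsos
feats = backbone(x)
loss_mc = mc_loss(mc_head(feats), torch.randint(0, 101, (8,)))
loss_ml = ml_loss(ml_head(feats), torch.rand(8, 21).round())
\end{verbatim}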
Both models were trained until convergence and the model from the epoch with the best performance was selected. We
explored the combination of the two models to provide the final decision; we explain our results in Section~\ref{sec:exp-results}.
Our original idea was that the multi-label model would augment performance of the multi-class model and address
generalization issues with unseen/ low data availability for certain numbers. For example, if 83, 74 were present in
the training set but not 73, the right and left side of prediction nodes for 3 and 7 would have been activated in the
train set for all numbers starting and ending with 7 or 3 and hence the multi-label model would have enough samples
to predict 73.
We considered training a custom object detection model to identify single-digit numbers. However, due to additional
cost and time associated with labeling bounding boxes, image quality and small size of localized jersey numbers
(approximately 20 x 25 px), we chose the image classification approach.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/models.png}
\caption{Overview of the approach for extracting data, training and generating jersey number predictions.
a) describes the high-level football dataset processing pipeline - identify person in video, pass each person image
through pose estimation model to identify torso region and crop them. b) shows the sequential pretraining of
multi-class/label models with synthetic number datasets - Simple2D and Complex2D as well as fine-tuning on football
dataset. c) represents the inference pipeline that uses data pipeline from a) to crop jersey numbers and perform
prediction using multi-class/label models Figure b)}\label{fig:models}
\end{figure}
\section{Experimental Results}\label{sec:exp-results}
We trained the ResNet50 multi-class(number-detection) and multi-label(digit-detection) jersey number classifiers on
the football dataset to establish baseline performance without the synthetic data. For the multi-class model, we
took the number with highest softmax score as the prediction. For the multi-label model, we applied a threshold of
0.5 to both right and left predicted classes to get the output. Eventually we computed the final prediction from the
output of the two models.
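A simple sketch of such a combination is given below; the exact ensembling rule is a design choice, and the tie-breaking shown here is only an illustrative assumption.
\begin{verbatim}
import numpy as np

def predict_number(mc_logits, ml_probs, thresh=0.5):
    """Combine multi-class and multi-label outputs (illustrative rule).

    mc_logits: (101,) scores; ml_probs: (21,) sigmoid outputs laid out
    as [left digits 0-9, right digits 0-9, unrecognizable].
    """
    mc_pred = int(np.argmax(mc_logits))  # 0-99 number or 100
    left = np.where(ml_probs[:10] > thresh)[0]
    right = np.where(ml_probs[10:20] > thresh)[0]
    ml_pred = 100
    if len(left) == 1 and len(right) == 1:
        ml_pred = int(10 * left[0] + right[0])
    # illustrative tie-break: trust multi-class unless it abstains
    return mc_pred if mc_pred != 100 else ml_pred
\end{verbatim}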
The baseline model accuracy was 80\% for both models. We experimented with various input image sizes and found optimal
accuracy at 224x224 px for the multi-class and 100x100 px for the multi-label model. Our dataset presented a high
imbalance across several numbers where 24\% of the numbers have less than 100 samples and only 5\% reach the 400-sample
mark (see Figure~\ref{fig:datadistro}). Hence, we duplicated data points for each number to have 400 images in the training set when
needed. Our training pipeline dynamically applies image augmentation so that no image is seen twice by the models,
even when the base image is the same. We also upsample our test-set images to maintain 20 images per number.
After having our baselines, we investigated the effects of pre-training with the generated synthetic data on our model
performance. Pre-training on the Simple2D dataset and fine-tuning on the football dataset, resulted in a performance
improvement of 2\% over the baseline (82\%) for both the multi-class and multi-label models. However, pre-training on
the Complex2D dataset and fine-tuning on the football dataset, resulted in 3\% improvement on the multi-class model
and 8\% on the multi-label model. By pre-training on both Simple2D and Complex2D, we achieved 8.8\% and 6\% improvement
above the baseline in multi-class and multi-label models respectively.
The best multi-label model (Complex2D + Football dataset) had positive accuracy improvements on 74 classes, no change
in accuracy in 19 classes, negative change in accuracy in 8 classes (drop by 10\%). The best multi-class model
(Simple2D + Complex2D + Football dataset) had positive accuracy improvements on 63 classes, no change in accuracy in
21 classes, negative change in accuracy in 17 classes (drop by 7\%). In order to validate the hypothesis
(Section~\ref{subsubsec:jersey-n-detection}) that the multi-label model could perform better on numbers with fewer images, we compare its results with the best multi-class model on numbers with less than 50 images in the training set. We notice an average increase in accuracy of 18.5\% for the multi-class model and 20\% for the multi-label model before and after training on synthetic data, for these numbers. Despite the larger gains in accuracy shown by the multi-label model, the absolute accuracy scores for these numbers were better for the multi-class model, 81\% compared to 78\% for the multi-label model.
\begin{figure}
\centering
\includegraphics[width=.35\textwidth]{figures/player1.png}
\includegraphics[width=.3\textwidth]{figures/player2.png}
\caption{Images where multi-label predicted class 100. The multi-label model is not sure of the number class when
the input image has very low resolution.}\label{fig:mc100}
\end{figure}
By analyzing the confusion matrix of the model predictions, we learnt that the best multi-label
model produces false predictions in 2 major scenarios (see Figure~\ref{fig:mc100}): predicting one digit rather than both digits,
and predicting class 100 for low-resolution and hard-to-recognize digits. In other words, the multi-label model is
more likely to predict one digit number and non-number classes when challenged with new data. The multi-class model,
however, has relatively spread-out false predictions (see Figure~\ref{fig:ml100}). Major areas of error for this model are:
predicting one digit rather than both digits, and mistaking single digits for two digits or unrecognizable class.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{figures/player3.png}
\includegraphics[width=.3\textwidth]{figures/player4.png}
\caption{Image where multi-class predicted class 100. Confusion for the multi-class model arise when the
numbers are rotated or occluded.}\label{fig:ml100}
\end{figure}
Examining the performance of the two models independently, we noticed that their predictions agree in 84.4\% of the test cases, suggesting that despite the different objectives (multi-class vs multi-label) there is a robust learning of the number representations. Furthermore, we notice an additional improvement of 0.4\% with the two-model ensemble.
Table~\ref{tab:results} presents our results.
\begin{table}[h]
\caption{A comparison of model performance under different conditions with confidence threshold of 0.5}\label{tab:results}
\centering
\begin{tabular}{p{.5\linewidth} p{0.1\linewidth}p{0.1\linewidth}p{0.1\linewidth}}
\toprule
Experiment & Multi-class & Multi-label & Ensemble \\
\midrule
\multicolumn{4}{c}{Without synthetic data} \\
Football dataset & 0.8064 & 0.8 & \\
Best (Multi-class + Multi-label) & & & 0.8028 \\
\\
\multicolumn{4}{c}{With synthetic data pre-training} \\
Simple2D + Football dataset & 0.8282 & 0.82 & \\
Complex2D + Football dataset & 0.8306 & 0.88 & \\
Simple2D + Complex2D + Football dataset & 0.8886 & 0.86 & \\
Best (Multi-class + Multi-label) & & & 0.8931 \\
\bottomrule
\end{tabular}
\end{table}
\section{Limitations}\label{sec:limitations}
The work presented in this paper shows that the number identification task can be simplified by leveraging synthetic
datasets. We were able to obtain good performance, comparable with previous works~\citep{r1, r2, r4}, while requiring no
change in the data collection pipeline. Despite these findings, we recognize this approach has some limitations which
we describe in this section.
We were able to achieve 89\% accuracy on our test dataset despite the challenging nature of jersey number identification in a low-data regime. This performance is on par with some of the most recent works~\citep{r7}. However,
the lack of a benchmark dataset for this task and the unavailability of ready-to-use implementations are a major barrier to comparing performance across methods. The only solution is to label large amounts of high-quality data
and retrain the available solutions in-house. This requires a lot of computational resources and man-hours put
into work, which is not always an option for all institutions.
In our jersey detection models, we used ResNet50 as a base model, because it proved to be effective for this task.
Bigger and more sophisticated models might provide better accuracy and recall but an exhaustive search is necessary
for each of the components of the solutions to determine an optimal cost-benefit tradeoff. We recognize that more
investigation is needed here to determine such an optimum.
In our solution we chose a three-model pipeline approach versus a one-pass prediction model. Our approach comes with
a few limitations including cascading inaccuracies from one model to the next and increase in latency. However, our
choice was justified by ease of implementation, maintenance and portability to other domains. Even with this cascading
effect, our solution proves to have a good performance in our highly imbalanced, limited dataset.
\section{Future Work}\label{sec:future-work}
Our approach to increase performance can be broadly classified into two categories: improving data quality and quantity
or experimenting with different models.
\subsection{Data quality and quantity}\label{subsec:data-quality-and-quantity}
We observed no improvement in model accuracy by increasing the number of duplicated samples or the number of image
augmentations. The confidence of the predictions directly correlated with the quality and resolution of the jersey
number crop (input image). In future work, we plan to experiment with various image quality enhancement methods in
classical CV and deep learning domains to observe if it improves performance. Another path that can be considered is
to refine our synthetic data generation pipeline to produce images that are closer to the real-world dataset.
\subsection{Different model strategies}\label{subsec:different-model-strategies}
Our current method has minimal labeling effort. However, by collecting more images of reasonable quality and quantity
we plan to test object detection-based models. One way to improve frame level accuracy would be to track detected
jersey numbers across both side-line and end-zone views so that in situations where numbers are partially visible
or player pose is complex, we would be able to obtain predictions with continuity. Tracking players in team sports
like football is still a major challenge in the sports CV domain and we will evaluate its utility in our future work.
\section{Conclusion}\label{sec:conclusion}
This paper presented a new solution for low-data regime jersey detection with two-stage novel synthetic data generation
techniques, pose estimation for jersey number localization and CNN ensemble learning to detect jersey numbers.
Data augmentations during training and the use of large synthetic dataset provided enough variations for the model
to generalize well and learn numbers. Our solution is easy to implement, requires minimal labeling, curation,
supervision, and can be customized for various sports jersey fonts, colors and backgrounds. Our framework improves
the accuracy of number detection task by 9\% and can be easily extended to similar tasks across various Sports
communities as well as industries with similar use cases. Furthermore, our solution did not require the modification
of the data capturing or processing pipeline that is already in place, making it convenient and flexible.
Additionally, it introduces a novel data synthesis technique that can boost custom solution performance in a wide
array of sports. We hope this solution enables the Sport Analytics community to rapidly automate video understanding solutions.
\vskip 0.2in
\section{Method}
\label{sec:algorithm}
\subsection{Notations}
Sets are shown with calligraphic letters. For a set $\mathcal{C}$, $|\mathcal{C}|$ designates its cardinality. Matrices are indicated with uppercase letters; vectors and scalars are indicated with lowercase letters. Let $[n] = \{1,2,\ldots,n\}$. For a matrix $A \in \mathbb{R}^{n \times m}$, $A_{i, j}$ designates the element in row $i$ and column $j$. For a vector $x$, $x_i$ indicates its $i$-th entry. We use $\|x\|_p$ to denote the $\ell_p$ norm of a vector $x \in \mathbb{R}^n$, and denote the inner product by $\langle \cdot, \cdot \rangle$.
\subsection{Problem Statement}
We consider the problem of minimizing a black-box function over the Boolean hypercube. The black-box functions of interest are intrinsically expensive to evaluate, potentially noisy, and in general admit no trivial means of finding the minimum.
More precisely, given a subset $\mathcal{C}$ of the Boolean hypercube $\mathcal{X} = \{-1, 1\}^d$, the objective is to find
\begin{equation}
x^* = \arg\min_{x \in \mathcal{C}} f(x)
\end{equation}
where $f$ is a real-valued Boolean function $f(x): \mathcal{X} \mapsto \mathbb{R}$. Exhaustive search requires $|\mathcal{C}|$ function evaluations; however, since evaluating the black-box function $f$ is expensive, we are interested in finding $x^*$ (or an approximation of it) in as few function evaluations as possible. In this problem, the performance of any algorithm is measured in terms of \textit{simple regret}, which is the difference between the best evaluation seen until time $t$ and the minimum function value $f(x^*)$:
\begin{equation}
R_t = \min_{i \in [t]} |f(x_i) - f(x^*)|.
\end{equation}
Two particularly important instances of such combinatorial structures are $(i)$ \textit{unconstrained optimization problems} where $\mathcal{C}$ includes the entire Boolean hypercube $\mathcal{X}$ where $|\mathcal{C}| = |\mathcal{X}| = 2^d$, and $(ii)$ \textit{optimization problems with a sum constraint} where $\mathcal{C}_n$ corresponds with the $n$-subsets of $[d]$ such that $\sum_i I(x_i = 1) = n$, where $I(.)$ is the indicator function. In the latter problem, we have $|\mathcal{C}_n| = \binom{d}{n}$.
We note that \textit{anytime} algorithms are particularly desirable for this problem for the following reasons: (1) in many applications the evaluation budget is not known in advance, and (2) the algorithm is run until certain stopping criteria are met. One such stopping criterion is a finite time budget, measured as the total computational time required for the algorithm to produce samples to be evaluated by the black-box function, plus the evaluation time consumed by the black-box function of interest.
\subsection{Surrogate Model}
In this work, we pursue the framework of using a surrogate model to approximate the black box function along with an acquisition function applied to this surrogate model. At each time step $t$, the surrogate model provides an estimate for the black-box function using the observations $\{(x_i, f(x_i)): i \in [t]\}$ acquired so far. Having been equipped with the new estimate model, the acquisition function selects a new candidate point $x_{t}$ for evaluation. The black-box function then returns the evaluation $f(x_{t})$ for the latter data point. This process is repeated until a stopping criterion, such as an evaluation budget or a time budget, is met.
Any real-valued Boolean function can be uniquely expressed by its \textit{multilinear polynomial} representation \cite{Boolean}:
\begin{equation}
f(x) = \sum_{\mathcal{I} \subseteq [d]} \alpha^{*}_{\mathcal{I}} \psi_{\mathcal{I}}(x)
\end{equation}
which is referred to as the \textit{Fourier} expansion of $f$, the real number $\alpha^{*}_{\mathcal{I}}$ is called the Fourier coefficient of $f$ on $\mathcal{I}$, and $\psi_{\mathcal{I}}(x) = \Pi_{i \in \mathcal{I}} x_i$ are monomials of order $|\mathcal{I}|$. The generality of Fourier expansions and the monomials' capability to capture interactions among different variables, make this representation particularly attractive for problems over the Boolean hypercube.
In addition, in many applications of interest monomials of orders up to $m \ll d$ are sufficient to capture interactions among the variables, reducing the number of Fourier coefficients from $2^d$ to $p = \sum_{i=0}^m \binom{d}{i}$. This leads to the following approximate surrogate model for $f$:
\begin{equation}
\widehat{f}_{\alpha}(x) = \sum_{i \in [p]} \alpha_{i} \psi_{i}(x).
\end{equation}
We employ the latter representation as the surrogate model in our proposed algorithm.
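For concreteness, the monomial features $\psi_i$ up to order $m$ and the evaluation of the surrogate can be sketched as follows; this is an illustrative implementation in which the ordering of the monomials is arbitrary but must be kept consistent with the coefficient vector $\alpha$.
\begin{verbatim}
from itertools import combinations
import numpy as np

def monomials(x, m):
    """All psi_I(x) = prod_{i in I} x_i for |I| <= m, x in {-1,+1}^d."""
    d = len(x)
    feats = [1.0]  # the empty set I, psi(x) = 1
    for order in range(1, m + 1):
        for idx in combinations(range(d), order):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

def surrogate(x, alpha, m):
    """f_hat(x) = sum_i alpha_i * psi_i(x)."""
    return float(np.dot(alpha, monomials(x, m)))
\end{verbatim}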
\subsection{The \texttt{COMEX} Algorithm}
Motivated by the properties of the hedge algorithm \cite{arora2012multiplicative}, we adopt an exponential weight update rule for our surrogate model. More precisely, we maintain a pool of monomials where each monomial term plays the role of an expert. In particular, we are interested in finding the optimal Fourier coefficient $\alpha_i$ for the \textit{monomial expert} $\psi_i$. Note that exponential weights are non-negative, while the Fourier coefficients could be either negative or positive. Following the same approach as sparse online linear regression literature \cite{KIVINEN1997}, we maintain two non-negative coefficients for each Fourier coefficient $\alpha_i^t$ at time step $t$: $\alpha_{i, +}^t$ and $\alpha_{i, -}^t$. The value of the Fourier coefficient is then obtained via the subtraction $\alpha_i^t = (\alpha_{i, +}^t - \alpha_{i, -}^t)$.
More specifically, our algorithm works in the following way. We initialize the monomial coefficients $\alpha_{i, -}$ and $\alpha_{i, +}$ ($\forall i \in [p]$) with a uniform prior. In each time step $t$, the algorithm produces a sample point $x_t$ via Simulated Annealing (SA) over its current estimate for the Fourier representation $\widehat{f}_{\alpha^t}$ with Fourier coefficients $\alpha^t$. We then observe the black-box function evaluation $f(x_t)$ for our query $x_t$. This leads to a mixture loss $\ell_t$ which is equal to the difference between the evaluations obtained by our estimate model and the black-box function. This mixture loss, in turn, leads to the individual losses $\ell_i^t = 2 \, \lambda \, \ell_t \, \psi_i(x_t)$ for the monomial experts $\psi_i: \forall i \in [p]$. Finally, we update the current estimate for the Fourier coefficients $\alpha^t$ via the exponential weight update rule, incorporating the incurred losses. We repeat this process until the stopping criteria are met. Note that we use the anytime learning rate schedule of \cite{adaptive_EG}, which is a decreasing function of time $t$ (see Appendix \ref{app:learning_rate} for more details). A summary of the proposed algorithm, which we refer to as \textit{Combinatorial Optimization with Monomial Experts} (\texttt{COMEX}), is given in Algorithm \ref{algo:COMEX}. \\
\begin{algorithm}[!t]
\caption{Combinatorial Optimization with Monomial Experts}
\begin{algorithmic}[1]
\STATE \textbf{Inputs:} sparsity $\lambda$, maximum monomial order $m$
\STATE $t = 0$
\STATE $\forall \gamma \in \{-, +\} \: \textrm{and} \: \, \forall i \in [p]: \alpha^t_{i, \gamma} = \tfrac{1}{2 p}$
\REPEAT
\STATE $x_t \sim \widehat{f}_{\alpha^t} \;$ via Algorithm \ref{algo:SA}
\STATE Observe $\, f(x_t)$
\STATE $\widehat{f}_{\alpha^t}(x) \gets \sum_{i \in [p]} \big (\alpha^{t}_{i, +} - \alpha^{t}_{i, -} \big )~\psi_i(x) $.
\STATE $\ell^{t+1} \gets \widehat{f}_{\alpha^t}(x_t) - f(x_t)$
\FOR {$i \in [p] \: \textrm{and} \: \gamma \in \{-, +\}$}
\STATE $\ell_i^{t+1} \gets 2 \, \lambda \, \ell^{t+1} \, \psi_i(x_t)$
\STATE $\alpha^{t+1}_{i, \gamma} \gets \alpha^{t}_{i, \gamma} \exp \big (- \,\gamma \, \eta_t \, \ell_i^{t+1} \big)$
\STATE $\alpha^{t+1}_{i, \gamma} \gets \lambda \cdot \tfrac{\alpha^{t+1}_{i, \gamma}}{\sum_{\mu \in \{-, +\}} \sum_{j \in [p]} \alpha^{t+1}_{j, \mu}}$
\ENDFOR
\STATE $t \gets t + 1$
\UNTIL{Stopping Criteria}
\RETURN $\widehat{x}* = \arg\min_{\{x_i : \, \forall i \in [t]\}} f(x_i)$
\end{algorithmic}
\label{algo:COMEX}
\end{algorithm}
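A minimal sketch of the coefficient update in lines 7--12 of Algorithm~\ref{algo:COMEX}, assuming the monomial features from the previous sketch, reads:
\begin{verbatim}
import numpy as np

def comex_update(alpha_pos, alpha_neg, psi_x, f_x, lam, eta):
    """One exponential-weight update of the Fourier coefficients.

    psi_x: vector of monomial evaluations psi_i(x_t);
    f_x: black-box evaluation f(x_t).
    """
    f_hat = np.dot(alpha_pos - alpha_neg, psi_x)  # surrogate prediction
    loss = f_hat - f_x                            # mixture loss
    li = 2.0 * lam * loss * psi_x                 # per-monomial losses
    alpha_pos = alpha_pos * np.exp(-eta * li)     # gamma = +
    alpha_neg = alpha_neg * np.exp(+eta * li)     # gamma = -
    z = alpha_pos.sum() + alpha_neg.sum()         # normalize to sparsity lam
    return lam * alpha_pos / z, lam * alpha_neg / z
\end{verbatim}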
\textbf{Theoretical Insights}: Let $D_{\mathrm{KL}}(p||q)$ denote the KL divergence between two distributions $p$ and $q$, i.e. $D_{\mathrm{KL}}(p||q) = \sum_i p_i \log\big(\tfrac{p_i}{q_i}\big)$. We can show that the KL-divergence between the estimate and true Fourier coefficients decreases over time, assuming that the true Fourier coefficients $\alpha^*$ are non-negative, and form a distribution, i.e. $\sum_i \alpha^*_i = 1$. Define $\phi_t = D_{\mathrm{KL}} (\alpha^*||\alpha^t)$ as the KL divergence between $\alpha^t$ and $\alpha^*$, where $\alpha^t$ are the estimates of Fourier coefficients at time $t$. With respect to Algorithm \ref{algo:COMEX}, $\alpha_i^t=\alpha_{i,+}^t$ and $\alpha_{i,-}^t=0$ in this case.
\begin{lemma}
\label{lemma_positive}
The exponential weight update at any time step $t$ for the Fourier coefficients $\alpha^t$, under the above stated assumption of non-negativity of the true Fourier coefficients $\alpha^*$, yields
$$\phi_{t-1} \geq \phi_t + \eta \, 2 \, \lambda \, \big (\hat{f}_{\alpha_t}(x_t)-f(x_t) \big)^2 - \eta^2$$
for $\eta < \frac{1}{8 \lambda}$.
\end{lemma}
\begin{proof}
Using Lemma 4.1 of \cite{Moritz2010}, for each exponential weight update at step $t$ where
$\alpha_i^t=\alpha_{i,+}^t$ and $\alpha_{i,-}^t=0$, we have $\phi_{t-1} - \phi_{t} \geq \eta \, \langle r^t, \alpha^{t-1} - \alpha^* \rangle - \eta^2$ (for $0<\eta < \frac{1}{8 \lambda}$), where $r_t$ is the vector of individual losses, i.e. $r^t_i = \ell^t_i$. As a result, we only need to show that $\langle r^t, \alpha^{t-1} - \alpha^* \rangle$ is always greater than or equal to zero, since the value of $\eta$ can be chosen to be suitably small:
\begin{align*}
\langle r^t, \alpha^{t-1} - \alpha^* \rangle &= \sum_i \ell_i^t \, ( \alpha^{t-1}_i - \alpha^*_i ) \\
&= \sum_i 2 \lambda \ell^t \, \psi_i(x_t) \, ( \alpha^{t-1}_i - \alpha^*_i ) \\
&= 2 \lambda \, \ell^t \, \sum_i ( \alpha^{t-1}_i - \alpha^*_i ) \psi_i(x_t) \\
&= 2 \lambda \, \bigg ( \sum_i ( \alpha^{t-1}_i - \alpha^*_i ) \psi_i(x_t) \bigg ) ^ 2 \\
& = 2 \lambda (\hat{f}_{\alpha_t}(x_t)-f(x_t))^2 \geq 0.
\end{align*}
This proves the Lemma.
\end{proof}
For the generalization of this result to the case of Fourier coefficients with arbitrary signs, see Remark \ref{remark_general} in Appendix \ref{app:lemma_extension}.
\begin{remark} The above Lemma shows that for a small enough $\eta$, $\phi_{t-1} -\phi_t \geq 0$ for any evaluation point $x_t$.
This shows that for a sufficiently small learning rate $\eta$, irrespective of the evaluation point $x_t$, there is a potential drop in the distance between the true and estimated coefficients after the exponential weight update at time $t$. This observation motivates our surrogate model and the deployment of the exponential weight update rule.
\end{remark}
\subsection{Acquisition Function}
Our acquisition function is designed to minimize $\hat{f}$, the current estimate, in a way that allows some exploration. To this end,
we employ a version of simulated annealing (SA) \cite{Kirkpatrick1983, SA} as our acquisition function that uses the offline evaluations of the surrogate model. SA consists of a discrete-time inhomogeneous Markov chain, and is used to address discrete optimization problems. The key feature of simulated annealing is that it provides a means to escape local optima by allowing probabilistic hill-climbing moves in hopes of finding a global optimum. Although SA is not sample efficient in practice, and as we will see in Section \ref{sec:experiments} is not suitable for optimization of black-box functions, it can be set up in conjunction with a surrogate model.
Define the neighborhood model $\mathcal{N}$ for the unconstrained problem as:
\begin{equation}
\mathcal{N}(x_t) \gets \{x_i: d_H(x_i, x_t) = 1 \; \textrm{and} \; x_i \in \{-1, 1\}^d \},
\end{equation}
where $d_H(x_i, x_t)$ designates the Hamming distance between $x_i$ and $x_t$.
Also, we define the following neighborhood model for the sum-constrained problem:
\begin{equation}
\label{constrained_neighborhood}
\mathcal{N}(x_t) \gets \{x_i: d_H(x_i, x_t) = 2 \; \textrm{and} \; x_i \in \mathcal{C}_n \}.
\end{equation}
Algorithm \ref{algo:SA} presents the simulated annealing for the latter two combinatorial structures, where $s(t)$ is an annealing schedule, which is a non-increasing function of $t$. We use the annealing schedule suggested in \cite{SA}, which follows an exponential decay given by $s(t) = \exp(-\omega t/d)$, where $\omega$ is a decay parameter. \\
\textbf{Analysis for Exponential Acquisition Function}:
We used a simulated annealing based acquisition function on the surrogate model $\hat{f}(x)$. This model is very difficult to analyze. Instead, we analyze an exponential acquisition function given by
$$x \sim \frac{\exp\left(-\nicefrac{\hat{f}_{\alpha_t}(x)}{T}\right)}{\sum_{x\in \{-1,1\}^d} \exp \left(-\nicefrac{\hat{f}_{\alpha_t}(x)}{T}\right)}$$
where $T$ is the temperature. Let the p.m.f of this acquired sample be $\hat{P}_{\alpha_t}(x)$.
If we had access to the actual function $f$, we would use the acquisition function:
$$x \sim \frac{\exp\left(-\nicefrac{f(x)}{T}\right)}{\sum_{x\in \{-1,1\}^d} \exp \left(-\nicefrac{f(x)}{T}\right)}.$$
Let the p.m.f of this acquired sample be $P(x)$. We emphasize that, given explicit access to $f$ (white-box), one would simply repeatedly acquire samples using the acquisition function for $f$. In our black-box algorithm, we use the surrogate model to acquire samples.
Now, we show a result that implies the following under some additional technical condition: \textit{Until the acquisition function based on $\hat{f}_{\alpha_t}$ yields samples which are close in KL divergence to samples yielded by the acquisition function based on $f$, average $\phi_{t-1} - \phi_{t}$ (as in Lemma \ref{lemma_positive}) is large.}
In other words, if the acquisition function for our algorithm is far from the white-box acquisition function, then non-trivial learning of $f$ happens, i.e. $\alpha_t$ moves closer to $\alpha^{*}$ at least by a certain amount.
Let $\hat{Z}= \sum_{x} \exp \left( - \hat{f}_{\alpha_t}(x)/T \right)$ be the partition function. Similarly, let $Z$ be the partition function associated with $P(x)$.
\begin{theorem}
\label{thm:Acquisition}
Let $-1 \leq \hat{f}_{\alpha_t}(x), f(x) \leq 1$. If at any time $t$, and for a temperature $T$, we have for some $\epsilon>0$: \[ \left \lvert D_{\mathrm{KL}}(\hat{P}_{\alpha_t} \lVert P) - \log \left( \frac{ Z}{\hat{Z}}\right) \right \rvert \geq \epsilon,\]
then $\mathbb{E}_{\hat{P}_{\alpha_t}} [\phi_{t-1}-\phi_t] \geq 2 \, \eta \, \lambda \, \epsilon^2 T^2 - \eta^2$. Here, $D_{\mathrm{KL}}$ is defined with respect to $\log_e$ for convenience.
\end{theorem}
\begin{proof}
The proof is in the supplement.
\end{proof}
\begin{remark} Note that the condition in the theorem definitely implies the condition that $D_{\mathrm{KL}}(\hat{P}_{\alpha_t} \lVert P) >0$.
\end{remark}
\begin{algorithm}[!t]
\caption{Simulated Annealing for Combinatorial Constraints}
\begin{algorithmic}[1]
\STATE \textbf{Inputs:} surrogate model $\widehat{f}_{\alpha_t}$, neighborhood model $\mathcal{N}$, Constraint Set $\mathcal{C}$
\STATE $t = 0$
\STATE Initialize $\; x_0 \in \mathcal{C}$
\REPEAT
\STATE $z \sim \texttt{unif}\big(\mathcal{N}(x_{t}) \big)$
\IF {$\widehat{f}_{\alpha_t}(z) \leq \widehat{f}_{\alpha_t}(x_{t})$}
\STATE $x_{t+1} \gets z$
\ELSIF{$\texttt{unif}(0, 1) \leq \exp\bigg(-\tfrac{\widehat{f}_{\alpha_t}(z) - \widehat{f}_{\alpha_t}(x_{t})}{s(t)}\bigg)$}
\STATE $x_{t+1} \gets z$
\ELSE
\STATE $x_{t+1} \gets x_t$
\ENDIF
\STATE $t \gets t + 1$
\UNTIL{Stopping Criteria}
\RETURN $x_t$
\end{algorithmic}
\label{algo:SA}
\end{algorithm}
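As an illustration, a minimal Python sketch of Algorithm \ref{algo:SA} for the sum-constrained domain $\mathcal{C}_n$ might look as follows; the helper names and the decay parameter $\omega$ are our own choices, and the neighborhood swaps one $+1$ with one $-1$ so that each move stays within $\mathcal{C}_n$ at Hamming distance two, as in \eqref{constrained_neighborhood}.
\begin{verbatim}
import numpy as np

def anneal_constrained(f_hat, x0, omega=1.0, n_steps=1000,
                       rng=np.random.default_rng()):
    """Simulated annealing over C_n with schedule s(t) = exp(-omega*t/d).

    x0 must contain at least one +1 and one -1; neighbors differ in
    exactly two coordinates, so sum(x) is preserved at every step.
    """
    x, d = x0.copy(), len(x0)
    for t in range(n_steps):
        z = x.copy()
        i = rng.choice(np.flatnonzero(x == +1))  # flip one +1 down ...
        j = rng.choice(np.flatnonzero(x == -1))  # ... and one -1 up
        z[i], z[j] = -1, +1
        s_t = np.exp(-omega * t / d)
        if (f_hat(z) <= f_hat(x)
                or rng.uniform() <= np.exp(-(f_hat(z) - f_hat(x)) / s_t)):
            x = z
    return x
\end{verbatim}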
\subsection{Computational Complexity}
The computational complexity per time step for model learning in the proposed algorithm is in $\mathcal{O}(p) = \mathcal{O}(d^m)$, which is linear in the number of Fourier coefficients. More importantly, the complexity of the proposed learning algorithm is independent of the number of function evaluations (i.e. time step $t$). We also note that the complexity of the simulated annealing is in $\mathcal{O}(d^2)$; therefore, the overall complexity of the algorithm remains $\mathcal{O}(p)$ for $m \geq 2$.
\section{Extension of Lemma \ref{lemma_positive}}
\label{app:lemma_extension}
The results of Lemma \ref{lemma_positive} can be extended to the general case of Fourier coefficients with arbitrary signs, as follows.
\begin{remark}
\label{remark_general}
Lemma \ref{lemma_positive} holds for the general case of Fourier coefficients $\alpha_i^t$ with arbitrary signs.
\end{remark}
\begin{proof}
Following on the idea from \cite{hoeven18}, if $\lVert \alpha^{*} \rVert_1 \leq 1$, then one can always write $\alpha^{*}_i = \alpha^{*}_{i,+}-\alpha^{*}_{i,-}$ where $\sum_{\gamma,i}\alpha^{*}_{i,\gamma}=1$ and $\alpha^{*}_{i,\gamma} \geq 0$. This is because any point inside an $\ell_1$ ball is in the convex hull of $\{e_i,-e_i\}_{i \in [d]}$ where $e_i$ are the canonical unit vectors. Therefore, to approximate it at any time $t$ during the algorithm with exponential weight update, we assume that we have a set of $2p$ Fourier coefficients; we consider the monomial terms $+\psi_i(x)$ for the Fourier coefficients $\alpha_{i, +}^t$ as well as the monomial terms $-\psi_i(x)$ for the Fourier coefficients $\alpha_{i, -}^t$. Note that all the coefficients $\alpha_{i, \gamma}^t, \forall \gamma \in \{-, +\}$ are non-negative, and that the set of all such coefficients form a distribution, i.e. $\sum_{i, \gamma} \alpha_{i, \gamma}^t = 1$, due to the normalization in Algorithm \ref{algo:COMEX}. Therefore, applying Lemma \ref{lemma_positive} to the extended set of Fourier coefficients completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:Acquisition}}
In the main paper, we used a simulated annealing based acquisition function on the surrogate model $\hat{f}(x)$. This acquisition strategy is very difficult to analyze directly. Instead, we analyze a more idealized and simpler acquisition function, which is
$$x \sim \frac{\exp\left(-\nicefrac{\hat{f}_{\alpha_t}(x)}{T}\right)}{\sum_{x\in \{-1,1\}^d} \exp \left(-\nicefrac{\hat{f}_{\alpha_t}(x)}{T}\right)}$$
where $T$ is the temperature. Let the p.m.f of this acquired sample be $\hat{P}_{\alpha_t}(x)$.
Similarly, if we had access to the actual $f$, we would be using the acquisition function:
$$x \sim \frac{\exp\left(-\nicefrac{f(x)}{T}\right)}{\sum_{x\in \{-1,1\}^d} \exp \left(-\nicefrac{f(x)}{T}\right)}.$$
Let the p.m.f of this acquired sample be $P(x)$.
Now, we show a result that implies the following under some additional technical condition: \textit{Until the acquisition function based on $\hat{f}_{\alpha_t}$ yields samples which are close in KL divergence to samples yielded by the acquisition function based on $f$, the average $\phi_{t-1}- \phi_{t}$ (as in Lemma \ref{lemma_positive}) is large.}
Let $\hat{Z}= \sum_{x} \exp \left( - \hat{f}_{\alpha_t}(x)/T \right)$ be the partition function. Similarly, let $Z$ be the partition function associated with $P(x)$.
\begin{theorem}[Theorem \ref{thm:Acquisition} restated]
Let $$-1 \leq \hat{f}_{\alpha_t}(x), f(x) \leq 1.$$ If at any time $t$, and for a temperature $T$, we have for some $\epsilon>0$: \[ \bigg \lvert D_{\mathrm{KL}}(\hat{P}_{\alpha_t} \lVert P) - \log \left ( \frac{ Z}{\hat{Z}} \right ) \bigg \rvert \geq \epsilon ,\]
then $\mathbb{E}_{\hat{P}_{\alpha_t}} [\phi_{t-1}-\phi_t] \geq 2 \eta \lambda \epsilon^2 T^2 - \eta^2$. Here, $D_{\mathrm{KL}}$ is defined with respect to $\log_e$ for convenience.
\end{theorem}
\begin{proof}
We have:
\begin{align}
\epsilon \leq \bigg \lvert D_{\mathrm{KL}}(\hat{P}_{\alpha_t} \lVert P) -
\log \left(\frac{Z}{\hat{Z}} \right) \bigg \rvert &= \bigg \lvert \mathbb{E}_{\hat{P}_{\alpha_t}} [ -\frac{1}{T} (\hat{f}_{\alpha_t}(x)- f(x))] \nonumber \\
& + \log \left ( \frac{ Z}{\hat{Z}} \right ) - \log \left ( \frac{ Z}{\hat{Z}} \right ) \bigg \rvert \nonumber \\
& = \bigg \lvert \mathbb{E}_{\hat{P}_{\alpha_t}} [ -\frac{1}{T} (\hat{f}_{\alpha_t}(x)- f(x))] \bigg \rvert \nonumber \\
&\overset{a}{\leq} \frac{1}{T}\mathbb{E}_{\hat{P}_{\alpha_t}} [ \lvert \hat{f}_{\alpha_t}(x)- f(x) \rvert] \nonumber \\
& \overset{b}{\leq} \frac{1}{T}\sqrt{\mathbb{E}_{\hat{P}_{\alpha_t}} [ \lvert \hat{f}_{\alpha_t}(x)- f(x) \rvert^2 ]}
\end{align}
Justifications: (a) Jensen's inequality applied to $|x|$. (b) Jensen's inequality applied to the function $x^2$, i.e., $(\mathbb{E}[\lvert X \rvert])^2 \leq \mathbb{E}[ X^2 ] $.
Combined with Lemma \ref{lemma_positive}, this implies:
\begin{align}
\mathbb{E}_{\hat{P}_{\alpha_t}} [\phi_{t-1}-\phi_t] \geq 2 \eta \lambda \epsilon^2 T^2 - \eta^2.
\end{align}
\end{proof}
\section{Learning Rate in Algorithm \ref{algo:COMEX}}
\label{app:learning_rate}
In Algorithm \ref{algo:COMEX}, we use the anytime learning rate schedule of \cite{adaptive_EG}, which is a decreasing function of time $t$. The learning rate at time step $t$ is given by:
\begin{equation}
\eta_t = \min \bigg \{ \frac{1}{e_{t-1}}, c \sqrt{\frac{\ln{(2 \, p)}}{v_{t-1}}} \bigg \},
\end{equation}
where $c \overset{\Delta}{=} \sqrt{2(\sqrt{2} - 1)/(\exp(1)-2)}$ and
\begin{align*}
z_{j, t}^{\gamma} &\overset{\Delta}{=} - 2 \, \gamma \, \lambda \, \ell_t \, \psi_j(x_t) \\
e_t &\overset{\Delta}{=} \inf_{k \in \mathbb{Z}} \bigg \{ 2^k: 2^k \geq \max_{s \in [t]} \max_{ \substack{j, k \in [p] \\ \gamma, \mu \in \{-, +\}}} | z_{j, s}^{\gamma} - z_{k, s}^{\mu} | \bigg \} \\
v_t &\overset{\Delta}{=} \sum_{s \in [t]} \sum_{\substack{j \in [p] \\ \gamma \in \{-, +\}} } \alpha_{j, s}^{\gamma} \bigg ( z_{j, s}^{\gamma} - \sum_{\substack{k \in [p] \\ \mu \in \{-, +\}}} \alpha_{k, s}^{\mu} z_{k, s}^{\mu} \bigg )^2.
\end{align*}
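For illustration, the schedule can be evaluated directly from the histories of the signed expert losses $z_{j,s}^{\gamma}$ and weights $\alpha_{j,s}^{\gamma}$. The sketch below is our own reading of the definitions above, not released code; it assumes at least one nonzero loss has been observed so that $e_{t-1}$ and $v_{t-1}$ are positive.
\begin{verbatim}
import numpy as np

def learning_rate(z_history, alpha_history, p):
    """Anytime learning rate eta_t from the adaptive EG+- schedule.

    z_history, alpha_history: lists of length-2p arrays (one per step s),
    holding the signed losses z_s and the expert weights alpha_s.
    """
    c = np.sqrt(2.0 * (np.sqrt(2.0) - 1.0) / (np.e - 2.0))
    # e_{t-1}: smallest power of two above the largest loss range so far.
    max_range = max(z.max() - z.min() for z in z_history)
    e_prev = 2.0 ** np.ceil(np.log2(max_range))
    # v_{t-1}: cumulative weighted variance of the expert losses.
    v_prev = sum(float(a @ (z - a @ z) ** 2)
                 for z, a in zip(z_history, alpha_history))
    return min(1.0 / e_prev, c * np.sqrt(np.log(2 * p) / v_prev))
\end{verbatim}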
\section{Future Work}
As mentioned in Section \ref{sec:algorithm}, the computational cost (per time step) of the proposed \texttt{COMEX} algorithm is independent of the number of function evaluations and is linear in the number of monomial terms. This is a major improvement over the existing state-of-the-art algorithms in that the dependence on the number of function evaluations is completely eliminated from the complexity. Nevertheless, the complexity of the algorithm with respect to the number of variables grows polynomially, and could become expensive for problems with particularly higher orders of interactions incorporating a large number of variables. Therefore, an important direction for future work would involve investigating the possibility of improving this time complexity.
We speculate that the proposed algorithm can be extended to accommodate such computational requirements via addition of fresh experts over time in an adaptive fashion.
In this work, we utilized a simple simulated annealing method as our acquisition function model and proved our results regarding acquisition through exponential sampling. Another avenue for future research is to develop more efficient strategies to model the acquisition function, particularly devised for real-valued boolean functions.
\section{Experiments and Results}
\label{sec:experiments}
In this section, we evaluate the performance of the proposed algorithm in terms of simple regret as well as average computational time required to draw a sample for each function evaluation. We consider four problems: two unconstrained combinatorial problems (Sparsification of Ising Models and Contamination Control) as well as two combinatorial optimization problems with a sum constraint (Noisy $n$-Queens and optimal defect pattern in 2D nanomaterials). The latter problem is adopted from a real-world scenario; it takes advantage of a genuine energy evaluator (as black-box function) recognized in the molecular modeling literature, and has crucial real-life applications in designing nanomaterials utilized in nanoscale electronic devices \cite{defect_dynamics}. The two unconstrained combinatorial problems have also been considered in \cite{BOCS} and \cite{COMBO}.
We investigate the performance of different algorithms in two settings: (i) finite evaluation-budget regime, and (ii) finite time-budget regime. In the former case, we assume that each algorithm is given a fixed evaluation budget and has access to an unlimited time budget. In this setting, we consider problems with a relatively small number of variables. In the latter case, we assume that each algorithm, in addition to an evaluation budget, has a limited time budget and reports the minimum obtained within that time frame. This scenario is particularly relevant in problems with moderate numbers of variables, since the computational cost of the state-of-the-art algorithms is prohibitive, which makes it impossible in practice to draw a large number of samples for function evaluation.
The results are compared against two baselines, random search (RS) \cite{RS} and simulated annealing (SA) \cite{Kirkpatrick1983, SA}, as well as two state-of-the-art algorithms, BOCS \cite{BOCS} and COMBO \cite{COMBO}. We use the BOCS-SA version of BOCS, as opposed to BOCS-SDP, since the former version is computationally less expensive; as such, its use would make more sense than BOCS-SDP in the finite time-budget setting. In addition, the BOCS-SA algorithm can be adapted to optimization problems with a sum constraint in a straightforward fashion.
All the results are averaged over $10$ runs. We measure the performance of different algorithms in terms of the mean over simple regrets $\pm$ one standard error of the mean. We run the experiments on machines from the Xeon E5-2600 v3 family. The function evaluations in all the experiments are linearly mapped from the original interval $[\textrm{min}(f), \textrm{max}(f)]$ to the target interval $[-1, 1]$. Hence, the function value of $-1$ corresponds with the desired minimum. In many cases, we know a lower bound on $\min(f)$ and an upper bound on $\max(f)$ that enables us to scale to $[-1,1]$. In other cases where the lower bound on $\min(f)$ is unknown analytically, we fix a universal level which is smaller than all observed evaluations and compare all algorithms over all runs to this fixed level.
The sparsity parameter $\lambda$ of our proposed algorithm is set to $1$ in all the experiments. In our experiments, the algorithm was relatively insensitive to the choice of this parameter. Note that BOCS and COMBO also include sparsity/regularization and exploration parameters for their respective algorithms, the choice of which did not seem to impact the outcome noticeably, in a similar fashion to our algorithm.
Finally, we note that COMBO is order agnostic, while our proposed algorithm as well as BOCS take the maximum monomial degree $m$ as an input parameter. The maximum order $m = 2$ or $m = 3$ is deployed for our algorithm in the experiments. In particular, as shown in \cite{BOCS} and verified in our experiments, the sparsification of the Ising models as well as the contamination control problem have natural interactions of orders higher than two among the variables. As such, we set $m = 3$ in the latter two problems. In the remaining experiments, a maximum order of $m = 2$ is utilized. We set the maximum order of BOCS to $2$ in all the experiments, due to its excessively high computational cost at $m=3$.
\subsection{Sparsification of Ising Models}
Let $p$ be a zero-field Ising model with the probability distribution of $p(z) = \tfrac{1}{Z_p} \exp(z^T J^p z)$, where $z \in \{-1, 1\}^n$, $Z_p = \sum_z \exp(z^T J^p z)$ is the partition function, and $J^p \in \mathbb{R}^{n \times n}$ is a symmetric interaction matrix. The aim of the Ising sparsification problem is to find a sparse approximation model $q_x(z) = \tfrac{1}{Z_q} \exp(z^T J^{q_x} z)$ such that $\forall i, j \in [n]: J^{q_x}_{i, j} = x_{i, j} J^{p}_{i, j}$, where $x_{i, j} \in \{0, 1\}$ are decision variables. The solution to this problem is given by
\begin{equation}
x = \arg\min_{x \in \{0, 1\}^d} D_{\mathrm{KL}}(p||q_x) + \lambda \|x\|_1
\end{equation}
where $D_{\mathrm{KL}}(p||q_x)$ is the KL divergence of $p$ and $q_x$, and $\lambda$ is a regularization parameter. The KL divergence is expensive to compute; in addition, an exhaustive search requires $2^d$ function evaluations, which is infeasible in general.
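For very small models, the objective can be evaluated exactly by brute force; the following sketch (ours, and clearly exponential in $n$) makes the cost structure of the problem explicit.
\begin{verbatim}
import itertools
import numpy as np

def ising_objective(J, x_mask, lam):
    """D_KL(p || q_x) + lam * ||x||_1 by exhaustive enumeration.

    J:      symmetric (n, n) interaction matrix of the model p
    x_mask: binary (n, n) matrix of the decision variables x_{ij}
    Only feasible for very small n, since the sums run over 2^n states.
    """
    n = J.shape[0]
    Jq = J * x_mask
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    ep = np.array([z @ J @ z for z in states])   # energies under p
    eq = np.array([z @ Jq @ z for z in states])  # energies under q_x
    log_Zp = np.log(np.exp(ep).sum())
    log_Zq = np.log(np.exp(eq).sum())
    p = np.exp(ep - log_Zp)
    kl = float(p @ ((ep - log_Zp) - (eq - log_Zq)))
    return kl + lam * np.abs(x_mask).sum()
\end{verbatim}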
We follow the same experimental setup as in \cite{BOCS} and \cite{COMBO}, where we have an Ising model with $n = 16$ nodes and $d = 24$ interactions. The values of the interactions are sampled uniformly at random from the interval $[0.05, 5]$. The simple regret of various algorithms for the regularization parameter $\lambda = 0.01$ is depicted in Figure \ref{fig:ising}. The proposed algorithm is able to hit the minimum found by both COMBO and BOCS, although it requires more function evaluations to achieve that feat. However, we point out that, as depicted in Table \ref{times}, the proposed algorithm only takes $0.1$ seconds per time step on average as opposed to $47.7$ and $78.4$ seconds per time step taken for BOCS and COMBO, respectively. In particular, both BOCS and COMBO are computationally far more expensive than the black-box function, whose average evaluation cost for each query is $2.24$ seconds. Despite its poor initial performance, SA is also able to virtually reach the same minimum value as the latter three algorithms.
We note that the complexity of the Ising sparsification problem grows exponentially with $d$; hence, it is computationally infeasible to obtain black-box evaluations for moderately large numbers of variables, and we only considered the finite evaluation-budget setting for this particular problem.
\begin{figure}[t]
\center
\includegraphics[width=\linewidth]{figures/edit_simple_regret_ising.pdf}
\caption{\small \label{fig:ising}
Simple regret for the Ising Sparsification problem.}
\end{figure}
\subsection{Contamination Control}
The contamination control in food supply chain \cite{contamination_control} is an optimization problem with $d$ binary variables representing stages that can be contaminated with pathogenic microorganisms. At each time step, one can intervene at each stage of the supply chain to reduce the contamination risk by a random rate (which follows a beta distribution) and incur an associated cost. The goal is to minimize the overall food contamination while minimizing the total prevention cost. As such, the minimization is carried out with respect to the set of decision variables $x \in \{0, 1\}^d$ incorporating an $\ell_1$ regularization term with a regularization parameter $\lambda$.
Following the experimental setting of \cite{COMBO} and \cite{BOCS}, we initially set $d = 21$ and $\lambda = 0.01$. The results in terms of simple regret are shown in Figure \ref{fig:contamination}. As we can see from this Figure, COMBO outperforms the rest of the algorithms in that it is able to find the minimum in just over $100$ function evaluations on average. Despite its initially large regret results, BOCS is also able to find the minimum in just over $150$ function evaluations. The proposed \texttt{COMEX} algorithm is also competitive and is able to find the minimum in just over $200$ function evaluations. Note that SA and especially RS were not able to achieve the minimum in $250$ function evaluations. Finally, we point out that the proposed algorithm takes a fraction of time required by BOCS and COMBO in order to draw evaluation samples, as shown in Table \ref{times}.
We then increase the dimensionality of the problem to $d = 100$ variables. Due to the high dimensionality of this problem, drawing samples from both COMBO and BOCS becomes computationally expensive. Therefore, in addition to the evaluation budget of $1000$, we set a finite time budget of $24$ hours and run the experiments until at least one of the budget constraints is attained. Simple regret results are depicted in Figure \ref{fig:contamination}. In this setting, BOCS is only able to draw $\approx 150$ samples, while COMBO exceeds the time budget at around $100$ samples. On the other hand, the proposed algorithm is able to produce $1000$ samples quickly and approach the minimum function value. Considering the high dimensionality of this data, RS produces poor results, whereas SA incurs an eventual simple regret of $0.2$ on average. Finally, we note that \texttt{COMEX} is over $100$ times faster than both BOCS and COMBO, as depicted\footnote{Note that the average computation times reported in Table \ref{times} correspond only with producing a point via the algorithm, whereas the $24$-hour budget includes both the black-box evaluation cost and the computation time of the algorithm.} in Table \ref{times}.
\begin{figure}[t]
\center
\includegraphics[width=\linewidth]{figures/edit_simple_regret_contamination.pdf} \\
\includegraphics[width=\linewidth]{figures/edit_simple_regret_contamination100.pdf}
\caption{\small \label{fig:contamination}
Simple regret for the contamination control problem with: $d = 21$ (top), and $d = 100$ (bottom).}
\end{figure}
\begin{table}[t]
\caption{Average computation time per step (in seconds) over different datasets for different algorithms}
\label{times}
\begin{center}
\small
\begin{sc}
\begin{tabular}{lccccr}
\toprule
\multicolumn{3}{c}{Black Box} & \multicolumn{3}{c}{Algorithm} \\
\cmidrule(lr){1-3}\cmidrule(lr){4-6}
Dataset & $d$ & Cost & BOCS & COMBO & \texttt{COMEX} \\
\cmidrule(lr){1-3}\cmidrule(lr){4-6}
N-Queens & 49 & 0.001 & 202.1 & 336.7 & $\bm{0.09}$ \\
Contamination & 21 & 0.001 & 28.6 & 53.8 & $\bm{0.07}$ \\
Ising & 24 & 2.24 & 47.7 & 78.4 & $\bm{0.10}$ \\
\cmidrule(lr){1-3}\cmidrule(lr){4-6}
N-Queens & 144 & 0.001 & 401.28 & 722.05 & $\bm{2.87}$ \\
Contamination & 100 & 0.002 & 454.93 & 587.65 & $\bm{1.33}$ \\
Defect Dynamics & 400 & 73.16 & 873.99 & 3869.92 & $\bm{65.5}$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{center}
\end{table}
\subsection{Noisy \texorpdfstring{$n$}{n}-Queens}
We next consider a constrained optimization problem where the search space is restricted to the combinatorial domain $\mathcal{C}_n$. We adapt the acquisition function of different algorithms to this constrained domain in a straightforward manner. More specifically, we modify the local neighborhood search in both SA (in BOCS as well as in our proposed algorithm) and graph local search (in COMBO) to the constrained domain $\mathcal{C}_n$ by restricting the neighborhood to data points with Hamming distance of two rather than one, as defined in \eqref{constrained_neighborhood}.
The $n$-queens problem is a commonly used benchmark in combinatorial optimization literature \cite{Mukherjee2015, Hu2003swarm, takenaka2000proposal, homaifar1992n}. This problem consists of finding the placement of $n$ queens on an $n \times n$ chessboard so that no two queens share the same row, column, or diagonal \cite{Bell2009survey}. This problem can be formulated as a constrained binary optimization problem. We use binary variables $x_{ij}$ to represent the placement of a queen in each square position of the chessboard given by its row and column pair $(i,j)$, for $i, j \in [n]$. A placement of queens is then represented by a binary vector $x$ of length $d = n \times n$. Hence, a solution to the $n$-queens problem simultaneously meets the following constraints:
\begin{itemize}
\item There is exactly one queen in each row $i \in [n]$:
\begin{equation}\label{eq:queens_r} e_{\textrm{rows}}(x) = \sum_i ( \sum_j x_{ij} -1 )^2 = 0, \end{equation}
\item There is exactly one queen in each column $j \in [n]$:
\begin{equation}\label{eq:queens_c} e_{\textrm{cols}}(x) = \sum_j ( \sum_i x_{ij} -1 )^2 = 0, \end{equation}
\item There is at most one queen in each diagonal:
\begin{equation}\label{eq:queens_d} e_{\textrm{diags}}(x) =\sum_{\ell} \, \sum_{(i, j) \neq (k, h) \in \mathcal{D}_{\ell}} x_{ij} x_{kh} = 0, \end{equation}
where $\mathcal{D}_{\ell}$ represents the set of all the elements in the $\ell$-th diagonal, and the first summation is taken over all the diagonals with at least one square position.
\end{itemize}
\noindent The non-negative quadratic terms in constraints \eqref{eq:queens_r}--\eqref{eq:queens_d} indicate deviations from the required number of queens in each row, column, and diagonal, respectively. Thus, if there exists a solution to the $n$-queens problem given by a binary vector $x$ satisfying all the constraints, the minimum of \begin{equation}\label{eq:queens_energy} f(x) = e_{\textrm{rows}}(x) + e_{\textrm{cols}}(x) + e_{\textrm{diags}}(x) \end{equation} must be achieved at zero, and vice versa. We know that for $n>3$ a solution to the $n$-queens problem indeed exists; therefore, minimizing $f(x)$ is equivalent to solving the constraints \eqref{eq:queens_r}--\eqref{eq:queens_d}. This allows us to formulate the problem as an unconstrained optimization problem.
To provide a benchmark for the constrained optimization case, we add the redundant constraint that $\sum_i \sum_j x_{ij}= n$ to our formulation when generating samples, effectively reducing the search space to $\mathcal{C}_n$. Thus, for each problem of size $d$, we have $d = n \times n$ binary variables to optimize over, where the search space is constrained to binary vectors with $n$ ones.
We consider a noisy version of this problem, where the function evaluations from Equation \eqref{eq:queens_energy}, having been linearly mapped to the interval $[-1, 1]$, incur an additive Gaussian noise with zero mean and standard deviation of $\sigma = 0.02$.
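A minimal sketch of this noisy evaluator, in our own notation, is given below; it counts unordered pairs of queens on each diagonal, which vanishes exactly when constraint \eqref{eq:queens_d} holds (the linear rescaling of the evaluations to $[-1, 1]$ is omitted for brevity).
\begin{verbatim}
import numpy as np

def n_queens_energy(x, n, sigma=0.02, rng=np.random.default_rng()):
    """Noisy objective of Eq. (queens_energy) for a flattened n x n board."""
    b = np.asarray(x).reshape(n, n)
    e_rows = np.sum((b.sum(axis=1) - 1) ** 2)
    e_cols = np.sum((b.sum(axis=0) - 1) ** 2)
    e_diags = 0.0
    for off in range(-n + 1, n):
        for diag in (np.diagonal(b, off), np.diagonal(np.fliplr(b), off)):
            s = diag.sum()
            e_diags += s * (s - 1) / 2.0  # pairs of queens on one diagonal
    return e_rows + e_cols + e_diags + rng.normal(0.0, sigma)
\end{verbatim}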
First, we consider a smaller version of this problem with $n = 7$ and a finite evaluation budget of $250$ samples. In this experiment, all the algorithms are able to exhaust the evaluation budget within the allocated $24$-hour time frame. The results in terms of simple regret are depicted in Figure \ref{fig:nqueens}. As we can see from this figure, COMBO outperforms all the algorithms. BOCS performs only slightly better than RS. \texttt{COMEX} is a close second, while being able to run the experiment at a fraction of the time consumed by either COMBO or BOCS as indicated in Table \ref{times}.
Next, we increase the size of the problem to $n = 12$ and enforce a finite time budget of $24$ hours. In this case, COMBO and BOCS are unable to use the evaluation budget within the allotted time frame, and manage to draw only $\approx 100$ and $\approx 150$ samples, respectively. The proposed algorithm, on the other hand, is able to take advantage of the full evaluation budget and outperforms the baselines by a significant margin, as shown in Figure \ref{fig:nqueens}.
\begin{figure}[t]
\center
\includegraphics[width=\linewidth]{figures/edit_simple_regret_nqueens.pdf} \\
\includegraphics[width=\linewidth]{figures/edit_simple_regret_12queens.pdf}
\caption{\small \label{fig:nqueens}
Simple regret for the noisy $n$-Queens problem with: $n = 7$ ($d = 49$) (top), and $n = 12$ ($d = 144$) (bottom).}
\end{figure}
\subsection{Optimal Arrangement of Defect Structures in Nanomaterials}
Physical, chemical, and optoelectronic properties of two-dimensional transition-metal dichalcogenides (TMDs), such as MoS$_2$, are governed by structural defects such as Sulfur vacancies. A fundamental understanding of the spatial distribution of defects in these low-dimensional systems is critical for advances
in nanotechnology. Therefore, understanding the dynamics of point defect arrangements at various vacancy concentrations is crucial, as those are known to play a key role in phase transformation governing nanoscale electronic devices \cite{defect_dynamics}.
Given a two-dimensional grid of size $d$, the problem is to find the formation of a defect structure of size $n$ (corresponding to a given concentration factor) with the lowest energy, in which defects can be in either isolated or extended form (i.e. several defects next to each other) or a combination of both \cite{defect_dynamics}. Using the reactive force field (ReaxFF) \cite{ostadhossein2017reaxff} within LAMMPS simulation package \cite{Plimpton1995lammps}, we are able to obtain an energy evaluation for each selection of the defect structure. However, such function evaluations are computationally expensive to acquire. Hence, we are interested in finding the low-energy defect structure with as few energy evaluations as possible.
In our experiments, we deploy a 2-D MoS${_2}$ monolayer with a grid of size $d=400$, following the framework suggested in \cite{defect_dynamics}. In particular, we are interested in finding the optimal placement of $n = 16$ sulfur vacancies in the MoS$_2$ monolayer. Considering the moderately high dimensionality of this problem, the computational complexities of BOCS and COMBO render their applicability practically impossible, as it would take the latter algorithms several weeks to accumulate a few hundred candidate data points for energy evaluation purposes\footnote{With a $24$-hour time budget, BOCS managed to complete just over $50$ steps with a simple regret of $0.59$, whereas COMBO even failed to produce that many steps.}.
As can be observed in Figure \ref{fig:defect_dynamics}, the proposed \texttt{COMEX} algorithm outperforms the baselines, RS and SA, in identifying the optimal defect structure in the sample TMD grid of size $d=400$ over $500$ energy evaluations. In this experiment, since the exact value of the minimum energy is unknown, for the purpose of simple regret calculations, we pick a fixed universal energy level which is less than all the obtained function evaluations via all the algorithms in our experiments.
\begin{figure}[t]
\center
\includegraphics[width=\linewidth]{figures/edit_simple_regret_lammps_16.pdf}
\caption{\small \label{fig:defect_dynamics}
Simple regret for the optimal organization of point defect problem with $n = 16$ ($d = 400$).}
\end{figure}
\section{Introduction}
Combinatorial optimization (CO) problems arise in numerous application domains including machine learning, engineering, economics, transport, healthcare, and natural and social sciences \cite{wolsey1999integer}. Broadly speaking, such CO problems involve optimizing an explicit function over a constraint set on a discrete domain. A number of important problems in this class are NP-hard and there is a vast literature on approximating them in polynomial time \cite{williamson2011design}. In this work, we focus on black-box combinatorial optimization where we seek to minimize a function defined on the Boolean domain (a combinatorial domain) through acquiring noisy/perfect function evaluations from a black-box oracle.
There exists a vast literature on black-box function optimization when it comes to functions over the continuous domains. Bayesian Optimization (BO) \cite{movckus1975bayesian} is a well-established paradigm for optimizing costly-to-evaluate black-box objective functions $f$ with noisy evaluations. The latter paradigm consists of approximating $f$ using a probabilistic function model, often called a \textit{surrogate model}, and utilizing an \textit{acquisition function} along with the surrogate model to draw samples \cite{jones1998efficient}. Some common acquisition functions are Expected Improvement, Probability of Improvement, Thompson Sampling and Upper Confidence Bounds \cite{srinivas2009gaussian,thompson1933likelihood,auer2002using,mockus1994application}.
Only recently, generic black-box optimization algorithms, such as BOCS \cite{BOCS} and COMBO \cite{COMBO}, have been proposed for combinatorial domains. However, learning the surrogate model followed by drawing a sample using either BOCS or COMBO, even for a moderate number of variables, is more expensive than an oracle evaluation for many black-box functions of interest. For larger numbers of variables, it is essentially impractical to use BOCS and COMBO, as it takes a significant amount of time to determine the next sample to evaluate.
In this work, we introduce an efficient black-box optimization algorithm that uses a multilinear polynomial of bounded degree as the surrogate model and sequentially updates this model using exponential weight updates, while treating each monomial as an expert. At each step, the acquisition function is a version of simulated annealing applied to the current multilinear polynomial representation given by the surrogate model. Numerical experiments on various datasets, in both unconstrained and sum-constrained Boolean optimization problems, indicate the competitive performance of the proposed algorithm, while improving the computational time by up to several orders of magnitude compared to state-of-the-art algorithms in the literature.
\subsection{Contributions}
We summarize our main contributions as follows:
\begin{enumerate}[leftmargin=5mm, itemsep=-1pt]
\item[1.] We propose a novel and computationally efficient algorithm for black-box function optimization over the Boolean hypercube. Our algorithm, Combinatorial Optimization with Monomial Experts (\texttt{COMEX}), comprises a pool of monomial experts forming an approximate multilinear polynomial representation for the black-box function. At any time step, the coefficients of the monomial experts are refreshed via an exponential weight update rule.
\item[2.] The proposed method uses a version of simulated annealing applied to the current polynomial representation to produce new candidate points for black-box function evaluations.
\item[3.] We present theoretical insights on the sequential improvements in the proposed surrogate model as a result of exponential weight updates. Furthermore, we offer theoretical results proving that samples drawn under an exponential acquisition function model lead to sequential improvements in the surrogate model under some technical conditions.
\item[4.] We evaluate the performance of the \texttt{COMEX} algorithm, together with recently developed state-of-the-art BO methods for the combinatorial domain as well as popular heuristic-based baseline methods, on a variety of benchmark problems of different dimensionality. The CO problems investigated in this study are sparsification of Ising models, noisy $n$-queens, food contamination control, and optimal arrangement of point defects in 2D nanomaterials.
\item[5.] \texttt{COMEX} performs competitively on all benchmark problems of low dimensionality, while improving the computational time up to several orders of magnitude. On problems of higher dimensionality, \texttt{COMEX} outperforms all baseline and state-of-the-art BO methods in terms of finding a minimum within a finite time budget.
\end{enumerate}
\section{Related Work}
The existing algorithms in the discrete optimization literature, which are capable of handling black-box functions, are not particularly sample-efficient; in many applications, a large evaluation budget is required for such algorithms to converge to the functions' optima. In addition, they are not necessarily guaranteed to find the global optima. The most popular algorithms in this category include local search \cite{Kirkpatrick1983, SA} and evolutionary search, such as particle search \cite{Schafer2013}.
\textbf{Bayesian Optimization}:
The majority of work on black-box function optimization targets continuous domains. In particular, algorithms based on Bayesian Optimization \cite{BO} have attracted a lot of attention in the literature. Many popular BO methods are built on top of Gaussian Processes (GPs), which rely on the smoothness defined by a kernel to model uncertainty \cite{srinivas2009gaussian,thompson1933likelihood,mockus1994application}. As such, they are best suited for continuous spaces \cite{BO,hebbal2019bayesian,djolonga2013high,wang2013bayesian}. The only exceptions are the recently introduced algorithms BOCS \cite{BOCS} and COMBO \cite{COMBO}.
\textbf{Hyperparameter Optimization}:
Bayesian Optimization methods have been adapted to hyperparameter optimization \cite{bergstra2011algorithms}. Here, one seeks to find the best hyperparameter configuration that minimizes the validation loss after training a model with that configuration. In this adaptation of BO methods, the goal is to select the next hyperparameter configuration given the function outputs in the previous iterations. However, in hyperparameter optimization, the focus is on the total training time and not the total number of noisy evaluations of hyperparameters. Therefore, bandit-based and tree search methods which focus on resource allocation have been developed \cite{li2017hyperband,kandasamy2017multi,sen2018multi}. In our work, the main cost criterion is the number of function evaluations rather than other resources which can be controlled.
\textbf{Black-Box Combinatorial Optimization}: Similar to our proposed approach, the BOCS algorithm \cite{BOCS} employs a sparse monomial representation to model the interactions among different variables. However, the latter utilizes sparse Bayesian linear regression with a heavy-tailed horseshoe prior to learn the coefficients of the model, which, as we will discuss in the sequel, is computationally costly. A Gibbs sampler is then used in order to draw a sample from the posterior over the monomial coefficients. When the monomials are restricted to the order of two, the problem of minimizing the acquisition function is posed as a second order program which is solved via semidefinite programming. Alternatively, simulated annealing is advocated for higher order monomial models, or to speed up the computation in the case of order-two monomials.
More recently, the COMBO algorithm \cite{COMBO} was introduced to address the impediments of BOCS. A one-subgraph-per-variable model is utilized for various combinatorial choices of the variables; the collection of such subgraphs is then joined via graph Cartesian product to construct a combinatorial graph to model different combinations of variables. The Graph Fourier Transform (GFT) over the formed combinatorial graph is used to gauge the smoothness of the black-box function. A GP with a variant of the diffusion kernel, referred to as automatic relevance determination diffusion kernel, is proposed for which GFT can be carried out in a computationally tractable fashion. The proposed GP is capable of accounting for arbitrarily high orders of interactions among variables.
The computational complexity of surrogate-model learning in BOCS at time step $t$ is in $\mathcal{O}(t^2 \cdot d^{2 \, m})$, where $d$ is the number of variables and $m$ is the maximum monomial order in the model. This complexity is associated with the cost of sampling parameters from the posterior distribution. Hence, the complexity of BOCS grows quadratically in the number of evaluations, which particularly makes it unappealing for larger numbers of variables.
On the other hand, despite the fact that the GFT utilized by COMBO is shown to run in linear time with respect to the number of variables for the Boolean case, the overall computational complexity of the algorithm remains prohibitively high. More precisely, the overall computational complexity of learning the surrogate model in COMBO is in $\mathcal{O}(\max\{t^3, d^2\})$. The $\mathcal{O}(t^3)$ complexity is associated with marginal likelihood computation for the GP, whereas the $\mathcal{O}(d^2)$ term stems from the slice sampling utilized for fitting the parameters of the surrogate model.
Therefore, both BOCS and COMBO incorporate model learning methods which grow polynomially in the number of function evaluations. This particularly hinders the applicability of the aforementioned algorithms for problems with moderate numbers of variables, since a larger number of function evaluations is required due to the curse of dimensionality.
\textbf{Prediction with Expert Advice}:
In the framework of prediction with expert advice \cite{prediction_ebook, Vovk1998, hedge}, at each time step $t$ the forecaster receives an instance from a fixed domain. The forecaster is given access to a set of $p$ experts, and is required to produce a distribution $w^t$ over such experts in each time step $t$. Each expert $i$ then incurs a loss $\ell_i^t$, which contributes to the mixture loss of the forecaster given by $\ell^t = \sum_i w_i^t \ell_i^t$. In general, there are no restrictions on the distribution of expert losses. The Hedge algorithm \cite{Vovk1998, hedge} is perhaps the most popular approach to address this problem via an exponential weight update rule given by $w_i^t = w_i^{t-1} \exp(- \eta_t \,\ell_i^t)$, where $\eta$ is a learning rate.
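A minimal sketch of one Hedge step (ours, purely illustrative; the renormalization keeps $w^t$ a distribution) is:
\begin{verbatim}
import numpy as np

def hedge_step(w, losses, eta):
    """w_i <- w_i * exp(-eta * loss_i), followed by renormalization."""
    w = w * np.exp(-eta * losses)
    return w / w.sum()

w = np.full(5, 1 / 5)                           # uniform prior, 5 experts
losses = np.array([0.2, 0.5, 0.1, 0.9, 0.3])
mixture_loss = float(w @ losses)                # forecaster's mixture loss
w = hedge_step(w, losses, eta=0.5)
\end{verbatim}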
The prediction with expert advice paradigm has been tailored to the problem of sparse online linear prediction from individual sequences. In particular, the $\texttt{EG}^{\pm}$ algorithm \cite{KIVINEN1997} uses an exponential weight update rule to formulate an online linear regression algorithm which performs comparably to the best linear predictor under sparsity assumptions. The adaptive $\texttt{EG}^{\pm}$ algorithm \cite{adaptive_EG} further proposes a parameter-free version of $\texttt{EG}^{\pm}$ where the learning rate $\eta_t$ is updated in an adaptive fashion, and is a decreasing function of time step $t$.
\section{Introduction}
Black holes are maximum entropy objects, and so a black hole has more entropy than any other object with the same volume \cite{1, 1a, 2, 4, 4a}.
It is important to associate a maximum entropy with a black hole as finite entropy objects
can cross the horizon of a black hole. This would spontaneously reduce the entropy of the universe, and thus
the second law of thermodynamics can get violated if a maximum entropy is not associated with a black hole.
This maximum entropy associated with a black hole scales with the area of the horizon. In fact, the entropy of the black hole can be
expressed in terms of the area of the horizon as $s = A/4$. The observation that the maximum entropy of a region of space scales with its area
has motivated the holographic principle \cite{5, 5a}. This principle states that the number of degrees of freedom
in any region of space is equal to the number of degrees of freedom on the boundary of that region.\\
The holographic principle has found various applications in many different areas of physics. However, it is expected that
the holographic principle will get modified near the Planck scale \cite{6, 6a}. This is because the area-entropy law of black holes
gets modified due to the quantum gravitational effects. In fact, almost all approaches to quantum gravity predict the same functional
form for these quantum corrections to
the area-entropy relation i.e., the area-entropy law gets corrected by a logarithmic correction term. However,
the coefficient of this logarithmic correction term depends on the specific approach chosen, and is different for different
approaches to quantum gravity. Such logarithmic correction term has been obtained using
the non-perturbative quantum general
relativity \cite{1z}. This was done by using the relation between the
density of states of a black hole and the conformal blocks of a well
defined conformal field theory. The Cardy formula has been used for obtaining such corrections terms
to the area-entropy relation \cite{card}. The correction for a BTZ black hole has been calculated, and it has been demonstrated that
these are logarithmic corrections \cite{card}.\\
The effect of matter fields surrounding a black hole has been studied \cite{other, other0, other1}. This analysis has also been used to obtain corrections
to the area-entropy relation, and it was observed that this correction term is logarithmic.
The string theoretical corrections to the entropy of a black hole have been calculated, and it has been found that the entropy of
a black hole gets corrected by logarithmic term generated from string theoretical effects \cite{solo1, solo2, solo4, solo5}.
The logarithmic correction to the entropy of a dilatonic black hole has been obtained \cite{jy}. The partition
function of a black hole has been used to obtain the logarithmic correction to the area-entropy law of a black hole \cite{bss}.
The corrections obtained using the generalized uncertainty principle are also logarithmic \cite{mi, r1}. The thermodynamics and statistics of
G\"{o}del black hole with a logarithmic correction have been studied \cite{Pourdarvish}. Furthermore, the $P$-$V$ criticality of a dyonic charged AdS black
hole with a logarithmic correction has also been analyzed \cite{Sadeghi}.\\
It may be noted that in the Jacobson formalism, Einstein's equation can be derived from the first law of thermodynamics \cite{z12j, jz12}.
In this formalism, it is required that the
Clausius relation holds for all the local Rindler causal horizons through each space-time point, and this gives rise to Einstein's equation.
As there exists a relation between the geometry and thermodynamics of a black hole, we expect that thermal fluctuations in the thermodynamics will give
rise to quantum fluctuations in the metric. In fact, it has been demonstrated that the entropy of the black hole gets corrected by
logarithmic terms due to these thermal fluctuations \cite{l1, SPR}. As such corrections are expected to occur from most approaches to quantum gravity,
we will study the effect of such a term on our system. Furthermore, as the coefficient of this term depends on the approach to
quantum gravity, we will not fix its value; in this way we can see the effect on the system of the correction term
that would be generated from different approaches to quantum gravity.\\
Already, the effect of thermal fluctuations on the thermodynamic quantities of an AdS charged black hole has been analyzed \cite{1503.07418}.
It was demonstrated that the thermal fluctuations decrease certain thermodynamic potentials associated with the system;
for example, the free energy is reduced due to these fluctuations.
The modification to the thermodynamics of a black Saturn because of the thermal fluctuations has also been studied \cite{1505.02373}, and it was observed that
logarithmic corrections do not affect the stability of the black Saturn. The
logarithmic corrections to the entropy of a modified Hayward black hole have been analyzed \cite{1603.01457}, and it was observed that the
values of the pressure and internal energy are reduced due to
such corrections. It was also demonstrated that the first law of thermodynamics is satisfied for
these black holes in the presence of thermal fluctuations. The correction to the thermodynamics of a
charged dilatonic black Saturn have been studied \cite{1605.00924}, and it has been demonstrated
that for this system the corrections obtained from a conformal field theory are the same as the corrections obtained from the fluctuations in the energy.
It may be noted that such corrections were studied by
analyzing the thermal fluctuations very close to the equilibrium, and this approximation is expected to break down near the Planck scale.
This is because near the Planck scale, the thermal fluctuations will become so large that the equilibrium thermodynamics cannot be used to describe the system.
However, as long as the system remain close to the equilibrium, it is possible to analyze the effect of thermal fluctuations as a perturbation around
the equilibrium state \cite{Landau, fl}.
\\
In this paper, we will analyze the effects of such thermal fluctuations on a higher dimensional singly spinning Kerr-AdS black hole \cite{1411.4309}. It may be
noted that the thermodynamics of such a black hole has already been studied \cite{4a, 1510.00085},
and we shall analyze the corrections to the thermodynamics by thermal fluctuations. We will show that thermal fluctuations have an important effect
on the thermodynamics and
critical points of singly spinning Kerr-AdS black holes in higher dimensions. We will study the effect of logarithmic correction on the partition function,
and also investigate the special case of an ordinary Kerr-AdS black hole in four dimensions.
\section{Singly Spinning Kerr-AdS Black Hole}
In this section, we will discuss thermodynamic properties of a higher dimensional singly spinning Kerr-AdS black hole.
The metric for a $d$-dimensional Kerr-AdS black hole in Boyer-Lindquist coordinates can be written as \cite{Gibb},
\begin{eqnarray}\label{A1}
ds^{2}&=&-W\left(1+\frac{r^{2}}{l^{2}}\right)d\tau^{2}+\frac{2m}{U}\left(Wd\tau-
\sum_{i=1}^{N}\frac{a_{i}\mu_{i}^{2}d\varphi_{i}}{\Xi_{i}}\right)^{2}\nonumber\\
&+&\sum_{i=1}^{N}\frac{r^{2}+a_{i}^{2}}{\Xi_{i}}\mu_{i}^{2}d\varphi_{i}^{2}+\frac{U dr^{2}}{\mathcal{F}-2m}+\sum_{i=1}^{N+\varepsilon}\frac{r^{2}+a_{i}^{2}}{\Xi_{i}}d\mu_{i}^{2}\nonumber\\
&-&\frac{1}{W(r^{2}+l^{2})}\left(\sum_{i=1}^{N+\varepsilon}\frac{r^{2}+a_{i}^{2}}{\Xi_{i}}\mu_{i}d\mu_{i}\right)^{2},
\end{eqnarray}
where
\begin{eqnarray}\label{A2}
W&=&\sum_{i=1}^{N+\varepsilon}\frac{\mu_{i}^{2}}{\Xi_{i}},\nonumber\\
U&=&r^{\varepsilon}\sum_{i=1}^{N+\varepsilon}\frac{\mu_{i}^{2}}{r^{2}+a_{i}^{2}}\prod_{j}^{N}(r^{2}+a_{j}^{2}),\nonumber\\
\mathcal{F}&=&r^{\varepsilon-2}\left(1+\frac{r^{2}}{l^{2}}\right)\prod_{j}^{N}(r^{2}+a_{j}^{2}),\nonumber\\
\Xi_{i}&=&1-\frac{a_{i}^{2}}{l^{2}}.
\end{eqnarray}
Here, $m$ and $a_{i}$ are mass and rotation parameters, respectively. The coordinates $\mu_{i}$ satisfy the following constraint
\begin{equation}\label{A3}
\sum_{i=1}^{N+\varepsilon}\mu_{i}^{2}=1,
\end{equation}
where $\varepsilon=0$ for odd $d$ or $\varepsilon=1$ for even $d$. In the case of $d = 4$, the above space-time reduces to the four-dimensional Kerr-AdS metric.
Thermodynamics of this system has already been studied \cite{4a}. The mass of this
black hole (as well as enthalpy) is given by,
\begin{equation}\label{A4}
M=\frac{m\omega_{d-2}}{4\pi(\prod_{j}\Xi_{j})}\left(\sum_{i=1}^{N}\frac{1}{\Xi_{i}}-\frac{1-\varepsilon}{2}\right),
\end{equation}
where $\omega_{d-2}$ is the volume of the unit $(d-2)$-sphere which is given by,
\begin{equation}\label{A5}
\omega_{d-2}=\frac{2\pi^{(\frac{d-1}{2})}}{\Gamma(\frac{d-1}{2})}.
\end{equation}
The angular momenta is given by,
\begin{equation}\label{A6}
J_{i}=\frac{a_{i}m\omega_{d-2}}{4\pi\Xi_{i}\prod_{j}\Xi_{j}}.
\end{equation}
The angular velocities of the horizon are given by,
\begin{equation}\label{A7}
\Omega_{i}=\frac{a_{i}(r_{+}^{2}+l^{2})}{l^{2}(r_{+}^{2}+a_{i}^{2})}.
\end{equation}
Furthermore, the temperature $T$ can be expressed as
\begin{equation}\label{A8}
T=\frac{1}{2\pi}\left[r_{+}(1+\frac{r_{+}^{2}}{l^{2}})
\sum_{i=1}^{N}\frac{1}{r_{+}^{2}+a_{i}^{2}}-\frac{1}{r_{+}}\left(\frac{1}{2}-\frac{r_{+}^{2}}{2l^{2}}\right)^{\varepsilon}\right].
\end{equation}
The entropy of this black hole is given by $s=\frac{A}{4}$, where
\begin{equation}\label{A9}
A=\frac{\omega_{d-2}}{r_{+}^{1-\varepsilon}}\prod_{i=1}^{N}\frac{r_{+}^{2}+a_{i}^{2}}{\Xi_{i}}.
\end{equation}
Here, the horizon radius $r_{+}$ is the largest root of $\mathcal{F} - 2m = 0$. The thermodynamic volume is given by,
\begin{equation}\label{A10}
V=\frac{Ar_{+}}{d-1}+\frac{8\pi}{(d-1)(d-2)}\sum_{i}a_{i}J_{i}.
\end{equation}
\\
In this paper, we will study the logarithmic corrections to the thermodynamics of a singly spinning Kerr-AdS black hole. A
singly spinning Kerr-AdS black hole can be described using one non-zero rotation parameter $a_{1}=a$ (while the other $a_{i}$ are zero), and
so the metric (\ref{A1}) takes the following form,
\begin{eqnarray}\label{B1}
ds^{2}&=&-\frac{\Delta}{\rho^{2}}(dt-\frac{a}{\Xi}\sin^{2}\theta d\varphi)^{2}+\frac{\rho^{2}}{\Delta}dr^{2}+\frac{\rho^{2}}{\Sigma}d\theta^{2}\nonumber\\
&+&\frac{\Sigma\sin^{2}\theta}{\rho^{2}}[a dt-\frac{r^{2}+a^{2}}{\Xi}d\varphi]^{2}+r^{2}\cos^{2}\theta d\Omega_{d-4}^{2},
\end{eqnarray}
where
\begin{eqnarray}\label{B2}
\Delta&=&(r^{2}+a^{2})(1+\frac{r^{2}}{l^{2}})-2mr^{5-d},\nonumber\\
\Sigma&=&1-\frac{a^{2}}{l^{2}}\cos^{2}\theta,\nonumber\\
\rho^{2}&=&r^{2}+a^{2}\cos^{2}\theta,\nonumber\\
\Xi&=&1-\frac{a^{2}}{l^{2}}.
\end{eqnarray}
Thermodynamic quantities associated with this metric have been studied \cite{Gibb2}. The
temperature and entropy of this black hole, are given by
\begin{equation}\label{B3}
T=\frac{1}{2\pi}\left[r_{+}(1+\frac{r_{+}^{2}}{l^{2}})\left(\frac{1}{r_{+}^{2}+a^{2}}+\frac{d-3}{2r_{+}^{2}}\right)-\frac{1}{r_{+}}\right],
\end{equation}
and
\begin{equation}\label{B4}
s=\frac{\omega_{d-2}}{4}\frac{(r_{+}^{2}+a^{2})r_{+}^{d-4}}{\Xi},
\end{equation}
where $r_{+}$ is the largest positive real root of $\Delta=0$, which is obtained from the first equation in (\ref{B2}).
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig1-1.eps}&\includegraphics[width=50 mm]{fig1-2.eps}&\includegraphics[width=50 mm]{fig1-3.eps}\\
\includegraphics[width=50 mm]{fig1-4.eps}&\includegraphics[width=50 mm]{fig1-5.eps}&\includegraphics[width=50 mm]{fig1-6.eps}
\end{array}$
\end{center}
\caption{$\Delta$ in terms of $r$. (a) $l=1.3$, $m=1$, and $a=0.5$. (b) $l=1.3$, $m=1.2$, and $a=0.7$. (c) $l=1.5$, $m=1.2$, and $a=0.8$.
(d) $l=1$, $m=2$, and $a=0.5$. (e) $l=1.3$, $a=0.5$, $m=0.6$ (upper thin), $m=1$ (middle), and $m=2$ (lower thick). (f) $l=1.3$, $a=0.5$, $m=0.6$
(upper thin), $m=1$ (middle), and $m=2$ (lower thick). $d=4$ (solid red), $d=5$ (dashed blue), $d=6$ (dotted black), $d=10$ (dash dotted green).}
\label{fig:1}
\end{figure}
Plots of Fig. \ref{fig:1} show that there is at least one positive root. There are special choices of the $(l,m,a)$
parameters, such as $(1.3,1,0.5)$, $(1.5,1.2,0.8)$, $(1.3, 1.2, 0.7)$, and so on, where
$r_{+}=1$ for all $d$. We see small differences between Figs. \ref{fig:1} (a), (b) and (c),
where $r_{+}=1$, while Fig. \ref{fig:1} (d) has distinct values of $r_{+}$ for different dimensions.
Figs. \ref{fig:1} (e) and (f) show the variation of the event horizon with $m$ for $d=4$ and $d=10$, respectively.
We can observe that, for fixed $l$ and $a$, $r_{+}$ varies with $m$.
It will be important to study the behavior of the temperature with $r_{+}$. Also, from the last equation of (\ref{B2}),
we should set $a^{2}<l^{2}$ to have positive entropy.\\
In Fig. \ref{fig:2} we can see the behavior of the temperature with $r_{+}$.
Since, for fixed $l$ and $a$, $r_{+}$ varies with $m$,
$T$ in turn varies with $r_{+}$. We can see different behavior for $d\geq6$ and $d\leq5$.
In the cases of $d=4$ and $d=5$ (solid and dashed lines of Fig. \ref{fig:2}),
the temperature is a monotonically increasing function of $r_{+}$.
On the other hand, for $d\geq6$, the temperature has a minimum: it decreases
for small $r_{+}$, and increases for large $r_{+}$. The minimum temperature occurs at the critical value $r_{+c}$ ($d\geq6$),
which is a root of the following equation,
\begin{equation}\label{B5}
\left( d-1 \right) {r_{+}}^{6}+ \left( 2\,{a}^{2}d-{l}^{2} \left( d-3
\right) \right) {r_{+}}^{4}+{a}^{2} \left( \left( d-3 \right) {a}^{2}-2
\,{l}^{2} \left( d-6 \right) \right) {r_{+}}^{2}-{a}^{4}{l}^{2} \left( d-
5 \right) =0.
\end{equation}
We can reach the critical value of the event horizon radius by varying $m$.
For $a=0.5$ and $l=1.3$, we observe that
$r_{+c}\approx1$ is obtained for $m\approx1$ in $d=10$ and for $m\approx0.1$ in $d=4$.
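Since Eq. (\ref{B5}) is cubic in $u = r_{+}^{2}$, the critical radius can be obtained numerically; the following sketch (ours, using a standard polynomial root finder) returns the largest positive root.
\begin{verbatim}
import numpy as np

def critical_radius(d, a, l):
    """Largest positive root r_{+c} of Eq. (B5), cubic in u = r_+^2."""
    coeffs = [d - 1,
              2 * a**2 * d - l**2 * (d - 3),
              a**2 * ((d - 3) * a**2 - 2 * l**2 * (d - 6)),
              -a**4 * l**2 * (d - 5)]
    roots = np.roots(coeffs)
    u = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return np.sqrt(max(u)) if u else None

print(critical_radius(d=10, a=0.5, l=1.3))  # close to 1, cf. the text
\end{verbatim}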
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=55 mm]{fig2.eps}
\end{array}$
\end{center}
\caption{Temperature in terms of $r_{+}$ for $l=1.3$, and $a=0.5$. $d=4$ (solid red), $d=5$ (dashed blue), $d=6$ (dotted black), $d=10$ (dash dotted green).}
\label{fig:2}
\end{figure}
Other thermodynamic quantities, like the mass, angular momentum, angular velocity, and volume, can be expressed as
\begin{eqnarray}\label{B6}
M&=&\frac{m\omega_{d-2}}{4\pi\Xi^{2}}\left(1+\frac{(d-4)\Xi}{2}\right),
\\ \label{B7}
J&=&\frac{ma\omega_{d-2}}{4\pi\Xi^{2}},
\\ \label{B8}
\Omega&=&\frac{a}{l^{2}}\frac{r_{+}^{2}+l^{2}}{r_{+}^{2}+a^{2}},
\\ \label{B9}
V&=&\frac{Ar_{+}}{d-1}\left[1+\frac{a^{2}(r_{+}^{2}+l^{2})}{l^{2}\Xi(d-2)r_{+}^{2}}\right],
\end{eqnarray}
where $A=4s$, as given by the relation (\ref{B4}). It is clear that, in the special case of $d=4$ and $a=0$, we get $V=\frac{4}{3}\pi r_{+}^{3}$ as expected.
Hence, the effect of $a$ is that it increases the volume in four dimensions. The situation is similar in the higher dimensional case. Finally, the Gibbs free energy is given by
\begin{equation}\label{B10}
G=\frac{\omega_{d-2}r_{+}^{d-5}}{16\pi\Xi^{2}}\left(3a^{2}+r_{+}^{2}-\frac{(r_{+}^{2}-a^{2})^{2}}{l^{2}}
+\frac{3a^{2}r_{+}^{4}+a^{4}r_{+}^{2}}{l^{4}}\right).
\end{equation}
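A compact sketch (ours, purely illustrative) that evaluates Eqs. (\ref{B4}) and (\ref{B6})--(\ref{B10}) for a given parameter set reads:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def kerr_ads_quantities(d, m, a, l, rp):
    """Evaluate Eqs. (B4), (B6)-(B10) for the singly spinning hole."""
    w = 2 * np.pi ** ((d - 1) / 2) / gamma((d - 1) / 2)  # omega_{d-2}
    Xi = 1 - a**2 / l**2                                 # requires a < l
    s = w * (rp**2 + a**2) * rp ** (d - 4) / (4 * Xi)
    M = m * w / (4 * np.pi * Xi**2) * (1 + (d - 4) * Xi / 2)
    J = m * a * w / (4 * np.pi * Xi**2)
    Om = (a / l**2) * (rp**2 + l**2) / (rp**2 + a**2)
    V = 4 * s * rp / (d - 1) * (1 + a**2 * (rp**2 + l**2)
                                / (l**2 * Xi * (d - 2) * rp**2))
    G = w * rp ** (d - 5) / (16 * np.pi * Xi**2) * (
        3 * a**2 + rp**2 - (rp**2 - a**2) ** 2 / l**2
        + (3 * a**2 * rp**4 + a**4 * rp**2) / l**4)
    return dict(s=s, M=M, J=J, Omega=Om, V=V, G=G)
\end{verbatim}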
The behavior of $G$ and the critical points, together with other thermodynamic potentials like the Helmholtz free energy,
the $P$-$V$ diagram, and the stability of the system,
will be discussed in the next section, where we also analyze the effect of thermal fluctuations on the thermodynamics of this system.
\section{Thermal Fluctuations}
The entropy of a black hole will be corrected by a logarithmic term due to the thermal fluctuations.
It may be noted that we will be analyzing this system very close to the equilibrium, and so we will treat
the thermal fluctuations as perturbations around the equilibrium. This approximation is valid as long
as the correction due to the thermal fluctuations is small compared to the original quantity, i.e.,
as long as $(S - s)/s \ll 1$, where $S$ is the corrected entropy and $s$ is the original entropy of the system.
It should also be noted that at very high temperatures, i.e., near the Planck scale,
the thermal fluctuations will be very large, and at this stage the system cannot be analyzed as a perturbation around the equilibrium temperature.
However, for temperatures where we can analyze this system as a perturbation around equilibrium, we can write
the corrected entropy as \cite{Landau, fl, l1},
\begin{equation}\label{C1}
S = s - \frac{\ln s^{\prime\prime}}{2},
\end{equation}
where the prime denotes a derivative with respect to $T^{-1}$, and $T$ is the equilibrium temperature.
Furthermore, for such systems,
the second derivative of the entropy can be expressed in terms of fluctuations of the energy, and so
the corrected entropy can be written as \cite{l1}
\begin{equation}\label{C2}
S = s -\frac{\alpha}{2} \ln(sT^{2}),
\end{equation}
where $s$ is the original entropy given by Eq. (\ref{B4}). The parameter $\alpha$ is added by hand to analyze the effect of thermal fluctuations.
Furthermore,
as the logarithmic corrections to the entropy are generated by almost all approaches to quantum gravity, while the coefficient of such
corrections varies between different approaches, we will keep this coefficient as a variable in this paper. Thus, we can effectively
discuss the effect of the corrections to the thermodynamics from various different approaches to quantum gravity. It may be noted that, in the limiting cases,
$\alpha=1$ corresponds to the entropy corrected by thermal fluctuations, and $\alpha=0$ to the original entropy of this system.
\\
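As a concrete illustration, the corrected entropy of Eq. (\ref{C2}) can be evaluated directly from Eqs. (\ref{B3}) and (\ref{B4}); the sketch below is ours and is valid only where $sT^{2}>0$.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def corrected_entropy(d, a, l, rp, alpha=1.0):
    """S = s - (alpha/2) ln(s T^2), Eq. (C2), singly spinning Kerr-AdS."""
    w = 2 * np.pi ** ((d - 1) / 2) / gamma((d - 1) / 2)
    Xi = 1 - a**2 / l**2
    s = w * (rp**2 + a**2) * rp ** (d - 4) / (4 * Xi)           # Eq. (B4)
    T = (rp * (1 + rp**2 / l**2) * (1 / (rp**2 + a**2)
         + (d - 3) / (2 * rp**2)) - 1 / rp) / (2 * np.pi)       # Eq. (B3)
    return s - 0.5 * alpha * np.log(s * T**2)
\end{verbatim}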
As we will be analyzing a very general form of the corrections to the entropy, we generalize this result
by looking at the entropy of an ideal gas,
written as,
\begin{equation}\label{IGS}
S = \frac{5}{2}N k_{B} -N k_{B}\ln(\frac{N}{V}\frac{h^{3}}{(2\pi m k_{B} T)^{\frac{3}{2}}}),
\end{equation}
where $k_{B}$ is Boltzmann constant and $N$ is the total particle number. Motivated by the equation (\ref{IGS}), we propose the following logarithmic entropy,
\begin{equation}\label{LCS}
S(T) = s +\alpha\ln(s f(T)),
\end{equation}
where $f(T)$ is a specific function of temperature, and $\alpha$ is a free parameter which is used to incorporate
the effect of logarithmic corrections.
If we assume $s=\frac{5}{2}N k_{B}$, $\alpha=-N k_{B}$ and $f(T)=\frac{2\hbar}{5V}(\frac{2\pi}{m})^{\frac{3}{2}}T^{-\frac{3}{2}}$, then
the ideal gas entropy (\ref{IGS}) is reproduced. The presence of $\hbar$ in $f(T)$ shows that thermal fluctuations are indeed a quantum effect,
and this form can be tested for several physical systems.\\
Furthermore, if we assume $f(T)=T^{2}$ and $\alpha=-\frac{1}{2}$, then the corrections to the entropy due to thermal fluctuations are reproduced.\\
In the canonical ensemble, one can relate entropy $S(T)$ to the partition function $Z$,
\begin{equation}\label{P1}
S(T) = k_{B}\ln{Z}+k_{B} T (\frac{\partial \ln{Z}}{\partial T}),
\end{equation}
So, for any physical system with ordinary entropy $s$, one can obtain,
\begin{equation}\label{P2}
\ln{Z}=\frac{1}{T}\int{\frac{s +\alpha\ln(s f(T))}{k_{B}}dT}.
\end{equation}
Using the equation (\ref{P2}), one can obtain other thermodynamics quantities like internal energy,
\begin{equation}\label{P3}
E=k_{B}T^{2}\frac{d\ln{Z}}{dT}=T(s +\alpha\ln(s f(T)))-\int{(s +\alpha\ln(s f(T)))dT},
\end{equation}
Helmholtz free energy,
\begin{equation}\label{P4}
F=-k_{B}T\ln{Z}=-\int{(s +\alpha\ln(s f(T)))dT},
\end{equation}
specific heat at constant volume,
\begin{equation}\label{P5}
C_{V}=T\left[\frac{d}{d T}(s +\alpha\ln(s f(T)))\right]_{V},
\end{equation}
specific heat at constant pressure,
\begin{equation}\label{P6}
C_{P}=C_{V}+\left[P+\left(\frac{\partial E}{\partial V}\right)_{T}\right]\left(\frac{\partial V}{\partial T}\right)_{P},
\end{equation}
and pressure,
\begin{equation}\label{P7}
P=k_{B}T\left(\frac{d\ln{Z}}{dV}\right)_{T}.
\end{equation}
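Note that, since $F=-k_{B}T\ln{Z}$, the pressure in Eq. (\ref{P7}) can equivalently be obtained from the Helmholtz free energy,
\begin{equation}
P=k_{B}T\left(\frac{\partial \ln{Z}}{\partial V}\right)_{T}=-\left(\frac{\partial F}{\partial V}\right)_{T},
\end{equation}
which is the form that will be used below.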
Then, we can obtain enthalpy and Gibbs free energy as
\begin{equation}\label{P8}
H=E+PV,
\end{equation}
and
\begin{equation}\label{P9}
G=F+PV=H-T(s +\alpha\ln(s f(T))).
\end{equation}
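It is straightforward to check that these quantities satisfy the standard thermodynamic relations. For instance, from Eqs. (\ref{P3}) and (\ref{P4}), and writing $S(T)=s+\alpha\ln(s f(T))$,
\begin{equation}
E-TS=T\,S(T)-\int S(T)\,dT-T\,S(T)=-\int S(T)\,dT=F,
\end{equation}
so that Eq. (\ref{P9}) is simply the usual relation $G=F+PV=E-TS+PV=H-TS$.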
Now we will assume $f(T)=T^{2}$ and investigate the thermodynamic properties of this system.\\
In the plots of Fig. \ref{fig:3}, we can see the behavior of the entropy for various dimensions (we focus on $d=4, 5, 6, 10$).
It is expected that the logarithmic correction is important for small black holes, and indeed we see that for larger $r_{+}$ both
$S$ and $s$ coincide. On the other hand, for small $r_{+}$, the corrected entropy has a completely different behavior,
and a minimum of the entropy appears for $d\geq5$. In the special case of $d=4$, there is a maximum of the entropy, which shows a singular behavior.
However, it is possible that this approximation breaks down near this point, as perturbation theory cannot be used to analyze such
points.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig3-1.eps}&\includegraphics[width=50 mm]{fig3-2.eps}\\
\includegraphics[width=50 mm]{fig3-3.eps}&\includegraphics[width=50 mm]{fig3-4.eps}
\end{array}$
\end{center}
\caption{Entropy in terms of $r_{+}$ for $l=1.3$, and $a=0.5$. Thin curves show ordinary entropy ($\alpha=0$) given by the equation (\ref{B4}),
while thick curves show logarithmic corrected entropy ($\alpha=1$) given by the equation (\ref{C2}). $d=4$ (solid red), $d=5$ (dashed blue), $d=6$
(dotted black), $d=10$ (dash dotted green).}
\label{fig:3}
\end{figure}
Now, we can use the specific heat given by Eq. (\ref{P5})
to study the stability of the black hole. The plots of Fig. \ref{fig:4} show the behavior of the
specific heat in terms of $r_{+}$ for various dimensions. In the absence of extra dimensions ($d=4$),
we can see an unstable region for the small black hole. This means that there is a minimum size
for the black hole. Hence, the thermal fluctuations can cause a phase transition in the black hole
(as one can see from the second, third and last plots of
Fig. \ref{fig:4}), and without the logarithmic corrections there is no phase transition, except in four dimensions.
We can see that the black hole is completely stable in $d=5$ without the logarithmic correction;
however, thermal fluctuations make the black hole unstable. The situation is completely different for $d\geq6$:
in these cases the black hole without the correction is completely unstable.
It is an interesting result that the
logarithmic corrections are needed to make the black holes stable in the presence of at least two extra dimensions ($d\geq6$).
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig4-1.eps}&\includegraphics[width=50 mm]{fig4-2.eps}\\
\includegraphics[width=50 mm]{fig4-3.eps}&\includegraphics[width=50 mm]{fig4-4.eps}
\end{array}$
\end{center}
\caption{Specific heat in terms of $r_{+}$ for $l=1.3$, and $a=0.5$. Thin curves show ordinary case ($\alpha=0$), while thick curves show logarithmic
corrected case ($\alpha=1$). $d=4$ (solid red), $d=5$ (dashed blue), $d=6$ (dotted black), $d=10$ (dash dotted green).}
\label{fig:4}
\end{figure}
In order to obtain the Helmholtz free energy, we need to calculate the internal energy,
\begin{equation}\label{C4}
E=\int{C dT}=E_{1}+\alpha E_{2},
\end{equation}
where $E_{1}$ and $E_{2}$, for the case of $d=4$, are given by
\begin{equation}\label{C5}
E_{1}=\frac{2a(l^{2}-a^{2})\tan^{-1}(\frac{r_{+}}{a})+r_{+}(2a^{2}-l^{2}-r_{+}^{2})}{2(a^{2}-l^{2})},
\end{equation}
and
\begin{equation}\label{C6}
E_{2}=\frac{ar_{+}(a^{2}+r_{+}^{2})\tan^{-1}(\frac{r_{+}}{a})+\frac{a^{2}}{4}(l^{2}-5r_{+}^{2})-\frac{3}{2}r_{+}^{4}}{\pi l^{2}r_{+}(a^{2}+r_{+}^{2})}.
\end{equation}
In the presence of extra dimensions, we obtain approximately similar results, which are illustrated in Fig. \ref{fig:5}.
In the case of $d=4$, we can see an important effect for the small black hole, which has infinite internal energy because of
thermal fluctuations. In the cases of $d=6$ and $d=10$, we obtain a minimum of the internal energy due to the thermal fluctuations.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig5-1.eps}&\includegraphics[width=50 mm]{fig5-2.eps}
\end{array}$
\end{center}
\caption{Internal energy in terms of $r_{+}$ for $l=1.3$, and $a=0.5$. Thin curves show ordinary case ($\alpha=0$),
while thick curves show logarithmic corrected case ($\alpha=1$). $d=4$ (solid red), $d=5$ (dashed blue), $d=6$ (dotted black), $d=10$ (dash dotted green).}
\label{fig:5}
\end{figure}
Now, we can calculate the Helmholtz free energy given by Eq. (\ref{P4}).
In the plots of Fig. \ref{fig:6}, we can see the behavior of the Helmholtz
free energy in terms of $r_{+}$ for various dimensions.
In the cases of $d\geq5$, we can see that the logarithmic correction decreases the value of $F$.
In the special case of $d=4$, there is a critical horizon radius $r_{c}$ where $F(\alpha=1)=F(\alpha=0)$.
If $r_{+}>r_{c}$, the effect of the logarithmic correction is to decrease the Helmholtz free energy,
while if $r_{+}<r_{c}$, its effect is to increase the Helmholtz free energy up to a maximum value.
Moreover, $F(\alpha=0)\rightarrow+\infty$ and $F(\alpha=1)\rightarrow-\infty$
as $r_{+}\rightarrow0$. An interesting point is that the value of the corrected
$F$ at $r_{+}\approx1$ is the same for $d=4$ and $d=5$.
In higher dimensions, the Helmholtz free energy of a small black hole vanishes in the absence of thermal fluctuations,
while it diverges to negative infinity in their presence.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig6-1.eps}&\includegraphics[width=50 mm]{fig6-2.eps}&\includegraphics[width=50 mm]{fig6-3.eps}
\end{array}$
\end{center}
\caption{Helmholtz free energy in terms of $r_{+}$ for $l=1.3$, and $a=0.5$. Thin curves show ordinary case ($\alpha=0$),
while thick curves show logarithmic corrected case ($\alpha=1$). $d=4$ (solid red), $d=5$ (dashed blue), $d=6$ (dotted black), $d=10$ (dash dotted green).}
\label{fig:6}
\end{figure}
The Gibbs potential is obtained from the enthalpy (which is interpreted as the mass given by Eq. (\ref{B6})), the temperature, and the entropy, as expressed by Eq. (\ref{P9}).
We can see that the value of $G$ depends on $m$, $a$ and $l$; hence, for fixed $m$, $a$ and $l$, the value of $r_{+}$ is fixed.
In Table \ref{tab:table1}, we can see the effect of the logarithmic correction on the Gibbs potential. We can conclude that
thermal fluctuations decrease the value of $G$ in $d=4$ and $d=5$ dimensions. On the
other hand, for the cases of $d\geq6$, thermal fluctuations increase the value of the Gibbs potential.
For $d=9$ and $d=10$, the Gibbs free energy is negative in the absence of the logarithmic correction, but
it is positive in the presence of thermal fluctuations.
\begin{table}[h!]
\centering
\caption{Value of the Gibbs potential for $l=1.3$, $m=1$ and $a=0.5$ for various dimensions.}
\label{tab:table1}
\begin{tabular}{|l|c||r|}
\hline
$\alpha=0$ & $\alpha=1$ & $d$ \\ \hline\hline
0.38932 & 0.21797 & 4 \\ \hline
0.47997 & 0.4132 & 5 \\ \hline
0.46452 & 0.5807 & 6 \\ \hline
0.34055 & 0.6809 & 7 \\ \hline
0.14279 & 0.7235 & 8 \\ \hline
-0.076259 & 0.7434 & 9 \\ \hline
-0.26761 & 0.7751 & 10 \\ \hline
\end{tabular}
\end{table}
Now, using the equations (\ref{B9}), (\ref{P4}) and
\begin{equation}\label{C9}
P=-\left(\frac{\partial F}{\partial V}\right),
\end{equation}
we can study the behavior of the pressure. In order to obtain the $PV$ diagram, we find $r_{+}$ in terms of $V$ from Eq.
(\ref{B9}) and then eliminate $r_{+}$ from Eq. (\ref{C9}) to obtain the behavior of $P$ in terms of $V$. In the case of $d=5$, we obtain
\begin{equation}\label{C10}
r_{+}^{2}=\frac{\pi l a^{2}(2l^{2}-a^{2})-(a^{2}-l^{2})\sqrt{\pi^{2} a^{4} l^{2}+6(3l^{2}-2a^{2})V}}{\pi l (2a^{2}-3l^{2})}.
\end{equation}
Using the relation (\ref{C10}) in Eq. (\ref{C9}) for $d=5$, we can obtain the behavior of $P$,
as illustrated in Fig. \ref{fig:7}. We can see that the critical point is shifted
due to the thermal fluctuations. The case of interest is
$l=1.3$ and $a=0.5$, for which we obtain a completely stable black hole. We should note that there are some negative regions in Fig. \ref{fig:7}
which are unphysical, since they would require analytically continuing $l\rightarrow il$, and so we cannot use the values below $P=0$ to analyze the
van der Waals behavior. These unphysical situations are obtained for small $a$ and large $l$, so one can remove such negative regions
by suitably fixing $a$ and $l$. This produces a maximum value for $l$ and a minimum value for $a$, related through an uncertainty-like relation.
Fitting the curves of Fig. \ref{fig:7} suggests the following virial expansion form of the pressure,
\begin{equation}\label{C11}
P=\frac{A}{V}+\frac{B}{V^{2}}+\frac{C}{V^{3}}+\cdots
\end{equation}
and hence we can write,
\begin{equation}\label{C12}
\frac{PV}{T}=A(T)+\frac{B(T)}{V}+\frac{C(T)}{V^{2}}+\cdots
\end{equation}
This means that the singly spinning Kerr-AdS black hole in five dimensions behaves as a van der Waals fluid. The best-fit values of the virial
coefficients $A(T)$, $B(T)$ and $C(T)$ for a stable case yield the following relation,
\begin{equation}\label{C12b}
\frac{PV}{T}=0.3+\frac{0.7}{V^{2}},
\end{equation}
so $B(T)=0$, i.e., the black hole is at the Boyle temperature.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig7.eps}
\end{array}$
\end{center}
\caption{Pressure in terms of $V$ for $l=1.3$ and $d=5$. Thin curves show ordinary case ($\alpha=0$), while thick
curves show logarithmic corrected case ($\alpha=1$). $a=0$ (dotted red), $a=0.25$ (dashed black), $a=0.333$ (solid green), $a=0.5$ (dash dotted black).}
\label{fig:7}
\end{figure}
The situation is more complicated in other dimensions, where an analytic expression for $r_{+}$ in terms of $V$ is not available.
For example, using the relation (\ref{B9}) with $d=10$, we have the following equation:
\begin{equation}\label{C13}
(8l^{2}-7a^{2})r^{9}+(9a^{2}l^{2}-7a^{4})r^{7}+l^{2}a^{4}r^{5}-\frac{945(a^{2}-l^{2})^{2}}{l^{2}\pi^{4}}V=0.
\end{equation}
However, we can assume small $a$ and perform a numerical analysis to obtain the pressure in terms of the volume; this is illustrated in Fig. \ref{fig:8}.
We can see that the thermal fluctuations are necessary to have a critical point and to
obtain the expected curves. The right plot of Fig. \ref{fig:8} shows that $a$ and $l$ should be small to obtain the expected curves.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig8-1.eps}&\includegraphics[width=50 mm]{fig8-2.eps}
\end{array}$
\end{center}
\caption{Pressure in terms of $V$ for $a=0.01$ and $d=10$.
The left plot shows the ordinary case ($\alpha=0$), while the right plot shows the logarithmic corrected case ($\alpha=1$).
$l=0.2$ (dotted red), $l=0.3$ (dashed blue), $l=0.4$ (solid green), $l=0.5$ (dash dotted black), $l=1.3$ (long dashed orange).}
\label{fig:8}
\end{figure}
\section{Four Dimensional Kerr-AdS Black Hole}
In this section, we discuss the special case of a rotating black hole in four dimensions, and we analyze the effects of thermal fluctuations
on such a black hole.
The corresponding metric is obtained by setting $d=4$ in the solution given by (\ref{B1}) and (\ref{B2}).
The thermodynamic quantities of the Kerr-AdS black hole are listed below. The black hole mass and angular momentum are given by
\begin{equation}\label{A33}
M=\frac{m}{\Xi^{2}},
\end{equation}
and
\begin{equation}\label{A44}
J=\frac{ma}{\Xi^{2}},
\end{equation}
while angular velocity of the horizon is given by
\begin{equation}\label{A55}
\Omega=\frac{a(r_{+}^{2}+l^{2})}{l^{2}(r_{+}^{2}+a^{2})},
\end{equation}
where $r_{+}$ is the largest root of $\Delta = 0$. By an appropriate choice of $l$, $m$ and $a$, there are two real positive roots,
$r_{\pm}$, as illustrated in Fig. \ref{fig:9}. In Table \ref{tab:table2}, we list the value of $r_{+}$ for some possible values of the free black hole parameters.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig9.eps}
\end{array}$
\end{center}
\caption{Typical behavior of $\Delta$ in terms of $r$ with $l=1$, $m=1$, and $a=0$.}
\label{fig:9}
\end{figure}
\begin{table}[h!]
\centering
\caption{Value of $r_{+}$.}
\label{tab:table2}
\begin{tabular}{|l||c|c|r|}
\hline
$r_{+}$ & $a$ & $m$ & $l$ \\ \hline\hline
1 & 0 & 1 & 1 \\ \hline
0.5 (extremal) & 0.742 & 1 & 1 \\ \hline
0.5 & 0.2 & 0.367& 1 \\ \hline
1 & 0.4 & 0.728& 2 \\ \hline
1 & 1 & 2 & 1 \\ \hline
\end{tabular}
\end{table}
Furthermore, the temperature $T$ is given by,
\begin{equation}\label{A66}
T=\frac{1}{2\pi r_{+}}\left[\frac{(a^{2}+3r_{+}^{2})(r_{+}^{2}+l^{2})}{2l^{2}(a^{2}+r_{+}^{2})}-1\right].
\end{equation}
The ordinary entropy of the black hole is given by $s=\frac{A}{4}$, and hence
\begin{equation}\label{A77}
s=\pi\frac{r_{+}^{2}+a^{2}}{\Xi}.
\end{equation}
The thermodynamic volume is given by,
\begin{equation}\label{A1010}
V=\frac{4\pi r_{+}}{3}\frac{r_{+}^{2}+a^{2}}{\Xi}\left(1+\frac{(r_{+}^{2}+l^{2})a^{2}}{2l^{2}r_{+}^{2}\Xi}\right).
\end{equation}
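As a simple consistency check of these expressions, in the static limit $a\rightarrow0$ (so that $\Xi\rightarrow1$), Eqs. (\ref{A66}), (\ref{A77}) and (\ref{A1010}) reduce to the familiar Schwarzschild-AdS results,
\begin{equation}
T\rightarrow\frac{l^{2}+3r_{+}^{2}}{4\pi l^{2}r_{+}},\qquad
s\rightarrow\pi r_{+}^{2},\qquad
V\rightarrow\frac{4\pi r_{+}^{3}}{3}.
\end{equation}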
We will now study the logarithmic corrections to the thermodynamics of this Kerr-AdS black hole.
In Fig. \ref{fig:10}, we can see the behavior of the partition function for various values of $a$ and $l$. We can see
that the logarithmic correction reduces the value of the partition function. We can reproduce the behavior of $Z$ using the following function in the region $r_{+}\leq2$,
\begin{equation}\label{E1}
Z\approx c_{1}+\frac{(c_{2}-c_{3})(r_{+}-b)^{4}}{r_{+}}+\frac{c_{4}(r_{+}-b)}{(r_{+}-d)^{2}},
\end{equation}
where $c_{1}$, $c_{2}$, $c_{3}$ ($c_{2}>c_{3}$), $c_{4}$, $b$ and $d$ are
constants which can be fitted so that the above function reproduces the curves of Fig. \ref{fig:10}. Here $c_{1}=0$
corresponds to the corrected partition function ($\alpha\neq0$), while $c_{1}\neq0$ and $c_{3}=0$ correspond to the ordinary case ($\alpha=0$).
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig10.eps}
\end{array}$
\end{center}
\caption{Partition function in terms of $r_{+}$ with $l=1$, and $a=0.5$ (blue); $l=1$, and $a=0.742$ (green); $l=2$, and $a=0.4$ (red). Solid and dashed lines represent $\alpha=1$ and $\alpha=0$ respectively.}
\label{fig:10}
\end{figure}
In Fig. \ref{fig:11} we can see the typical behavior of the free energy $F$ as a function of $r_{+}$. It is illustrated that
the effect of the thermal fluctuations increases as $r_{+}$ becomes smaller. It is clear that the logarithmic correction increases the Helmholtz free energy.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig11.eps}
\end{array}$
\end{center}
\caption{Helmholtz free energy in terms of $r_{+}$ with $l=1$, and $a=0.2$. Solid and dashed lines represent logarithmic corrected and ordinary free energy respectively.}
\label{fig:11}
\end{figure}
Now, by using Eq. (\ref{P7}), one can discuss the pressure
and the critical point, which is obtained by solving $\partial_{V}P=\partial_{VV}P=0$.
In Fig. \ref{fig:12}, we can see the behavior of the pressure and compare it with the critical pressure $P_{C}$.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{fig12.eps}
\end{array}$
\end{center}
\caption{Pressure in terms of $r_{+}$ with $l=1$, and $a=0.5$. Solid ($P>P_{C}$), Dashed ($P=P_{C}$), Dotted ($P<P_{C}$).}
\label{fig:12}
\end{figure}
\section{Conclusions and Discussion}
In this paper, we analyzed the effects of thermal fluctuations on the thermodynamics of a small singly spinning AdS-Kerr black hole in higher dimensional space-time.
It was observed that the entropy of this black hole gets corrected by a logarithmic correction term due to the thermal fluctuations, and this has important
consequences for the properties of such a black hole. We also calculated the corrections to the various thermodynamical quantities due to the thermal fluctuations.
Then, we analyzed the stability of this black hole in higher dimensions, and the effect of thermal fluctuations on the stability of these black hole. We also
studied the critical phenomena for such black holes, and the effect of thermal fluctuations on the critical phenomena. It was observed that the effect of these thermal
fluctuations can be neglected for a large black hole, but becomes important for a sufficiently small black hole. This was expected, as the correction to the
entropy-area relation occurs due to quantum fluctuations in the geometry, and these fluctuations become important only at small scales.\\
This was done by studying the phase transition in the canonical ensemble. The curves of the specific heat for this system were also analyzed, and it was observed that they
can have two divergent points (see, for example, the third plot of Fig. \ref{fig:4}, corresponding to $d=6$). Furthermore, it was also observed that in four
dimensions both the large-radius region and the small-radius region are thermodynamically stable, with positive specific heat, while the intermediate-radius region
is unstable, with negative specific heat. For the case of $d\leq5$, it is clear that when the size of the black hole is small, the logarithmic correction makes the black hole
unstable; in higher dimensions, the situation is the inverse.\\
It may be noted that there are several interesting asymptotic geometries, and it is possible to study the corrections to the thermodynamics of such geometries
due to thermal fluctuations. It would also be interesting to analyze the effect of thermal fluctuations on black holes in modified theories of gravity.
It is expected that these thermal fluctuations will also correct the entropy in modified theories of gravity. Such corrections to the entropy of modified
theories of gravity will also produce correction terms for other thermodynamic quantities. This can also affect the critical phenomena, which can be studied for
AdS black holes in modified theories of gravity. It may be noted that the critical phenomena have been studied for AdS black holes in $f(R)$ theories of
gravity \cite{adsads1}.\\
Topological black hole solutions of third-order Lovelock gravity coupled to two classes of Born-Infeld type
nonlinear electrodynamics have also been studied \cite{adsads2}.
In this analysis, the geometric and thermodynamic properties of AdS black hole solutions were discussed,
and it was observed that the first law of thermodynamics holds for such solutions.
The heat capacity and the determinant of the Hessian matrix for these black holes were used to
evaluate the thermal stability in both the canonical and grand canonical ensembles. It would be interesting to analyze the effect of thermal fluctuations
on the thermodynamics and stability of such black holes in Lovelock gravity coupled to two classes of Born-Infeld type
nonlinear electrodynamics.\\
It may be noted that recently the correction to the thermodynamics of black holes has also been obtained from gravity's rainbow \cite{gr12,gr12ab, gr14, gr15, gr17}.
It has been argued that the correction from gravity's rainbow can help resolve the black hole information paradox \cite{info1, infor4, info2}.
It would be interesting to analyze the effects of gravity's rainbow on the thermodynamics of singly spinning AdS-Kerr black hole.
In fact, it is possible to use the results of this paper to obtain certain constraints on the rainbow functions. This is because, as the
logarithmic correction is so universally produced by almost all approaches to quantum gravity, we expect that such corrections should also
be produced by gravity's rainbow. This can then be used to constrain the parameters appearing in the rainbow functions.
It would also be interesting to perform such an analysis for other kind of black holes, and analyze the relation between correction obtained
from thermal fluctuations and corrections obtained from gravity's rainbow.
It may be noted that the thermodynamics of black holes has also been studied in massive gravity \cite{mass1, mass2, mass4, mass6}.
It would also be interesting to analyze the effects of thermal fluctuations on the thermodynamics of black holes in massive gravity.
Finally, it may be interesting to study logarithmic correction effect on the $3D$ hairy black hole, as such systems have also been studied \cite{001,002,003,004}. It would be interesting to analyze the effect that logarithmic corrections can have on such black holes.
The spectroscopic study of nucleon resonances ($N^*$ and $\Delta^*$) dates back
to the discovery of the $\Delta$ baryon by the Chicago University group in 1952~\cite{chicago}.
Here, the existence of a new baryon with the isospin $3/2$,
which has come to be known as $\Delta(1232)3/2^+$,
was suggested
from the rapid increase of the $\pi^+ p$ and $\pi^- p$ reaction total cross sections
at $\sim 150$~MeV of the incident pion momentum and the ratios of the cross sections.
More than 60 years after this discovery, nearly 50 $N^*$ and $\Delta^*$ baryons
have been reported, as listed by the Particle Data Group~\cite{pdg14}.
However, as pointed out by the George Washington University group~(see the Introduction of
Ref.~\cite{said06}), one still does not have definitive conclusions
for more than half of the reported $N^*$ and $\Delta^*$ baryons, even regarding their existence.
The $N^*$ and $\Delta^*$ spectroscopy
therefore remains a fundamental challenge in hadron physics.
In the past, a number of static hadron models, such as constituent quark models~\cite{cqm}
and models based on the Dyson-Schwinger equations~\cite{dse},
have been proposed to study the mass spectrum and quark-gluon substructure of hadrons.
In such static hadron models, the excited hadrons are usually treated as stable particles.
However, in reality, the excited hadrons strongly couple to the multihadron scattering states
and can exist only as unstable resonances in hadron reactions.
This fact raises an intriguing question: how important are
the dynamical effects arising from such a strong coupling to scattering states
in understanding the mass spectrum, structure, and production mechanism
of hadrons as resonant particles?
To answer this question, the so-called dynamical coupled-channels (DCC) approaches
have been developed by a number of theoretical groups including us.
These approaches have been applied to the analysis of various meson-production reactions
in the nucleon resonance region and have succeeded in providing new insight into
dynamical contents of hadron resonances,
which is difficult to be addressed by the static hadron models.
In this contribution, we give an overview of the DCC approaches
and present our recent efforts for the $N^*$ and
$\Delta^*$ spectroscopy based on the so-called ANL-Osaka DCC approach~\cite{msl07,knls13,knls16}.
\section{$N^*$ and $\Delta^*$ spectroscopy: Physics of broad and overlapping resonances}
\label{sec:}
The resonances usually appear as isolated peaks in the cross sections.
In fact, the first peak in the $\pi^- p$ reaction total cross section
is attributed to the existence of the $\Delta(1232)3/2^+$ resonance (Fig.~\ref{fig:pimptcs}).
One may then expect that the next two peaks, at $\sqrt{s} \sim 1.5$~GeV and $\sqrt{s} \sim 1.7$~GeV,
are also produced by isolated resonances.
However, it turns out that they contain $\sim 20$ $N^*$ and $\Delta^*$ resonances.
Furthermore, the decay widths of these resonances are found to be
very large, $\sim 300$~MeV on average,
which can be even broader than the energy range of the two peaks.
This means that the $N^*$ and $\Delta^*$ resonances are highly overlapping
with each other in energy, and thus a peak in the cross section does not necessarily
mean the existence of an isolated resonance in the $N^*$ and $\Delta^*$ spectroscopy.
This situation is quite different from other systems such as heavy-quark hadrons,
atoms, and nuclei.
In those systems, the resonances usually appear as clear and well-separated peaks
in the cross sections.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth,clip]{pimp-tcs}
\caption{\label{fig:pimptcs}
Total cross section for the inclusive $\pi^- p$ reaction in the resonance region.
The first peak at $\sqrt{s}\sim 1.2$~GeV is produced solely by the $\Delta(1232)3/2^+$ resonance,
while the next two higher peaks contain $\sim 20$ $N^*$ and $\Delta^*$ resonances.
}
\end{figure}
The broad and overlapping nature of $N^*$ and $\Delta^*$ resonances makes
their experimental identification very difficult.
Close cooperation between experiments and theoretical analyses
is therefore indispensable for the $N^*$ and $\Delta^*$ spectroscopy.
In fact, tremendous efforts in this direction have been made since the late 90s.
A huge amount of high statistics data of meson-production reactions off the nucleon
were obtained from photon- and electron-beam facilities,
such as ELPH, ELSA, JLab, MAMI, and SPring-8,
and were brought to theoretical analysis groups using coupled-channels approaches
such as ANL-Osaka, Bonn-Gatchina, J\"uelich, and SAID~\cite{nstar2015}.
The analysis groups then performed comprehensive partial-wave analyses of the data and
extracted various properties of $N^*$ and $\Delta^*$ resonances
defined by poles of scattering amplitudes in the complex-energy plane.
In parallel with this, the analysis groups gave feedback about what data are
further needed for more complete determination of $N^*$ and $\Delta^*$ resonances.
With this close cooperation between experiments and theoretical analyses,
significant progress has been achieved for the $N^*$ and $\Delta^*$ spectroscopy in recent years.
\section{Multichannel unitarity and dynamical coupled-channels approaches}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth,clip]{gp-tcs}
\caption{\label{fig:gptcs}
Total cross section for exclusive $\gamma p$ reactions in the resonance region.
}
\end{figure}
The unitarity of the multichannel $S$-matrix, $S^\dag S = 1$, is the key to performing
the coupled-channels analysis
and making reliable extraction of $N^*$ and $\Delta^*$ resonances from reaction data.
We define the $T$-matrix as $S_{ba}= \delta_{ba} -i2\pi \delta(E_b-E_a)T_{ba}$,
where $E_a = \sum_i E_{a,i}(\vec p_{a,i})$, with $E_{a,i}$ and $\vec p_{a,i}$ being the energy and momentum of
the $i$th particle belonging to the channel $a$, respectively.
The unitarity $S^\dag S = 1$ then gives
the following condition obeyed by the on-shell $T$-matrix elements~\cite{gw} (the generalized optical theorem):
\begin{equation}
T_{ba}(E) - T^\dag_{ba}(E) = -2\pi i \sum_c T^\dag_{bc}(E)\delta(E-E_c)T_{ca}(E),
\label{eq:opt}
\end{equation}
where the subscripts represent the reaction channels, and $E=E_a=E_b$.
There are two critical reasons why the multichannel unitarity is so important.
First, it ensures the conservation of probabilities in multichannel reactions.
As can be seen from the $\gamma p$ reaction total cross sections presented in Fig.~\ref{fig:gptcs},
many inelastic channels open in the resonance region.
It is almost impossible to treat all of the inelastic reactions
consistently in a single reaction framework unless
the transition probabilities are automatically conserved by the multichannel unitarity.
Second, the multichannel unitarity condition [Eq.~(\ref{eq:opt})] properly defines
the analytic structure (branch points and unitarity cuts, etc.)
of the scattering amplitudes in the complex-energy plane.
Any reaction framework that does not satisfy this condition would fail
to make a proper analytic continuation of the amplitudes, and this
may result in picking up wrong signals of resonances.
It is known that Eq.~(\ref{eq:opt}) is satisfied by any $T$-matrix given by the Heitler equation~\cite{gw}:
\begin{equation}
T_{ba}(E) = K_{ba}(E) + \sum_c K_{bc}(E)[-i\pi \delta(E-E_c)]T_{ca}(E),
\label{eq:heitler}
\end{equation}
where $K_{ba}(E)$ is known as the (on-shell) $K$-matrix, and
the unitarity condition requires this to be Hermitian for real $E$.
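This statement can be verified compactly in operator notation. Writing Eq.~(\ref{eq:heitler}) as $T=K-iK\hat\delta T$, with $\hat\delta\equiv\pi\delta(E-E_{c})$ understood to include the channel sum, one finds $T=(1+iK\hat\delta)^{-1}K$ and, for Hermitian $K$, $T^{\dag}=K(1-i\hat\delta K)^{-1}=(1-iK\hat\delta)^{-1}K$. Hence
\begin{equation}
T-T^{\dag}=(1-iK\hat\delta)^{-1}\left[(1-iK\hat\delta)-(1+iK\hat\delta)\right](1+iK\hat\delta)^{-1}K
=-2i\,T^{\dag}\hat\delta\,T,
\end{equation}
which is precisely Eq.~(\ref{eq:opt}).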
Since the unitarity condition does not give any further constraints on the form of the $K$-matrix as a function of $E$,
two approaches are usually taken for parametrizing the $K$-matrix.
One is called the (on-shell) $K$-matrix approach, where the $K$-matrix is simply parametrized
as a sum of polynomials and pole terms of $E$.
In this case, the Heitler equation can be reduced to a simple algebraic equation
at least for the case of two-body reactions.
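For instance, for a single two-body channel and a given partial wave, the energy $\delta$ function simply puts the intermediate state on shell, and Eq.~(\ref{eq:heitler}) reduces to the algebraic relation
\begin{equation}
T(E)=\frac{K(E)}{1+i\pi\rho(E)K(E)},
\end{equation}
where $\rho(E)$ is the density of states associated with $\delta(E-E_{c})$; parametrizing $\pi\rho K=-\tan\delta$ then yields the familiar unitary amplitude $T=-e^{i\delta}\sin\delta/(\pi\rho)$.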
Another is called the dynamical-model approach, in which the $K$-matrix is obtained
by solving the following equation:
\begin{equation}
K_{ba}(E) \equiv K_{ba}(\vec p_b,\vec p_a;E)
= V_{ba}(\vec p_b,\vec p_a;E)
+ {\sum_d}' {\cal P}\int d\vec p_d V_{bd} (\vec p_b,\vec p_d;E)\frac{1}{E-E_d} K_{da}(\vec p_d,\vec p_a;E),
\label{eq:k-dcc}
\end{equation}
where $V$ is the transition potential defined by some model Hamiltonian;
$\vec p_a$ symbolically denotes the momenta of all particles in the channel $a$, $\vec p_a = (\vec p_{a,1},\ldots,\vec p_{a,N_a})$,
with $N_a$ being the number of particles in the channel $a$;
the symbol ${\cal P}$ means taking the Cauchy principal value for the integral over the momentum variable $\vec p_d$;
and the symbol ${\sum'_d}$ means taking the summation or integral
over all variables of the channel $d$ except for the momenta.
The second term on the right-hand side of Eq.~(\ref{eq:k-dcc})
describes the off-shell rescattering effects in the reaction processes.
The Heitler equation~(\ref{eq:heitler}) combined with the $K$-matrix given by Eq.~(\ref{eq:k-dcc})
is nothing but the Lippmann-Schwinger integral equation describing the quantum scattering.
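This equivalence can be made explicit through the Sokhotski--Plemelj decomposition of the full propagator,
\begin{equation}
\frac{1}{E-E_{d}+i\varepsilon}={\cal P}\frac{1}{E-E_{d}}-i\pi\delta(E-E_{d}),
\end{equation}
whose principal-value part generates the $K$-matrix through Eq.~(\ref{eq:k-dcc}), while the on-shell $\delta$-function part is resummed by the Heitler equation~(\ref{eq:heitler}).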
Our approach (the ANL-Osaka DCC approach) belongs to the dynamical-model approach.
The (on-shell) $K$-matrix approach seems more ``economical'' than the dynamical-model approach
in terms of the numerical analysis of reaction data.
In fact, the numerical cost of the (on-shell) $K$-matrix approach is basically much lower
than that of the dynamical-model approach, because in the latter case
one has to solve the very time-consuming off-shell integral equation.
In addition, with the (on-shell) $K$-matrix approach it is much easier to obtain
a good fit to the data, because one can parametrize the $K$-matrix as one likes.
On the other hand, in the dynamical-model approach,
the form of the $K$-matrix, which is given from the potential $V$ by Eq.~(\ref{eq:k-dcc}),
is severely constrained by a model Hamiltonian employed as a theoretical input.
Therefore, the (on-shell) $K$-matrix approach would be sufficient
if a sufficient amount of precise data were available and if
what one wants to know is just the resonance pole positions and residues
of the on-shell scattering amplitudes.
However, if one further wants to understand the physics of reaction dynamics
behind various properties of hadron resonances, then
the dynamical-model approach is necessary,
because such a study can be achieved only by appropriately modeling
the reaction processes and solving a proper quantum scattering equation.
This is why we employ the dynamical-model approach.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth,clip]{p11}
\caption{\label{fig:p11}
Dynamical origin of $P_{11}$ ($J^P = 1/2^+$) $N^*$ resonance poles~\cite{sjklms10}
within the dynamical coupled-channels model developed in Ref.~\cite{jlms07}.
Poles A and B are the double-pole (pole and shadow-pole) structure of the Roper resonance
with respect to the $\pi \Delta$ channel, while pole C corresponds to $N(1710)1/2^+$.
The filled square is the so-called ``bare'' $N^*$ state, which is defined as an eigenstate of
the model Hamiltonian for which the couplings to the reaction channels are turned off.
Dynamical effects originating from multichannel reaction processes
trigger the generation of all three resonance poles A, B, and C from the single bare $N^*$ state.
See Ref.~\cite{sjklms10} for the details of the description of the figure.
}
\end{figure}
Let us present two examples that clearly show an importance of
using dynamical-model approaches to clarify the role of
reaction dynamics in understanding properties of $N^*$ and $\Delta^*$ resonances.
One is the dynamical origin of $P_{11}$ $N^*$ resonances~\cite{sjklms10}.
Figure~\ref{fig:p11} shows pole positions of $P_{11}$ $N^*$ resonances
extracted from a dynamical model developed in Ref.~\cite{jlms07}.
Here, the poles $A$ and $B$ are well known as the double-pole (pole and shadow-pole~\cite{eden})
structure of the Roper resonance with respect to the $\pi \Delta$ channel,
which has been observed also in Refs.~\cite{arndt85,cw90,said06,doring09}
and mentioned by PDG~\cite{pdg14},
while the pole $C$ corresponds to the $N^*(1710)1/2^+$ resonance.
On the other hand, the so-called ``bare'' $N^*$ state, which has the real
mass of 1763 MeV (the filled square in Fig.~\ref{fig:p11}),
is the one defined as an eigenstate of the model Hamiltonian for which
the couplings to the reaction channels are turned off.
The bare $N^*$ state therefore conceptually corresponds to
a baryon state obtained in the static hadron models.
It was then found that, within the model developed in Ref.~\cite{jlms07},
all three of the presented $P_{11}$ resonance poles (poles A, B, and C) are generated from
this single bare state as a result of its coupling to the multiple reaction channels~\cite{eden}.
This implies that a na\"ive one-to-one correspondence between
the physical resonances and the baryons of static hadron models, in which
the dynamical effects originating from the coupling to the reaction channels are neglected,
does not exist in general.
Furthermore, the reaction dynamics can produce a sizable mass shift,
as can be seen from the mass difference between the bare state and the Roper resonance.
These findings for the $P_{11}$ resonance mass spectrum might still depend
on this particular model, and further investigations, combined with other quantities
such as electromagnetic transition form factors, would be necessary
to obtain more conclusive results.
However, at least one can say that the mass spectrum of physical resonances can be
very different from that obtained in static hadron models, and
one cannot neglect reaction dynamics in understanding the nucleon resonances.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth,clip]{n-d-emff}
\caption{\label{fig:n-d-emff}
(Left)
Schematic diagram of the electromagnetic form factor for the transition from
the nucleon to a nucleon resonance.
Within the dynamical-model approach, the form factor is given by a sum of
the bare $N^*$ and meson-cloud contributions.
(Right)
$Q^2$ dependence of the $M1$ transition form factor, $G_M^*(Q^2)$,
between the nucleon and the $\Delta(1232)3/2^+$ resonance,
divided by $3 G_D(Q^2)$ with $G_D(Q^2)=(1+Q^2/[0.71(\textrm{GeV/c})^2])^{-2}$.
The solid (dashed) curve is the results of the full dressed (bare) form factor.
The result is given from a dynamical model developed in Ref.~\cite{sl2}.
}
\end{figure}
Another example indicating the importance of using dynamical-model approaches is
the electromagnetic transition form factors between the nucleon and
nucleon resonances probed by the virtual photon (the left side of Fig.~\ref{fig:n-d-emff}).
Here, $Q^2$, defined by $Q^2\equiv -q^2$ with $q$ being the four-momentum of the virtual photon,
represents the ``resolution'' of the virtual photon,
and hence the $Q^2$ dependence of the form factors is expected
to provide crucial information on the substructure of the $N^*$ and $\Delta^*$ resonances.
Because of this, the electromagnetic transition form factors are being actively investigated
both experimentally and theoretically, and this has opened a great opportunity
to make a quantitative study of the substructure of the $N^*$ and $\Delta^*$ resonances
in close relation with experimental data (see, e.g., Ref.~\cite{az13}).
The right side of Fig.~\ref{fig:n-d-emff} shows the $M1$ transition
form factor between the nucleon and
the $\Delta(1232)3/2^+$ resonance extracted from a dynamical model developed in Ref.~\cite{sl2}.
Within dynamical models, the full dressed form factor consists of
the bare form factor and the meson cloud, where
the latter purely originates from the reaction dynamics.
It is found that $\sim 30\%$ of the full dressed form factor
comes from the meson cloud in the low-$Q^2$ region.
It is notable that most of the available static hadron models,
in which the reaction dynamics is not taken into account,
indeed give a form factor close to the bare form factor,
but not to the full dressed form factor.
One can also observe from the right side of Fig.~\ref{fig:n-d-emff}
that the meson cloud effect becomes smaller as $Q^2$ increases.
These results obtained from the dynamical-model approach suggest that
at a long distance scale the $\Delta(1232)3/2^+$ resonance can be understood
as a constituent quark-gluon core surrounded by dense meson clouds, and
the core part gradually emerges at shorter distance scales.
To obtain deeper insight into the transition form factors in the high $Q^2$ region,
in which the contribution of quark-gluon core is expected to dominate,
experimental determination of the transition form factors through
the measurement of electroproduction reactions in the region of
$5\lesssim Q^2 \lesssim 12$ GeV$^2$ is planned at CLAS12~\cite{clas12-proposals}.
\section{Recent results from ANL-Osaka DCC analysis}
Now let us move on to presenting our recent efforts for the $N^*$ and $\Delta^*$ spectroscopy
based on the ANL-Osaka DCC model~\cite{msl07,knls13,knls16}.
The basic formula of the model is the multichannel
Lippmann-Schwinger equation obeyed by the partial-wave amplitudes:
\begin{equation}
T^{(J^P I)}_{b,a} (p_b,p_a;E) =
V^{(J^PI)}_{b,a} (p_b,p_a;E)
+\sum_c \int_C dp_c p_c^2 V^{(J^PI)}_{b,c} (p_b,p_c;E) G_c(q;E) T^{(J^PI)}_{c,a} (p_c,p_a;E),
\label{lseq}
\end{equation}
where the subscripts represent the reaction channels and their spin and angular momentum quantum numbers;
$p_a$ represents the magnitude of the relative momentum of the channel $a$ in the center-of-mass system;
and $(J^P I)$ specifies the total angular momentum, parity and total isospin of the considered
partial wave.
At present, we have taken into account the $\pi N$, $\eta N$, $\pi \Delta$, $\rho N$, $\sigma N$, $K\Lambda$, and $K\Sigma$ channels,
where the $\pi\Delta$, $\rho N$, and $\sigma N$ are
quasi-two-body channels that subsequently decay into the three-body $\pi\pi N$ channel.
The Green's function $G_c(q;E)$ is given by
$G_c(q;E)=1/[E-E_M(q)-E_B(q)+i\varepsilon]$ for $c= \pi N, \eta N, K\Lambda, K\Sigma$,
while $G_c(q;E)=1/[E-E_M(q)-E_B(q)-\Sigma_c(q;E)]$ for $c =\pi\Delta, \rho N, \sigma N$,
where $M$ and $B$ are the meson and baryon contained in the channel $c$,
$E_M (q) = (m_M^2 + q^2)^{1/2}$ is the energy of the particle $M$,
and $\Sigma_c(q;E)$ is the self energy of $\Delta$, $\rho$, or $\sigma$ in the presence of the spectator particle.
For the $\pi\Delta$, $\rho N$, and $\sigma N$ channels, the Green's function produces the three-body cut due to the
opening of the $\pi \pi N$ channel in the intermediate reaction processes.
Our physics input is contained in the transition potential.
In our framework, the potential consists of three pieces:
\begin{equation}
V^{(J^PI)}_{b,a} (p_b,p_a;E) = v^{(J^PI)}_{b,a} (p_b,p_a;E) + Z^{(J^PI)}_{b,a} (p_b,p_a;E) +
\sum_{N^*_n}\frac{ \Gamma_{b,N_n^*}(p_b) \Gamma_{N_n^*,a}(p_a)} {E - M^0_{N_n^*}}.
\label{pot}
\end{equation}
The first two terms describe the so-called non-resonant processes including only
the ground state mesons and baryons belonging to each flavor SU(3) multiplet,
and the third term describes the propagation of the bare $N^*$ states.
We quote Ref.~\cite{knls13} for the details of the potential.
It is noted that the $Z$-diagram potential [the second term of Eq.~(\ref{pot})]
also produces the three-body $\pi \pi N$ unitarity cut,
and the implementation of both the $Z$-diagram potential
and the self-energy in the Green's functions
is necessary for maintaining the three-body unitarity.
Within our framework, the bare $N^*$ states are defined as eigenstates of the Hamiltonian
for which the couplings to the reaction channels are turned off.
So by definition, our bare $N^*$ states would correspond to the hadron states
obtained from the static hadron models such as constituent quark models.
By solving Eq.~(\ref{lseq}), the bare $N^*$ states couple to the reaction channels considered,
and they then acquire complex mass shifts and become resonance states.
Of course there is another possibility that the hadron-exchange potential
[the first and second terms of Eq.~(\ref{pot})]
generates resonance poles dynamically.
Our model contains both possibilities.
To study $N^*$ and $\Delta^*$ resonances, we first need to determine the model parameters such as coupling constants and bare baryon masses, and this is done by fitting to the data of meson production reactions.
Our latest 8-channel model developed and updated in Refs.~\cite{knls13,knls16} was constructed by
a simultaneous fit of more than 27,000 data points of the differential cross sections and
spin polarization observables for $\pi N\to \pi N$ up to $W=2.3$ GeV,
$\pi N\to \eta N, K\Lambda, K\Sigma$ and $\gamma p\to \pi N, \eta N, K\Lambda, K\Sigma$ up to $W=2.1$ GeV,
and $\gamma \textrm{`}n\textrm{'} \to \pi N$ up to $W = 2$ GeV.
As an example of our fit, the differential cross section and photon asymmetry
for the $\gamma p \to \pi^0 p$ reaction
are presented in Fig.~\ref{fig:gppi0p}.
Here the results from our original 8-channel model developed
in 2013~\cite{knls13} are compared with
the latest updated version~\cite{knls16},
showing visible improvements of our fit
at several energies, particularly for the photon asymmetry.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth,clip]{gppi0p-dc}
\qquad
\includegraphics[width=0.45\textwidth,clip]{gppi0p-s}
\caption{\label{fig:gppi0p}
(Left) Differential cross section $d\sigma/d\Omega$ for $\gamma p \to \pi^0 p$
(Copyright 2016 The Physical Society of Japan~\cite{nstar-kamano}).
(Right) Photon asymmetry $\Sigma$ for $\gamma p \to \pi^0 p$.
The numbers shown in each panel are the corresponding total scattering energy $W$ in MeV.
The red solid (blue dashed) curves are the results from
our original 8-channel model~\cite{knls13} (its latest updated model~\cite{knls16}).
See Ref.~\cite{knls13} for the references of the data.
}
\end{figure}
Figure~\ref{fig:spectrum} shows the real parts of the extracted resonance pole masses.
Here, our results from Refs.~\cite{knls13,knls16} are compared with
those obtained from the other coupled-channels analyses by the J\"ulich~\cite{juelich} and Bonn-Gatchina~\cite{bg} groups.
One can see that the existence and mass values agree very well for the lowest states in most spin-parity sectors.
Actually, the community has now more or less arrived at a consensus that
the existence and mass spectrum of the low-lying $N^*$ and $\Delta^*$ resonances below $\textrm{Re}(M_R) \sim 1.7$~GeV
have been firmly established.
One exception is the second $P_{33}$ resonance, the Roper-like state of the $\Delta$ baryon.
Although its existence is fairly well established, the value of the pole mass fluctuates considerably
among the coupled-channels analyses.
In fact, our results appear much higher than the J\"ulich and Bonn-Gatchina results.
A major reason for this is that this resonance couples weakly to the $\pi N$ and $\gamma N$ channels,
and thus it is hard to establish the resonance with the single-$\pi$ production data alone.
However, we find that this resonance has a large decay branching ratio to the three-body $\pi \pi N$ channel~(see, e.g., Fig.~6 of Ref.~\cite{e45}).
This implies that the $\pi \pi N$ production data are expected to provide crucial information on establishing
the second $P_{33}$ resonance.
In this regard, the J-PARC E45 experiment~\cite{e45},
in which a high-statistics measurement of the $\pi^\pm p \to \pi \pi N$ reactions will be performed,
is very promising for resolving this issue, because
only $I=3/2$ $\Delta$ resonances selectively appear in the direct $s$-channel process
in the case of $\pi^+ p$ reactions.
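This selectivity is a simple consequence of isospin conservation: since the pion carries $I=1$ and the nucleon $I=1/2$, the $\pi^{+}p$ initial state is the maximal-$I_{3}$ state,
\begin{equation}
|\pi^{+}p\rangle=|1,+1\rangle\otimes|\tfrac{1}{2},+\tfrac{1}{2}\rangle=|I=\tfrac{3}{2},I_{3}=+\tfrac{3}{2}\rangle,
\end{equation}
and thus contains no $I=1/2$ admixture.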
We will improve our DCC model, which is ready for computing
observables of the $\pi N \to \pi \pi N$ reactions~\cite{pi2pi1,pi2pi2},
once the new data are available from the J-PARC E45 experiment~\cite{e45}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth,clip]{spectrum-v9}
\caption{\label{fig:spectrum}
Mass spectra for $N^*$ and $\Delta^*$ resonances.
Real parts of the resonance pole masses $M_R$ are plotted.
The red solid~\cite{knls16} and dashed~\cite{knls13} lines are the results from the ANL-Osaka analyses,
while the blue and green solid lines are the results from the J\"ulich~(Model B of \cite{juelich}) and Bonn-Gatchina~\cite{bg} analyses.
The spectra of four- and three-star resonances rated by PDG~\cite{pdg14} are also presented with the red and blue filled squares,
respectively, which represent the range of the resonance masses assigned by PDG.
Here we present only the resonances that have a decay width of less than 400 MeV.
}
\end{figure}
We also put effort into the analysis of the available data of electroproduction reactions
to determine the electromagnetic transition form factors between the nucleon and nucleon resonances.
We currently focus on analyzing the single-pion electroproduction
data from CLAS in the kinematical region up to $Q^2 = 6$~GeV$^2$ and $W=1.7$~GeV.
Figure~\ref{fig:e1pi} shows some preliminary results of our ongoing analysis.
In the analysis, we use the so-called structure functions as the data to analyze,
which were incorporated into our analysis with the help of K.~Joo and L.~C.~Smith~\cite{joo-smith}.
We see that our current results reproduce the data reasonably well up to $Q^2 = 6$ GeV$^2$.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth,clip]{e1pi}
\caption{\label{fig:e1pi}
Preliminary results for the analysis of $p(e,e'\pi^0)p$ reaction within the ANL-Osaka DCC model.
The results are presented for the structure function $\sigma_T + \varepsilon \sigma_L$
at $Q^2=1.15, 3, 5, 6$ GeV$^2$.
The data for the structure functions are the ones extracted in Ref.~\cite{joo-smith} from
the $p(e,e'\pi^0)p$ reaction cross sections published by CLAS~\cite{clas}.
}
\end{figure}
Figure~\ref{fig:n-d-emff-2} shows the real parts of the $A_{3/2}$ helicity amplitudes
for the electromagnetic transition from the nucleon to the
$\Delta(1232)3/2^+$ resonance evaluated at the pole position.
In the left panel, the full result extracted from our current analysis based on the ANL-Osaka
DCC model is presented by the red circles,
while its meson cloud contribution is plotted in the red dashed curve.
Comparing with Fig.~\ref{fig:n-d-emff}, one can see again that the meson-cloud contribution
is almost 30\% in the low-$Q^2$ region, and its percentage becomes smaller as $Q^2$ increases.
In the same panel, the results from our previous analysis~\cite{jklmss09,ssl2}
and from the Sato-Lee model~\cite{sl2} are also presented.
One can clearly see that all three results agree very well with each other,
indicating that the transition form factors associated with this first $P_{33}$ resonance
have been firmly established.
The right panel shows a comparison of the form factors
without the pion-cloud contribution, which is defined as the full dressed form factor
from which only the $\pi N$-loop contribution is subtracted.
These results are also found to agree well among the three models,
even though their dynamical contents are rather different~\cite{footnote}.
This result would be quite remarkable because in general the separation of
the bare and meson-cloud contributions is dependent on models, but this agreement implies
that pion-cloud contributions might be nearly independent of the dynamical models employed.
To arrive at a more definitive conclusion on this interesting finding, however, we have to
make further investigations of the transition form factors including other resonances as well,
and this is in progress.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth,clip]{n-d-emff-2}
\caption{\label{fig:n-d-emff-2}
Real part of the helicity amplitude $A_{3/2}$ for the transition between the nucleon and the
$\Delta(1232)3/2^+$ resonance evaluated at the pole position.
(Left panel): The full form factors from the current ANL-Osaka DCC analysis (red filled circles),
our previous analysis~\cite{jklmss09,ssl2} (green filled diamonds),
and the Sato-Lee model~\cite{sl2} (blue solid curve).
The red dashed curve is the meson-cloud contribution from the current ANL-Osaka DCC analysis.
(Right panel): The form factors without pion-cloud ($\pi N$-loop) contribution.
The results are from the current ANL-Osaka DCC analysis (red circles),
our previous analysis~\cite{jklmss09,ssl2} (green diamonds),
and the Sato-Lee model~\cite{sl2} (blue dashed curve).
}
\end{figure}
\section{Summary}
\label{sec:summary}
We have presented recent efforts for the spectroscopic study of $N^*$ and $\Delta^*$ resonances
based on the dynamical coupled-channels approach.
The $N^*$ and $\Delta^*$ baryons form a system of very broad and highly
overlapping resonances, and this requires close cooperation
between experiments and theoretical analyses based on coupled-channels approaches
to accomplish a reliable extraction of $N^*$ and $\Delta^*$ resonances from reaction data.
Tremendous efforts in this cooperation have led to significant recent progress in
$N^*$ and $\Delta^*$ spectroscopy.
Because of the broad and overlapping nature of the $N^*$ and $\Delta^*$ resonances,
the reaction dynamics plays a crucial role in understanding the physics behind
their spectrum and substructure.
We have shown that the dynamical coupled-channels approach is one of
the most suitable approaches for studying this physics, as demonstrated with
the dynamical origin of $P_{11}$ $N^*$ resonances and the meson-cloud effect
in the electromagnetic transition form factors.
The existence and mass values for the low-lying nucleon resonances have now been
firmly established (with one exception of the second $P_{33}$ resonance).
The next important task is therefore to establish the spectrum of high-mass resonances.
In this regard, the so-called (over-)complete experiments for meson photoproduction
reactions are underway at photon- and electron-beam facilities such as ELSA, JLab, and MAMI,
and new high-statistics data are continuously being published.
Another major topic in the $N^*$ and $\Delta^*$ spectroscopy is to determine the $Q^2$
dependence of electromagnetic transition form factors for the well-established
low-lying resonances.
In this regard, a huge amount of electroproduction data is being published from CLAS, and
new experiments are planned at CLAS12~\cite{clas12-proposals}.
To contribute to these interesting topics,
the extension of our coupled-channels approach by including more reaction channels is underway.
Finally, the theoretical framework of the ANL-Osaka DCC approach itself is quite general,
and it has been applied also to the spectroscopy of $S = -1$ hyperon resonances via
the comprehensive analysis of $K^- p$~\cite{knlskp1,knlskp2} and $K^- d$~\cite{kl16} reactions,
the meson spectroscopy via the analysis of three-meson decay processes~\cite{meson1,meson2},
and the neutrino-induced reactions~\cite{neutrino1,neutrino2}
associated with the neutrino-oscillation experiments in the multi-GeV region.
More efforts will be put into these directions, too.
\\
The author would like to thank T.-S.~H.~Lee, S.~X.~Nakamura, and T.~Sato for their collaborations.
This work was supported by the JSPS KAKENHI Grant Number JP25800149.
\IEEEPARstart{U}{ndeniably}, the advances in immersive multimedia technologies introduced in the last five years are impressive. Extended reality (XR) technologies, which include virtual (VR) and augmented reality (AR) technologies, have made huge leaps forward both in terms of realism and user interaction~\cite{suh2018state}. A significant factor in the current turmoil on the topic has undoubtedly been the remarkable interest of related companies such as Meta (formerly Facebook), Microsoft, and Sony. In particular, Meta has decided to focus heavily on the \textsl{metaverse} concept, thus boosting the interest in these technologies, their potential use cases, and related issues~\cite{cheng2022will}. Due to their inherent characteristics, immersive use cases nowadays play a significant role in the development of a great multitude of enabling technologies. The recent interest in XR technologies has led to enormous investment increments, which have enabled lighter and cheaper devices to reach unprecedented levels of resolution. For example, the Varjo XR-3~\footnote{https://varjo.com/products/xr-3/} head-mounted display (HMD) provides a visual resolution of 70 pixels per degree, matching the human eye's resolution.
Advanced XR requires not only ultra-realistic resolution but also the implementation or improvement of other algorithms that aim to enhance user experience~\cite{kilteni2012embodiment}. This goal requires complex and computationally expensive algorithms, such as semantic segmentation~\cite{hengshuang2018semantic, dai20173drecon}, to run in real time. Therefore, to expand the current limits of XR technologies, HMDs must have access to high-end hardware with powerful graphical processing units for ultra-realistic rendering supported by machine learning (ML) processes. For this reason, advanced XR HMDs such as the Varjo XR-3 are tethered, uncomfortable and expensive. Consequently, next-generation wireless systems, such as 5G and 6G, must support XR technologies that have, as a whole, quickly become one of the killer use case families~\cite{perez2022emerging}. The goal, toward this end, is to offload XR heavy processing tasks to a nearby server, or a multi-access edge computing (MEC) platform, to loosen the in-built hardware requirements of XR HMDs while increasing their overall computing capabilities. However, XR offloading is a complex task with extreme requirements in terms of latency and throughput~\cite{gonzalez2022toward,gonzalez2020cutting}, which requires a well-designed and configured network.
It is not trivial for developers and researchers to have access to fully developed XR offloading implementations. The current trend is to rely on pre-recorded or modeled traffic data, which are then fed to various simulation environments or actual wireless access network deployments. Pre-recorded traffic traces allow using extremely realistic data with simple use case-agnostic tools, such as tcpreplay~\footnote{https://tcpreplay.appneta.com/}. On the other hand, traffic models allow the generation of longer traffic traces while providing greater flexibility than pre-recorded traffic data. Even though it is true that the traffic characteristics for each XR use case can be very diverse, thus making it difficult to define a general-purpose model, access to modeled or pre-recorded XR traffic data can considerably accelerate and simplify the testing and prototyping steps.
A number of previous works deal with immersive multimedia traffic capture and modeling or present ready-to-use models. Authors in~\cite{navarro2020survey} provide details on specific use cases employing AR and VR and how one can approximately model their behaviors using the models from 5G-PPP~\cite{osseiran2014scenarios,fantastic5g}. In~\cite{schulz2021analysis}, the authors modeled augmented reality downlink traffic using a classical two-state Markovian process. In~\cite{lecci2021open}, a complete framework aimed to model XR application is presented, alongside an accurate statistical analysis and an ad-hoc traffic generator algorithm. Furthermore, the work carried out in~\cite{lecci2021open} has been exploited to create a VR traffic generator framework for the ns-3 simulator~\cite{lecci2021ns3}. Authors in~\cite{bojovic2022enabling} model 3GPP-compliant traffic cases for next-generation mobile network applications, which include advanced gaming, but no explicit XR case is considered. Generally speaking, the previous works mostly focus on providing models for the downlink traffic. However, as described in~\cite{gonzalez2022toward,gonzalez2020cutting}, advanced XR technologies require multiple complex algorithms to run simultaneously in order to provide the user with a sufficiently high level of interaction, immersiveness, and experience. These algorithms, in many cases, require high-end hardware and therefore, can be considered potential offloading candidates. They require to be continuously fed with the sensor streams captured by the XR HMD, which can be as heavy as or more than ultra-realistic rendered frames. Aligned with this idea, 3GPP has recently included very detailed traffic models for AR and XR in Release 17, differentiated according to the type of data streamed~\cite{3gpp_17}. While the considered VR use cases are still centered only on distributed rendering solutions with a special focus on downlink traffic, AR traffic models also consider complex and heavy uplink traffic.
In this work, we aim to provide realistic traffic traces and their associated models for two separate state-of-the-art XR offloading scenarios, both for downlink and uplink. Our goal is to complement and improve the models proposed in~\cite{3gpp_17}. First, the proposed scenarios complement the ones described in~\cite{3gpp_17,lecci2021open}. Besides this, we provide the raw data, uploaded to~\cite{opensourcefikore}, which researchers can use as-is for simulation or prototyping purposes, or to derive other models better suited to their needs. We also provide a set of XR traffic models obtained from the traces. Unlike other models in the state of the art, we also model the inter-packet arrival time, which, as we show, can be extremely relevant for the design of XR offloading resource allocation algorithms.
The scenarios under consideration include full XR offloading and egocentric human segmentation, both sitting on the very edge of the current state of the art. Therefore, we believe that our contribution will provide valuable tools to design, test, improve or extend wireless network solutions both in academia and industry. To our knowledge, this is the first work that provides both an accurate traffic dataset and validated models for the mentioned use cases and applications. Our main contributions can be summarized as:
\begin{itemize}
\item An XR offloading traffic dataset for two different relevant offloading scenarios, captured for multiple streaming resolutions;
\item XR traffic models obtained from the captured traces, including the inter-packet arrival time, not available in most of the models provided in the state of the art;
\item A thorough validation of the proposed models using a realistic 5G radio access network (RAN) emulator, showing how an accurate inter-packet arrival time can considerably improve the quality of the models for specific applications.
\end{itemize}
The remainder of this paper is organized as follows. Section~\ref{sec:scenarios} summarizes the two reference XR offloading scenarios. Sections~\ref{section:arch} and~\ref{section:capture} describe the offloading architecture and the traffic capture methodology employed in the use cases, respectively. In Section~\ref{section:modelling} we focus on the statistical modeling of the captured traffic. In Section~\ref{section:traffic_generation} we describe how artificial traffic is generated from the developed statistical models, which we then use in Section~\ref{section:validation} in simulation-based validation experiments that verify the behavioral compliance of the modeled traffic with the real captured traffic. Finally, conclusions are drawn in Section~\ref{section:conclusion}.
\section{XR Offloading Scenarios}
\label{sec:scenarios}
Our goal is to capture a relevant IP traffic dataset for two demanding XR offloading scenarios, that is, full XR offloading (scenario A) and egocentric human segmentation algorithm offloading (scenario B). In scenario A, all the processing but the sensor capture is moved from the XR device to a nearby server. Differently from~\cite{3gpp_17}, we consider the VR HMD to be a very light device in charge of only capturing the sensor data. The sensor data are streamed to the server, where they get processed. The sensor information is used to render a new high-definition VR frame, which is sent back to the device. This is a very relevant use case for advanced and future networks, as it can enable ultra-light and wearable XR devices. In our case, we consider the sensor data to be generated by a stereo camera feed and inertial sensors. The inertial sensor traffic can be neglected, as its associated throughput is much lower than that of the stereo camera feed~\cite{gonzalez2022toward,gonzalez2020cutting}. This is an extremely demanding use case, as the round trip times should lie below the frame update period, i.e., around 11~ms for a device running at 90~Hz. While there are some techniques to slightly expand this time budget, such as XR time warp~\cite{waveren2016timewarp}, the latency requirements are still tight, especially for ultra-high definition XR scene rendering, encoding, and transmission.
Scenario B focuses on the particular case of egocentric body segmentation~\cite{gonzalez2022segmentation}, since this is a promising state-of-the-art solution for XR applications. The uplink traffic consists of the stereo camera stream, while the server sends back to the device simple binary masks in which the white pixels correspond to the user's body. The received masks are used by the XR device to render only the pixels corresponding to the user's body within the VR scene. While this is still a demanding offloading use case, the overall requirements are much lower than in Scenario A, since the downlink stream is just composed of single-channel binary masks.
\section{Offloading Architecture}
\label{section:arch}
Our offloading architecture, described in~\cite{gonzalez2022arch}, relies on two main agents to share data between different peers. On one hand, we have Alga, which connects individual peers. On the other hand, we have Polyp, a data re-router and replicator in charge of transmitting the data from one source to one or multiple listening peers. We implemented a publisher-subscriber approach based on topics. When a client subscribes to a topic, Polyp is in charge of re-routing and replicating all the data of the topic toward this client. Similarly, when a client publishes data to a topic, Polyp ensures that these data are transmitted to all the peers subscribed to this topic. Our architecture also allows direct communication between end clients without having to use Polyp. Polyp itself is a peer that can subscribe or publish to a topic. Alga is in charge of creating all the necessary connections and transmitting the data. The general representation of our architecture is depicted in Fig.~\ref{fig:polypalga}.
The first version of this offloading architecture implemented Alga using TCP for IP traffic transmission. Besides, to partially mitigate TCP's drawbacks, we sent each frame separately, encoded in JPEG. This architecture allowed us to use our ML egocentric body segmentation algorithm, running on a nearby server, with a commercial XR HMD, the Meta Quest 2~\footnote{https://www.meta.com/en/quest/products/quest-2/}. However, joint JPEG encoding and TCP transmission, while useful in many scenarios due to their associated reliability, as described in~\cite{gonzalez2022arch}, were not originally designed to support high throughput and low latency. Therefore, we extended Alga's functionality by incorporating H.264 video encoding~\cite{h264} and RTP (real-time transport protocol) over UDP transmission~\cite{rtp}. To encode the sensor streams in H.264 and pack the data in RTP frames, the architecture uses GStreamer~\footnote{https://gstreamer.freedesktop.org/}.
For traffic control reasons and to preserve compatibility with Polyp's in-built functionalities, we need to have control over the individual video frames and attach the metadata associated with them, such as the destination topic, timestamps, etc. This metadata can also be useful for performance analysis or bottleneck detection. To achieve this goal, we use RTP extended headers. Thus, the metadata is added to each video frame as an RTP extended header, which can be decoded and read on the receiving end. This is achieved using GStreamer in-built functionality. Alga's data flow for both TCP and RTP/UDP modes is depicted in Fig.~\ref{fig:polypalga}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{figures/PolypAlgaGreeks.pdf}
\caption{The proposed offloading architecture strategy and simplified data flow for a general multi-peer scenario (top). Alga data flow for both TCP-JPEG and RTP-H.264 implemented transmission pipelines (bottom).}
\label{fig:polypalga}
\end{figure}
From the sending peer, the frames are fed to Alga in raw RGB format. Alga injects the raw frame, along with its associated metadata, into the GStreamer encoding and transmitting pipeline. If multiple peers are subscribed to the same topic, the traffic is replicated and routed by Polyp, which leaves the payload untouched and accesses only the headers to read the target destination. In both this case and the case of direct traffic transmission between end peers using just Alga, the GStreamer pipeline receives and decodes the RTP frame. Once decoded, the frame can be accessed by the application layer.
\section{Traffic Capture Methodology}
\label{section:capture}
As described in Section~\ref{section:arch}, our offloading architecture implementation has already been tested on a full end-to-end offloading solution using a commercial XR device, the Meta Quest 2. However, we decided to use a high-end laptop to emulate the XR offloading IP traffic for two main reasons:
\begin{itemize}
\item \textbf{Uncontrollable overhead} -- our architecture is optimized for wireless offloading via WiFi or advanced RAN networks such as 5G. We need to capture the data on the transmitting peer to avoid any overhead introduced by the wireless transmission, traffic routing, congestion, etc. These potential sources of overhead can lead to latency, jitter, or packet loss, which strongly depend on the used configuration, the wireless technology, and other external factors. It is out of the scope of this work to model the network behavior and its associated configuration. Moreover, we could not find an efficient way to capture the IP traffic transmitted from the XR device itself.
\item \textbf{Cover demanding XR offloading use cases} -- the Meta Quest 2 is not capable of handling demanding XR offloading use cases due to its limited computation capabilities. Our target is to cover XR offloading use cases that are still not possible with current XR or wireless access technologies.
\end{itemize}
Following these considerations, all the data were captured using a high-end laptop, with an Intel Core i7-10870H CPU @ 2.20~GHz \texttimes\ 16, and 16 GB of RAM, running Ubuntu 18.04 LTS. The offloading architecture was set up and configured identically to an actual XR offloading deployment. For simplification and as we were not using an actual XR HMD, the IP traffic from each stream (scenario A and scenario B uplink and downlink streams) was captured via separate capture runs. Therefore, we used prerecorded data to capture the IP traffic. According to the scenarios described in Section~\ref{sec:scenarios} we recorded data for the following streams:
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{figures/CaptureSetup.pdf}
\caption{ Simplified representation of the final setup used to capture the XR offloading IP traffic dataset.}
\label{fig:capturesetup}
\end{figure}
\begin{itemize}
\item \textbf{Stream 1 -- Uplink stereo camera stream}: This corresponds to the frontal stereo camera data, which are transmitted in both offloading scenarios A and B. The recorded data were obtained from the same stereo camera used in the end-to-end offloading solution of~\cite{gonzalez2022segmentation}. We recorded a continuous stereo video stream of 2560 \texttimes\ 960 resolution at 60~Hz, the maximum supported by the camera. While XR devices are expected to run at rates above 90~Hz, the sensor data are not required to be updated so fast~\cite{gonzalez2022toward,gonzalez2020cutting}. The prerecorded data had a length of 15 minutes.
\item \textbf{Stream 2 -- Downlink rendered frames}: This corresponds to the immersive frames rendered on the server in scenario A. In this case, we used a high-definition stereo video from a first-person video game. The recorded video has a resolution of 3840 \texttimes\ 1920 and an update rate of 90~Hz.
\item \textbf{Stream 3 -- Downlink segmentation masks}: This corresponds to the binary pixel classification output by the egocentric body segmentation ML algorithm in scenario B. From stream 1, we estimated the black and white binary single-channel masks for each frame using the segmentation algorithm described in~\cite{gonzalez2022realtime}. Therefore, the resolution and update rate are the same as in stream 1 (2560 \texttimes\ 960 @ 60~Hz).
\end{itemize}
To expand and add extra value to the presented dataset, we downscaled the three streams to different sets of resolutions/update rates that can be useful for potential researchers and applications. A summary of all the resolutions and update rates we used to generate the traffic data is shown in Table~\ref{tab:scenarios_resolutions}.
\begin{table}[t]
\centering
\caption{Uplink and downlink resolutions and frame rates used to generate the proposed XR IP traffic dataset }
\label{tab:scenarios_resolutions}
\begin{tabular}{cccc}
& \multicolumn{3}{c}{Resolution @ FPS}\\
\midrule
& Stream 1 & Stream 2 & Stream 3\\
\midrule
High & 2560 \texttimes\ 960 @ 60 & 3840 \texttimes\ 1920 @ 90 & 2560 \texttimes\ 960 @ 60\\
Medium & 1920 \texttimes\ 720 @ 60 & 3840 \texttimes\ 1920 @ 72 & 1920 \texttimes\ 720 @ 60\\
Low & 1280 \texttimes\ 480 @ 60 & & 1280 \texttimes\ 480 @ 60 \\
\bottomrule\\
\end{tabular}
\end{table}
Each combination of resolution/frames per second (FPS) and transmission direction (uplink or downlink) was captured separately. To capture the IP traffic, the client reads the individual raw frames, one by one, from the selected stream and sends them using the described architecture. To accelerate and simplify the capture process, both the client and the server run on the same machine: we set up a streaming client connected to a server, which simply discards the incoming packets using GStreamer's \textit{Fakesink} module to avoid any additional overhead. There is no instance of Polyp, and client and server are directly connected using Alga in H.264-RTP mode: the raw frames are encoded using H.264 and packetized as RTP frames to be transmitted via UDP over localhost. The IP traffic was captured using Wireshark, generating an individual packet capture (PCAP) file for each capture run. The final simplified capturing setup is depicted in Fig.~\ref{fig:capturesetup}. Each capture run had a duration of 10 minutes, for a total of 110 minutes of data.
\section{Traffic Modeling}
\label{section:modelling}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{figures/arrows.pdf}
\caption{Simplified IP packets (black arrows) representation packed in several RTP frames. The RTP frame size, inter-frame, and inter-packet interval times are illustrated. }
\label{fig:rtpframesarrows}
\end{figure}
\begin{table}[th]
\centering
\caption{RTP frame size, inter-frame interval, inter-packet interval, and individual IP packet size basic statistics from the captured data}
\label{tab:mean_values}
\begin{tabular}{l l c c c}
\multicolumn{5}{c}{Stream 1 -- Uplink stereo camera} \\
\midrule
&Resolution & Mean & Std. Dev. & 95\textsuperscript{th} perc. \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Frame size \\ (bytes) \end{tabular}} & Low & 34602.44 & 9529.36 & 55735 \\
& Medium & 86149.87 & 19936.04 & 132384 \\
& High & 232084.33 & 28141.99 & 269008 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Inter-frame \\interval (ms)\end{tabular}} & Low & 16.76 & 0.26 & 17.12 \\
& Medium & 16.76 & 0.50 & 17.53 \\
& High & 16.80 & 2.57 & 21.29 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Inter-packet \\interval (\textmu s)\end{tabular}} & Low & 3.94 & 6.08 & 17.10 \\
& Medium & 3.53 & 5.47 & 17.27 \\
& High & 4.55 & 11.02 & 6.43 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}IP packet size \\(bytes)\end{tabular}} & Low & 1280.79 & 356.58 & 1428 \\
& Medium & 1364.81 & 244.83 & 1428 \\
& High & 1403.88 & 154.31 & 1428 \\
\bottomrule
\\
\multicolumn{5}{c}{Stream 2 -- Downlink rendered frames} \\
\midrule
&Update rate & Mean & Std. Dev. & 95\textsuperscript{th} perc. \\
\midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Frame size \\ (bytes) \end{tabular}} & 72~Hz & 207968.42 & 122929.70 & 396402 \\
& 90~Hz & 163548.89 & 116837.86 & 339396 \\
\midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Inter-frame \\interval (ms)\end{tabular}} & 72~Hz & 13.88 & 0.05 & 13.94 \\
& 90~Hz & 11.11 & 0.04 & 11.17 \\
\midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Inter-packet \\interval (\textmu s)\end{tabular}} & 72~Hz & 3.41 & 9.18 & 4.85 \\
& 90~Hz & 3.66 & 9.08 & 6.91 \\
\midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}IP packet size \\(bytes)\end{tabular}} & 72~Hz & 1400.04 & 171.38 & 1428 \\
& 90~Hz & 1392.66 & 191.91 & 1428 \\
\bottomrule
\\
\multicolumn{5}{c}{Stream 3 -- Segmentation masks} \\
\midrule
&Resolution & Mean & Std. Dev. & 95\textsuperscript{th} perc. \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Frame size \\ (bytes) \end{tabular}} & Low & 4968.50 & 2175.03 & 7708 \\
& Medium & 8273.98 & 3921.00 & 13970 \\
& High & 24378.90 & 11440.59 & 43458 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Inter-frame \\interval (ms)\end{tabular}} & Low & 16.76 & 0.20 & 17.05 \\
& Medium & 16.75 & 0.61 & 17.77 \\
& High & 17.10 & 3.30 & 22.44 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Inter-packet \\interval (\textmu s)\end{tabular}} & Low & 7.01 & 6.17 & 15.04 \\
& Medium & 5.87 & 9.61 & 15.34 \\
& High & 7.54 & 24.80 & 24.63 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}IP packet size \\(bytes)\end{tabular}} & Low & 749.83 & 517.39 & 1428 \\
& Medium & 933.53 & 527.48 & 1428 \\
& High & 1216.27 & 419.88 & 1428 \\
\bottomrule\\
\end{tabular}
\end{table}
In addition to releasing the PCAP files publicly, we made a systematic effort to statistically model the most relevant video streaming IP traffic parameters: i) RTP frame size, defined as the size of each individual RTP frame, ii) inter-frame interval, that is, the time between consecutive RTP frames, and iii) inter-packet interval, i.e., the time between successive packets within an individual RTP frame. These parameters are depicted schematically in Fig.~\ref{fig:rtpframesarrows}. The main goal is to allow potential researchers, in the context of wireless communication systems analysis and evaluation, to generate realistic XR IP traffic, online or offline, based on the derived models.
\subsection{Data Pre-processing}
The PCAP files are large and contain a lot of information that can be useful in future works, such as the transmitted bytes themselves or other relevant metadata. To derive the traffic models, we store the payload, timestamp, and RTP frame marker bit of each individual IP packet arriving at the port used for transmission. The marker bit is necessary to identify the start of a new frame. Data pre-processing takes place in two steps, as follows.
In the first step, we obtain a list of all the captured IP packets, ordered according to their timestamps. For each packet, we keep the payload in bytes, the timestamp, and a custom boolean indicator, derived from the marker bit and the timestamp separation, which determines whether the IP packet initiates a new RTP frame. This first pre-processing step is implemented using Python and the Scapy\footnote{https://scapy.net/} library to parse the PCAP file.
In the second step, we go through all the IP packets and group them in individual RTP frames according to the custom boolean indicator. Then we estimate, for each RTP frame, the total size in bytes (RTP frame size), the time in between consecutive frames (inter-frame intervals), and the time in between consecutive IP packets (inter-packet intervals). These parameters are stored in three separate arrays and saved as an NPY (Python NumPy format) file. These NPY files are the ones used to model the IP traffic. This second step is implemented in Python as well.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/histogram_new_1.pdf} \\
\includegraphics[width=\linewidth]{figures/histogram_new_3.pdf}\\
\includegraphics[width=\linewidth]{figures/histogram_new_2.pdf}
\caption{Histograms (in blue) and cumulative distribution functions (CDF, in red) from the captured data for Stream 1 (top), 2 (middle), and 3 (bottom) for the target parameters: inter-packet interval times (left), inter-frame interval times (center) and frame sizes (right). }
\label{fig:example_hist}
\end{figure*}
These two steps are applied to all the captured PCAP files. The final outputs are stored as individual files to easily identify each capture run. In Table~\ref{tab:mean_values} we show the basic statistics, i.e., mean value, standard deviation, and 95\textsuperscript{th} percentile of all the captured data cases, for the frame size, inter-frame interval, inter-packet interval, and IP packet size. The packet size information is useful to generate synthetic data from the fitted models.
\subsection{Prior Data Analysis}
Before making any modeling decisions, we studied the histograms of the pre-processed data. In particular, we plotted the histograms of all the parameters to be modeled for all the captured data. In Fig.~\ref{fig:example_hist} we present examples of the RTP frame size, inter-frame interval, and inter-packet interval histograms for the high-resolution Streams 1 and 3 (at 60~Hz), as well as Stream~2 at 90~Hz.
We observe, in all streams, that the inter-frame intervals are closely distributed around a mean value that coincides with the frame update period corresponding to the selected FPS value. Due to variable rate encoding, which guarantees low latency, the coding rate and the frame size may include peaks and variations. For the 60~Hz captured data, this is not an issue, since the encoding time is shorter than the frame update period for all cases and resolutions. However, for very high resolutions and frame update rates, the coding rate needs to adapt dynamically, resulting in frame size distributions with more than one peak, as shown for Stream 2 in Fig.~\ref{fig:example_hist}. This also affects the standard deviation of the inter-frame interval, which is reduced in the Stream 2 cases due to the stricter encoding time requirements.
Regarding the potential distributions to model the target parameters, we observe that in both Streams 1 and 3 these parameters can be modeled as unimodal continuous distributions. On the other hand, we observe that the distribution of the RTP frame sizes of Stream 2 presents two local maxima. These local maxima are smaller for the higher frame update rate (90~Hz) depicted in Fig.~\ref{fig:example_hist}. Nevertheless, we decided to model the Stream 2 RTP frame sizes as continuous unimodal distributions as well and check whether they provide a sufficiently good fit before testing multimodal distributions.
In Fig.~\ref{fig:example_hist} we can observe that the inter-packet interval distribution does not seem to be unimodal, since slight changes in convexity appear. However, the inter-packet intervals lie on the order of microseconds, as shown in Table~\ref{tab:mean_values}. On that scale, many external sources can affect the measured value, such as particular operating system operations, Wireshark processing, etc. Again, modeling these possible external factors that can affect the inter-packet intervals is out of the scope of this work. Therefore, we choose to move forward with the simple approach of modeling the inter-packet interval time as a unimodal continuous distribution.
\subsection{IP Traffic Models}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{figures/KS_values.pdf}
\caption{Best KS test scoring models (from left to right) for the target parameters to be modelled: RTP frame sizes (left), inter-frame interval times (center), and inter-packet interval times (right). }
\label{fig:ksvalues}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/norm_dist_framesizes_new.pdf}
\caption{Johnson's $S_U$ and exponential normal fitted distributions' probability density functions (PDF) and CDFs for the Stream 2 and 90~Hz case.}
\label{fig:normfsksvalues}
\end{figure}
There is a wide range of well-established and commonly used continuous distributions for the parameters under consideration. To find the best candidate distributions that fit our data, we used Python's Scipy library~\cite{virtanen2020scipy}. Scipy is capable of modeling more than 90 different continuous distributions. We decided to fit all the available distributions and evaluate their goodness of fit using the Kolmogorov-Smirnov (KS) test~\cite{kstest}. The KS test quantifies the distance between the empirical CDF $F_n(x)$ of a sample and the fitted CDF of an arbitrary distribution $F(x)$ as
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/examplefit_new.pdf}
\caption{Histograms (in blue) and CDFs (yellow) from the captured data for Stream 1 for the parameters: inter-packet interval times (left), inter-frame interval times (center) and frame sizes (right). On top, the Johnson's $S_U$ fitted distribution's PDF (red) and CDF (green).}
\label{fig:examplefit}
\end{figure*}
\begin{equation}
\text{KS} = \sup_{x}|F_n(x) - F(x)|,
\end{equation}
where $\sup_{x}$ denotes the supremum of the set of distances over all values of $x$. The lower the KS test value, the better the fit of the candidate distribution to the captured data. The KS test results of the 15 best-fitted distributions for each parameter and stream type are depicted in Fig.~\ref{fig:ksvalues}, sorted from best (left) to worst (right). We observe that Johnson's $S_U$ distribution~\cite{johnson1949johnsonsu} obtains the best mean KS value across all the captured data. This distribution was proposed by N.~L.~Johnson in 1949 and has historically been used in finance. The key characteristic of Johnson's $S_U$ distribution is its flexibility, which originates from its four parameters, allowing the distribution to be either symmetric or asymmetric. The probability density function (pdf) is expressed as
\begin{equation}
\begin{split}
f(x, \gamma, \delta, \lambda, \varepsilon) &= \frac{\varepsilon}{\delta\cdot m(x,\gamma,\delta)} \\
& \quad \cdot \phi \bigl\{ \lambda + \varepsilon \log \left[ k(x,\gamma,\delta) + m(x,\gamma,\delta) \right] \bigr\},
\end{split}
\end{equation}
where
\begin{equation}
k(x, \gamma, \delta) = \frac{x-\gamma}{\delta}, \quad m(x, \gamma, \delta) = \sqrt{k(x, \gamma, \delta)^2 + 1},
\end{equation}
with $\gamma$ and $\delta$ being the location and scale parameters, respectively, $\lambda$ and $\varepsilon$ the Johnson's $S_U$ specific shape parameters, and $\phi(\cdot)$ the pdf of the normal distribution.
By further inspecting Fig.~\ref{fig:ksvalues}, we notice that the only case in which Johnson's $S_U$ does not provide the best fit for the RTP frame sizes is Stream 2 @ 90~Hz, for which the exponential normal distribution is the best. However, as we can see in Fig.~\ref{fig:normfsksvalues}, the practical differences between the two distributions for Stream 2 @ 90~Hz are small. Besides, even if Johnson's $S_U$ fit is not as accurate as for the other RTP frame size distributions (see Fig.~\ref{fig:examplefit}), the measured KS values are low enough, with a good fit for the larger frame sizes. Therefore, we decided to model and evaluate the RTP frame sizes using Johnson's $S_U$ distribution for all the captured data. Similarly, we decided to use Johnson's $S_U$ distribution also to model the inter-frame and inter-packet intervals. The parameters of Johnson's $S_U$ distribution for all the traffic parameters under consideration and all the captured data are summarized in Table~A.1 of the Appendix.
\section{Realistic Traffic Generation}
\label{section:traffic_generation}
Our next goal is to build a tool that allows the generation of realistic XR offloading IP traffic. Such a tool is useful for researchers and application developers to generate and use synthetic data for analysis or to incorporate it into complex link-level or system-level simulations. While other state-of-the-art XR video traffic models~\cite{3gpp_17} only consider the frame size and inter-frame interval for generating synthetic data, we believe that including the inter-packet interval data extends the applicability of our models to a wider range of research efforts. For instance, when designing novel or advanced resource allocation techniques, an accurate inter-packet interval model might be extremely useful and lead to better and more appropriate solutions.
To create synthetic data, we have to generate random values from the fitted distributions. Towards this end, we used Scipy's in-built \textit{rvs} function, which generates random values from a specific distribution. In addition, we need the size of the individual RTP packets. As shown by the IP packet sizes in Table~\ref{tab:mean_values}, this is not constant in the real captured data, especially for Stream~3, since the way the segmentation mask is coded and organized in RTP packets differs from the regular color video streams (1 and 2). In general, the packets of each RTP frame have a fixed size chosen in the encoding/RTP framing pipeline (1442 bytes in our case). The first packet (including the RTP header) and the last one usually have different sizes. Depending on the chosen pipeline and configuration, there may be smaller packets in between as well, as in our case. However, these phenomena happen rarely, as we can observe in the packet size histograms. The significant difference between the mean and the maximum packet sizes in low-throughput streams, such as Stream 3, is expected because the number of packets between the first and last within an RTP frame is small (smaller than 5 in the low-resolution Stream 3 case). Therefore, we consider two IP packet size options: i) the mean size value, as in Table~\ref{tab:mean_values}, or ii) the 95\textsuperscript{th} percentile value. We refer to case i) as \textit{Mean Packet} and case ii) as \textit{Max Packet}.
\begin{algorithm}[t]
\caption{\label{alg:datagen}Synthetic IP packets generation algorithm.}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$N_{RTP}$, $s_{IP}$}
\Output{A sequence of IP packets}
\While{$N_{RTP}^{\text{generated}} < N_{RTP}$}
{
1. $s_{RTP} \gets \text{FS}_{\text{random}}$ \\
2. $N_{IP} \gets s_{RTP} / s_{IP}$ \\
3. \While{$N_{IP}^\text{generated} < N_{IP}$}
{
3.1 $\Delta t_{IP} \gets \text{IPI}_{\text{random}}$\\
3.2 $t_s \gets t_s + \Delta t_{IP}$ \\
3.3 Store new IP packet P($t_s$, $s_{IP}$)\\
3.4 $N_{IP}^{\textit{generated}} \gets N_{IP}^{\textit{generated}} + 1$
}
4. $\Delta t_{IF} \gets \text{IFI}_{\text{random}}$ \\
5. $t_s \gets t_s + \Delta t_{IF}$ \\
6. $N_{RTP}^{\textit{generated}} \gets N_{RTP}^{\textit{generated}} + 1 $
}
\end{algorithm}
Once we have the generators and packet sizes, we can easily define a procedure for synthetic realistic IP traffic generation, as described in Alg.~\ref{alg:datagen}. For each RTP frame among the $N_{RTP}$ to be generated, we begin by getting its size $s_{RTP}$ from the selected RTP generator, and by choosing the IP packet size $s_{IP}$ equal to Max Packet or Mean Packet. Then we compute the total number of packets $N_{IP}$ simply by dividing $s_{RTP}$ by $s_{IP}$. We continue by generating $N_{IP}$ packets of size $s_{IP}$, each with a specific timestamp $t_s$. The timestamp is computed by adding a random inter-packet interval $\Delta t_{IP}$ to the previous packet timestamp. Once all $N_{IP}$ packets are generated, a new randomly picked inter-frame interval $\Delta t_{IF}$ is added to the current timestamp. The random values are generated from the modelled distributions. The above procedure is repeated for each RTP frame.
The described algorithm can be easily implemented in any programming language and therefore used in any simulation environment. Additionally, it can be used to create synthetic traffic traces by storing the generated packets in a separate PCAP file and utilizing them at a later time.
\section{Validation Experiments}
\label{section:validation}
In this section, we test the traffic generated with the methodology described in the previous section over a realistic RAN scenario, to determine its ability to accurately mimic the behavior of the captured XR offloading data traffic. To do this, we first compare the average throughput obtained from the captured data with that obtained from the corresponding synthetic data. Then, we study the behavior of the different synthetic data models in terms of application layer throughput and latency in the most relevant offloading scenarios using a real-time 5G RAN emulator. Finally, we thoroughly examine the impact of the type of traffic model used on resource allocation by comparing synthetic data from different models with actual XR traffic.
\begin{table}[t]
\centering
\caption{Mean throughput of the generated synthetic data in comparison with the captured traffic's throughput }
\label{tab:generatedTp}
\begin{adjustbox}{width=\linewidth}
\begin{tabular}{lccccc}
\multicolumn{6}{c}{Stream 1 -- Uplink stereo camera} \\
\toprule
& \multicolumn{1}{c}{Captured} & \multicolumn{2}{c}{Max Packet model} & \multicolumn{2}{c}{Mean Packet model}\\
& (Mbps) & (Mbps) & Error (\%) & (Mbps) & Error (\%)\\
\midrule
Low & \multicolumn{1}{c}{16.51} & 16.49 & \multicolumn{1}{c}{0.12} & 16.49 & 0.12 \\
Med. &\multicolumn{1}{c}{41.11} & 41.15 & \multicolumn{1}{c}{0.09} & 40.96 & 0.36 \\
High & \multicolumn{1}{c}{110.55} & 110.50 & \multicolumn{1}{c}{0.05} & 110.27 & 0.25 \\
\bottomrule\\
\multicolumn{6}{c}{Stream 2 -- Downlink rendered frames} \\
\toprule
& \multicolumn{1}{c}{Captured} & \multicolumn{2}{c}{Max Packet model} & \multicolumn{2}{c}{Mean Packet model}\\
& (Mbps) & (Mbps) & Error (\%) & (Mbps) & Error (\%)\\
\midrule
72~Hz & \multicolumn{1}{c}{119.79} & 121.36 & \multicolumn{1}{c}{1.29} & 120.83 & 0.86 \\
90~Hz & \multicolumn{1}{c}{117.76} & 117.74 & \multicolumn{1}{c}{0.02} & 118.60 & 0.71 \\
\bottomrule\\
\multicolumn{6}{c}{Stream 3 -- Downlink segmentation mask} \\
\toprule
& \multicolumn{1}{c}{Captured} & \multicolumn{2}{c}{Max Packet model} & \multicolumn{2}{c}{Mean Packet model}\\
& (Mbps) & (Mbps) & Error (\%) & (Mbps) & Error (\%)\\
\midrule
Low & \multicolumn{1}{c}{2.37} & 2.35 & \multicolumn{1}{c}{0.89} & 2.36 & 0.51 \\
Med. &\multicolumn{1}{c}{3.95} & 3.92 & \multicolumn{1}{c}{0.66} & 3.94 & 0.35 \\
High &\multicolumn{1}{c}{11.4} & 11.44 & \multicolumn{1}{c}{0.38} & 11.44 & 0.32 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
The first step is to compare the generated mean throughput of the synthetic data using the modeled Johnson's $S_U$ distribution with the captured data. The mean throughput results of the captured and synthetic data, for both Max Packet and Mean Packet cases (IP packet sizes), are shown in Table~\ref{tab:generatedTp}. The differences between the synthetic and captured data throughput are also included as percentage differences. We can observe that the throughput differences are low in all cases, with a peak of 1.29\% for Stream 2 and 72~Hz case. All other cases present differences below 1\%, for both IP packet sizes, with an average error of 0.435\%, and 0.438\%, for Max Packet, and Mean Packet case, respectively.
\begin{table}[t]
\centering
\caption{Common simulation parameters used in all the experiment runs}
\label{tab:simparameters}
\begin{tabular}{cc}
\multicolumn{2}{c}{Simulation Parameters} \\
\toprule
TDD Configuration & 1(UL):1(DL) \\
Modulation & 256-QAM \\
Frequency Band & 26.5 GHz \\
UE MIMO Layers & 2 \\
Allocation Type & 0 \\
Allocation Configuration & 1 \\
Scenario & Rural Macrocell \\
\bottomrule
\end{tabular}
\end{table}
The next evaluation step is to compare the behavior of both synthetic and captured data on a realistic advanced 5G RAN deployment. Towards this end, we used the open-source 5G RAN real-time emulator, named FikoRE~\cite{morin2022fikore,opensourcefikore}. FikoRE has been specifically designed for application layer researchers and developers to test their solutions on a realistic RAN setup. It supports the simulation of multiple background user equipment (UE) while handling high actual IP traffic throughput (above 1 Gbps). For our validation experiments, FikoRE runs as a simulator since we are not injecting actual IP traffic, but the traces from the captured or synthetic data. We tested the two scenarios described in Section~\ref{sec:scenarios}, with the following setup:
\begin{itemize}
\item \textbf{Scenario A -- Full Offloading}: On the downlink side, we chose to evaluate the 72~Hz rendered frames stream, since it represents the current rendering offloading possibilities of commercial XR devices such as the Meta Quest 2. The Meta Quest~2 devices are capable of performing offloaded rendering, via a WLAN network, to a laptop in charge of rendering the immersive scene. The recommended setup is 72~Hz, with a rendering resolution of 1832 \texttimes\ 1920 per eye, which is slightly smaller than our captured data for the rendered frames stream. The uplink corresponds to the sensor stream (Stream~1) with a stereo resolution of 1920 \texttimes\ 720.
\item \textbf{Scenario B -- Egocentric Human Body Segmentation}: A successful deployment of this scenario was achieved in previous works~\cite{gonzalez2022segmentation}. While that deployment used smaller resolutions, we evaluated the scenario in which both Streams~1 and 3 use a resolution of 1920 \texttimes\ 720.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[trim={0 0 {1.94\linewidth} 0},width=\linewidth, clip]{figures/macrotpnew.pdf}
\caption{Mean downlink throughput measured for Scenario A and B along with configuration C for the captured and synthetic data. }
\label{fig:resultstp}
\end{figure}
Both offloading scenarios were evaluated in three different network configurations:
\begin{itemize}
\item \textbf{Configuration A -- Multiple background UEs and a single immersive UE with proportional fair (PF)}: In this scenario we simulated multiple UEs which are transmitting 5~Mbps of traffic in each direction. The throughput is not continuous, but is synthetically generated using the video streaming models from~\cite{tgaxevaluation} applicable for streaming applications such as Netflix. The emulator is set up with a single carrier of 100~MHz bandwidth on the 26.5~GHz millimeter wave (mm-wave) frequency band. Resource allocation takes place based on the PF metric~\cite{capozzi2013metrics}, using 1:1 (downlink:uplink) time division duplexing (TDD). We tested this network with a single immersive UE and 0, 20, 40, 60, 80 and 100 background UEs with 5~Mbps traffic in each direction. The network starts saturating around 80 simultaneous UEs.
\item \textbf{Configuration B -- Multiple immersive UEs with PF}: In this scenario we have multiple immersive UEs, all using the same synthetic data. The throughput per UE is much higher than in Configuration~A, so we increased the total bandwidth to 200~MHz in order to be able to simulate more UEs before reaching network saturation.
\item \textbf{Configuration C -- Multiple immersive UEs with maximum throughput (MT)}: This setup is identical to Configuration~B, only changing the resource allocation metric used from PF to MT~\cite{capozzi2013metrics}.
\end{itemize}
\begin{table*}[t]
\centering
\caption{Emulated application level throughput and latency comparison between captured and synthetic traffic. The synthetic experiments are repeated using the maximum and mean packet sizes}
\label{tab:macroresults}
\begin{tabular}{lcccccccc}
\multicolumn{9}{c}{Configuration A -- Multiple background UEs and a single immersive UE with PF} \\
\cmidrule{2-9}
& \multicolumn{4}{c}{Scenario A: Full offloading} & \multicolumn{4}{c}{Scenario B: Deep learning offloading} \\
\cmidrule{2-9}
& \multicolumn{2}{c}{Throughput error} & \multicolumn{2}{c}{Latency error} & \multicolumn{2}{c}{Throughput error} & \multicolumn{2}{c}{Latency error} \\
\cmidrule{2-9}
& Downlink & \multicolumn{1}{c}{Uplink} & Downlink & \multicolumn{1}{c}{Uplink} & Downlink & \multicolumn{1}{c}{Uplink} & Downlink & Uplink \\
\midrule
Max Packet size (\%) & 0.85 & \multicolumn{1}{c}{0.06} & 0.07 & \multicolumn{1}{c}{0.27} & 0.28 & \multicolumn{1}{c}{0.06} & 1.48 & 0.23 \\
Mean Packet size (\%) & 0.59 & \multicolumn{1}{c}{0.13} & 0.58 & \multicolumn{1}{c}{0.38} & 0.35 & \multicolumn{1}{c}{0.13} & 1.55 & 0.51 \\
\bottomrule \\
\multicolumn{9}{c}{Configuration B -- Multiple immersive UEs with PF} \\
\cmidrule{2-9}
& \multicolumn{4}{c}{Scenario A: Full offloading} & \multicolumn{4}{c}{Scenario B: Deep learning offloading} \\
\cmidrule{2-9}
& \multicolumn{2}{c}{Throughput error} & \multicolumn{2}{c}{Latency error} & \multicolumn{2}{c}{Throughput error} & \multicolumn{2}{c}{Latency error} \\
\cmidrule{2-9}
& Downlink & \multicolumn{1}{c}{Uplink} & Downlink & \multicolumn{1}{c}{Uplink} & Downlink & \multicolumn{1}{c}{Uplink} & Downlink & Uplink \\
\midrule
Max Packet size (\%) & 0.57 & \multicolumn{1}{c}{0.23} & 0.71 & \multicolumn{1}{c}{0.15} & 1.55 & \multicolumn{1}{c}{0.16} & 0.04 & 0.48 \\
Mean Packet size (\%) & 0.65 & \multicolumn{1}{c}{0.46} & 0.29 & \multicolumn{1}{c}{0.32} & 1.50 & \multicolumn{1}{c}{0.28} & 0.03 & 0.36 \\
\bottomrule \\
\multicolumn{9}{c}{Configuration C -- Multiple immersive UEs with MT} \\
\cmidrule{2-9}
& \multicolumn{4}{c}{Scenario A: Full offloading} & \multicolumn{4}{c}{Scenario B: Deep learning offloading} \\
\cmidrule{2-9}
& \multicolumn{2}{c}{Throughput error} & \multicolumn{2}{c}{Latency error} & \multicolumn{2}{c}{Throughput error} & \multicolumn{2}{c}{Latency error} \\
\cmidrule{2-9}
& Downlink & \multicolumn{1}{c}{Uplink} & Downlink & \multicolumn{1}{c}{Uplink} & Downlink & \multicolumn{1}{c}{Uplink} & Downlink & Uplink \\
\midrule
Max Packet size (\%) & 1.72 & \multicolumn{1}{c}{0.23} & 0.83 & \multicolumn{1}{c}{0.34} & 1.55 & \multicolumn{1}{c}{0.16} & 0.06 & 0.20 \\
Mean Packet size (\%) & 1.28 & \multicolumn{1}{c}{0.46} & 0.42 & \multicolumn{1}{c}{0.38} & 1.92 & \multicolumn{1}{c}{0.28} & 0.03 & 0.12 \\
\bottomrule
\end{tabular}
\end{table*}
All three configurations have in common the simulation parameters included in Table~\ref{tab:simparameters}. Each individual simulation run has a duration of 500 seconds and is repeated for each combination of configuration, number of UEs, offloading scenario (A and B), and type of data (synthetic with both packet types and captured data). In all cases, there is a ``principal'' immersive UE closer to the emulated gNB than the other simulated UEs, from which we obtained the measurements used in this analysis. The goal is to study and compare the behavior of each type of IP traffic data at the application level, so we evaluated the application layer throughput and latency. The throughput is measured as the total mean throughput transmitted by all UEs. The latency is measured only for the principal immersive UE. All the stochastic models, including the initial positions of the non-principal UEs, have the same random seed across the experiments. The principal UE is placed 100~m away from the gNB to ensure it has priority regardless of the metric used for allocation, while the rest are placed randomly, at a longer distance.
The application layer mean throughput results obtained for the downlink transmission of Scenario~A for Configuration~C are depicted in Fig.~\ref{fig:resultstp}. It is evident that the difference between the real and the modeled data is very low for any number of UEs. We observe that from 8 UEs onward, the network starts saturating and the throughput does not increase linearly. This is because UEs with worse channel quality get fewer allocation grants. The measured latency behaves similarly, showing low differences. Furthermore, similar results were obtained for all other configurations and scenarios. Overall, the throughput and latency differences between the captured and synthetic data obtained from FikoRE simulations are gathered in Table~\ref{tab:macroresults}. These differences are expressed by the relative mean error across emulation runs with different numbers of UEs. We observe that they are very low, below $2\%$, in all cases. Besides, the differences between the Max Packet and Mean Packet cases of the IP packet sizes are negligible, with a mean difference of less than $0.04\%$. These results validate the goodness of fit of the proposed models for application-level simulations.
As a further step, we assess how well the synthetic traffic data generated with our model behave on the lower layers of the stack compared to the captured data. In particular, we study the resource allocation differences when using the captured or synthetic data as input for the simulator. Besides, we highlight the necessity of an accurate model that also includes the inter-packet intervals, contrary to the models proposed in~\cite{3gpp_17}. In this context, we generated synthetic data using a simple Normal model based on the statistical metrics of the captured data included in Table~\ref{tab:mean_values}. However, instead of generating multiple IP packets within an RTP frame, we generated all the bits of the RTP frame at the same timestamp. By doing so, we highlight not only the necessity of an accurate model in terms of RTP frame size and inter-frame interval, but also the relevance of the inter-packet interval models. We refer to this simpler model as the ``Norm'' model.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/comparison_e_tp.pdf}
\caption{Example of allocation error matrices for both transmission directions (UL and DL) between the captured and synthetic data. The error is estimated for the entire grid. This case corresponds to Scenario A and Configuration~B. }
\label{fig:allocationerrormat}
\end{figure}
For this validation step, we also used FikoRE, which is capable of logging every single allocation step, that is, how the resources are allocated at each subframe. We use this information to compare the differences in terms of the allocated throughput for each resource block (RB) within the allocation grid. More specifically, we measure the number of bits allocated for each RB and each UE. The number of RBs along the time and frequency axes depends on the bandwidth and selected numerology. The allocation error is estimated by comparing the bits allocated to each RB and UE when using the synthetic data from different models and when using the actual XR traffic. We can build, for each UE, the allocation matrices illustrated in Fig.~\ref{fig:allocationerrormat}. These matrices express the resource allocation differences, or allocation errors, between a selected model and the actual XR traffic in bits per second, so the metric does not depend on the total duration of the simulation run. To estimate the allocation error of the entire grid as a percentage of the total allocated throughput, we use the formula
\begin{equation}
e(\%) = 100\frac{\sum_{i=1}^{K} \left | t_{c}(i) - t_{m}(i) \right |}{\sum_{i=1}^{K} t_{c}(i)},
\end{equation}
where $t_{m}(i)$ and $t_{c}(i)$ denote the allocated throughput of the model being evaluated and of the captured data, respectively, for the $i$th RB ($1\leq i\leq K$) along the total simulation time, with $K$ the total number of RBs. To really understand how the different sources of traffic data are allocated, we decided to simulate a single UE, the principal one, in each run. By doing this, we avoid the effects of the selected configuration (such as the allocation metric, the UEs' channel quality, etc.) that directly affect the resource allocation procedure and could lead to inaccurate conclusions.
Using the same configuration parameters described in Table~\ref{tab:simparameters}, we tested multiple combinations of total bandwidth and numerology $\mu$, which directly affect how the resource allocation grid is built, for a single immersive UE. Specifically, we tested bandwidths of 40~MHz with $\mu=1$, 100~MHz with $\mu=2$, 200~MHz with $\mu=2$, and 200, 400, 800~MHz with $\mu=3$. Each simulation run had a duration of 500 seconds. The simulations were repeated for each configuration, scenario (A and B), and source of data (captured, Johnson's $S_U$ with Max Packet size, and Norm). The synthetic data generated using the Mean Packet size presented no evident differences from the Max Packet size option.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/allocerror_new.pdf}
\caption{Measured allocation errors between the captured and synthetic data using our emulation tool for each transmission direction (UL and DL) and validation scenario (A and B) for different numerology and bandwidth configurations. All simulations were done for a single UE. }
\label{fig:allocationerror}
\end{figure}
Observing the measured allocation errors depicted in Fig.~\ref{fig:allocationerror}, we can draw several conclusions. First, we notice that the allocation error is considerably higher for the simpler Norm model compared to the proposed Johnson's $S_U$ model. Besides, the error difference increases rapidly in favor of Johnson's $S_U$ model as we configure the emulator with more total bandwidth. Increasing the numerology also negatively impacts the performance of the Norm model. For low bandwidths, the error difference is small, as an RTP frame does not fit in a single subframe and has to be transmitted across several subframes. Therefore, the entire resource allocation grid gets saturated and the allocation differences, being estimated relative to the total allocated throughput in each RB, become hard to measure. On the contrary, for higher bandwidths, not all the RBs are allocated for each RTP frame and the differences become more noticeable. Our intuition is that the difference that we observe for higher bandwidths could also be observed if we could discard the saturated subframes. In addition, the allocation error of the proposed Johnson's $S_U$ models remains almost constant across the tested configurations, which is clearly not the case for the Norm model. Thus, we obtain a strong indication of the importance of accurate models that include the inter-packet intervals, especially for high numerologies and bandwidths, when designing successful resource allocation techniques.
\section{Conclusions}
\label{section:conclusion}
This work provided realistic traffic traces and associated models for XR offloading scenarios to complement and improve upon the models proposed in previous works, such as~\cite{3gpp_17}. We proposed two XR offloading scenarios that are at the cutting edge of the current state of the art. The first scenario represents a full offloading solution in which the XR HMD captures and transmits sensor data to a nearby server or MEC facility for processing and rendering ultra-high definition immersive frames, which are then transmitted back to the device. The second scenario focuses on offloading heavy ML algorithms, such as the real-time egocentric human body segmentation algorithm, which allows users to see themselves within the virtual scene.
The traffic data were captured using a recently introduced offloading architecture, described in~\cite{gonzalez2022arch}, with additional functionality presented in this work. To avoid any uncontrollable overhead that we did not aim to model, the data have been captured on the sender side, using a localhost network. The IP traffic was captured for multiple resolutions for both the uplink and downlink streams and both offloading scenarios.
The collected data were cleaned and post-processed, and we conducted a thorough analysis to determine the most appropriate modeling approach. We modeled the three main components of video traffic, that is, the frame size, inter-frame interval, and inter-packet interval, using continuous unimodal distributions. While many video or XR traffic models, such as~\cite{3gpp_17}, do not include inter-packet interval information, we consider it a crucial feature to include in XR traffic models, especially for the design and optimization of resource allocation techniques, as demonstrated in our validation experiments. We fitted multiple continuous distributions to the data for all resolutions and found that Johnson's $S_U$ distribution provided the best fit, as determined by the KS test.
The Johnson's $S_U$ distribution was fitted for all target parameters, scenarios, and resolutions. With these models, we generated synthetic data and used them in validation experiments with an open-source 5G RAN emulator~\cite{opensourcefikore}. These experiments compared the performance of the captured and synthetic data at both the application and resource allocation layers. At the application layer, we found that our models can generate realistic XR traffic data for the proposed scenarios. In the resource allocation layer, we demonstrated the importance of including inter-packet interval time for designing advanced resource allocation techniques specifically optimized for XR offloading.
In conclusion, the data and models presented in this work can be effectively used for the design, testing, improvement, and expansion of wireless network solutions in both academia and industry. They offer a comprehensive approach to studying extended reality (XR) offloading scenarios and provide insight into the importance of considering inter-packet interval times for resource allocation techniques. Overall, we believe that this work provides a useful contribution to the field of wireless networks and XR technology.
\section{Introduction}
\label{sint}
The goal of the classical curve simplification problem is to reduce
the number of the vertices of a polygonal curve, without changing
its shape significantly. There are several applications in which
curve simplification plays an important role. In trajectory analysis, for instance,
there are two important reasons for this reduction. First,
it reduces the storage and bandwidth requirements for storing
and transferring huge and growing collections of trajectory data.
Second, and probably more importantly, the complexity of most
trajectory analysis algorithms depends on the number of the
vertices of the input curves, and simplifying trajectories
can reduce the running time of these algorithms.
Let $P = \left< p_1, p_2, ..., p_n \right>$ be a polygonal curve
on the plane.
The curve $P' = \left< p'_1, p'_2, ..., p'_m \right>$ is a
simplification of $P$, if $p'_1 = p_1$, $p'_m = p_n$, $m \le n$,
and the distance between $P$ and $P'$ is at most $\epsilon$ (the definition of the distance between these curves and the value of $\epsilon$ are described below).
The simplified curve may be vertex-restricted, curve-restricted, or
unrestricted. In vertex-restricted simplification, the vertices
of $P'$ coincide with the vertices of the input curve,
i.e.~for each $i$ where $1 \le i \le m$, $p'_i = p_j$ for
some index $j$, where $1 \le j \le n$.
In curve-restricted simplification, the vertices of $P'$ can be
placed on any point of the input curve, and
in unrestricted simplification there is no limitation on the
placement of the internal vertices of $P'$.
In the first two cases, which are the focus of the present paper,
the vertices of the simplified curve should
appear in order on the input curve, and thus split $P$ into sub-curves.
For each edge of the simplification $p'_ip'_{i+1}$,
in which $1 \le i \le m - 1$,
let $P_{p'_ip'_{i+1}}$ denote the sub-curve of $P$ from $p'_i$ to $p'_{i+1}$.
The distance between two curves is computed using measures
such as Fr{\'e}chet or Hausdorff \cite{agarwal05} (other measures are sometimes used as well, such as \cite{buzer07}).
Let $D(C, C')$ denote the function that computes the distance
between two curves using any such measure.
The distance between the original and simplified curves is either
\emph{global} and computed for the curves as a whole,
or is \emph{local} and computed as the maximum distance of the
corresponding sub-curves, i.e.~$\max_{1 \le i \le m-1} D(p'_i p'_{i+1}, P_{p'_ip'_{i+1}})$.
Curve simplification is usually studied in two settings \cite{imai86}.
In the min-$\epsilon$ setting the maximum value of $m$ (the number of
the vertices of the simplified curve) is specified and
$\epsilon$ (the amount of distance between the original and
simplified curves) is minimised, and
in the min-\# setting $\epsilon$ is given while $m$ is minimised.
There are numerous results on vertex-restricted curve simplification in
the min-\# setting, only some of which provide a guarantee on the
number of the vertices of the simplification.
In the rest of this paper we focus on the min-\# problem,
and assume that $\epsilon$ is specified as an input.
The well-known algorithm presented by Douglas and Peucker \cite{douglas73}
does not minimise the number of the vertices of the simplified curve,
but is both simple and effective.
It assumes local directed Hausdorff distance from the input curve
to the simplified curve.
For simplifying $P$ with the maximum distance $\epsilon$,
it finds the most distant vertex $p_k$
from segment $p_1 p_n$; if their distance is at most $\epsilon$,
this segment is a link of the simplification.
Otherwise, the algorithm recursively simplifies
$\left<p_1, ..., p_k\right>$ and $\left<p_k, ..., p_n\right>$.
The worst-case time complexity of this algorithm is $O(n^2)$.
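For concreteness, the Douglas--Peucker recursion can be sketched as
follows; this is a minimal Python sketch under our own naming (the
point-to-segment distance routine and all identifiers are ours, not
taken from the cited papers):
\begin{verbatim}
import math

def point_segment_distance(p, a, b):
    # Euclidean distance from point p to the segment ab
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(P, eps):
    # Simplify the polyline P (a list of (x, y) vertices),
    # always keeping the first and the last vertex.
    if len(P) <= 2:
        return list(P)
    # find the vertex most distant from the segment p_1 p_n
    k, dmax = 1, -1.0
    for i in range(1, len(P) - 1):
        d = point_segment_distance(P[i], P[0], P[-1])
        if d > dmax:
            k, dmax = i, d
    if dmax <= eps:
        return [P[0], P[-1]]
    left = douglas_peucker(P[:k + 1], eps)
    right = douglas_peucker(P[k:], eps)
    return left[:-1] + right  # P[k] appears exactly once
\end{verbatim}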
Hershberger and Snoeyink~\cite{hershberger94,hershberger97} improved the running
time of this algorithm to $O(n \log n)$ and later to $O(n \log^* n)$.
Among algorithms that compute an optimal simplification,
i.e.~a simplification with the minimum number of links,
the one presented by Imai and Iri~\cite{imai88}
is probably the most popular for local Hausdorff distance.
It creates a shortcut graph, the vertices of which represent
the vertices of the input curve.
An edge $p_ip_j$ shows that the distance between link $p_ip_j$
and sub-curve $\left<p_i, p_{i+1}, ..., p_j\right>$ is at most $\epsilon$.
A shortest path algorithm on this graph, finds the simplification
with the minimum number of vertices.
The time complexity of this algorithm is $O(n^2 \log n)$.
Chan and Chin \cite{chan96}, and also Melkman and O'Rourke \cite{melkman88}
improved the running time of this
algorithm to $O(n^2)$, and Chen and Daescu~\cite{chen03} reduced
its space complexity to $O(n)$.
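The construction of the shortcut graph and the subsequent shortest-path
computation can be sketched as follows. This is a direct, unoptimised
$O(n^3)$ Python sketch of the approach (not the optimised $O(n^2)$
versions cited above); it reuses \texttt{point\_segment\_distance} from
the previous sketch and finds a minimum-link path by breadth-first search:
\begin{verbatim}
from collections import deque

def valid_shortcut(P, i, j, eps):
    # edge p_i p_j of the shortcut graph: every vertex of the
    # sub-curve <p_i, ..., p_j> is within distance eps of p_i p_j
    return all(point_segment_distance(P[k], P[i], P[j]) <= eps
               for k in range(i + 1, j))

def imai_iri(P, eps):
    n = len(P)
    adj = [[j for j in range(i + 1, n) if valid_shortcut(P, i, j, eps)]
           for i in range(n)]
    # breadth-first search: fewest links from p_1 to p_n
    prev, seen = [None] * n, [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v], prev[v] = True, u
                queue.append(v)
    path, v = [], n - 1
    while v is not None:
        path.append(P[v])
        v = prev[v]
    return path[::-1]
\end{verbatim}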
There are many other results on vertex-restricted simplification
that consider the Fr{\'e}chet distance or compute the distance
of the curves globally.
For instance, van Kreveld et al.~\cite{vankreveld18}
studied the performance of the Douglas and Peucker~\cite{douglas73}
and Imai and Iri~\cite{imai88} algorithms, described above,
under the global Hausdorff or Fr{\'e}chet distance measures.
They showed that computing an optimal vertex-restricted simplification
using the global undirected Hausdorff distance or
global directed Hausdorff distance from the simplified to the
input curve is NP-hard, and
presented an output-sensitive dynamic programming algorithm with
the time complexity $O(mn^5)$ for computing an optimal
simplification under the global Fr{\'e}chet distance.
A faster dynamic programming algorithm for the same variation
of the problem was presented by van de Kerkhof et al.~\cite{kerkhof18}
with the time complexity $O(n^4)$.
Some results on vertex-restricted simplification
do not obtain an optimal simplification but
provide a guarantee on the number of the links of the
resulting simplifications using approximation algorithms.
Agarwal et al.~\cite{agarwal05}
for instance, presented a near-linear time approximation algorithm
for local Hausdorff distance using the uniform distance metric,
in which the distance between a point and a curve is defined
as their vertical distance.
They also presented an approximation algorithm for local
Fr{\'e}chet distance under $L_p$ metric.
Both of these algorithms are simple and greedy in nature.
Among these results, there are also vertex-restricted simplification
algorithms that assume streaming input or online
setting \cite{abam10,lin17,cao17,muckell14},
in which a limited storage is available or
the curve should be simplified in one pass.
It is beyond the scope of this paper to review the
literature on curve simplification extensively;
even many heuristic algorithms, such as \cite{chen12,long13},
have been presented for curve simplification (Zhang et al.~\cite{zhang18}
surveyed many of them for trajectory simplification).
Despite the number of results on vertex-restricted curve simplification,
curve-restricted simplification, which has attracted less attention,
can yield a curve with far fewer vertices,
as in Figure~\ref{frestriction}, in which a curve-restricted
simplification with only four vertices is demonstrated for a
curve whose vertex-restricted simplification is the same as the input curve.
For global directed Hausdorff distance, van de Kerkhof et al.~\cite{kerkhof18}
showed that curve-restricted simplification is NP-hard and provided an $O(n)$
algorithm for global Fr{\'e}chet distance in $\mathbb{R}^1$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sim1}
\caption{An example showing that curve-restricted simplifications
can have far fewer vertices compared to vertex-restricted
simplifications; the dashed links are a curve-restricted
simplification of the curve with solid edges.}
\label{frestriction}
\end{figure}
In this paper, we study the min-\# curve-restricted
simplification problem with maximum local Hausdorff
distance $\epsilon$ from the input curve to the simplified curve.
We present a dynamic programming algorithm that computes a simplified curve,
the number of the links of which is at most twice the minimum possible.
This paper is organized as follows: In Section~\ref{sprel}
we introduce the notation used in this paper.
In Section~\ref{slink}, we show how to compute a simplification
link between two edges of the input curve and in Section~\ref{smain},
we present our main algorithm.
We conclude the paper in Section~\ref{sconclude}.
\section{Preliminaries and Notation}
\label{sprel}
A two-dimensional polygonal curve is represented as a sequence of vertices
on the plane, with line segments as edges between contiguous vertices.
The directed Hausdorff distance between curves $C$ and $C'$, denoted
as $H(C, C')$, is defined as the maximum of the distance between any
point of $C$ to the curve $C'$, i.e.~$H(C, C') = \max_{p \in C} \mathit{dist}(p, C')$,
in which $\mathit{dist}(p, C')$ is the
Euclidean distance between point $p$ and curve $C'$.
Let $P' = \left< p'_1, p'_2, ..., p'_m \right>$ be a
curve-restricted simplification of $P$.
We have $p'_1 = p_1$, $p'_m = p_n$, $m \le n$,
and the distance between $P$ and $P'$ is at most $\epsilon$.
Also, the vertices of $P'$ should appear in order along $P$.
Given a parameter $\epsilon$, the goal in the min-\# simplification is
to find a simplified curve with the minimum number of vertices, such that the
distance between the original and simplified curves is at most $\epsilon$.
In what follows, we use the term \emph{link} to refer to the edges of
the simplified curve, to distinguish them from the edges of the input curve.
For a link $\ell$ of $P'$, suppose $x$ and $y$ on $P$ are points
corresponding to the start and end points of $\ell$ and suppose $x$ is
on edge $p_ip_{i+1}$ and $y$ is on $p_jp_{j+1}$.
Then, $\ell$ \emph{covers} all edges $p_kp_{k+1}$ for $i \le k \le j$.
Let $P_\ell$ be the sub-curve of $P$ corresponding to link $\ell$,
i.e.~the sub-curve of $P$ from point $x$ to $y$.
The local Hausdorff distance from $P$ to $P'$ is the maximum
of $H(P_\ell, \ell)$ over all links $\ell$ of $P'$.
In this paper we assume local Hausdorff distance
to measure the distance between the input and simplified curves.
The $\epsilon$-neighbourhood of a vertex of $P$, or of a segment,
is defined as follows.
\begin{figure}
\centering
\includegraphics[width=10cm]{sim3}
\caption{The $\epsilon$ neighbourhood of a segment}
\label{fneigh}
\end{figure}
\begin{defn}
The $\epsilon$-neighbourhood of a point $p$, denoted as $N(p)$,
is the disk of radius $\epsilon$ centred at $p$.
Clearly, the set of points inside $N(p)$ are all points at
distance at most $\epsilon$ from $p$.
Similarly, the $\epsilon$-neighbourhood of a segment $s$,
denoted as $N(s)$, is the set of points at distance at most $\epsilon$
from any point of the segment $s$.
\end{defn}
The $\epsilon$-neighbourhood of a segment $s$ is demonstrated in
Figure~\ref{fneigh}.
\section{Identifying Simplification Links}
\label{slink}
\begin{lem}
\label{llink}
For the curve $P = \left< p_1, p_2, ..., p_n \right>$,
a segment $s$ from point $x$ on edge $p_i p_{i+1}$ to
point $y$ on edge $p_j p_{j+1}$ can be a link of a (not
necessarily optimal) curve-restricted simplification
if and only if it intersects $N(p_k)$ for every index
$k$, where $i < k \le j$.
\end{lem}
\begin{proof}
Let $C$ be the sub-curve $P$ from $x$ to $y$.
If $s$ is a link of a simplification of $P$,
$H(C, s)$ is at most $\epsilon$.
This implies that the distance of every point of $C$ to $s$
is at most $\epsilon$. For each vertex $p$ of $C$
this means that $s$ should include at least one point
from $N(p)$.
For the converse, suppose $s$ intersects $p_i p_{i+1}$
at $x$ and $p_j p_{j+1}$ at $y$, as well as $N(p)$ for
every vertex of $C$, the sub-curve of $P$ from $x$ to $y$.
It is enough to show that $H(C, s) \le \epsilon$.
For each edge, since the distance between its end points
and $s$ is at most $\epsilon$, the distance of other
points of the edge cannot be greater. This holds for
every internal edge of $C$ and implies
$H(C, s) \le \epsilon$ as required.
\end{proof}
Lemma~\ref{llink} corresponds to a similar statement for
vertex-restricted simplifications.
We use this lemma later to compute the links of a simplification.
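Computationally, the characterisation of Lemma~\ref{llink} amounts to a
sequence of point-to-segment distance tests; a minimal Python sketch
(our own naming, $0$-based indices, with \texttt{point\_segment\_distance}
as in the earlier sketch) reads:
\begin{verbatim}
def is_link(P, i, j, x, y, eps):
    # x lies on edge P[i]P[i+1] and y on edge P[j]P[j+1].
    # By Lemma llink, the segment xy is a valid link if and
    # only if every vertex strictly after P[i] and up to P[j]
    # is within distance eps of xy, i.e. xy intersects its
    # eps-neighbourhood.
    return all(point_segment_distance(P[k], x, y) <= eps
               for k in range(i + 1, j + 1))
\end{verbatim}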
\begin{cor}
\label{clink}
For the curve $P = \left< p_1, p_2, ..., p_n \right>$,
a segment $s$ from point $x$ on edge $p_i p_{i+1}$ to
point $y$ on edge $p_j p_{j+1}$ is a link of a (not
necessarily optimal) curve-restricted simplification
if and only if $N(s)$ contains $p_k$ for every index
$k$, where $i < k \le j$.
\end{cor}
Corollary~\ref{clink} holds because if a segment $s$ intersects
the $\epsilon$-neighbourhood of a vertex $p_k$, the distance of
$p_k$ to $s$ is at most $\epsilon$, and $p_k$ must therefore lie inside $N(s)$.
We use Corollary~\ref{clink} later to improve the time complexity
of detecting simplification links.
\begin{lem}
\label{llinkalgn}
Suppose $\ell$ is a link of a curve-restricted simplification
of curve $P = \left< p_1, p_2, ..., p_n \right>$, such that
$\ell$ starts from point $x$ on edge $p_i p_{i+1}$ and
ends at point $y$ on edge $p_j p_{j+1}$.
There exists another link $\ell'$ covering the same set of edges such
that the line that results from extending $\ell'$ has the following
property for at least two values of $k$ where $i < k \le j$:
either i) it is a tangent to $N(p_k)$, or
ii) it passes through one of the end points of $p_i p_{i+1}$ or
$p_jp_{j+1}$, or their intersection with $N(p_k)$.
\end{lem}
\begin{proof}
Let $L$ be the line resulting from extending the segment $\ell$.
If none of the mentioned properties hold for any value of $k$,
we move $L$ downwards until one of them holds for some value $k$,
i.e.~it becomes tangent to the $\epsilon$-neighbourhood
of $p_k$ or passes through the intersection of the $\epsilon$-neighbourhood
of $p_k$ and the last or the first edge covered by $\ell$.
We then rotate $L$ around $p_k$ for case i, or the intersection of
case ii, until one of the conditions holds for another index.
Let $s$ be the segment on line $L$ with end points on $p_ip_{i+1}$
and $p_jp_{j+1}$; such a segment surely exists, since the movement
or rotation stops at the end points of these edges.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sim4}
\caption{Rotating line $L$ around $N(D)$ counterclockwise;
$s$ no longer intersects $N(C)$.}
\label{fshort}
\end{figure}
Clearly $L$ cannot leave $N(p_k)$ for any possible index $k$ during
either the downward movement or the rotation; just before leaving
$N(p_k)$, $L$ becomes its tangent.
The only problem may be that although $N(p_k)$, for some $k$
where $i < k \le j$, is intersected by both $\ell$ and $L$,
$s$ may be too short to intersect $N(p_k)$; this is demonstrated
in Figure~\ref{fshort}. However, since the rotation stops
at the intersection of the first or the last edge with $N(p_k)$,
this case never happens.
\end{proof}
\begin{lem}
\label{llinkfind}
A link of a curve-restricted simplification
of $P = \left< p_1, p_2, ..., p_n \right>$,
from a point on edge $p_i p_{i+1}$ to a point on edge $p_j p_{j+1}$
can be found with the time complexity $O(m^3)$ where $m = j - i + 1$,
provided such a link exists.
\end{lem}
\begin{proof}
We find a line for which the condition mentioned in Lemma~\ref{llinkalgn}
holds. To do so, we find three parallel lines at
distance $\epsilon$ on the plane, $L_1$, $L_2$, and $L_3$, such
that a link can be found on line $L_2$.
We consider possible placements of these lines according to Lemma~\ref{llinkalgn} and
check for which of them the condition of Lemma~\ref{llink} holds for a segment on $L_2$.
If $L_2$ is a tangent to $N(p_k)$ for some value of $k$ where
$i < k \le j$, then either $L_1$ or $L_3$ should pass through
$p_k$. We therefore try different placements of these three lines
such that the following property holds for two values of $k$ for $i < k \le j$:
either i) $L_1$ or $L_3$ passes through $p_k$,
or ii) $L_2$ passes through the intersection of $N(p_k)$ and
one of $p_ip_{i+1}$ or $p_jp_{j+1}$, or the end points of these edges.
Since there are $O(m)$ choices for the first and the second
conditions, the number of total cases to consider is $O(m^2)$.
For each of the $O(m^2)$ possible placements of these lines,
we have to verify if there exists
a segment $s$ on $L_2$ such that $H(C, s)$ is at most $\epsilon$.
Let $x$ be the intersection of $L_2$ and $p_i p_{i+1}$ and
let $y$ be the intersection of $L_2$ and $p_j p_{j+1}$; if
$x$ or $y$ do not exist, $L_2$ cannot contain a link.
Based on Lemma~\ref{llink}, if the segment $xy$ intersects
$N(p_k)$ for every $i < k \le j$, it is a valid link.
This can be checked with the time complexity $O(m)$.
\end{proof}
\begin{cor}
\label{rlinkfind}
To force the link to start from $p_i$, instead of
any point on edge $p_ip_{i+1}$ in Lemma~\ref{llinkfind},
we can fix this point on $L_2$ and try the condition mentioned
in the proof of Lemma~\ref{llinkfind} for only one value of $k$.
\end{cor}
Algorithms based on the construction of the shortcut graph of
Imai and Iri~\cite{imai88} perform steps similar to Lemma~\ref{llinkfind}:
for each $i$ and $j$, where $1 \le i < j \le n$, it should
be verified if the segment $p_i p_j$ intersects the
$\epsilon$-neighbourhood of every vertex $p_k$ for $i < k < j$.
This task can be optimised by computing
the set of lines that pass through $p_i$ and intersect the
$\epsilon$-neighbourhood of the vertices that appear after it
(the intersection of double cones; see \cite{chen03}, for instance).
Unfortunately, for curve-restricted simplification that does
not seem possible, since the end points of each link need not be
vertices and are chosen from a much larger set (see Lemma~\ref{llinkalgn}).
Therefore, to improve the time complexity of Lemma~\ref{llinkfind},
we should use an alternative strategy.
\begin{figure}
\centering
\includegraphics[width=8cm]{sim7}
\caption{Symbols used for $N(s)$ in Lemma~\ref{llinkfast}}
\label{flinkfast}
\end{figure}
\begin{lem}
\label{llinkfast}
Let $S$ be a set of $n$ points on the plane and let $\delta$ be a constant,
where $0 < \delta < 1$.
There exists a data structure with $O(n^{1+\delta})$ preprocessing time and space,
which, for any segment $s$, can verify if all points in $S$ are
inside the $\epsilon$-neighbourhood of $s$ in $O(2^{1/\delta}\log n)$ time.
\end{lem}
\begin{proof}
We first compute the convex hull $H$ of the points in $S$.
The most distant point of $S$ from $s$ is a vertex of $H$.
Let $\ell(x, y)$ be the line that results from extending
the segment from point $x$ to point $y$, and
let $h(x, y)$ be the halfplane on the left side of $\ell(x, y)$.
All members of $S$ are in $N(s)$ if and only if there is no point of $S$
in the following four regions (we use the symbols defined in Figure~\ref{flinkfast}):
\begin{enumerate}
\item $h(a, c)$,
\item $h(d, b)$,
\item $h(b, a) \setminus N(u)$, and
\item $h(c, d) \setminus N(u)$.
\end{enumerate}
Since the intersections of a convex polygon and a line can be
computed in logarithmic time, the first two regions can be checked
in $O(\log n)$ time.
The other two regions can be checked using \emph{halfplane proximity queries}:
given a directed line $\ell$ and a point $q$, report the point farthest from $q$
among those to the left of $\ell$.
Aronov et al.~\cite{aronov18} presented a data structure that uses $O(n^{1 + \delta})$
preprocessing time and space, to answer such queries in $O(2^{1 / \delta} \log n)$ time,
for any $\delta$ ($0 < \delta < 1$).
Therefore, to check the third region, we perform a halfplane proximity query
for line $\ell(b, a)$ and point $u$; the third region is empty if and only if
the distance of the farthest point of $h(b, a)$ from $u$ is at most $\epsilon$.
Similarly, to check the fourth region, we perform a halfplane proximity query,
specifying line $\ell(c, d)$ and point $v$ as inputs.
\end{proof}
\begin{lem}
\label{llinkfindfast}
Let $\delta$ be a constant, where $0 < \delta < 1$.
With $O(n^{3+\delta})$ preprocessing time and space,
a link of a curve-restricted simplification
of a polygonal curve $P = \left< p_1, p_2, ..., p_n \right>$,
from any edge $p_i p_{i+1}$ to any other edge $p_j p_{j+1}$
can be found with the time complexity $O(n^2 \log n)$,
provided that such a link exists.
\end{lem}
\begin{proof}
For every pair of indices $x$ and $y$, where $1 < x \le y < n$,
we initialize the data structure $D_{xy}$ of Lemma~\ref{llinkfast}
for the points $\{ p_x, p_{x + 1}, ..., p_y \}$.
This can be done with the time complexity $O(n^{3 + \delta})$.
In Lemma~\ref{llinkfind}, to check if a segment $s$ from $p_i p_{i+1}$
to $p_j p_{j+1}$ is a link, we test whether it intersects $N(p_k)$
for every index $k$, where $i < k \le j$; by Corollary~\ref{clink},
this is equivalent to all of $p_{i+1}, ..., p_j$ lying in $N(s)$.
We therefore improve the time complexity of this task to $O(\log n)$
by querying $D_{(i+1)j}$.
\end{proof}
\section{Simplification Algorithm}
\label{smain}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sim5}
\caption{A DLC $\left< p'_1 p'_2, p'_3 p'_4 \right>$
of a curve with six edges (Definition~\ref{ddlc})}
\label{fdlc}
\end{figure}
\begin{defn}
\label{ddlc}
A sequence of segments $D = \left< p'_1 p'_2, p'_3 p'_4, ..., p'_{2k - 1} p'_{2k} \right>$
is a \emph{disjoint link chain} (DLC) for curve
$P = \left< p_1, p_2, ..., p_n \right>$, if
i) $p'_1$ is on $p_1 p_2$ and $p'_{2k}$ is on $p_{n-1} p_n$,
ii) for each index $i$,
where $1 \le i \le k$, $p'_{2i - 1} p'_{2i}$ is a valid simplification link, and
iii) for each index $i$,
where $1 \le i < k$, $p'_{2i}$ and $p'_{2i+1}$ are on the same edge of $P$,
and
iv) the vertices of $D$ appear in order on $P$ (i.e.~first $p'_1$
appears on $P$, then $p'_2$, then $p'_3$, and so forth).
\end{defn}
Figure~\ref{fdlc} demonstrates a DLC of a curve with six edges.
\begin{prop}
\label{rdlc}
Given a DLC $D = \left< p'_1 p'_2, p'_3 p'_4, ..., p'_{2k - 1} p'_{2k} \right>$
for curve $P = \left< p_1, p_2, ..., p_n \right>$,
such that $p'_1 = p_1$,
a curve-restricted simplification of $P$ with $2k$ links can be obtained
from $D$ by connecting the end of each link of $D$ to the start of its next
link and connecting the end of the last one to $p_n$.
\end{prop}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sim6}
\caption{$\mathrm{FLP}_1(\left< p_1, p_2, ..., p_m \right>)$
of a curve with six edges (Definition~\ref{dflp})}
\label{fflp}
\end{figure}
\begin{defn}
\label{dflp}
For a polygonal curve $C = \left< p_1, p_2, ..., p_m \right>$,
the \emph{first link point} with $k$ links,
denoted as $\mathrm{FLP}_k(C)$,
is the first point $x$ on $p_{m-1} p_m$,
such that there exists a disjoint link
chain $D= \left< p'_1 p'_2, p'_3 p'_4, ..., p'_{2k - 1} p'_{2k} \right>$ of $C$,
in which $p'_{2k} = x$.
\end{defn}
Figure~\ref{fflp} demonstrates $\mathrm{FLP}_1$ of a curve with four edges.
Since the line containing a link can be moved or rotated
to obtain a new link, unless the conditions mentioned in
Lemma~\ref{llinkalgn} hold for it,
Lemma~\ref{llinkfindfast} yields the following corollary.
\begin{cor}
\label{ccorner}
For a sub-curve $Q = \left< q_1, q_2, ..., q_m \right>$
of a polygonal curve $P = \left< p_1, p_2, ..., p_n \right>$,
$\mathrm{FLP}_1(Q)$ and its corresponding link can be computed
with the time complexity $O(n^2 \log n)$,
after some preprocessing with the time complexity $O(n^{3+\delta})$,
for some constant $\delta$ ($0 < \delta < 1$).
\end{cor}
In Theorem~\ref{tmain} we present an algorithm for computing a
minimum-sized DLC.
\begin{thm}
\label{tmain}
A DLC of minimum size for curve
$P = \left< p_1, p_2, ..., p_n \right>$
can be computed in $O(n^5 \log n)$.
\end{thm}
\begin{proof}
We use dynamic programming to fill a two-dimensional table $F$.
$F[i, j]$, for $1 \le i \le n$ and $1 \le j \le n$, denotes
$\mathrm{FLP}_j(\left< p_1, p_2, ..., p_i \right>)$.
Parallel to table $F$, we can store the last link of $F[i, j]$
in another two-dimensional table $L$ to reconstruct the chain.
For points $u$ and $v$ on $P$, $u < v$ holds if $u$ appears
before $v$ on $P$. We fill the tables as follows.
\begin{enumerate}
\item $F[i][1]$ is initialized as $\mathrm{FLP}_1(\left<p_1, p_2, ..., p_i\right>)$,
forcing the first vertex of the resulting link to be on $p_1$ (Corollary~\ref{rlinkfind}).
$L[i][1]$ is initialised as the link corresponding to $\mathrm{FLP}_1(\left<p_1, p_2, ..., p_i\right>)$.
If there is no such link, $F[i][1]$ and $L[i][1]$ are not filled.
\item
For $d$ from $2$ to $n$, $F[i][d]$ and $L[i][d]$ for $1 \le i \le n$
are filled as follows:
The value $F[i][d]$ is the minimum value
of $\mathrm{FLP}_1(\left< F[j][d - 1], p_{j+1}, p_{j+2}, ..., p_{i}\right>)$,
over all indices $j$, where $j < i$ and $F[j][d - 1]$ is filled.
The value of $L[i][d]$ should indicate the link corresponding to
$\mathrm{FLP}_1(\left< F[j][d - 1], p_{j+1}, p_{j+2}, ..., p_{i}\right>)$.
\end{enumerate}
Based on Corollary~\ref{ccorner}, filling these tables can be done with
the time complexity $O(n^5 \log n)$.
Let $m$ be the lowest index such that $F[n][m]$ is filled. By
following the links backwards using the dynamic programming tables,
we obtain a DLC
$D = \left< p'_1 p'_2, p'_3 p'_4, ..., p'_{2m-1} p'_{2m} \right>$.
We prove that the size of $D$ is the minimum possible.
To do so, we use induction on $d$ to show that $F[i][d]$ for $1 \le i \le n$
is filled if and only if there is a DLC for
$\left< p_1, p_2, ..., p_i \right>$ with $d$ links.
For $d = 1$, the statement is trivial and follows from
the definition of $\mathrm{FLP}_1$ and its computation
(Corollary~\ref{rlinkfind}).
For $d > 1$, suppose there is a DLC
$\left< q'_1 q'_2, q'_3 q'_4, ..., q'_{2d - 1} q'_{2d} \right>$
for $\left< p_1, p_2, ..., p_i \right>$.
Let $q'_{2d - 2}$ be on $p_{j} p_{j + 1}$.
Obviously, $\left< q'_1 q'_2, q'_3 q'_4, ..., q'_{2d - 3} q'_{2d - 2} \right>$
is a DLC of $\left< p_1, p_2, ..., p_{j + 1} \right>$.
By induction hypothesis, $F[j][d - 1]$ is filled with
a point on or before $q'_{2d - 2}$.
Since $q'_{2d - 1} q'_{2d}$ is a valid link, where $q'_{2d - 1}$
appears after $q'_{2d - 2}$ on $P$,
there is a valid link from $q'_{2d - 2} p_{j+1}$ to $p_{i - 1} p_i$,
and $F[i][d]$ is filled in the dynamic programming algorithm.
\end{proof}
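The table-filling procedure of Theorem~\ref{tmain} can be summarised in
the following high-level Python sketch. The helpers \texttt{flp1}
(computing $\mathrm{FLP}_1$ of a sub-curve together with its link, as in
Corollaries~\ref{rlinkfind} and~\ref{ccorner}) and \texttt{earlier}
(the order of two points along $P$) are assumed rather than implemented:
\begin{verbatim}
def min_dlc(P, flp1, earlier):
    # F[i][d]: FLP_d of the prefix <p_1, ..., p_i>, or None if no
    #          DLC with d links exists; L[i][d]: its last link.
    # flp1(curve, ...) -> (first_link_point, link) or None  [assumed]
    # earlier(x, y)    -> True if x precedes y along P       [assumed]
    n = len(P)
    F = [[None] * (n + 1) for _ in range(n + 1)]
    L = [[None] * (n + 1) for _ in range(n + 1)]
    for i in range(2, n + 1):
        res = flp1(P[:i], start_at_first_vertex=True)  # Cor. rlinkfind
        if res is not None:
            F[i][1], L[i][1] = res
    for d in range(2, n + 1):
        for i in range(2, n + 1):
            for j in range(2, i):
                if F[j][d - 1] is None:
                    continue
                res = flp1([F[j][d - 1]] + P[j:i])
                if res is not None and \
                   (F[i][d] is None or earlier(res[0], F[i][d])):
                    F[i][d], L[i][d] = res
    # the smallest d with F[n][d] filled is the size of a minimum DLC;
    # the chain itself is recovered by walking L backwards.
    d = next((d for d in range(1, n + 1) if F[n][d] is not None), None)
    return d, F, L
\end{verbatim}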
\begin{thm}
\label{tmaincomp}
A curve-restricted simplification of a polygonal curve
$P = \left< p_1, p_2, ..., p_n \right>$
can be computed in $O(n^5 \log n)$,
such that its number of links is at most twice the number
of the links of an optimal simplification.
\end{thm}
\begin{proof}
Let $D$ be the DLC of $P$ with $k$ links
computed using Theorem~\ref{tmain}.
We can obtain a curve-restricted simplification
$P'$ from $D$ with $m = 2k$ links (Proposition~\ref{rdlc}).
Let $O$ be a curve-restricted simplification of $P$ with
the minimum number of links $x$.
Based on Definition~\ref{ddlc}, $O$ is also a DLC of $P$.
Since $D$ is a DLC with the minimum number of links,
$x \ge k$. This implies $2x \ge 2k = m$.
\end{proof}
\section{Concluding Remarks}
\label{sconclude}
Although the min-\# curve-restricted simplification of
polygonal curves can reduce the number of the vertices of the
curves much better than vertex-restricted simplification, the time complexity of the
algorithm presented in this paper is not very appealing for
real-world applications. A faster approximate or exact algorithm
may fill this gap.
\section{Fidelity susceptibility for the $S=1$ $J_1-J_3$ chain in the vicinity of the phase transition}
\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{fidelity_susceptibility_new2}
\caption{(Color online) Fidelity susceptibility [Eq.~\eqref{eq:fsusc}] as a function of $J_3 / J_1$ for different system sizes in the vicinity of the phase transition $J_3 / J_1 \approx 0.11$. The inset shows the finite-size scaling of the peak position.}
\label{fig:fsusc}
\end{figure}
In this section, we discuss in more detail our findings for the fidelity susceptibility \cite{PhysRevE.74.031123} ($J_1 \equiv 1$ in the following)
\begin{equation}
\chi(J_3) = 2 \, \frac{1 - \left| \langle \psi_0(J_3) | \psi_0(J_3+\delta J_3) \rangle \right|}{L \, (\delta J_3)^2},
\label{eq:fsusc}
\end{equation}
with $|\psi_0(J_3) \rangle$ the ground state of the $J_1-J_3$ chain.
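Numerically, Eq.~\eqref{eq:fsusc} is evaluated directly from two
ground-state vectors at nearby couplings; a minimal Python/NumPy sketch
(obtaining the normalised ground states themselves, e.g.\ by exact
diagonalization or DMRG, is assumed) reads:
\begin{verbatim}
import numpy as np

def fidelity_susceptibility(psi0, psi1, L, dJ3):
    # psi0, psi1: normalised ground states at J3 and J3 + dJ3
    overlap = abs(np.vdot(psi0, psi1))
    return 2.0 * (1.0 - overlap) / (L * dJ3 ** 2)
\end{verbatim}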
According to the analysis of Refs.~\onlinecite{PhysRevLett.99.095701,PhysRevB.76.180403} and the numerical findings of Refs.~\onlinecite{PhysRevA.77.012311,PhysRevB.77.245109,PhysRevB.78.115410,PhysRevE.76.022101,PhysRevA.84.043601}, in the thermodynamic limit $\chi(J_3)$ should either possess a divergence or a minimum at the critical point, depending on the values of the scaling dimensions and of the critical exponents.
The results of Fig.~\ref{fig:fsusc} indicate that in the present case a peak develops.
We perform an extrapolation of the peak with system size and find that in the thermodynamic limit indeed a divergence is obtained.
As can be seen in the inset of Fig.~\ref{fig:fsusc}, extrapolating the peak position leads to a value $J_c \approx 0.11$, in agreement with the findings for the dimerization and for the correlation length.
\section{Finite size results for the central charge}
To characterize the transition, we have computed the central charge $c$ from the block entropy of the system, $S_\ell = - {\rm Tr} \varrho_\ell \ln \varrho_\ell$, with $\varrho_\ell$ the reduced density matrix of a subsystem of size $\ell$.
For a gapless 1D system with periodic boundary conditions, it behaves as \cite{Cardy}
\begin{equation}
S_\ell = \frac{c}{3} \ln \left[ \frac{L}{\pi} \sin \left( \frac{\pi \ell}{L}\right) \right] + g_{\rm PBC}.
\label{eq:entropy}
\end{equation}
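In practice, $c$ is extracted by fitting the measured block entropies to
Eq.~(\ref{eq:entropy}); the following Python sketch of such a fit is ours
(the array \texttt{S} of block entropies $S_\ell$, $\ell=1,\dots,L-1$,
obtained from DMRG, is assumed as input):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_central_charge(S, L):
    # S[l-1]: block entropy of a subsystem of size l (PBC chain)
    ell = np.arange(1, L)
    x = np.log((L / np.pi) * np.sin(np.pi * ell / L))
    cardy = lambda x, c, g: (c / 3.0) * x + g
    (c, g), cov = curve_fit(cardy, x, S[:L - 1])
    return c, g
\end{verbatim}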
\begin{figure}
\includegraphics[width=0.45\textwidth]{scaling_centralcharge}
\caption{(Color online)
Central charge obtained for systems of size $L=50$, $80$ and $100$ from a fit to Eq. (\ref{eq:entropy}) after different numbers of sweeps of the DMRG algorithm. The error bars only take into account the precision of the fit for a given
number of sweeps. As one can see from the evolution of the value of the central charge with the number of sweeps, the values for $L=80$ and $L=100$ have not fully converged due to the truncation of the Hilbert space, and the values
for the largest number of sweeps we could achieve are probably underestimates. Because of these uncertainties,
a meaningful finite-size scaling is not possible. Still, these results are clearly compatible with a central charge $c=3/2$.}
\label{fig:c_scaling}
\end{figure}
Due to the limitations of the DMRG when dealing with systems with PBC, we are restricted to system sizes of the order of 100 lattice sites.
In Fig.~\ref{fig:c_scaling}, we show the results for systems with $L=50$, $80$ and $100$ lattice sites including
the error bars that come from the fit to the Calabrese-Cardy formula of Eq. (\ref{eq:entropy}). Another source of error comes from the truncation of the Hilbert space
in the DMRG algorithm. To give the reader an idea of this error, we have plotted the central charge obtained after
different numbers of DMRG sweeps, up to the maximal number of sweeps we could achieve. On the very fine scale of the plot, the change in the central charge is negligible for 50 sites, but it is already noticeable for 80 sites and quite significant for 100 sites. Given these uncertainties, and the smallness of the deviations from $c=3/2$, it does not appear meaningful to perform a finite-size extrapolation. This should be contrasted with the spin-1/2 case treated in \cite{Nishimoto}, where a finite-size analysis could be performed thanks to the good convergence achieved up to 144 sites.
Moreover, one should keep in mind that the critical ratio $J_3/J_1$ is not known exactly, which is another potential source of error. Still, the results clearly point to a central charge $c=3/2$.
\section{Gap and condition on interchain coupling and on temperature}
At the Majumdar-Ghosh point, as in the case of the $J_1-J_2$ model, the gap is expected to
be a significant fraction of $J_1$ \cite{white_affleck}. We have checked this expectation with DMRG in the $S=1$ case.
Indeed, as shown in Fig. \ref{fig:gap}, the gap increases very fast above $J_3/J_1=0.11$ to reach values
of the order of $0.7J_1$ around the MG point, above which it decreases to stabilize around
$0.3\sqrt{J_1^2+J_3^2}$.
\begin{figure}
\includegraphics[width=0.45\textwidth]{gap}
\caption{(Color online) Gap of the system as a function of $J_3 / J_1$ for different system sizes. }
\label{fig:gap}
\end{figure}
In antiferromagnets, the conditions to observe the dimerization are thus expected to be met
in sufficiently anisotropic systems and provided one can reach temperatures smaller than the
dominant coupling constant, which is essentially always the case.
In cold atoms, to suppress the interchain coupling, one should work in a 1D trap.
The condition on the temperature should be translated into a condition on entropy. It is not possible
to be quantitative without calculating the temperature dependence of the entropy, a calculation
far beyond the scope of the present analysis, but to reach a temperature equal to a fraction of
the main coupling constant means to work with an entropy per site equal to a fraction of $\ln (2S+1)$.
This is the typical condition to observe antiferromagnetic correlations in cold-atom realizations of the
Heisenberg model, and this is an issue on which the experimental community is currently actively working.
\section{Derivation of the $J_1-J_3$ model from a two-orbital Hubbard model}
The present section is devoted to the derivation of the effective $J_1-J_3$ model from
a microscopic Hubbard model.
In solid state physics spin-1 systems may be realized in the case of transition metal
compounds: If there are 2 degenerate orbitals due to the crystal field splitting and if the
Hund's coupling $J_h$ and the Coulomb repulsion are large enough,
at half-filling (i.e. two spin-1/2 per transition metal ion) the system consists of
localized spin S=1 moments.
Assuming this to be the case, we derive an effective spin S=1
Hamiltonian up to fourth order in degenerate perturbation theory
in the strong coupling limit of a two-orbital Hubbard model on a chain,
and we show under which conditions on the original microscopic parameters (hopping integrals,
Coulomb repulsion, Hund's coupling) this effective Hamiltonian reduces
to a $J_1-J_3$ model with nearest-neighbor coupling and three-body interaction terms.
\subsection{Generalized Hubbard model}
Our starting point is the following Hamiltonian at half-filling:
\begin{eqnarray}
\mathcal{H}_{Hb} &=& \sum_{i,j} \sum_{m,m'}\sum_{\sigma} t_{m,m'}^{ij}c_{i m \sigma}^{\dagger}c_{j m' \sigma} \nonumber \\
&+&\frac{1}{2}\sum_{i}\sum_{m,m'}\sum_{\sigma ,\sigma'}U_{m m'}n_{i m \sigma}n_{i m' \sigma'} \nonumber \\
&+&\frac{1}{2}\sum_{i}\sum_{m\neq m'}\sum_{\sigma\neq\sigma'}\{J_{h}n_{im\sigma}n_{im'\sigma} \nonumber \\
&+&J_{h}c_{im\sigma}^{\dagger}c_{im\sigma'}c_{im'\sigma'}^{\dagger}c_{im'\sigma} \nonumber \\
&+& 2 J_{p}c_{im'\sigma'}^{\dagger}c_{im'\sigma}^{\dagger}c_{im\sigma'}c_{im\sigma}\}
\label{eq:2bandHM}
\end{eqnarray}
where $i,j$ are the site indices, $m,m'$ refer to the orbitals $a$ and $b$,
and $\sigma$ to the electronic spin. The hopping integrals between
two neighboring orbitals are denoted by $t_{m,m'}^{ij}$, the on-site Coulomb repulsions
by $U_{mm'}$, $J_{h}$ represents the Hund's coupling
and $J_{p}$ the pair hopping amplitude. Furthermore,
we assume that additional relations, typical of cubic symmetry, are
satisfied, namely $U_{aa}=U_{bb}$ and $U=U_{aa}-2J_{h}$. This kind of
Hamiltonian has been extensively discussed in the context of systems with
orbital degeneracy \cite{fazekas,castellani}.
\subsection{Effective spin-model to fourth order in degenerate perturbation theory}
Using degenerate perturbation theory, the effective spin model of Eq.~\eqref{eq:2bandHM}
on a chain takes the form
\begin{eqnarray}
H&=&H^{(2)}+H^{(4)} \nonumber \\
H^{(2)} &=& J^{(2)}_{\text{Heis}} \sum_{<i,j>} ({\bf S}_i \cdot {\bf S}_j) \nonumber \\
H^{(4)}&=& \sum_{<i,j>}\left( J^{(4)}_{\text{Heis}}{\bf S}_i \cdot {\bf S}_j + J^{(4)}_{\text{Biqu}}({\bf S}_i \cdot {\bf S}_j)^2\right) \nonumber \\
&+& \sum_{\ll i,j \gg} J^{(4)}_{\text{nn}} ({\bf S}_i\cdot{\bf S}_j) \nonumber \\
&+& \sum_{<i,j,k>}J^{(4)}_{\text{3}} \left[({\bf S}_i\cdot {\bf S}_j)({\bf S}_j\cdot{\bf S}_k)+h.c. \right] , \nonumber \\
\label{eq:spinHam}
\end{eqnarray}
where ${\bf S}$ are spin-1 operators. The sum over $<i,j>$ runs over nearest-neighbor
pairs, the one over $\ll i,j\gg$ runs over next-nearest neighbor pairs, while the one over $<i,j,k>$ runs over all sequences of three spins.
The effective Hamiltonian consists of different terms. At lowest order, one recovers the
Heisenberg model with a nearest-neighbor spin coupling $J^{(2)}_{\text{Heis}}$ which gets
renormalized at fourth order by the coefficients $J^{(4)}_{\text{Heis}}$.
At fourth order, two other 2-site terms appear: A next-nearest neighbor coupling $J^{(4)}_{\text{nn}}$, as
in the S=$\frac{1}{2}$ case \cite{Delannoy}, and a biquadratic coupling $J^{(4)}_{\text{Biqu}}$
typical of $S=1$ systems.
Finally, there is an additional three-site interaction $J^{(4)}_{\text{3}}$ which cannot be reformulated
as a 2-site operator. Let us mention that these terms, which appear in the perturbation expansion
of the Hubbard model, have also been extracted from ab initio calculations
in a different context \cite{Graaf}.
In order to have more compact expressions, we introduce the following relations:
\begin{eqnarray*}
\frac{t_{aa}^4+t_{bb}^4}{2}&=&t_1^4\\
\frac{t_{aa}^2t_{ab}^2+t_{bb}^2t_{ab}^2}{2}&=&t_2^4\\
t_{aa}^2t_{bb}^2&=&t_{2p}^4\\
t_{aa}t_{bb}t_{ab}^2&=&t_4^4\\
\end{eqnarray*}
The various coefficients of \eqref{eq:spinHam} can then be expressed in terms of the microscopic
parameters of the original Hubbard model as sums of terms classified according to the combination
of $U$, $J_h$ and $J_p$ that appears in the denominator:
\begin{widetext}
\begin{eqnarray*}
J^{(2)}_{\text{Heis}} &= &\frac{\text{t}_{aa}^2+2 \text{t}_{ab}^2+\text{t}_{bb}^2}{2 \text{J}_h+U} \\
J^{(4)}_{\text{Heis}} &=&
\frac{-8 \text{t}_1^4-32 \text{t}_2^4-12 \text{t}_{2p}^4+8 \text{t}_4^4-20 \text{t}_{ab}^4}{(2 \text{J}_h+U)^3}+
\frac{2 \text{t}_1^4-2 \text{t}_{2p}^4}{\text{J}_h (2 \text{J}_h+U)^2}+
\frac{-16 \text{t}_2^4-16 \text{t}_4^4}{(-3 \text{J}_h-2 \text{J}_p) (2 \text{J}_h+U)^2}\\
\\&+&
\frac{-8 \text{t}_{2p}^4+16 \text{t}_4^4-8 \text{t}_{ab}^4}{(-4 \text{J}_h-3 U) (2 \text{J}_h+U)^2}+
\frac{-4 \text{t}_1^4+8 \text{t}_4^4-4 \text{t}_{ab}^4}{(-4 \text{J}_h-U) (2 \text{J}_h+U)^2}+\\&+&
\frac{-16 \text{t}_2^4-4 \text{t}_{2p}^4-8 \text{t}_4^4-4 \text{t}_{ab}^4}{(-5 \text{J}_h-2 \text{J}_p-U) (2 \text{J}_h+U)^2}+
\frac{-4 \text{t}_{2p}^4+8 \text{t}_4^4-4 \text{t}_{ab}^4}{(-5 \text{J}_h+2 \text{J}_p-U) (2 \text{J}_h+U)^2}
\\
J^{(4)}_{\text{Biqu}}&=&
\frac{2 \text{t}_1^4+8 \text{t}_2^4+2 \text{t}_{2p}^4+4 \text{t}_{ab}^4}{(2 \text{J}_h+U)^3}+
\frac{-16 \text{t}_2^4-16 \text{t}_4^4}{(-3 \text{J}_h-2 \text{J}_p) (2 \text{J}_h+U)^2}+
\frac{16 \text{t}_2^4-16 \text{t}_4^4}{(-5 \text{J}_h-2 \text{J}_p) (2 \text{J}_h+U)^2}+\\&+&
\frac{\frac{3 \text{t}_1^4}{2}+2 \text{t}_2^4-
\frac{5 \text{t}_{2p}^4}{2}-\text{t}_{ab}^4}{\text{J}_h (2 \text{J}_h+U)^2}+
\frac{4 \text{t}_{2p}^4-8 \text{t}_4^4+4 \text{t}_{ab}^4}{(-6 \text{J}_h+4 \text{J}_p) (2 \text{J}_h+U)^2}+\\&+&
\frac{4 \text{t}_{2p}^4+8 \text{t}_4^4+4 \text{t}_{ab}^4}{(-6 \text{J}_h-4 \text{J}_p) (2 \text{J}_h+U)^2}+
\frac{-2 \text{t}_{2p}^4+4 \text{t}_4^4-2 \text{t}_{ab}^4}{U (2 \text{J}_h+U)^2}
\\
J^{(4)}_{\text{3}} &=&
\frac{4 \text{t}_1^4+16 \text{t}_2^4+2 \text{t}_{2p}^4+4 \text{t}_4^4+6 \text{t}_{ab}^4}{(2 \text{J}_h+U)^3}+
\frac{\text{t}_1^4-\text{t}_{2p}^4}{\text{J}_h (2 \text{J}_h+U)^2}+
\frac{-8 \text{t}_2^4-8 \text{t}_4^4}{(-3 \text{J}_h-2 \text{J}_p) (2 \text{J}_h+U)^2}+\\&+&
\frac{4 \text{t}_{2p}^4-8 \text{t}_4^4+4 \text{t}_{ab}^4}{(-2 \text{J}_h-3 U) (2 \text{J}_h+U)^2}+
\frac{2 \text{t}_1^4-4 \text{t}_4^4+2 \text{t}_{ab}^4}{(-4 \text{J}_h-U) (2 \text{J}_h+U)^2}+
\frac{8 \text{t}_2^4+2 \text{t}_{2p}^4+4 \text{t}_4^4+2 \text{t}_{ab}^4}{(-5 \text{J}_h-2 \text{J}_p-U) (2 \text{J}_h+U)^2}+\\&+&
\frac{2 \text{t}_{2p}^4-4 \text{t}_4^4+2 \text{t}_{ab}^4}{(-5 \text{J}_h+2 \text{J}_p-U) (2 \text{J}_h+U)^2}
\end{eqnarray*}
\begin{eqnarray*}
J^{(4)}_{\text{nn}} &=&\frac{-4 \text{t}_1^4-16 \text{t}_2^4-8 \text{t}_4^4-4 \text{t}_{ab}^4}{(2 \text{J}_h+U)^3}+
\frac{-2 \text{t}_1^4+2 \text{t}_{2p}^4}{\text{J}_h (2 \text{J}_h+U)^2}+
\frac{16 \text{t}_2^4+16 \text{t}_4^4}{(-3 \text{J}_h-2 \text{J}_p) (2 \text{J}_h+U)^2}+\\&+&
\frac{-4 \text{t}_{2p}^4+8 \text{t}_4^4-4 \text{t}_{ab}^4}{(-2 \text{J}_h-3 U) (2 \text{J}_h+U)^2}+
\frac{-2 \text{t}_1^4+4 \text{t}_4^4-2 \text{t}_{ab}^4}{(-4 \text{J}_h-U) (2 \text{J}_h+U)^2}+
\frac{-8 \text{t}_2^4-2 \text{t}_{2p}^4-4 \text{t}_4^4-2 \text{t}_{ab}^4}{(-5 \text{J}_h-2 \text{J}_p-U) (2 \text{J}_h+U)^2}+\\&+&
\frac{-2 \text{t}_{2p}^4+4 \text{t}_4^4-2 \text{t}_{ab}^4}{(-5 \text{J}_h+2 \text{J}_p-U) (2 \text{J}_h+U)^2}.
\end{eqnarray*}
\end{widetext}
\section*{Introduction}
The number of sources generating multimedia streaming data has increased enormously in recent decades. Multimedia streaming data are highly relevant to computerized systems, social media, web forums, computer vision applications, and daily digital news, to name just a few. Multimedia streaming problems may be approached with intelligent strategies and algorithms along with different network models, which require accurate evaluation and validation.
Performance evaluation of network applications has received much less attention than the applications themselves. Although application developers try to improve the quality of user experience with their products based on the feedback they get from customers, experimental evaluation of this group of applications needs more attention. Time efficiency and accuracy are two important factors that directly affect the quality of a user's experience with network applications \cite{apt5}, \cite{apt1}, \cite{apt2}, \cite{apt3}, \cite{apt4}.
The motivation of this research study is to revisit state-of-the-art real-time multimedia streaming systems to provide better knowledge of and insights into performance evaluation criteria, and to address both challenges and possible improvements in the literature. In the current paper, we limit ourselves to a survey of experimental evaluations and measurements of internet-based applications only.
\section*{Related Work}
\par Akhshabi \textit{et al.} performed an experimental evaluation of rate adaption algorithms for streaming over HTTP~\cite{akhshabi2011experimental}~\cite{akhshabi2012experimental}.
The study experimentally evaluates three common video streaming applications under different ranges of bandwidth values. Results of this experiment show that TCP congestion control and its reliable nature do not necessarily affect the performance of this group of streaming applications. The interaction of rate-adaption logic and TCP congestion control is an area of investigation that has been left for the future.
\par Cicco \textit{et al.} performed an experimental investigation of the Google Congestion Control (GCC) in the RTCWeb IETF WG~\cite{de2013experimental}. In this experiment, the authors implemented a controlled testbed. Results of this experimental study show that GCC works well, but it does not achieve fair bandwidth utilization when the bandwidth is shared by two GCC flows or by one GCC and one TCP flow.
\par Cicco \textit{et al.} have also performed an experimental investigation on the High Definition (HD) video distribution of Akamai~\cite{de2010experimental}. The authors explained the details of Akamai's client-server protocol, which implements a quality adaption algorithm: each video is encoded at a number of different bit rates and all encodings are stored at the server, which selects the bitrate matching the bandwidth measured from the signal received from the client, so that the bitrate level changes with the available bandwidth. The authors also evaluated the dynamics of the algorithm in three different scenarios.
\par Lukasz Budzisz \textit{et al.} proposed a delay-based congestion control solution for streaming applications~\cite{budzisz2011fair}. Using this system results in shorter queues and low delay in homogeneous networks, and balanced flows in heterogeneous networks mixing delay-based and loss-based flows. The authors argue that the system achieves these properties under a wide range of loss values and that it outperforms TCP flows; they demonstrate the aforementioned properties using experiments and analysis.
\par Yang Xu \textit{et al.} performed a measurement study on Google+, Skype, and iChat~\cite{xu2012video}, explaining the anatomy of these applications. Using passive and active experiments, the authors explored performance details of these applications such as video generation techniques, strategies to recover from packet loss, and end-to-end delay metrics. Based on this experiment, the server location has a significant impact on user performance and also on packet-loss recovery in server-based applications. This experiment also argues that using batched re-transmissions can be a better solution than Forward Error Correction (FEC) in real-time applications. FEC is an error control technique for streaming over unreliable network connections.
\par A mesh-pull-based P2P video streaming solution that uses Fountain codes is presented by Oh \textit{et al.}~\cite{oh2011mesh}. The proposed system offers fast and smooth streaming with low complexity. Evaluations show that the system performs better than existing buffer-map-based streaming systems when packet loss happens. Considering jitter as another parameter and evaluating the behavior of the proposed system under different jitter values could be an extension of the study.
\par The performance of Skype's FEC mechanism has been measured by Te-Yuan Huang \textit{et al.}~\cite{huang2010could}. The authors measured the amount of redundancy introduced by the FEC mechanism and the trade-off between the quality of the users' experience and this redundancy. The study tries to find an optimal level of redundancy to attain the highest quality of experience.
\par Using Fountain Multiple Description Coding (MDC) in video streaming applications over heterogeneous peer-to-peer networks is evaluated by Smith \textit{et al.}~\cite{smith2012limit}. This experimental study concludes that Fountain MDC codes are a good option in such cases, but there are some restrictions that should be considered in real-world P2P streaming systems.
\par Assefi \textit{et al.} performed an experimental evaluation of real-time cloud speech recognition applications. They used speech recognition applications as a tool for measuring the quality of user experience under difficult network conditions. They also presented a network coding solution to improve the accuracy and delay of the streaming system under different values of packet loss and jitter~\cite{assefi2015impact,assefimeasuring,assefi2015experimental}.
\par Te-Yuan Huang \textit{et al.} performed an experimental study on the voice rate adaption of Skype under different network conditions~\cite{huang2009tuning}. Results of this experiment show that using a public domain codec is not a perfect solution to achieve user satisfaction. In this study, the authors considered different values of packet loss in the experiment and proposed a model to reduce the redundancy added for packet-loss recovery.
\par An experimental study on the performance of multipath TCP over wireless networks is performed by Chen \textit{et al.}~\cite{chen2013measurement}. They measured the delays resulting from different cellular data providers. Results of this study show that MPTCP provides robust data transport under various network traffic conditions. Studying the energy-cost and performance trade-offs should be considered as possible future work for this study.
\par An algorithm tolerating non-congestion-related packet loss is proposed by Hayes \textit{et al.}~\cite{hayes2010improved}. The experimental evaluation of the algorithm shows that it improves throughput by 150\% under a packet loss of 1\% and improves capacity sharing by more than 50\% with respect to other solutions.
\par A set of experiments to assess the quality of experience of television and mobile applications is presented by Winkler \textit{et al.}~\cite{winkler2003video,winklr2003video}. The proposed experiment considers different values of bit rates, codecs, contents, and network traffic conditions. The authors used Single Stimulus Continuous Quality Evaluation (SSCQE) and Double Stimulus Impairment Scale (DSIS) on the same set of inputs, compared the results, and analyzed methods to compare the performance of codecs.
\par A framework for measuring users' QoE is proposed by Kuan-Ta Chen \textit{et al.}~\cite{chen2009oneclick}. The framework is called OneClick, and it provides a dedicated key that can be pressed by users every time they feel unsatisfied with the quality of the streaming media. OneClick is implemented on two applications: shooter games and instant messaging.
\par Another setup quantifying the quality of a user's experience is proposed by Kuan-Ta Chen \textit{et al.}~\cite{chen2009crowdsourceable}. This system is able to verify inputs from each participant, so it supports crowd-sourcing. Participation is made easy with this setup, and it also generates interval-scale scores. The authors argue that other researchers can use this framework to improve the quality of a user's experience without affecting the results, while achieving a higher diversity of users and keeping the cost low.
\par Finally, Vukobratovic \textit{et al.} proposed a multicast streaming solution based on Expanding Window Fountain (EWF) codes for real-time multicast~\cite{vukobratovic2009scalable}. Using Raptor-like precoding is addressed as a potential improvement in this area.
\nocite{assefi2012optimizing}
\section*{Conclusion}
In this study, the general trends in streaming applications were presented along with the issues and challenges associated with achieving an efficient and reliable streaming system. Current streaming systems and their respective features were reviewed, related studies in the area were discussed, and open research areas were pointed out. Potential solutions to some of the challenges and issues associated with streaming systems, recent advances in the area, and some of the most promising experiments and studies in the field were also presented.
\section*{Extensivity of the entropy}
Let us now address point (i). We consider the classical Hamiltonian mentioned above, with (attractive) potential $V(r)$ which diverges for any dimensionless distance $r<1$, and precisely equals $-A/r^\alpha$ for $r \ge 1$ with $\alpha \ge 0$ and $A>0$.
As mentioned, we verify that the potential energy per particle $U(N)/N \propto \int_1^\infty dr r^{d-1}r^{-\alpha}$ diverges for $0 \le \alpha/d \le 1$ (long-range interactions), and converges for $\alpha/d>1$ (short-range interactions).
We also verify straightforwardly that $\int_1^{N^{1/d}} dr \, r^{d-1}r^{-\alpha} =\tilde{N} / d$, where $\tilde{N} \equiv \frac{N^{1-\alpha/d}-1}{1-\alpha/d}$ \cite{JundKimTsallis1995,Tsallis2009}.
This expression implies that $U(N)$ is extensive (i.e., $\propto N$) for $\alpha/d>1$, and nonextensive otherwise.
More precisely, $U(N)$ is proportional to $N \ln N$ for $\alpha/d=1$, and proportional to $N^{2-\alpha/d}$ for $0 \le \alpha/d <1$.
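These regimes can be checked directly from the definition of $\tilde{N}$
(a small Python check of our own; $\alpha/d=1$ is handled as the limit
$\tilde{N}=\ln N$):
\begin{verbatim}
import math

def N_tilde(N, a):                      # a = alpha/d
    if a == 1.0:
        return math.log(N)              # limiting case
    return (N ** (1 - a) - 1) / (1 - a)

# N_tilde tends to the constant 1/(a-1) for a > 1, so U(N) ~ N*N_tilde/d
# is extensive there; it grows like ln N at a = 1 and like
# N^(1-a)/(1-a) for 0 <= a < 1.
for a in (0.0, 0.5, 1.0, 2.0):
    print(a, [round(N_tilde(N, a), 3) for N in (10**2, 10**4, 10**6)])
\end{verbatim}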
Let us now focus on a generic thermodynamical potential
\begin{eqnarray}
G(N,T,p,\mu,...)=U(N,T,p,\mu,...) -TS(N,T,p,\mu,...)+pV(N,T,p,\mu,...) -\mu N - \cdots \,,
\end{eqnarray}
where $T, p, \mu$ are the temperature, pressure, chemical potential, and $U,S,V,N$ are the internal energy, entropy, volume, number of particles.
For a classical model in the above short/long range class, the thermodynamic limit corresponds to taking $N\to\infty$ in the following expression:
\begin{eqnarray}
\frac{G(N,T,p,\mu,...)}{N\tilde{N}}=\frac{U(N,T,p,\mu,...)}{N\tilde{N}} - \frac{T}{\tilde{N}} \frac{S(N,T,p,\mu,...)}{N}+\frac{p}{\tilde{N}}\frac{V(N,T,p,\mu,...)}{N} - \frac{\mu}{\tilde{N}} \frac{N}{N} - \cdots \,,
\end{eqnarray}
hence
\begin{eqnarray}
&&g(\tilde{T},\tilde{p},\tilde{\mu},...)=u(\tilde{T},\tilde{p},\tilde{\mu},...) -\tilde{T} s(\tilde{T},\tilde{p},\tilde{\mu},...)+\tilde{p}v(\tilde{T},\tilde{p},\tilde{\mu}, ...) - \tilde{\mu} - \cdots \,,
\label{tilde}
\end{eqnarray}
with $(\tilde{T},\tilde{p},\tilde{\mu},...) \equiv \left(\frac{T}{\tilde{N}}, \frac{p}{\tilde{N}}, \frac{\mu}{\tilde{N}},...\right)$. The correctness of these (conjectural) scalings has been profusely verified in the literature (in ferrofluid \cite{JundKimTsallis1995}, fluid \cite{Grigera1996}, magnetic \cite{CannasTamarit1996,CannasTamarit1996_2, CannasTamarit1996_3}, diffusive \cite{CondatRangelLamberti2002}, percolation \cite{RegoLucenaSilvaTsallis1999_2} systems, among others; see \cite{Tsallis2009} for an overview).
In all these cases, it has been verified that {\it finite} equations of states are obtained, for both short- and long-range interactions, in the $N\to\infty$ limit when using these rescaled thermodynamic variables, whereas the use of the standard (i.e., non rescaled, or equivalently, rescaled with $\tilde{N}$ for $\alpha/d >1$) variables works (naturally) correctly for short-range interactions, but fails for long-range ones.
What Eq.~(\ref{tilde}) implies is that $S,V,N,...$ play totally analogous thermodynamical roles, in particular that they are {\it extensive in all cases} (which, for $N$, is verified by mere construction). In contrast, $U$, $G$ and all thermodynamic potentials are extensive for short-range interactions, but are superextensive for long-range interactions.
Analogously, $T,p,\mu,...$ are intensive for short-range interactions, but scale with $\tilde{N}$ for long-range interactions.
We see therefore that the traditional thermodynamical variables which are extensive for short-range interactions split into two classes for long-range ones.
The first class corresponds to energies (which become super-extensive in the long-range case). The second class corresponds to those variables which appear, within the usual Legendre transformations, in thermodynamically conjugated pairs. This class remains extensive even in the long-range case.
The entropy belongs to this class.
A second argument pointing towards the correctness of using nonadditive entropic forms in order to re-establish the entropic extensivity of the system can be found in the mutually consistent results achieved by Hanel and Thurner \cite{HanelThurner2011, HanelThurner2011_2} by focusing on the Khinchine axioms and on complex systems with surface-dominant statistics.
As a third indication we can refer to the analogy with the time $t$ dependence of the entropy of simple nonlinear dynamical systems, e.g., the logistic map.
Indeed, for the parameter values for which the system has positive Lyapunov exponent (i.e., strong chaos and ergodicity), we verify $S_{BG} \propto t$ (under appropriate mathematical limits), but for parameter values where the Lyapunov exponent vanishes (i.e., weak chaos and breakdown of ergodicity), it is a nonadditive entropy ($S_q$, discussed below) the one which grows linearly with $t$ (see \cite{BaldovinRobledo2004} and references therein).
If we take into account that, in many such dynamical systems, $t$ plays a role analogous to $N$ in thermodynamical systems, we have here one more indication which aligns with the extensivity of the entropy for complex systems.
\section*{Nonadditive entropies}
Let us now turn onto the above mentioned point (ii), namely the fact that entropies generalizing that of BG become necessary in order to recover thermodynamic extensivity for nonstandard systems.
As a possibility for addressing complexities such as those illustrated above it was proposed in 1988~\cite{Tsallis1988} (see also \cite{GellMannTsallis2004,TsallisGellMannSato2005,Tsallis2009}) a generalization of the BG theory, currently referred to as nonextensive statistical mechanics. It is based on the nonadditive entropy
\begin{equation}
S_q=k_B\frac{1-\sum_{i=1}^Wp_i^q}{q-1}
= k_B \sum_{i=1}^W p_i \ln_q \frac{1}{p_i} \;\;\; \left(q \in {\cal R}; \, \sum_{i=1}^W p_i=1 \right),
\label{qentropy}
\end{equation}
with $\ln_q z \equiv \frac{z^{1-q}-1}{1-q}$ ($\ln_1 z=\ln z$). $S_q$ recovers $S_{BG}= - k_B \sum_{i=1}^W p_i \ln p_i$ for $q\to 1$. If $A$ and $B$ are two probabilistically independent systems (i.e., $p_{ij}^{A+B}=p_i^Ap_j^B$, $\forall (i,j)$), definition (\ref{qentropy}) implies
\begin{equation}
\frac{S_q(A+B)}{k_B} = \frac{S_q(A)}{k_B}+ \frac{S_q(B)}{k_B}
+ (1-q)\frac{S_q(A)}{k_B}\frac{S_q(B)}{k_B} \,.
\end{equation}
In other words, according to the definition of entropic additivity in \cite{Penrose1970}, $S_q$ is additive if $q=1$, and nonadditive otherwise.
If probabilities are all equal, we straightforwardly obtain
\begin{equation}
S_q=k_B \ln_q W \,.
\end{equation}
If we extremize (\ref{qentropy}) with a (finite) constraint on the width of the probability distribution $\{p_i\}$ (in addition to its normalization), we obtain
\begin{equation}
p_i=\frac{e_q^{-\beta_q\,E_i}}{\sum_{j=1}^W e_q^{-\beta_q\,E_j}} \,,
\label{pq}
\end{equation}
$e_q^z$ being the inverse of the $q$-logarithmic function, i.e., $e_q^z \equiv [1+(1-q)z]^{1/(1-q)}$ ($e_1^z=e^z$); $\{E_i\}$ are the energy levels; $\beta_q$ is an effective inverse temperature.
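For reference, the $q$-deformed functions and the entropy of
Eq.~(\ref{qentropy}) can be coded in a few lines; the following is a
minimal Python sketch of ours, with $k_B=1$ and the usual cutoff
$e_q^z=0$ wherever $1+(1-q)z\le 0$:
\begin{verbatim}
import numpy as np

def ln_q(z, q):
    # q-logarithm; ln_1 z = ln z
    z = np.asarray(z, dtype=float)
    return np.log(z) if q == 1 else (z ** (1 - q) - 1) / (1 - q)

def exp_q(z, q):
    # q-exponential, inverse function of ln_q (with cutoff)
    z = np.asarray(z, dtype=float)
    if q == 1:
        return np.exp(z)
    base = np.maximum(1 + (1 - q) * z, 0.0)
    return base ** (1 / (1 - q))

def S_q(p, q):
    # nonadditive entropy of Eq. (qentropy), k_B = 1
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.sum(p * ln_q(1.0 / p, q)))
\end{verbatim}
With these routines one verifies directly, for two independent
probability distributions, the pseudo-additivity relation
$S_q(A+B)=S_q(A)+S_q(B)+(1-q)S_q(A)S_q(B)$ quoted above.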
Complexity frequently emerges in natural, artificial and social systems. It may be caused by various geometrical-dynamical ingredients, which include non-ergodicity, long-term memory, multifractality, and other spatial-temporal long-range correlations between the elements of the system. During the last two decades, many such phenomena have been successfully approached in the frame of nonextensive statistical mechanics. Predictions, verifications and various applications have been performed in high-energy physics \cite{CMS1, ALICE1_3, ATLAS, PHENIX, ShaoYiTangChenLiXu2010}, spin-glasses \cite{PickupCywinskiPappasFaragoFouquet2009}, cold atoms in optical lattices \cite{DouglasBergaminiRenzoni2006}, trapped ions \cite{DeVoe2009},
anomalous diffusion \cite{AndradeSilvaMoreiraNobreCurado2010}, dusty plasmas \cite{LiuGoree2008}, solar physics \cite{BurlagaVinasNessAcuna2006, BurlagaVinasNessAcuna2006_6},
relativistic and nonrelativistic nonlinear quantum mechanics \cite{NobreMonteiroTsallis2011}, among many others.
If the $N$ elements of the physical system are independent (or quasi-independent in some sense), we have that
\begin{equation}
W(N) \sim A \xi^N \;\;\; (A>0;\, \xi>1; \, N\to\infty) \,.
\label{A}
\end{equation}
Therefore, by illustrating the present point for the particular case of equal probabilities, we immediately verify that
\begin{equation}
S_{BG}(N) = k_B \ln W(N) \sim k_B (\ln \xi) N \;\;\;(N\to\infty)\,,
\end{equation}
hence thermodynamical extensivity is satisfied. This reconfirms that, for such systems, the thermodynamically admissible
entropy is precisely given by the additive one, $S_{BG}$, as well known. If, however, strong correlations are present (of the type assumed in the $q$-generalization of the Central Limit and L\'evy-Gnedenko Theorems \cite{UmarovTsallisSteinberg2008, UmarovTsallisSteinberg2008_2}), we can have
\begin{equation}
W(N) \sim B N^\rho \;\;\; (B>0; \, \rho>0; \, N \to\infty) \,.
\label{B}
\end{equation}
In this case, we straightforwardly verify that, for $q=1-\frac{1}{\rho}$,
\begin{equation}
S_{q}(N) = k_B \ln_q W(N) \propto N \;\;\;(N\to\infty)\,,
\end{equation}
which satisfies thermodynamical extensivity, in contrast with $S_{BG}(N) \propto \ln N$, which violates it.
Probabilistic and physical models which belong to this class are respectively available in \cite{TsallisGellMannSato2005} and \cite{CarusoTsallis2008}.
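This extensivity is immediate to check numerically (equal probabilities,
$k_B=1$, reusing \texttt{ln\_q} and NumPy from the sketch above):
\begin{verbatim}
rho = 3.0
q = 1 - 1 / rho
for N in (10**2, 10**4, 10**6):
    W = float(N) ** rho           # W(N) ~ N^rho  (B = 1)
    print(N, ln_q(W, q) / N,      # S_q / N  -> rho : extensive
             np.log(W) / N)       # S_BG / N -> 0  : nonextensive
\end{verbatim}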
It is clear that, for $N>>1$, expression (\ref{B}) becomes increasingly smaller than (\ref{A}). A similar situation occurs for
\begin{equation}
W(N) \sim C \nu^{N^\gamma}\;\;\; (C>0\, ; \nu > 1; \, 0 < \gamma < 1) \,,
\label{C}
\end{equation}
which also becomes increasingly smaller than (\ref{A}) (though larger than (\ref{B})).
The entropy associated with $\gamma \to 1$ is of course $S_{BG}$. What about $0< \gamma <1$?
The answer is in fact already available in the literature (footnote of page 69 in \cite{Tsallis2009}), namely,
\begin{equation}
S_\delta=k_B \sum_{i=1}^W p_i \left(\ln\frac{1}{p_i} \right)^\delta \;\;\; (\delta > 0) \,.
\label{deltaentropy}
\end{equation}
The case $\delta=1$ recovers $S_{BG}$.
This entropy is, like $S_q$ for $q>0$, concave for $0< \delta \le 1+ \ln W$.
And, also like $S_q$ for $q \ne 1$, it is nonadditive for $\delta \ne 1$.
Indeed, for probabilistically independent systems $A$ and $B$ (hence $W^{A+B}=W^AW^B$), we verify $S_{\delta}(A+B)\neq S_{\delta}(A)+S_{\delta}(B)$.
For equal probabilities we have
\begin{equation}
S_\delta = k_B \ln^\delta W \,,
\end{equation}
hence, for $\delta >0$,
\begin{equation}
\frac{S_\delta(A+B)}{k_B} = \left\{ \left[ \frac{S_\delta(A)}{k_B} \right]^{1/\delta} + \left[ \frac{S_\delta(B)}{k_B} \right]^{1/\delta} \right\}^\delta \,.
\label{deltaentropy2}
\end{equation}
It is easily verified that, if $W(N)$ satisfies (\ref{C}), $S_\delta(N)$ is extensive for $\delta=1/\gamma$. This is in fact true even if
\begin{equation}
W(N) \sim \phi(N)\nu^{N^\gamma} \;\;(\nu>1; \, 0<\gamma<1) \,,
\label{stretched}
\end{equation}
$\phi(N)$ being any function satisfying $\lim_{N\to\infty} \frac{\ln \phi(N)}{N^\gamma}=0$.
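As a quick check, Eq.~(\ref{stretched}) gives $\ln W(N) \sim (\ln \nu)\, N^\gamma$, hence, for equal probabilities and $\delta=1/\gamma$,
\begin{equation*}
S_\delta(N) = k_B \left[\ln W(N)\right]^{1/\gamma} \sim k_B (\ln \nu)^{1/\gamma} N \;\;\;(N\to\infty)\,,
\end{equation*}
the condition on $\phi(N)$ guaranteeing that this factor only contributes subleading corrections.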
Let us now unify $S_q$ (Eq.~(\ref{qentropy})) and $S_\delta$ (Eq.~(\ref{deltaentropy})) as follows:
\begin{eqnarray}
S_{q,\delta}
= k_B \sum_{i=1}^W p_i \left(\ln_q \frac{1}{p_i}\right)^\delta \; \left(q \in {\cal R}; \, \delta>0; \sum_{i=1}^W p_i=1 \right)
\label{qdeltaentropy}
\end{eqnarray}
$S_{q,1}$ and $S_{1,\delta}$ respectively recover $S_q$ and $S_\delta$; $S_{1,1}$ recovers $S_{BG}$. Obviously this entropy is nonadditive unless $(q,\delta)=(1,1)$, and it is expansible, $\forall (q>0,\delta>0)$.
It is concave for all $q>0$ and $0<\delta \le (qW^{q-1}-1)/(q-1)$.
In the limit $W \to\infty$, this condition becomes $0<\delta \le 1/(1-q), \,\forall q \in(0,1)$, and any $\delta>0$ for $q\ge 1$; see Figs. \ref{figSdelta_3D} and \ref{figqdelta}.
By the way, several two-parameter entropic functionals different from $S_{q,\delta}$ are in fact available in the literature
(see, for instance, \cite{HanelThurner2011, HanelThurner2011_2}, and also \cite{Tempesta2011}).
In particular the asymptotic behaviours of $S_{q,\delta}$ in the thermodynamic limit coincide, for all values of $(q,\delta)$, with those of the recently introduced Hanel-Thurner entropy $S_{c,d}$ \cite{HanelThurner2011, HanelThurner2011_2} for appropriate values of $(c,d)$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\linewidth]{fig1_3D_COLOR.eps}
\end{center}
\caption{Entropy $S_{\delta}$ as a function of the index $\delta$ and the probability $p$ of a binary variable ($W=2$). Concavity is lost for $\delta > 1 + \ln 2$.
}
\label{figSdelta_3D}
\end{figure}
\section*{Discussion and conclusion}
We can address now the area law.
It has been verified for those anomalous $d$-dimensional systems that essentially yield $\ln W(N) \propto L^{d-1} \propto N^{(d-1)/d}$, which implies that $W(N)$ is of the type indicated in (\ref{stretched}).
Therefore, $S_\delta =S_{1,\delta}$ for $\delta = d/(d-1)$ is extensive, thus satisfying thermodynamics.
Consistently,
this new black-hole entropy is \emph{extensive for arbitrary distributions of probabilities},
and, for equal probabilities, it can be connected with the well-known Bekenstein-Hawking entropy $S_{BH}$ through
\begin{equation}
\frac{S_{\delta=d/(d-1)}}{k_B} \propto \left(\frac{ S_{BH}}{k_B}\right)^{d/(d-1)} \hspace{1cm} (d=3),
\label{eq:qdeltaS_E_BHS}
\end{equation}
where
\[
S_{BH} = \frac{k_B}{4} \frac{A_H}{G\hbar/c^3}\,,
\]
$A_H$ being the event horizon area.
It is important to stress that Eq.~\eqref{eq:qdeltaS_E_BHS} has \emph{not} been imposed in an \emph{ad hoc} manner just to transform $L^{d-1}$ into $L^d$\,: it has been derived from a new entropic functional, namely $S_{\delta}$.
This entropy $S_{\delta}$ has been defined under the assumption that the current black-hole result $\ln W \propto A_{H}$ is correct.
Also, by using the fact that $d/(d-1) > 0$, we verify that $S_{\delta=d/(d-1)}$ increases monotonically with $S_{BH}$.
This may be seen as a consistent test in what concerns the second principle of the thermodynamics, namely that whenever $S_{BH}$ increases with time, $S_{\delta=d/(d-1)}$ does the same.
At the present state of knowledge we cannot exclude the possibility of extensivity of $S_{q,\delta}$ for other special values of $(q,\delta)$, particularly in the limit $\delta \to\infty$. Indeed, assume for instance that we have $\phi(N) \propto N^\rho$ in (\ref{stretched}), and take the limit $\gamma \to 0$, hence $\delta \to\infty$. The condition $\lim_{N\to\infty} \frac{\ln \phi(N)}{N^\gamma}=0$ is satisfied for any $\gamma >0$, but it is violated for $\gamma =0$, which opens the door for $S_q$, or some other nonadditive entropic functional, being the thermodynamically appropriate entropy.
For example, for the $d=1$ gapless fermionic system in \cite{CarusoTsallis2008}, we have analytically proved the extensivity of $S_q$ for a specific value of $q<1$ which depends on the central charge of the universality class that we are focusing on.
For the $d=2$ gapless bosonic system in \cite{CarusoTsallis2008}, we have numerically found that, once again, it is $S_q$ (with a value of $q<1$) the entropy which is extensive and consequently satisfies thermodynamics. This kind of scenario might be present in many $d$-dimensional physical systems for which $\ln W(N) \propto \ln_{2-d} N$ (i.e., $\propto \ln L$ for $d=1$, and $\propto L^{d-1}$ for $d>1$).
Summarizing, classical thermodynamics, and the thermostatistics of a wide class of systems whose elements are strongly correlated
(for instance, through long-range interactions, or through strong quantum entanglement, or both, such as black holes, quantum gravitational dense systems, and others)
can be reconciled (along lines similar to those illustrated in \cite{GellMannTsallis2004,TsallisGellMannSato2005,CarusoTsallis2008} for simple cases).
It is enough to use for these complex systems the nonadditive entropies such as $S_{q, \delta}$, instead of the usual Boltzmann-Gibbs-von Neumann one.
\begin{figure}
\begin{center}
\includegraphics[width=0.60\linewidth]{fig2_COLOR.eps}
\end{center}
\caption{Parameter space $(q,\delta)$ of the entropy $S_{q,\delta}$. At the point $(1,1)$ we recover the BG entropy $S_{BG}$.
At $\delta =1$ ($q=1$) we recover the nonadditive entropy $S_q$ ($S_\delta$). For any fixed $W$ there is a frontier $q(\delta)$ such that, for $\delta$ values at its left, the entropy $S_{q,\delta}$ is concave, and, at its right, it is neither concave nor convex. The $W=2$ and $W \to\infty$ frontiers are indicated in the plot. The entropy $S_\delta$ is concave for $0<\delta\leq1+\ln W$. If we impose the extensivity of $S_{q,\delta}$ for the class of systems represented by Eq.~(\ref{stretched}), it must be $\delta = 1/\gamma \ge 1$. If $S_{q,\delta}$ is used for other purposes, the region $0<\delta<1$ is accessible as well.}
\label{figqdelta}
\end{figure}
\clearpage
\section{Introduction}
Compressive sensing~(CS) has become an important tool in modern signal processing.
It makes it possible to identify sparse solutions of underdetermined linear systems~\cite{donoho2006compressed}. Under the assumption that the original signal is sparse in some transform domain~\cite{mallat1999wavelet}, CS requires fewer measurements to reconstruct the original signal than expected by the Nyquist sampling theorem~\cite{donoho2006compressed}. Compressive sensing methods have successfully been applied in various fields, including single-pixel cameras~\cite{duarte2008single}, magnetic resonance imaging~\cite{lustig2007sparse}, and seismic imaging~\cite{hennenfent2008simply}.
\begin{figure}[t]
\centering
\includegraphics[width=0.52\textwidth]{./figures/lowrank_prior.png}
\caption{Illustration of the low-rank prior in the image patches. As DUNs operate on image patches, large images are divided into multiple image patches of size 33$\times$33 (red boxes). A singular value decomposition~(SVD) is then performed on all patches per image and the corresponding singular values are plotted in the second row. It can be clearly seen that the singular values decrease quickly, which indicates the low-rank nature of the images.
}
\label{fig:lowrankprior}
\end{figure}
Mathematically, for an original signal $\bar{\textbf{x}} \in \mathbb{R}^N$, the observation $\textbf{y} = \boldsymbol{\Phi} \bar{\textbf{x}} \in \mathbb{R}^M$ is obtained after sampling through a measurement matrix $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$, where $M \ll N$.\footnote{Note that for $\bar{\textbf{x}} \in \mathbb{R}^{\sqrt{N}\times\sqrt{N}}$ and $\operatorname{vec}(\bar{\textbf{x}})\in \mathbb{R}^N$ being the vectorization of $\bar{\textbf{x}}$ we have $\|\bar{\textbf{x}}\|_{\text{F}} =\|\operatorname{vec}(\bar{\textbf{x}} )\|_2$.} Here,~$\frac{M}{N}$ is denoted as the so-called CS ratio. Given a matrix $\boldsymbol{\Phi}$ and $\textbf{y}$, the goal of compressed sensing is to reconstruct $\bar{\textbf{x}}$ under some sparseness assumptions, such as structural sparsity~\cite{usman2011k}, dictionary sparsity~\cite{ravishankar2010mr}, and low-rankness~\cite{ravishankar2017low}.
We can write CS as an optimization problem of the form
\begin{equation}
\hat{\textbf{x}} =
\arg \min \limits_{\textbf{x} \in \mathbb{R}^N}
\left[
\frac{1}{2} \left\| \boldsymbol{\Phi} \textbf{x} - \textbf{y} \right\|^2_{\text{F}} + \lambda \mathcal{G}(\textbf{x})
\right]
\label{eq:csdefinition}
\end{equation}
for $\textbf{y} = \boldsymbol{\Phi} \bar{\textbf{x}}$, where $\left\| \cdot \right\|_{\text{F}}$ is the Frobenius-norm, $\mathcal{G}$ a sparseness constraint function, and $\lambda$ controls the penalty strength.
In recent years, with the advancement of neural networks, data-driven CS reconstruction methods have made great progress. In general, they can be divided into two categories: deep non-unfolding networks~(DNUNs) and deep unfolding networks~(DUNs). DNUNs learn the mapping between the observed signal $\textbf{y}$ and the original signal $\bar{\textbf{x}}$ directly from training examples~\cite{shi2019image}.
In contrast, DUNs consider the optimization problem given by Eq.~\eqref{eq:csdefinition} and map the iterative optimization algorithm used to solve Eq.~\eqref{eq:csdefinition}
to a deep neural network architecture. Usually, $K$ optimization steps are mimicked by means of $K$ sequential blocks (reconstruction stages) in the network~\cite{zhang2020amp}.
DUNs learn the matrix $\boldsymbol{\Phi}$, the regularization function $\mathcal{G}$, and parameters of the underlying optimization process simultaneously end-to-end by minimizing an objective function of the form
\begin{equation}
\frac{1}{{2\ndata}}\sum^{{\ndata}}_{i=1} \left\| \bar{\textbf{x}}_i - f_{\text{DUN}}(\boldsymbol{\Phi} \bar{\textbf{x}}_i)\right\|_{\text{F}}^2 \enspace,
\label{eq:originproblem_2}
\end{equation}
where $f_{\text{DUN}}$ denotes the neural network and $\left\{ \bar{\textbf{x}}_i \right\}^{\ndata}_{i=1}$ are~$\ndata$ training instances.
The architecture of $f_{\text{DUN}}(\bar{\textbf{x}}_i)$ results from unfolding the iterative optimization of
Eq.~\eqref{eq:csdefinition}; the parameters of the network encode (among others) $\boldsymbol{\Phi}^{\text{T}}$ as well as~$\mathcal{G}$ (in our case, $\nabla\mathcal{G}$). Thus, in contrast to model-based CS, the measurement matrix as well as the sparsity regularization are not given \emph{a priori} but are learned from data.
Because of their excellent reconstruction performance~\cite{zhang2018ista, zhang2020optimization}, DUNs have become state of the art in image CS.
However, DUNs usually constrain the signals $\textbf{x}$ to be sparse in some transform domain, ignoring other intrinsic properties, such as low-rankness. The manipulation of image patches in CS has become common practice, which makes the low-rank property more prominent. The singular value curves of several image patches are shown in Fig.~\ref{fig:lowrankprior}. The rapid decay of the singular values towards $0$ indicates the low-rank nature of the images at hand. We extend Eq.~\eqref{eq:csdefinition} by an additional term that reflects the low-rankness of the input signals, i.e.,
\begin{equation}
\hat{\textbf{x}} = \arg \min \limits_{\textbf{x} \in \mathbb{R}^N} \left[ \frac{1}{2} \left\| \boldsymbol{\Phi} \textbf{x} - \textbf{y} \right\|^2_{\text{F}} + \lambda \mathcal{G}(\textbf{x}) + \mu \mathcal{R}(\textbf{x}) \right] \enspace,
\label{eq:lrdefinition}
\end{equation}
where $\mathcal{R}(\textbf{x})$ is a function that increases with the rank of the signal $\textbf{x}$ and $\mu$ controls the penalty strength.
In this paper, we propose an optimization-based deep unfolding network for image compressive sensing, dubbed LR-CSNet, by exploring the low-rank prior of the input images from the perspective of neural networks.
Our main contributions can be summarized as follows:
\begin{enumerate}
\item For the problem formulation, we establish a tractable constraint on the low-rank component and derive its iterative optimization process via variable splitting.
\item We propose LR-CSNet to unfold the iterative optimization process into multiple reconstruction stages and learn an end-to-end mapping between the observations and the original signal.
\item We design a low-rank generation module (LRGM) to learn the low-rank components, as well as a gradient descent and proximal mapping (GDPM) module to refine details of the reconstructed image. Furthermore, we enhance the network representation by feature transmission.
\item We demonstrate via extensive experiments that LR-CSNet exhibits superior performance on natural image datasets compared to state-of-the-art approaches.
\end{enumerate}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{./figures/network_arch.png}
\caption{Illustration of the LR-CSNet network architecture and its modules, comprising $K$ reconstruction stages in total. From top to bottom are the overall network architecture and the LRGM as well as the GDPM of the $k$th reconstruction stage, with the legend and the dense block~(DB) of the GDPM on the right. Specifically, $\textbf{v}^k = \rho^k_1 \textbf{x}^{k-1} + \rho^k_2 \textbf{z}^{k-1} + \left( 1 - \rho^k_1 - \rho^k_2 \right) \textbf{l}^{k}$. $+$ and $-$ indicate the sign of the branch in the summation.}
\label{fig:networkarch}
\end{figure*}
\section{Related Work}
\subsection{Deep Unfolding Networks}
DUNs emulate iterative optimization methods through neural networks and have been successfully applied to image inverse problems~\cite{liu2021stochastic}. For CS, neural networks were combined with the alternating direction method of multipliers~(ADMM) for efficient MRI reconstruction \cite{yang2018admm}. The authors of \cite{borgerding2017amp} learn the sparse linear inverse problem from the perspective of approximate message passing~(AMP), and follow-up work studied image CS intensively~\cite{zhang2020amp}. ISTA-Net+~\cite{zhang2018ista} focuses on modelling the iterative shrinkage thresholding algorithm~(ISTA) with neural networks, whereas OPINE-Net~\cite{zhang2020optimization} obtained satisfactory results using a binary trainable sampling matrix. While the other models were trained with a fixed CS ratio, the ISTA-Net++~\cite{you2021ista} model trains at multiple CS ratios, reducing computational cost. Finally, COAST~\cite{you2021coast} is able to handle arbitrary sampling matrices and achieves promising results.
\subsection{Low-Rank Representation}
Low-rank representations characterize high-dimensional data with fewer vectors and effectively reveal the overall data structure~\cite{liu2010robust, kong2021infrared}. They are widely applied in fields such as image restoration~\cite{gu2014weighted, han2021fine}, background modelling~\cite{wei2017self}, infrared small target detection~\cite{zhang2021infrared, zhang2019infrared, wang2021infrared}, and image compressive sensing~\cite{ravishankar2017low}.
However, since the rank function is non-convex, its tightest convex relaxation is widely adopted, i.e., the nuclear norm $\left\| \textbf{x} \right\|_* = \sum_i \sigma_i$, where $\sigma_i$ are the singular values.
Even though low-rank representations helped model-driven approaches to achieve great success, the necessity of the singular value decomposition~(SVD) greatly limits computational efficiency. The SVD also complicates the integration of the low-rankness condition into a neural network.
\cite{cai2021learned} consider the low-rank term to be the product of two sub-matrices. Inspired by the tensor CP (CANDECOMP/PARAFAC) decomposition, Chen \textit{et al.}~\cite{chen2020tensor} treat a tensor of rank $r$ as the sum of multiple rank-one tensors and apply it as attention in semantic segmentation. Although this problem has been partially studied, it remains a challenging task to learn low-rank priors more efficiently and to map them into DUNs in a principled way.
\section{Proposed LR-CSNet}
In this section, we first define the low rank constrained CS problem and then present our specific update process in terms of optimization.
Thereafter, we describe the process of mapping the optimization into neural networks and give details on the LR-CSNet. Finally, the training parameters and loss function are described in general.
We use the following notation. Plain font $\rho$ indicates scalars, bold lowercase $\textbf{x}$ indicates matrices and vectors, bold capital $\textbf{F}$ indicates deep features, and calligraphic font $\mathcal{G}$ indicates functions.
\subsection{Problem Formulation}
We constrain the Frobenius norm of the difference between the signal $\textbf{x}$ and a low-rank component $\textbf{l}$, rather than using the nuclear norm directly, in order to circumvent the costly SVD, i.e., $\mathcal{R}(\textbf{x}) = \frac{1}{2} \left\| \textbf{x} - \textbf{l} \right\|^2_{\text{F}}$.
Then we perform variable splitting to reduce the complex operations during optimization, specifically, we introduce an auxiliary variable $\textbf{z}$ as follows:
\begin{align}
\begin{aligned}
\min \limits_{\textbf{x}, \textbf{z}} & \left[\frac{1}{2} \left\| \boldsymbol{\Phi} \textbf{z} - \textbf{y} \right\|^2_{\text{F}} + \lambda \mathcal{G}(\textbf{x}) + \frac{\mu}{2} \left\| \textbf{z} - \textbf{l} \right\|^2_{\text{F}}\right] \quad \text{s.t.} \quad \textbf{x} = \textbf{z}
\label{eq:objectivefunction}
\end{aligned}
\end{align}
Subsequently, we optimize the unconstrained cost function in Eq. \eqref{eq:costfunction}, where $\mu$ and $\beta$ are the penalty parameters
\begin{equation}
\mathcal{L} \left( \textbf{x}, \textbf{z} \right) = \frac{1}{2} \left\| \boldsymbol{\Phi} \textbf{z} - \textbf{y} \right\|^2_{\text{F}} + \lambda \mathcal{G}(\textbf{x}) + \frac{\mu}{2} \left\| \textbf{z} - \textbf{l} \right\|^2_{\text{F}} + \frac{\beta}{2} \left\| \textbf{x} - \textbf{z} \right\|^2_{\text{F}}
\label{eq:costfunction}
\end{equation}
For a differentiable function $f(\textbf{x})$ with $\nabla f(\textbf{x})$ being $l$-Lipschitz continuous (i.e. $\forall \textbf{x}_1, \textbf{x}_2: \left\| \nabla f(\textbf{x}_1) - \nabla f(\textbf{x}_2) \right\|$ $\leq l \left\| \textbf{x}_1 - \textbf{x}_2 \right\|$, where $l$ is a constant), the Taylor expansion at $\textbf{x}_0$ leads to an upper bound
$f(\textbf{x}) \le \hat{f}(\textbf{x}, \textbf{x}_0) =
f(\textbf{x}_0) + \left< \nabla f(\textbf{x}_0), \textbf{x} - \textbf{x}_0 \right> + \frac{l}{2} \left\| \textbf{x} - \textbf{x}_0 \right\|^2 = \frac{l}{2} \left\| \textbf{x} - \textbf{x}_0 + \frac{1}{l} \nabla f \left( \textbf{x}_0 \right) \right\|^2_2 + C$, where $C = - \frac{1}{2l} \left\| \nabla f(\textbf{x}_0) \right\|^2 + f(\textbf{x}_0)$.
Further, we optimize the variables $\textbf{z}$ and $\textbf{x}$ separately. The low-rank component $\textbf{l}^k$ is hypothesised to be independent of $\textbf{z}$ and $\textbf{x}$, and is generated by the LRGM.
\paragraph{Updating $\textbf{z}^k$:} The optimization objective is given by
\begin{multline}
\textbf{z}^k = \arg \min \limits_{\textbf{z}} \bigg[\frac{1}{2} \left\| \boldsymbol{\Phi} \textbf{z} - \textbf{y} \right\|^2_{\text{F}} + \\ \frac{\mu}{2} \left\| \textbf{z} - \textbf{l}^{k} \right\|^2_{\text{F}} + \frac{\beta}{2} \left\| \textbf{x}^{k-1} - \textbf{z} \right\|^2_{\text{F}}\bigg]\enspace.
\label{eq:optimizez}
\end{multline}
In order to avoid complex operations such as matrix inversion in the update process, we perform the Taylor expansion at $\textbf{z}^{k-1}$ for the first term in Eq.~\eqref{eq:optimizez}, i.e., we replace $\frac{1}{2} \left\| \boldsymbol{\Phi} \textbf{z} - \textbf{y} \right\|^2_{\text{F}}$, whose gradient, being linear, is $l_1$-Lipschitz continuous, by $\frac{l_1}{4} \left\| \textbf{z} - \textbf{z}^{k-1} + \frac{1}{l_1} \boldsymbol{\Phi}^{\top} \left( \boldsymbol{\Phi} \textbf{z}^{k-1} - \textbf{y} \right) \right\|^2_{\text{F}} + C_1$, and get the update step using $s = l_1 + 2\mu + 2\beta$:
\begin{equation}
\textbf{z}^k = \frac{1}{s} \left( 2 \beta \textbf{x}^{k-1} + l_1 \textbf{z}^{k-1} + 2 \mu \textbf{l}^{k} - \boldsymbol{\Phi}^\top \boldsymbol{\Phi} \textbf{z}^{k-1} + \boldsymbol{\Phi}^\top \textbf{y} \right)
\label{eq:updatez}
\end{equation}
\paragraph{Updating $\textbf{x}^k$:} The optimization objective is given by
\begin{equation}
\textbf{x}^k = \arg \min \limits_{\textbf{x}} \left[\lambda \mathcal{G}(\textbf{x}) + \frac{\beta}{2} \left\| \textbf{x} - \textbf{z}^{k} \right\|^2_{\text{F}}\right]\enspace.
\label{eq:optimizex}
\end{equation}
As a function enforcing the signal to be sparse in some transform domain, $\mathcal{G} (\textbf{x})$ is not given a specific mathematical form. Similarly, we perform a Taylor expansion of $\mathcal{G} (\textbf{x})$ at $\textbf{x}^{k-1}$, which converts it into a form involving $\nabla \mathcal{G} (\textbf{x})$ with an L2 norm constraint, and arrive at
\begin{equation}
\textbf{x}^k = \frac{\lambda l_2}{\lambda l_2 + \beta} \textbf{x}^{k-1} + \frac{\beta}{\lambda l_2 + \beta} \textbf{z}^{k} - \frac{\lambda}{\lambda l_2 + \beta} \nabla \mathcal{G} \left( \textbf{x}^{k-1} \right)\enspace.
\label{eq:updatex}
\end{equation}
We replace the unknown function $\nabla \mathcal{G} (\textbf{x})$ with convolutional layers in LR-CSNet, which is consistent with the assumption that $\nabla \mathcal{G} (\textbf{x})$ is $l_2$-Lipschitz continuous.
\paragraph{Overall:} In end-to-end learning, we can treat the composite penalty parameters as learnable variables, so the overall optimization steps are
\begin{align}
\begin{aligned}
\textbf{z}^k = & \rho^k_1 \textbf{x}^{k-1} + \rho^k_2 \textbf{z}^{k-1} + \left( 1 - \rho^k_1 - \rho^k_2 \right) \textbf{l}^{k} \\
& - \eta^k \boldsymbol{\Phi}^\top \boldsymbol{\Phi} \textbf{z}^{k-1} + \eta^k \boldsymbol{\Phi}^\top \textbf{y} \\
\textbf{x}^k = & \alpha^k \textbf{x}^{k-1} + \left( 1 - \alpha^k \right) \textbf{z}^{k} - \gamma^k \nabla \mathcal{G} \left( \textbf{x}^{k-1} \right) \enspace,
\label{eq:updateprocess}
\end{aligned}
\end{align}
where $\rho_1 = \frac{2 \beta}{l_1 + 2\mu + 2\beta}$, $\rho_2 = \frac{l_1}{l_1 + 2\mu + 2\beta}$, $\eta = \frac{1}{l_1 + 2\mu + 2\beta}$, $\alpha = \frac{\lambda l_2}{\lambda l_2 + \beta}$ and $\gamma = \frac{\lambda}{\lambda l_2 + \beta}$. These parameters are trained independently in each reconstruction stage.
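For illustration, a minimal PyTorch-style sketch of one such update of Eq.~\eqref{eq:updateprocess} is given below; the function name, the tensor layout (vectorized patches), and the module \texttt{grad\_G} standing in for $\nabla \mathcal{G}$ are assumptions made for readability rather than our exact implementation.
\begin{verbatim}
import torch

def stage_update(x, z, l, y, Phi, grad_G,
                 rho1, rho2, eta, alpha, gamma):
    # z-update: convex combination of x, z and the low-rank
    # component l, plus a gradient step on the data term
    z_new = (rho1 * x + rho2 * z + (1 - rho1 - rho2) * l
             - eta * (Phi.t() @ (Phi @ z)) + eta * (Phi.t() @ y))
    # x-update: proximal-style refinement, with the learned
    # network grad_G playing the role of the gradient of G
    x_new = alpha * x + (1 - alpha) * z_new - gamma * grad_G(x)
    return x_new, z_new
\end{verbatim}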
\subsection{Network Architecture}
\label{subsect:networkarch}
In this section we elaborate on the network architecture and module design of LR-CSNet based on the optimization process of Eq.~\eqref{eq:updateprocess}.
As shown in Fig.~\ref{fig:networkarch}, given an original signal $\bar{\textbf{x}} \in \mathbb{R}^{\sqrt{N}\times \sqrt{N}}$, we perform sampling and end-to-end image reconstruction through the network.
During sampling, the original image $\bar{\textbf{x}}$ passes through a convolutional layer with a kernel size and stride of 33, where the input and output channels are $1$ and $M$, respectively. In this way, the sampling process for $\textbf{y} = \boldsymbol{\Phi} \bar{\textbf{x}}$ is simulated and the observation $\textbf{y} \in \mathbb{R}^{1\times 1\times M}$ is obtained.
In the reconstruction phase, $\textbf{y}$ is passed through a convolutional layer with kernel size and stride of $1$, where the input and output channels are $M$ and $33\times 33$, respectively. This operation is used to simulate $\textbf{x}^0 = \boldsymbol{\Phi}^{\top} \textbf{y}$, where the convolutional layer shares weights with the one in the sampling process. Then the reconstructed signal is reshaped to $\textbf{x}^0 \in \mathbb{R}^{\sqrt{N}\times \sqrt{N}}$ by $\textit{PixelShuffle}(33)$~\cite{zhang2020optimization}. The reconstructed image is then passed through $K$ reconstruction stages to simulate the iterative updates. Each reconstruction stage consists of two modules: LRGM and GDPM.
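A hedged sketch of this sampling and initialization pipeline is shown below; the $33\times33$ patch convention follows the text, while the image size, the CS ratio, and the variable names are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

B, M = 33, 272                 # patch size; M ~ 25% of 33*33 (assumption)
sample = nn.Conv2d(1, M, kernel_size=B, stride=B, bias=False)
init = nn.Conv2d(M, B * B, kernel_size=1, bias=False)
# the 1x1 reconstruction conv shares weights with the sampling conv,
# simulating x0 = Phi^T y
init.weight.data = (sample.weight.data.view(M, -1).t()
                    .contiguous().view(B * B, M, 1, 1))
shuffle = nn.PixelShuffle(B)

x_bar = torch.rand(1, 1, 99, 99)   # toy image consisting of 3x3 patches
y = sample(x_bar)                  # simulates y = Phi x, shape 1 x M x 3 x 3
x0 = shuffle(init(y))              # initial reconstruction, 1 x 1 x 99 x 99
\end{verbatim}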
\subsubsection{Low-Rank Generation Module (LRGM)} LRGM is used to generate the low-rank matrix $\textbf{l}^k$ of the current stage, which contains the majority of the information in the background.
A low-rank matrix can be considered as the result of multiplying two sub-matrices together, i.e. $\textbf{l} = \textbf{p} \textbf{q}$, where $\textbf{p} \in \mathbb{R}^{\sqrt{N}\times r}$, $\textbf{q} \in \mathbb{R}^{r\times \sqrt{N}}$, and $r$ is the rank number.
LRGM takes the updated variables from the previous stage as input and concatenates it with the transferred tensor $\textbf{F}^{k-1}$ after one convolutional layer, as shown in Fig. \ref{fig:networkarch}.
Subsequently, the deep feature is adaptively pooled into two tensors of scale $\sqrt{N} \times r \times C$ and $r \times \sqrt{N} \times C$ according to the rank number $r$ respectively, where $C$ is the channel number.
The two sub-matrices $\textbf{p}$ and $\textbf{q}$ are obtained through two 1$\times$1 convolutional layers for feature separation and dimensionality reduction.
Finally, these sub-matrices are multiplied to obtain the updated low-rank matrix $\textbf{l}^k$.
In this way LRGM is able to guarantee that $\operatorname{rank}(\textbf{l}^k) \le r$.
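A minimal sketch of this factorization is given below (channel numbers, layer names, and the pooling choice are assumptions for illustration); the bound $\operatorname{rank}(\textbf{l}^k) \le r$ follows simply because the output is the product of a $\sqrt{N}\times r$ and an $r\times\sqrt{N}$ matrix.
\begin{verbatim}
import torch
import torch.nn as nn

class LRGM(nn.Module):
    def __init__(self, in_ch=64, ch=32, rank=8, size=99):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.pool_p = nn.AdaptiveAvgPool2d((size, rank))  # -> sqrt(N) x r
        self.pool_q = nn.AdaptiveAvgPool2d((rank, size))  # -> r x sqrt(N)
        self.to_p = nn.Conv2d(ch, 1, 1)   # 1x1 conv for dim. reduction
        self.to_q = nn.Conv2d(ch, 1, 1)

    def forward(self, feat):          # feat: B x in_ch x sqrt(N) x sqrt(N)
        f = self.conv(feat)
        p = self.to_p(self.pool_p(f))  # B x 1 x sqrt(N) x r
        q = self.to_q(self.pool_q(f))  # B x 1 x r x sqrt(N)
        return p @ q                   # low-rank l with rank <= r
\end{verbatim}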
\subsubsection{Gradient Descent and Proximal Mapping (GDPM)} GDPM is used to update the variables $\textbf{z}^k$ and $\textbf{x}^k$ according to Eq. \eqref{eq:updateprocess}, whereby the scalars are learnable variables.
In Fig.~\ref{fig:networkarch}, after obtaining $\textbf{z}^k$, it is concatenated with the low-rank matrix $\textbf{l}^k$ and passed through a convolutional layer.
$\textbf{l}^k$ contains more structured image information and can provide guidance for the neural network in learning image details. Then we simulate the function $\nabla \mathcal{G}$ with two dense blocks (DBs)~\cite{zhang2018residual}.
Since $\nabla \mathcal{G}$ is learning high-frequency details of the image, the residual connections of \cite{zhang2018residual} were not applied here. It is worth mentioning that a DB is essentially an accumulation of multiple convolutional layers, which is clearly Lipschitz continuous, guaranteeing the validity of this module. In addition, the last deep feature is concatenated with $\textbf{F}^{k-1}$ and the transferred tensor $\textbf{F}^{k}$ is updated through a $1\times 1$ convolutional layer. Finally, the transferred tensor $\textbf{F}^{k}$ and the updated variables $\textbf{z}^{k}$ and $\textbf{x}^{k}$ are delivered to the next reconstruction stage.
\subsection{Network Parameter and Loss Function}
The trainable parameters in LR-CSNet consist of four components: 1) the same measurement matrix $\boldsymbol{\Phi}$ in each reconstruction stage, 2) the auxiliary scalars $\rho_1$, $\rho_2$, $\eta$, $\alpha$, $\gamma$, 3) the network weights $\Theta_{l}$ in LRGM, and 4) the weights $\Theta_{g}$ in GDPM.
Thus, all training parameters are denoted as $\Theta = \{ \boldsymbol{\Phi} \} \cup \{ \rho^k_1, \rho^k_2, \eta^k, \alpha^k, \gamma^k \}^{K}_{k=1} \cup \{ \Theta^k_{l}, \Theta^k_{g} \}^{K}_{k=1}$, where $K$ is the total reconstruction stages.
$\boldsymbol{\Phi}$ and $\boldsymbol{\Phi}^{\top}$ share weights~\cite{zhang2020optimization}.
The loss function of the network, as is common practice~\cite{zhang2020optimization}, consists of two components for the given training data $\{ \bar{\textbf{x}}_i \}^{\ndata}_{i=1}$: the fidelity loss $\mathcal{L}_{\text{fidelity}}$ to ensure that the reconstruction result $\textbf{x}^K_i$ closely approximates the input $\bar{\textbf{x}}_i$ and the orthogonal loss $\mathcal{L}_{\text{orth}}$ to impose an orthogonality constraint on the measurement matrix. The combined loss function is
\begin{align}
\begin{aligned}
\mathcal{L} (\Theta) & = \mathcal{L}_{\text{fidelity}} + \tau \mathcal{L}_{\text{orth}} \\
& = \frac{1}{N\ndata} \sum^{\ndata}_{i=1} \left\| \textbf{x}^K_i - \bar{\textbf{x}}_i \right\|^2_{\text{F}} + \frac{\tau}{M^2} \left\| \boldsymbol{\Phi} \boldsymbol{\Phi}^{\top} - \textbf{E} \right\|^2_{\text{F}}\enspace,
\label{eq:loss}
\end{aligned}
\end{align}
where $\ndata$ is the total amount of training data, \textbf{E} the unit matrix, and $\tau$ a constant (set to $0.01$ for our experimental evaluation).
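For completeness, a compact sketch of this loss (with illustrative names, per batch) reads:
\begin{verbatim}
import torch

def lrcsnet_loss(x_K, x_bar, Phi, tau=0.01):
    M = Phi.shape[0]
    fidelity = torch.mean((x_K - x_bar) ** 2)        # ~ L_fidelity
    eye = torch.eye(M, device=Phi.device)
    orth = torch.sum((Phi @ Phi.t() - eye) ** 2) / M ** 2
    return fidelity + tau * orth                     # L_fid + tau * L_orth
\end{verbatim}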
\begin{table}
\renewcommand\arraystretch{1.3}
\begin{center}
\caption{Ablation study on the effects of our introduced model components with a fixed CS ratio of 25\%.
\label{Tab:Ablationmodules}}
\begin{tabular}{ccc|c|c}
\Xhline{1.3pt}
\multirow{2}{*}{LRGM} & \multirow{2}{*}{Dense} & \multirow{2}{*}{Trans} & \multicolumn{2}{c}{PSNR/SSIM} \\
\cline{4-5}
& & & Set11 & BSD68 \\
\hline
$\surd$ & - & - & 35.12/0.9536 & 31.87/0.9127 \\
- & $\surd$ & - & 35.27/0.9552 & 31.94/0.9136 \\
- & - & $\surd$ & 35.28/0.9547 & 31.95/0.9141 \\
\hline
- & $\surd$ & $\surd$ & 35.47/0.9565 & 32.09/0.9158 \\
$\surd$ & - & $\surd$ & 35.37/0.9556 & 32.02/0.9148 \\
$\surd$ & $\surd$ & - & 35.44/0.9561 & 32.00/0.9148 \\
\CC{90} $\surd$ & \CC{90} $\surd$ & \CC{90} $\surd$ & \CC{90} \textbf{35.54/0.9567} & \CC{90} \textbf{32.12/0.9162} \\
\Xhline{1.3pt}
\end{tabular}
\end{center}
\end{table}
\section{Experiments}
In this section, we first give details on the widely used datasets, the evaluation metrics, and the network implementation. Then, we demonstrate the validity of each module through extensive ablation studies and investigate the effect of key parameters. Finally, we compare LR-CSNet with other state-of-the-art methods in both quantitative and qualitative aspects to validate the performance of our approach.
\begin{figure}
\centering
\includegraphics[width=0.43\textwidth]{./figures/stage_rank_number.png}
\caption{Experiments on the number of reconstruction stages $K$ and the rank number $r$ in LRGM with a CS ratio of 25\% on Set11. We find a trade-off between performance and computational efficiency by setting $K=9$ and $r=8$ (see the increased marker size for $K$ and the dashed line for $r$).}
\label{fig:stagerank}
\end{figure}
\begin{table*}
\renewcommand\arraystretch{1.3}
\begin{center}
\caption{Quantitative comparison of average PSNR/SSIM for different CS ratios on the Set11, BSD68, and Urban100.
\label{table:comparesota}}
\begin{tabular}{c|c|cccccc}
\Xhline{1.3pt}
Dataset & Ratio & ISTA-Net+ & CSNet+ & AdapRecon & OPINE-Net+ & AMP-Net & LR-CSNet \\
\Xhline{1.3pt}
\multirow{5}{*}{Set11} & 1\% & 17.42/0.4029 & 19.87/0.4977 & 19.63/0.4848 & 20.02/0.5362 & 20.04/0.5132 & \CC{90} \textbf{20.85/0.5583} \\
& 4\% & 21.32/0.6037 & 23.93/0.7338 & 23.87/0.7279 & 25.69/0.7920 & 24.64/0.7527 & \CC{90} \textbf{26.16/0.8040} \\
& 10\% & 26.64/0.8087 & 26.04/0.7971 & 27.39/0.8521 & 29.81/0.8884 & 28.84/0.8765 & \CC{90} \textbf{30.35/0.8987} \\
& 25\% & 32.59/0.9254 & 29.98/0.8932 & 31.75/0.9257 & 34.86/0.9509 & 34.42/0.9513 & \CC{90} \textbf{35.64/0.9573} \\
& 50\% & 38.11/0.9707 & 34.61/0.9435 & 35.87/0.9625 & 40.17/0.9797 & 40.12/0.9818 & \CC{90} \textbf{41.03/0.9826} \\
\hline
\multirow{5}{*}{BSD68} & 1\% & 19.14/0.4158 & 21.91/0.4958 & 21.50/0.4825 & 21.88/0.5162 & 21.97/0.5086 & \CC{90} \textbf{22.32/0.5282} \\
& 4\% & 22.17/0.5486 & 24.63/0.6564 & 24.30/0.6491 & 25.20/0.6825 & 25.40/0.6985 & \CC{90} \textbf{25.53/0.6972} \\
& 10\% & 25.32/0.7022 & 27.02/0.7864 & 26.72/0.7821 & 27.82/0.8045 & 27.41/0.8036 & \CC{90} \textbf{28.21/0.8159} \\
& 25\% & 29.36/0.8525 & 30.22/0.8918 & 30.10/0.8901 & 31.51/0.9061 & 31.56/0.9121 & \CC{90} \textbf{32.12/0.9162} \\
& 50\% & 34.04/0.9424 & 34.82/0.9590 & 33.60/0.9479 & 36.35/0.9660 & 36.64/0.9707 & \CC{90} \textbf{37.29/0.9720} \\
\hline
\multirow{5}{*}{Urban100} & 1\% & 16.90/0.3846 & 19.26/0.4632 & 19.14/0.4510 & 19.38/0.4872 & 19.62/0.4967 & \CC{90} \textbf{19.65/0.4971} \\
& 4\% & 19.83/0.5377 & 21.96/0.6430 & 21.92/0.6390 & 23.36/0.7114 & 22.82/0.6963 & \CC{90} \textbf{23.41/0.7210} \\
& 10\% & 24.04/0.7378 & 24.76/0.7899 & 24.55/0.7801 & 26.93/0.8397 & 26.05/0.8287 & \CC{90} \textbf{27.41/0.8547} \\
& 25\% & 29.78/0.8954 & 28.13/0.8827 & 28.21/0.8841 & 31.86/0.9308 & 30.94/0.9273 & \CC{90} \textbf{32.50/0.9391} \\
& 50\% & 35.24/0.9614 & 32.97/0.9503 & 31.88/0.9434 & 37.23/0.9741 & 36.54/0.9744 & \CC{90} \textbf{37.87/0.9776} \\
\Xhline{1.3pt}
\end{tabular}
\end{center}
\end{table*}
\subsection{Datasets and Evaluation Metrics}
We test LR-CSNet on three natural image dataset benchmarks that are widely used in CS: Set11~\cite{Kulkarni_2016_CVPR}, BSD68~\cite{martin2001database}, and Urban100~\cite{huang2015single}. As training data, we use image patches $\{ \bar{\textbf{x}}_i \}^{\ndata}_{i=1}$ of size $33 \times 33$ as published in~\cite{zhang2018ista}, where the total number is $\ndata = 88912$. For fine-tuning, we train with an additional $36000$ image patches of size $99 \times 99$ from BSD300~\cite{martin2001database}, which is also publicly available.
As evaluation metrics, we choose peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), which are widely adopted in image restoration, with higher values of both indicating better reconstruction results.
\subsection{Implementation Details}
Our implementation is based on PyTorch~\cite{paszke2019pytorch} and all experiments are performed on an NVIDIA Titan RTX. We train the network on a set of CS ratios $\{1\%, 4\%, 10\%, 25\%, 50\%\}$, where we train 150 epochs using $33 \times 33$ image patches with a batch size of 128, followed by a fine-tuning phase of 100 epochs using $99 \times 99$ image patches with a batch size of 32. We optimize the parameters using Adam~\cite{kingma2014adam} with momentum terms $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The learning rate was set to a constant $10^{-4}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{./figures/recon_stage_compare.png}
\caption{Illustration of intermediate results on the Set11 image 'Monarch' with a CS ratio of 25\%. The feature maps are the outputs of $\nabla \mathcal{G}^k$ (upper); the reconstruction results $\textbf{x}^k$ with the metrics of the corresponding stage are shown below, where $k \in \{1, 3, 6, 9\}$.}
\label{fig:stageresults}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\textwidth]{./figures/sota_compare.jpg}
\caption{Visual comparison on 'img\_011' with CS ratio of 10\% (upper) and on 'img\_059' with CS ratio of 50\% (lower). The best performance is highlighted.}
\label{fig:qualitative}
\end{figure*}
\subsection{Ablation Study and Parameter Setting}
We present the ablation study in Figs.~\ref{fig:stagerank} and~\ref{fig:stageresults}, and Table~\ref{Tab:Ablationmodules} to explore the impact of each module and of changes in key parameters.
\subsubsection{Impact of LRGM} Table~\ref{Tab:Ablationmodules} explores the effectiveness of each module by comparing each possible combination. Removing 'LRGM' means removing the low-rank constraint from the problem formulation in Eq.~\eqref{eq:objectivefunction}, with the derivation and settings remaining the same as before. The results show that LRGM always contributes to the performance.
\subsubsection{Impact of Dense} Removing 'Dense' means replacing the dense blocks in Fig.~\ref{fig:networkarch} with 6 convolutional layers, which reduces the number of network parameters and the feature reuse capability. Again, the results demonstrate a degradation in performance without 'Dense'.
\subsubsection{Impact of Transmission} Transmission is the integration of deep features from the previous reconstruction stages into the current stage, which theoretically enables a more effective aggregation of information. Removing 'Trans' means removing the $\textbf{F}$ in Fig.~\ref{fig:networkarch}.
The experiments show that 'Trans' does improve the reconstruction accuracy of the network.
\subsubsection{Rank Number $r$} As shown in Fig.~\ref{fig:stagerank}, we employ ranks $r \in \{4,8,12,16,24,32\}$ for the LRGM module to analyze their impact.
In general, a larger rank number indicates that more information can potentially be learned.
However, Fig.~\ref{fig:stagerank} shows that $r = 8$ performs best.
This indicates that larger ranks lead to redundant information that does not help improve network performance. We therefore set $r = 8$.
\subsubsection{Stage Number $K$} We explore the performance gain from the number of reconstruction stages $K$. As shown in Fig.~\ref{fig:stagerank}, we set $K \in \{ 1,3,6,9,12 \}$. Increasing $K$ brings a significant gain, but also increases the number of network parameters. We found a good trade-off at $K = 9$.
\subsection{Comparison With State-of-the-Art}
\subsubsection{Quantitative Evaluation} We compare LR-CSNet with five state-of-the-art methods, including two DNUNs: CSNet+~\cite{shi2019image}, AdapRecon~\cite{lohit2018convolutional}, and three DUNs: ISTA-Net+~\cite{zhang2018ista}, OPINE-Net+~\cite{zhang2020optimization}, and AMP-Net~\cite{zhang2020amp}.
We summarize the evaluation metrics of these methods on multiple datasets in Table \ref{table:comparesota}.
It can be seen that deep non-unfolding networks stack more convolutional layers, which does not increase performance. ISTA-Net+ operates directly on deep features to simulate soft-thresholding, which limits its representation capability and results in poor performance. Meanwhile, OPINE-Net+ uses convolutional layers to simulate the analytical solution of an optimization problem such as the sum of an L2 norm and an L1 norm. AMP-Net focuses on removing the boundary effects between image patches using denoising techniques. These approaches ignore the low-rank properties of the image patches, so the resulting networks capture structural information only to a limited extent.
As shown in the table, LR-CSNet achieves the best reconstruction results at multiple CS ratios.
\subsubsection{Qualitative Evaluation}
Fig.~\ref{fig:stageresults} visualizes the reconstruction results at each stage, where later stages are reconstructed more accurately and the information learnt by $\nabla \mathcal{G}^k$ becomes increasingly detailed.
In addition, to illustrate the reconstruction quality of LR-CSNet more intuitively, we show the reconstruction results of state-of-the-art approaches and LR-CSNet on two images in Fig.~\ref{fig:qualitative}, where the red-boxed parts are enlarged and placed on the right side. The corresponding methods and evaluation metrics are listed below each image and the best values are highlighted. Compared to the other methods, LR-CSNet is better at capturing the overall structure of the image and retains detailed information. This is because the network takes into account the low-rank attributes of the image patches and uses $\nabla \mathcal{G}$ to learn high-frequency information, which leads to better reconstruction accuracy.
\section{Conclusion}
In this paper, we propose a deep unfolding network for natural image compressive sensing (CS) called LR-CSNet.
As real-world image patches are often well-represented by low-rank approximations, we add a low-rank prior to the CS reconstruction.
We unfold the corresponding iterative optimization problem using variable splitting, leading to a neural network for CS that can be trained end-to-end.
Extensive experiments support the effectiveness of our approach.
\bibliographystyle{plain}
\section{Vacuum case}
Recently \cite{PL,hep-ph2} a new approach to implement recurrence
relations \cite{ch-tk} for the Feynman integrals was proposed.
In this work we extend the general formulas for the solutions
of the recurrence relations to the multi--loop case.
Let us consider first vacuum $L$-loop integrals with $N=L(L+1)/2$
denominators (so that one can express through them any scalar product
of loop momenta) of arbitrary degrees:
\begin{eqnarray}
&&B(\underline{n},D)=
m^{2\Sigma n_i-LD}
\int \cdots \int \frac{d^Dp_1\ldots d^Dp_L}
{D_1^{n_1}\ldots D_N^{n_N}},
\label{integral}\\
&&D_a=\sum_{i\geq j}A^{(ij)}_a p_i\cdot p_j -\mu_a m^2, \quad
p_k\cdot p_l=\sum_{a=1}^N(A^{-1})_{(kl)}^a(D_a+\mu_a m^2).
\label{den}
\end{eqnarray}
The recurrence relations that result from integration by parts,
by letting $(\partial/\partial p_i)\cdot p_k$ act on
the integrand \cite{ch-tk}, are:
$$D\delta_k^i B(\underline{n},D)=
2\sum_{a,d=1}^{N}\sum_{l=1}^{L}A_d^{(il)} n_d{\bf I}^{d+}
(A^{-1})_{(kl)}^a({\bf I}^-_a+\mu_a) B(\underline{n},D),
$$
where
${\bf I}^\pm_c B(\ldots, n_c,\ldots ) = B(\ldots, n_c\pm1,\ldots )$,
in particular ${\bf I}^\pm_c n_a = n_a\pm \delta^c_a$.
Using the relations
$$[n_d{\bf I}^{d+},{\bf I}^-_a]=\delta_a^d,
\qquad
\sum_{a=1}^{N}A_a^{(il)}(A^{-1})_{(kj)}^a=\delta^{(i}_{(k}\delta^{l)}_{j)},$$
they can be represented as
\begin{eqnarray}
\frac{D-L-1}{2}\delta_k^i B(\underline{n},D)&=&
\sum_{a,d=1}^{N}\sum_{l=1}^{L}
(A^{-1})_{(kl)}^a({\bf I}^-_a+\mu_a)
A_d^{(il)} n_d{\bf I}^{d+}
B(\underline{n},D).
\label{rr1}
\end{eqnarray}
The common way of using these relations is step--by--step reexpression
of the integral (\ref{integral}) with some values of $n_i$ through a set of
integrals with shifted values of $n_i$, with the final goal to reduce
this set to a linear combination of several ``master'' integrals $N_k(D)$
with some ``coefficient functions'' $F^k(\underline{n},D)$:
$$B(\underline{n},D)=\sum_k F^k(\underline{n},D)N_k(D)\,.$$
Nevertheless, finding proper combinations of these relations
and a proper sequence for their use is a matter of art even for
the three--loop integrals with one mass \cite{REC}.
Then, even in cases when such procedures were
constructed, they lead to very time- and memory-consuming calculations
because of the large reproduction rate at every recursion step.
Instead, let us construct the $F^k(\underline{n},D)$
directly as solutions of the given recurrence relations. Note that if
we find any set of the solutions, we could construct $F^k(\underline{n},D)$
as their linear combinations. Let us try the solution of (\ref{rr1}) in the
following form:
\begin{eqnarray}
f^k(\underline{n})=
\frac{1}{(2\pi\imath)^N}
\oint \cdots \oint
\frac
{dx_1 \cdots dx_N}
{x_1^{n_1} \cdots x_N^{n_N}}g(x_a)
\label{solution0}
\end{eqnarray}
where integral symbols denote $N$ subsequent complex
integrations with contours
which will be described later. Acting by some operator
$O_i({\bf I}^-_a, n_a{\bf I}^{a+})$ (all decreasing operators should be
placed to the left) on
(\ref{solution0}) and performing the integration by parts one gets
(s.t. are surface terms):
\begin{eqnarray}
O_i({\bf I}^-_a, n_a{\bf I}^{a+})f^k(\underline{n})=
\frac{1}{(2\pi\imath)^N}
\oint \cdot\cdot \oint
\frac
{dx_1 \cdot\cdot dx_N}
{x_1^{n_1} \cdot\cdot x_N^{n_N}}
O_i(x_a, \partial_a)g(x_a)+\mbox{(s. t.)}.\nonumber
\end{eqnarray}
So, if we choose $g(x_a)$ as a solution of
$O_i(x_a, \partial_a)g(x_a)=0$
and cancel the surface terms by a proper choice of integration contours
(for example, closed contours, or contours ending at the zero points)
we find that (\ref{solution0}) is a solution of relations
$O_i({\bf I}^-_a, n_a{\bf I}^{a+})f^k(\underline{n})=0$,
and different choices of contours correspond to different
solutions.
The differential equations for (\ref{rr1})
have the solution $g(x_a)=P(x_a+\mu_a)^{(D-L-1)/2}$, where
$$P(x_a)=\det(\sum_{a=1}^N (A^{-1})_{(kl)}^a x_a)$$
is a polynomial in $x_a$ of degree $L$,
so we get the desired solutions of (\ref{rr1}):
\begin{eqnarray}
f^k(\underline{n},D)=
\frac{1}{(2\pi\imath)^N}
\oint \cdot\cdot \oint
\frac
{dx_1 \cdot\cdot dx_N}
{x_1^{n_1} \cdot\cdot x_N^{n_N}}
\det((A^{-1})_{(kl)}^a(x_a+\mu_a))^{\frac{D-L-1}{2}}.
\label{solution}
\end{eqnarray}
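As a simple consistency check of (\ref{solution}), consider the one-loop tadpole, $L=N=1$, $\mu_1=1$, $P(x)=x$: the residue at $x=0$ gives
$f(n,D)=\Gamma(D/2)/[\Gamma(n)\,\Gamma(D/2-n+1)]$, while relation (\ref{rr1}) reduces to
$\frac{D-2}{2}f(n)=(n-1)f(n)+n\,f(n+1)$, which this $f$ indeed obeys. The check can be automated
(the snippet below uses Python/SymPy purely for illustration):
\begin{verbatim}
import sympy as sp

n, D = sp.symbols('n D')
f = lambda k: sp.gamma(D/2)/(sp.gamma(k)*sp.gamma(D/2 - k + 1))
expr = (D - 2)/2*f(n) - (n - 1)*f(n) - n*f(n + 1)
print(sp.simplify(sp.gammasimp(expr)))   # expected output: 0
\end{verbatim}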
Finally, let us derive from (\ref{rr1}) the recurrence relations with
D-shifts. Note that if $f^k(n_i,D)$ is a solution of (\ref{rr1}), then
by direct substitution to (\ref{rr1}) one can check that
$P({\bf I}^-_a+\mu_a)f^k(n_i,D-2)$ also is a solution.
Hence, if $f^k(n_i,D)$ is a complete set of solutions, then
\begin{eqnarray}
f^k(n_i,D)=\sum_n S^k_n(D)P({\bf I}^-+\mu_i)f^n(n_i,D-2),
\nonumber
\end{eqnarray}
where the coefficients of the mixing matrix $S$ are numbers, i.e., they do
not act on $n_i$. For the solutions (\ref{solution}) the matrix $S$
is the unit matrix (increasing $D$ by 2 produces a factor
$P(x_a)$ in the integrand of (\ref{solution})), but the desire to arrive at
some specific set of master integrals may lead to nontrivial mixing.
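For the one-loop tadpole example above this can be verified explicitly: with $P(x)=x$ and $\mu=1$ one has
$P({\bf I}^-+\mu)f(n,D-2)=f(n-1,D-2)+f(n,D-2)=f(n,D)$, a Pascal-type identity confirming that $S$ is indeed the unit matrix here. Symbolically (again SymPy, for illustration only):
\begin{verbatim}
import sympy as sp

n, D = sp.symbols('n D')
f = lambda k, d: sp.gamma(d/2)/(sp.gamma(k)*sp.gamma(d/2 - k + 1))
shifted = f(n - 1, D - 2) + f(n, D - 2)   # P(I^- + mu) f at D-2
print(sp.gammasimp(shifted - f(n, D)))    # expected output: 0
\end{verbatim}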
These relations look different
from those recently proposed in \cite{tarD}, although further investigation
may reveal connections between them.
To check the efficiency of this approach we evaluated (using REDUCE)
the first 5 moments in the small $q^2$ expansion of the 3-loop QED
photon vacuum polarization.
The 3-loop contributions to the moments are expressed through about $10^5$
three--loop
scalar vacuum integrals with four massive and two massless lines.
The integral (\ref{solution}) in this case can be reduced to finite
sums of Pochhammer symbols (see \cite{PL}).
Moreover, it is not necessary to evaluate these integrals separately.
Instead, we evaluated a few integrals of the type (\ref{solution}), but
with $P^{D/2-2}$ multiplied by a long polynomial in $x_i$
(for the results see \cite{PL,3l}; they are in agreement
with QCD calculations \cite{CKS} made by FORM).
The comparison with the recursive approach shows reasonable
progress: the common way used in \cite{3l} demands
several CPU hours on a DEC-Alpha to calculate the full $D$
dependence of the first moment, and further calculations became
possible only after truncation in $(D/2-2)$. In the present approach the
full $D$ calculation for the first moment demands a few minutes on a PC.
\section{Non--vacuum case}
Suppose that
integrals (\ref{integral}) depend on $R$ external momenta $p_i$
($L < i \leq L+R$). The number of denominators is now
$N_1=L(L+1)/2+LR$,
and the number of additional (``external'') invariants is $N_2=R(R+1)/2$.
Let us expand the integrals in formal series over
``denominator--like'' objects $D_a$ of the type (\ref{den}) with
$a=N_1+1,..,N_1+N_2$, depending on external momenta only:
$$B(n_{l, (l=1,\dots, N_1)}, p_{k, (k=L+1,\dots,L+R)})=
\int \cdots \int \frac{d^Dp_1\ldots d^Dp_L}{D_1^{n_1}\ldots
D_{N_1}^{n_{N_1}}}=$$
\begin{equation}
=\sum_{{n_i}(i>N_1)}m^{-2\Sigma n_i+2N_2+LD}b(n_{i, (i=1,\dots, N_1+N_2)})
\prod_{i=N_1+1}^{N_1+N_2}
D_i^{n_i-1}.
\label{integral1}
\end{equation}
We define such a general expansion in order to write
the recurrence relations in compact form; in practice
the coefficients $A^{(ij)}_a$ and $\mu_a$ may be very simple.
The expansion with negative $n_i$ corresponds to the large momenta
expansion, with positive ones to the expansion near points $\mu_a\,m^2$.
The $n_i$ can also be noninteger, but with unit shifts.
Acting by
$(\partial/\partial p_i)\cdot p_k$, $(i=1,\dots,L; k=1,\dots,L+R)$
on the integrand we get $N_1$ recurrence relations.
We get the additional $N_2$ relations by acting with
$p_k \cdot (\partial/\partial p_i)$, $(i,k=L+1,\dots,L+R)$ on
both sides of (\ref{integral1}).
These new relations look like the old ones, with the only exception that they
have no terms proportional to the space--time dimension $D$.
The complete set of recurrence relations is now
$$((D-L-R-1)\,\delta_k^i-(D-R-1)\,\hat{\delta_k^i})
\,b(\underline{n},D)= \qquad
$$
$$
\qquad
=2\sum_{a,d=1}^{N_1+N_2}\sum_{l=1}^{L+R}
(A^{-1})_{(kl)}^a({\bf I}^-_a+\mu_a)
A_d^{(il)} n_d{\bf I}^{d+}
b(\underline{n},D),
$$
where $\hat{\delta_k^i}$=($\delta_k^i$ if $i,k> L$, else 0).
The corresponding differential equations have the solution
$g(x_a)=g'(x_a+\mu_a)$, where
\begin{equation}
g'(\underline{x})=\det\Bigl((A^{-1})_{(kl)}^a
x_a\Bigr)^{\frac{D-L-R-1}{2}}
{\det}_0\Bigl((A^{-1})_{(kl)}^a
x_a\Bigr)^{-\frac{D-R-1}{2}},
\label{solution1}
\end{equation}
and $\det_0$ denotes the minor with $k,l>L$.
So, one can use the representation (\ref{solution0}),
but the problem of resolving it into explicit formulas demands further
investigation.
Finally, note that one can formally obtain the formulas (\ref{solution},
\ref{solution1}) by a ``change of integration variables'' from
loop momenta to ``denominator--like objects'' $D_a$. The
weight function for this change is
$$
\int d^Dp_1\cdot\cdot d^Dp_L
\prod_i \delta(D_i/m^2-x_i)
\propto \det((A^{-1})_{(kl)}^a(x_a+\mu_a))^{\frac{D-L-1}{2}}.
$$
\newcommand{\sect}[1]{{\it \textbf{#1} ---}}
\begin{document}
\title{NLO Effects for Doubly Heavy Baryon in QCD Sum Rules}
\date{\today}
\author{Chen-Yu \surname{Wang}}
\affiliation{School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China}
\author{Ce \surname{Meng}}
\affiliation{School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China}
\author{Yan-Qing \surname{Ma}}
\affiliation{School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China}
\affiliation{Center for High Energy Physics, Peking University, Beijing 100871, China}
\affiliation{Collaborative Innovation Center of Quantum Matter, Beijing 100871, China}
\author{Kuang-Ta \surname{Chao}}
\affiliation{School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China}
\affiliation{Center for High Energy Physics, Peking University, Beijing 100871, China}
\affiliation{Collaborative Innovation Center of Quantum Matter, Beijing 100871, China}
\begin{abstract}
With the QCD sum rules approach, we study the newly discovered doubly heavy baryon $\Xi_{cc}^{++}$.
We analytically calculate the next-to-leading order (NLO) contribution of perturbative part of $J^{P} = \frac{1}{2}^{+}$ baryon current with two identical heavy quarks,
and then reanalyze the mass of $\Xi_{cc}^{++}$ at the NLO level.
We find that the NLO correction significantly improves both scheme dependence and scale dependence, while it is hard to control these theoretical uncertainties at leading order.
With NLO contribution, the obtained mass is $m_{\Xi_{cc}^{++}} = 3.67_{-0.10}^{+0.09} \text{~GeV}$, which is consistent with the LHCb measurement.
\end{abstract}
\pacs{12.38.Bx, 12.38.Lg, 14.20.Lq}
\keywords{doubly heavy baryon; next-to-leading order; QCD sum rules}
\maketitle
\sect{Introduction}
Quark model predicts rich structures of hadronic states according to symmetries of quarks.
Numerous predicted states have been observed experimentally, which indicates the validity of quark model.
Yet, for a class of states, which contain more than one heavy quark, the discovery has not been confirmed for decades.
Recently, LHCb collaboration observed a highly significant structure in the $\Lambda_{c}^{+} K^{-} \pi^{+} \pi^{+}$ mass spectrum,
which is interpreted as the doubly charmed baryon $\Xi_{cc}^{++}$ \cite{Aaij:2017ueg} with mass $3621 \pm 0.72 \pm 0.27 \pm 0.14 \text{~MeV}$.
Early experimental studies of $\Xi_{cc}^{+}$ were performed by SELEX \cite{Mattson:2002vu}, Babar \cite{Aubert:2006qw} and Belle \cite{Chistov:2006zj} collaborations.
The discovery of $\Xi_{cc}^{++}$ demands more rigorous theoretical studies.
A number of methods have been used in the literature \cite{Hudspith:2017bbh, Namekawa:2013vu, Lewis:2001iz, Sun:2016wzh, Kiselev:2017eic, Shah:2017liu, Gadaria:2016omw, Roberts:2007ni, Ebert:2002ig}.
Among them, the QCD sum rules,
which are based on the first principle of QCD, are powerful tools to study various properties of hadronic states \cite{Shifman:1978bx, Shifman:1978by}.
Many works have devoted to the study of doubly heavy baryons within QCD sum rules \cite{Bagan:1992za, Kiselev:1999zj, Zhang:2008rt, Wang:2010hs, Tang:2011fv, Aliev:2012ru, Chen:2017sbg},
and got very interesting results.
But in all these works, only leading-order (LO) in $\alpha_{s}$ expansion of perturbative contribution and Wilson coefficients of vacuum condensates are considered.
Without higher order contributions, it is hard to control theoretical uncertainties in QCD sum rules, which limits the predictive power.
It was in fact known long time ago that next-to-leading order (NLO) correction has sizable contributions to meson and nucleon sum rules \cite{Reinders:1984sr, Jamin:1987gq, Ovchinnikov:1991mu}.
Therefore, the study of NLO effect for doubly heavy baryons in QCD sum rules is badly needed.
Higher order calculations in QCD sum rules become harder and harder when more particles or more massive particles are involved.
For mesons, the state-of-art calculation has developed to $\mathcal{O}(\alpha_{s}^{4})$ in terms of mass expansion \cite{Schwinger:1989ix, Maier:2011jd, Baikov:2009uw, Chetyrkin:2000zk, Baikov:2008jh, Baikov:2004ku}.
While for baryons, only the $\mathcal{O}(\alpha_{s})$ correction (or NLO) is available in the literature for nucleon and singly heavy baryon \cite{Jamin:1987gq, Ovchinnikov:1991mu, Groote:2008dx}.
In this Letter, we calculate the NLO correction to perturbative contribution of doubly heavy $J^{P} = \frac{1}{2}^{+}$ baryon, and show its important effects in QCD sum rules.
With the help of integration-by-parts \cite{Chetyrkin:1981qh, Laporta:2001dd} and differential equations \cite{Henn:2013pwa, Henn:2014qga} methods, we get a fully analytical expression.
We reproduce the massless result in the literature when we set all quark masses to zero.
Based on this calculation, we reanalyze the newly discovered $\Xi_{cc}^{++}$ in QCD sum rules.
\sect{QCD Sum Rules}
The central object in QCD sum rules is the following two-point correlation function \cite{Shifman:1978bx, Ioffe:1981kw}
\begin{align}
\Pi(q^{2})
& =
i
\int \mathrm{d}^{4} x \,
e^{i q x}
\langle \Omega | T \{ \eta(x) \overline{\eta}(0) \} | \Omega \rangle
\nonumber \\
& =
\Pi_{1}(q^{2}) \slashed{q} + \Pi_{2}(q^{2}) \, ,
\end{align}
where $\Omega$ denotes the QCD vacuum,
and $\eta$ is the baryon current to be defined later.
On the one hand, one can calculate $\Pi(q^{2})$ using the operator product expansion,
which gives
\begin{equation}
\Pi(q^{2})
=
C_{1}(q^{2})
+
\sum_{i}
C_{i}(q^{2}) \langle O_{i} \rangle \, ,
\end{equation}
where $C_{1}$ is the perturbative contribution and $C_{i}$ is the Wilson coefficient of a gauge invariant Lorentz scalar operator $O_{i}$.
Both $C_{1}$ and $C_{i}$ are perturbatively calculable.
$\langle O_{i} \rangle$ is a shorthand for the vacuum condensates $\langle \Omega | O_{i} | \Omega \rangle$,
which is a nonperturbative but universal quantity.
It means that the value of $\langle O_{i} \rangle$ determined from other processes should be the same as its value in the process considered in this Letter.
On the other hand, $\Pi(q^{2})$ satisfies the dispersion relation
\begin{equation}
\Pi(q^{2})
=
\frac{1}{\pi}
\int_{0}^{\infty} \mathrm{d} s \,
\frac{\Im \Pi(s + i \epsilon)}{s - q^{2}}
=
\int_{0}^{\infty} \mathrm{d} s \,
\frac{\rho(s + i \epsilon)}{s - q^{2}} \, ,
\end{equation}
where $\rho$ is the spectrum density.
Based on the optical theorem, one assumes the spectral density $\rho(q^{2})$ to be \cite{Ioffe:1981kw}
\begin{equation}
\rho(q^{2})
=
\lambda_{H}^{2}
(\slashed{q} + m_{H})
\delta(q^{2} - m_{H}^{2})
+
\rho_{c}(q^{2})
\theta(q^{2} - s_{th}) \, ,
\end{equation}
where $s_{th}$ is the threshold of the continuum spectrum,
and $\lambda_{H}$ is defined by $\lambda_{H} u(p, s) = \langle 0 | \eta(0) | H(p, s) \rangle$,
where $u(p, s)$ is the Dirac spinor of the hadron.
Defining
\begin{align}
\label{eq:rho}
\frac{\Im C_{1}(q^{2})}{\pi}
& =
\rho_{1, 1}(q^{2}) \slashed{q}
+
\rho_{2, 1}(q^{2}) \, ,
\\
\frac{\Im C_{i}(q^{2})}{\pi}
& =
\rho_{1, i}(q^{2}) \slashed{q}
+
\rho_{2, i}(q^{2}) \, ,
\end{align}
and employing quark-hadron duality and the Borel transformation,
we obtain a sum rule corresponding to $\Pi_{1}(q^{2})$ \cite{Ioffe:1981kw}
\begin{align}
\label{eq:sum-1}
\lambda_{H}^{2}
e^{- \frac{m_{H}^{2}}{m_{B}^{2}}}
& =
\int_{s_{th}}^{s_{0}} \mathrm{d} s \,
\rho_{1, 1}(s)
e^{- \frac{s}{m_{B}^{2}}}
\nonumber \\
& \phantom{= {}} +
\sum_{i}
\langle O_{i} \rangle
\int_{s_{th}}^{\infty} \mathrm{d} s \,
\rho_{1, i}(s)
e^{- \frac{s}{m_{B}^{2}}} \, ,
\end{align}
where $s_{0}$ is the threshold parameter,
and $m_{B}$ is the Borel parameter,
which are introduced by the quark-hadron duality and the Borel transformation, respectively.
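For reference, with the overall normalization absorbed into both sides of the
sum rule (a schematic statement of a standard result, see e.g. \cite{Shifman:1978bx}),
the Borel transformation maps the weight of the dispersion integral as
\begin{equation}
\frac{1}{s - q^{2}}
\;\longrightarrow\;
e^{- \frac{s}{m_{B}^{2}}} \, ,
\end{equation}
which is the origin of the exponential weights in Eq.~(\ref{eq:sum-1}):
they suppress the continuum contribution and improve the convergence of the operator product expansion.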
One can also obtain a similar sum rule corresponding to $\Pi_{2}(q^{2})$,
but we will not discuss it in this Letter.
To obtain the baryon mass,
we differentiate both sides of Eq.~(\ref{eq:sum-1}) with respect to $- m_{B}^{- 2}$,
which brings down a factor of $s$ in each integrand and a factor of $m_{H}^{2}$ on the left hand side;
taking the ratio with Eq.~(\ref{eq:sum-1}) itself then cancels $\lambda_{H}^{2}$ and yields
\begin{align}
\label{eq:mass}
m_{H}^{2}
& =
\nonumber \\
&
\frac{
\int_{s_{th}}^{s_{0}} \mathrm{d} s \,
\rho_{1, 1}(s)
s
e^{- \frac{s}{m_{B}^{2}}}
+
\sum_{i}
\langle O_{i} \rangle
\int_{s_{th}}^{\infty} \mathrm{d} s \,
\rho_{1, i}(s)
s
e^{- \frac{s}{m_{B}^{2}}}
}{
\int_{s_{th}}^{s_{0}} \mathrm{d} s \,
\rho_{1, 1}(s)
e^{- \frac{s}{m_{B}^{2}}}
+
\sum_{i}
\langle O_{i} \rangle
\int_{s_{th}}^{\infty} \mathrm{d} s \,
\rho_{1, i}(s)
e^{- \frac{s}{m_{B}^{2}}}
} \, .
\end{align}
In this Letter, we keep vacuum condensates up to dimension 4,
\begin{equation}
\langle O_{i} \rangle
\in
\left\{
\langle \overline{q}_{j}^{a} q_{j}^{a} \rangle,
\langle G_{\mu \nu}^{a} G^{a \mu \nu} \rangle
\right\} \, ,
\end{equation}
and evaluate $\rho_{1, \overline{q} q}$ up to $\mathcal{O}(m_{q})$.
Contributions from operators of even higher dimension are strongly power suppressed and thus can be neglected at the desired precision.
\sect{Baryon Currents}
The most general current of a baryon containing two identical heavy quarks is
\begin{equation}
\label{eq:current}
\epsilon^{a b c} \left( Q^{a} C \Gamma_{1} Q^{b} \right) \Gamma_{2} q^{c} \, ,
\end{equation}
where
$Q$ is the heavy quark with mass $m_{Q}$,
while $q$ is the light quark with mass $m_{q}$.
$\epsilon^{a b c}$ is the antisymmetric matrix in color space,
$C$ is the charge conjugation matrix,
and $\Gamma_{1}$ and $\Gamma_{2}$ are Dirac matrices with possible Lorentz indices suppressed.
Spinor indices are contracted within the bracket,
and therefore transposing the bracketed part must leave the current invariant.
Noting that $C^{T} = - C$, one sees that $\Gamma_{1}$ can only be $\gamma_{\mu}$ or $\sigma_{\mu \nu}$ \cite{Ioffe:1981kw}.
For a $J^{P} = \frac{1}{2}^{+}$ baryon, there are only two possible currents
\begin{align}
\label{eq:eta1}
\eta_{1}
& =
\epsilon^{a b c} \left( Q^{a} C \gamma_{\mu} Q^{b} \right) \gamma^{\mu} \gamma^{5} q^{c} \, ,
\\
\label{eq:eta2}
\eta_{2}
& =
\epsilon^{a b c} \left( Q^{a} C \sigma_{\mu \nu} Q^{b} \right) \sigma^{\mu \nu} i \gamma^{5} q^{c} \, ,
\end{align}
where $\eta_{1}$ corresponds to Ioffe current \cite{Ioffe:1981kw} if we take $Q$ as $u$ quark and $q$ as $d$ quark.
It is well-known that $\eta_{1}$ and $\eta_{2}$ are renormalization-covariant \cite{Ioffe:1982ce},
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d} \ln \mu^{2}}
\begin{pmatrix}
\eta_{1}
\\
\eta_{2}
\end{pmatrix}
=
\begin{pmatrix}
\gamma_{1} & 0
\\
0 & \gamma_{2}
\end{pmatrix}
\begin{pmatrix}
\eta_{1}
\\
\eta_{2}
\end{pmatrix} \, .
\end{equation}
Thus it is advantageous to work with these currents when calculating the NLO correction.
There exist other choices of current \cite{Chung:1981cc, Bagan:1992za, Leinweber:1995fn}, which
can be expressed by $\eta_{1}$ and $\eta_{2}$ with the help of Fierz identity,
\begin{align}
\label{eq:mix}
\eta_{\text{mix}}
& =
\epsilon^{a b c}
\left[
\left( Q^{a} C \gamma^{5} q^{b} \right) Q^{c}
+
b
\left( Q^{a} C q^{b} \right) \gamma^{5} Q^{c}
\right]
\nonumber \\
& =
\frac{b - 1}{4} \eta_{1} + i \frac{b + 1}{8} \eta_{2} \, ,
\end{align}
where $b$ is a complex mixing parameter.
\sect{NLO Correction to $C_{1}$}
It is known that $C_{1}$ and $C_{i}$ can be calculated perturbatively,
and results at LO are available in Refs.~\cite{Bagan:1992za, Narison:2010py}.
Among them, the most important one is $C_{1}$,
because all other coefficients multiply higher-dimensional operators, which are power suppressed.
Thus the main theoretical uncertainty comes from the NLO correction to $C_{1}$.
In order to perform NLO calculation for $C_{1}$,
we use FeynArts \cite{Kublbeck:1990xc, Hahn:2000kx} to generate all Feynman diagrams (see Fig.~(\ref{fig:amp})),
and FeynCalc \cite{Mertig:1990an, Shtabovenko:2016sxi} to manipulate the resulting amplitude.
After these steps, we are left with some three-loop-like scalar integrals.
These integrals can be further simplified by integration-by-parts (IBP) method \cite{Chetyrkin:1981qh, Laporta:2001dd}.
FIRE \cite{Smirnov:2014hma} and LiteRed \cite{Lee:2013mka} are used to reduce the full amplitude to
a linear combination of a complete set of 29 master integrals (see Fig.~(\ref{fig:top})),
\begin{equation}
C_{1}^{\text{NLO}}(\varepsilon, q^{2}, m_{Q}) = \sum_{k} c_{k}(\varepsilon, q^{2}, m_{Q}) I_{k}(\varepsilon, v) \, ,
\end{equation}
where $\varepsilon$ is defined by dimension $D = 4 - 2 \varepsilon$,
$v = \sqrt{1 - \frac{4 m_{Q}^{2}}{q^{2}}}$,
and all coefficients $c_{k}$ are purely imaginary.
Note that here $I_{k}$ is defined to be dimensionless.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{diagram}
\caption{
NLO Feynman diagrams for $C_{1}$.
External legs are amputated.
}
\label{fig:amp}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{mi}
\caption{
Topologies of master integrals,
where solid and dashed lines denote massive and massless propagators respectively.
External legs are amputated.
}
\label{fig:top}
\end{figure}
Since we are only interested in the imaginary part of the two-point function $\Pi(q^{2})$,
we just need to evaluate the corresponding cut diagrams of $I_{k}$.
However, evaluating the four-body phase space in the presence of two massive particles is still a formidable task.
To proceed, we employ the differential equation method \cite{Henn:2013pwa, Henn:2014qga}
by first differentiating $I_{k}$ with respect to $v$, then reducing the resulting integrals using IBP,
and obtaining a system of differential equations,
\begin{equation}
\frac{\mathrm{d} \boldsymbol{I}(\varepsilon, v)}{\mathrm{d} v}
=
\boldsymbol{A}(\varepsilon, v) \boldsymbol{I}(\varepsilon, v) \, ,
\end{equation}
where $\boldsymbol{I}$ represents the vector of master integrals $I_{k}$,
and $\boldsymbol{A}$ is a $29 \times 29$ matrix.
To solve this differential equation,
we implement the algorithm proposed in \cite{Lee:2014ioa} to transform the equation into the so-called $\varepsilon$-form \cite{Henn:2013pwa},
\begin{equation}
\label{eq:epsilon}
\frac{\mathrm{d} \boldsymbol{I}'(\varepsilon, v)}{\mathrm{d} v}
=
\varepsilon
\sum_{i} \frac{\boldsymbol{B}_{i}}{v - v_{i}} \boldsymbol{I}'(\varepsilon, v) \, ,
\end{equation}
where
$v_{i} \in \left\{ 0, \pm 1, \pm \sqrt{3} i \right\}$,
$\boldsymbol{B}_{i}$ are constant matrices,
and $\boldsymbol{I}'$ is related to $\boldsymbol{I}$ with an invertible linear transformation.
The virtue of this $\varepsilon$-form is that
the right hand side of Eq.~(\ref{eq:epsilon}) is proportional to $\varepsilon$,
which can be easily solved iteratively in terms of Goncharov polylogarithms \cite{Goncharov:2001iea}.
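Schematically, writing $\boldsymbol{M}(t) = \sum_{i} \frac{\boldsymbol{B}_{i}}{t - v_{i}}$,
the iterative solution reads
\begin{align}
\boldsymbol{I}'(\varepsilon, v)
& =
\Big(
1
+ \varepsilon \int_{1}^{v} \mathrm{d} t_{1} \, \boldsymbol{M}(t_{1})
\nonumber \\
& \phantom{= {}}
+ \varepsilon^{2} \int_{1}^{v} \mathrm{d} t_{1} \, \boldsymbol{M}(t_{1})
\int_{1}^{t_{1}} \mathrm{d} t_{2} \, \boldsymbol{M}(t_{2})
+ \cdots
\Big)
\boldsymbol{I}'(\varepsilon, 1) \, ,
\end{align}
where the iterated integrals over the kernels $\frac{1}{t - v_{i}}$ are, by definition,
Goncharov polylogarithms $G(v_{i_{1}}, \dots, v_{i_{k}}; v)$.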
The boundary value of $\boldsymbol{I}(\varepsilon, v)$ at $v = 1$, i.e. $m_{Q} = 0$,
is nothing but a massless four-body phase-space integral,
which is straightforward to work out.
By evaluating the boundary value $\boldsymbol{I}(\varepsilon, 1)$,
and solving the equation iteratively, we finally obtain $I_{k}$ and finish our calculation.
We find that the Coulomb divergence, which would appear as $v \to 0$, is absent at this order.
Then, by combining all terms together, the infrared divergences cancel out,
so we only need to deal with ultraviolet divergences.
After performing wavefunction and mass renormalization of the quarks ($m_{Q}$ is renormalized either in the $\overline{\text{MS}}$ scheme or in the on-shell scheme),
the remaining ultraviolet divergences can be removed by operator renormalization of $\eta_{1}$ and $\eta_{2}$.
We renormalize them in the $\overline{\text{MS}}$ scheme;
the corresponding anomalous dimensions are
\begin{equation}
\gamma_{1}
=
\gamma_{2}
=
\frac{\alpha_{s}}{2 \pi} \, ,
\end{equation}
which confirms the results in Refs.~\cite{Peskin:1979mn, Ovchinnikov:1991mu}.
We then get the finite result at NLO.
Our NLO result confirms the massless result \cite{Jamin:1987gq, Ovchinnikov:1991mu} in the limit of $m_{Q} \to 0$.
Our analytical result is provided as ancillary file of the arXiv preprint.
\sect{Phenomenology}
In our analysis, we use
\begin{equation}
\label{eq:eta}
\eta
=
\eta_{1} + \theta \eta_{2} \, ,
\end{equation}
with $\theta$ a complex mixing parameter.
We choose following parameters \cite{Olive:2016xmw, Dominguez:1994ce, Bagan:1992za, Dominguez:2014pga}:
\begin{align}
\label{eq:pfirst}
m_{u}(2 \text{~GeV})
& =
2.2_{- 0.4}^{+ 0.6} \text{~MeV} \, ,
\\
m_{d}(2 \text{~GeV})
& =
4.7_{- 0.4}^{+ 0.5} \text{~MeV} \, ,
\\
m_{c}^{\overline{\text{MS}}}(m_{c})
& =
1.28 \pm 0.03 \text{~GeV} \, ,
\\
m_{c}^{\text{on-shell}}
& =
1.46 \pm 0.07 \text{~GeV} \, ,
\\
\langle \overline{q} q \rangle(2 \text{~GeV})
& =
-
\frac{1}{2}
\frac{f_{\pi}^{2} m_{\pi}^{2}}{m_{u} + m_{d}}
\nonumber \\
& =
- \left( 0.287 \pm 0.019 \text{~GeV} \right)^{3} \, ,
\\
\langle g_{s}^{2} G G \rangle
& =
4 \pi^{2} (0.037 \pm 0.015) \text{~GeV}^{4} \, ,
\label{eq:plast}
\end{align}
and $\alpha_{s}(m_{Z} = 91.1876 \text{~GeV}) = 0.1181$.
According to Eq.~(\ref{eq:mass}),
the evolution of the current $\eta$ is irrelevant to the estimation of the hadron mass,
so we do not include it.
We use two-loop running of coupling constant $\alpha_{s}$ and heavy quark mass $m_{Q}$.
We evolve vacuum condensates according to their one-loop anomalous dimensions:
$\gamma_{\langle \overline{q} q \rangle} = - \gamma_{m_{q}}$
and
$\gamma_{\langle g_{s}^{2} G G \rangle} = 0$ \cite{Albuquerque:2013ija}.
By default in the following, we choose central values for all parameters,
set renormalization scale $\mu = m_{B}$ \cite{Shifman:1978bx, Bertlmann:1981he},
and choose ${\overline{\text{MS}}}$ scheme for heavy quark mass renormalization.
In Eq.~(\ref{eq:mass}), the baryon mass $m_{H}$ depends on two parameters: $m_{B}$ and $s_{0}$.
In order to obtain a reliable result,
we should keep $m_{B}$ inside the so-called Borel window to ensure the validity of the OPE,
and the choice of $s_{0}$ should ensure that the ground-state pole contribution dominates.
Since $m_{H}$ is a property of the hadron, it does not depend on $m_{B}$ and $s_{0}$;
thus, within the valid parameter space (we shall call it ``window'' hereafter),
we should find a region in which $m_{H}$ depends only weakly on $m_{B}$ and $s_{0}$.
$m_{H}$ in this region is considered to be the estimated hadron mass in QCD sum rules.
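As an illustration only, the plateau search implied by Eq.~(\ref{eq:mass}) can be
sketched in a few lines of Python; here \texttt{rho11} is a hypothetical placeholder
for $\rho_{1, 1}(s)$, the numerical values are arbitrary, and the condensate terms
are omitted:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def m_H(rho11, s_th, s0, mB2):
    # Eq. (mass), perturbative term only; the
    # condensate integrals enter analogously.
    w = lambda s: rho11(s) * np.exp(-s / mB2)
    num = quad(lambda s: s * w(s), s_th, s0)[0]
    den = quad(w, s_th, s0)[0]
    return np.sqrt(num / den)

# Placeholder density, for shape only.
rho11 = lambda s: s**2
# Scan the (mB2, s0) plane; the plateau is
# where m_H is flat in both parameters.
for mB2 in np.linspace(1.0, 5.0, 5):
    for s0 in np.linspace(15.0, 25.0, 5):
        print(mB2, s0, m_H(rho11, 9.0, s0, mB2))
\end{verbatim}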
\begin{table*}[ht]
\caption{
Parameters of plateau and predictions for $m_{\Xi_{cc}^{++}}$ in different mixing and mass renormalization schemes.
}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
$\theta$ & $m_{Q}$ scheme & Order & $m_{B}^{2} \text{~(GeV$^{2}$)}$ & $s_{0} \text{~(GeV$^{2}$)}$ & $m_{\Xi_{cc}^{++}} \text{~(GeV)}$ & Error from $m_{B}^{2}$ & Error from $s_{0}$ & Error from $m_{Q}$
\\ \hline
\multirow{2}{*}{$0.018 i$} & \multirow{2}{*}{$\overline{\text{MS}}$} & LO & $2.0 \pm 0.3$ & $17 \pm 2$ & $3.58_{-0.11}^{+0.09}$ & ${-0.00} \; {+0.01}$ & ${-0.09} \; {+0.07}$ & ${-0.05} \; {+0.05}$
\\
& & NLO & $1.7 \pm 0.3$ & $17 \pm 2$ & $3.67_{-0.10}^{+0.09}$ & ${-0.01} \; {+0.01}$ & ${-0.08} \; {+0.05}$ & ${-0.05} \; {+0.05}$
\\ \hline
\multirow{2}{*}{$0.018 i$} & \multirow{2}{*}{on-shell} & LO & $1.7 \pm 0.3$ & $17 \pm 2$ & $3.85_{-0.14}^{+0.16}$ & ${-0.01} \; {+0.04}$ & ${-0.09} \; {+0.07}$ & ${-0.10} \; {+0.10}$
\\
& & NLO & $1.4 \pm 0.3$ & $17 \pm 2$ & $3.66_{-0.14}^{+0.12}$ & ${-0.06} \; {+0.05}$ & ${-0.08} \; {+0.05}$ & ${-0.10} \; {+0.09}$
\\ \hline
\multirow{2}{*}{$- \frac{i}{3}$} & \multirow{2}{*}{$\overline{\text{MS}}$} & LO & $4.4 \pm 0.3$ & $23 \pm 2$ & $3.80_{-0.12}^{+0.10}$ & ${-0.04} \; {+0.04}$ & ${-0.09} \; {+0.08}$ & ${-0.03} \; {+0.03}$
\\
& & NLO & $4.0 \pm 0.3$ & $23 \pm 2$ & $3.85_{-0.12}^{+0.10}$ & ${-0.05} \; {+0.04}$ & ${-0.09} \; {+0.08}$ & ${-0.03} \; {+0.03}$
\end{tabular}
\end{ruledtabular}
\label{tab:result}
\end{table*}
We define relative contributions of condensates and continuum spectrum as
\begin{align}
r_{i}
& =
\frac{
\langle O_{i} \rangle
\int_{s_{th}}^{\infty} \mathrm{d} s \,
\rho_{1, i}(s)
e^{- \frac{s}{m_{B}^{2}}}
}{
\int_{s_{th}}^{\infty} \mathrm{d} s \,
\rho_{1, 1}(s)
e^{- \frac{s}{m_{B}^{2}}}
} \, ,
\\
r_{\text{cont.}}
& =
\frac{
\int_{s_{0}}^{\infty} \mathrm{d} s \,
\rho_{1, 1}(s)
e^{- \frac{s}{m_{B}^{2}}}
}{
\int_{s_{th}}^{\infty} \mathrm{d} s \,
\rho_{1, 1}(s)
e^{- \frac{s}{m_{B}^{2}}}
} \, ,
\end{align}
and impose the following constraints on our sum rule
\begin{equation}
\label{eq:window}
\left| r_{i} \right| \le 30 \% ,
\hspace{0.3cm}
\left| \sum_{i} r_{i} \right| \le 30 \% ,
\hspace{0.3cm}
\left| r_{\text{cont.}} \right| \le 30 \% .
\end{equation}
We find that with the mixing parameter $\theta = 0.018 i$,
we can obtain a very stable plateau in $m_{B}$ and $s_{0}$, as shown in Fig.~(\ref{fig:msbar}).
Note, however, that QCD sum rules alone cannot tell which mixing current is the physical one.
For example, there is a family of mixing parameters that yield similarly good plateaus in $m_{B}$ and $s_{0}$.
We thus also provide another set of results by choosing $\theta = - \frac{i}{3}$,
which corresponds to the mixing used in \cite{Bagan:1992za}.
\begin{figure}[ht]
\centering
\begin{subfigure}[ht]{0.6\linewidth}
\includegraphics[width=\linewidth]{msbar-b}
\end{subfigure}
\\
\begin{subfigure}[ht]{0.6\linewidth}
\includegraphics[width=\linewidth]{msbar-s}
\end{subfigure}
\caption{
Prediction of $m_{\Xi_{cc}^{++}}$ as a function of $m_{B}^{2}$ and $s_{0}$.
Shadows correspond to windows defined by Eq.~(\ref{eq:window}).
}
\label{fig:msbar}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{msbar-ope}
\caption{
Contributions of various terms on the right hand side of Eq.~(\ref{eq:sum-1}).
}
\label{fig:ope}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{msbar-mu}
\caption{
Prediction of $m_{\Xi_{cc}^{++}}$ as a function of $\mu$.
}
\label{fig:scale}
\end{figure}
The relative importance of each term in the OPE is shown in Fig.~(\ref{fig:ope}),
where $m_{B}^{2}$ and $s_{0}$ are set to their central values shown in Tab.~(\ref{tab:result}).
We find that the NLO correction makes an important contribution.
In the $m_{Q}^{\overline{\text{MS}}}$ scheme,
the ratio of NLO correction to LO is about $29 \%$ ($19 \%$) if $\theta = 0.018 i$ ($\theta = - \frac{i}{3}$).
In the $m_{Q}^{\text{on-shell}}$ scheme, by contrast, these ratios reach $233 \%$ ($146 \%$),
signaling the bad convergence of the perturbative expansion,
which is why we choose the $\overline{\text{MS}}$ scheme by default.
Nevertheless, with the NLO correction,
the difference of predicted $m_{\Xi_{cc}^{++}}$ between $\overline{\text{MS}}$ scheme and on-shell scheme for $m_{Q}$ is substantially reduced.
As shown in Tab.~(\ref{tab:result}),
the mass differences obtained from LO and $\text{LO} + \text{NLO}$ results are $0.27 \text{~GeV}$ and $0.01 \text{~GeV}$, respectively.
Thus NLO correction largely reduces the scheme dependence.
To study the renormalization scale $\mu$ dependence, we fix all other parameters by their default choices (or central values) and freely vary $\mu$.
The variation of $m_{\Xi_{cc}^{++}}$ with respect to $\mu$ is shown in Fig.~(\ref{fig:scale}).
We find the scale dependence is much weaker when NLO correction is included.
More precisely, the error of $m_{\Xi_{cc}^{++}}$ induced by $\mu = m_{B} \pm 0.2 \text{~GeV}$ is $_{-0.08}^{+0.06}$ and $_{-0.00}^{+0.03}$ in LO and $\text{LO} + \text{NLO}$, respectively.
Our final results for $m_{\Xi_{cc}^{++}}$ are shown in Tab.~(\ref{tab:result}).
Errors of $m_{B}^{2}$, $s_{0}$ and of the parameters listed in Eqs.~(\ref{eq:pfirst})--(\ref{eq:plast}) are used to determine the error of $m_{\Xi_{cc}^{++}}$.
We find that our NLO result is consistent with the LHCb measurement.
As a comparison, we also list the results with $m_{Q}^{\text{on-shell}}$ renormalization scheme or with $\theta = - \frac{i}{3}$.
We find that all plots above are almost unchanged when changing $m_{q}$ from $m_{u}$ to $m_{d}$,
thus our prediction of mass of $\Xi_{cc}^{+} (ccd)$ is almost the same as that of $\Xi_{cc}^{++}(ccu)$.
\sect{Summary}
The NLO calculation for hadrons with massive quarks in QCD sum rules is important but technically challenging.
With the help of recent developments in multi-loop calculation techniques,
we are able to analytically calculate NLO perturbative correction to the imaginary part of the two-point correlation function of
$J^{P} = \frac{1}{2}^{+}$ baryon current with two identical heavy quarks.
We apply our result to the QCD sum rules analysis of newly discovered baryon $\Xi_{cc}^{++}$ by LHCb \cite{Aaij:2017ueg}.
The QCD sum rules estimation of $m_{\Xi_{cc}^{++}}$ is $3.67_{-0.10}^{+0.09} \text{~GeV}$, which is consistent with the LHCb measurement within uncertainties.
By comparing LO with $\text{LO} + \text{NLO}$ results, we find the NLO perturbative correction substantially reduces $m_{Q}$ renormalization scheme dependence and renormalization scale $\mu$ dependence,
thus bringing the theoretical uncertainties under better control.
\begin{acknowledgments}
We thank H. X. Chen and S. L. Zhu for many useful communications and discussions.
The work is supported in part by
the National Natural Science Foundation of China (Grants No. 11475005 and No. 11075002),
and the National Key Basic Research Program of China (No. 2015CB856700).
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
ΔΣ modulators \cite{Norsworthy:DSDC-1996, Reiss:JAES-56-1+2,
Schreier:UDSDC-2004, DeLaRosa:TCAS1-58-1} are nowadays widely used in a
variety of systems, usually as \ac{A/D} and \ac{D/D} interfaces. The latter may
in turn simplify sample rate conversion, \ac{D/A} conversion, or power
amplification in actuation tasks. Typical applications range from data
conversion itself to signal processing in wide sense, including digital audio
\cite{Reefmann:SPDSD-2002}, frequency synthesis \cite{Yang:TVLSI-17-6},
switched mode amplification \cite{Gaalaas:AD-40-6, Ghannouchi:CASM-2010-4},
power conversion and actuation \cite{Dallago:TCAS1-44-8, Jacob:TIE-2012},
digital communications \cite{Galton:TMTT-50-1, Ghannouchi:CASM-2010-4}, sensing
\cite{Dong:SJ-7-1} and more. Recently, more exotic applications, such as in
optimization, have been proposed too \cite{Callegari:TSP-58-12,
Bizzarri:ISCAS-2010, Bizzarri:CSDM-2010}.
ΔΣ modulators are signal encoders (or re-coders) capable of trading rate for
accuracy in order to translate high-resolution, slowly sampled (or unsampled) signals
into low-resolution rapidly sampled signals with little loss of fidelity. This
property is achieved through a feedback architecture involving a quantizer and
linear filters which provide \emph{noise shaping}, i.e., the ability to
unevenly distribute the quantization noise power so that some frequency bands
get most of it and others almost none.
ΔΣ modulators are almost invariably used in conjunction with filters as in
Fig.~\ref{fig:block-dia-generic}, to recover useful information that is
otherwise polluted by quantization noise. In fact, the digital stream at the
output of the modulator has a much wider bandwidth than the input waveform,
thanks to its high sampling rate (oversampling). Furthermore, it contains two
components. The first one reflects the input signal itself, thus occupying just
the set of frequencies $\mathcal{B}$ constituting the signal band. The second
one is quantization noise, whose power is approximately fixed, depending on the
quantizer resolution. In principle, the noise \ac{PDS} extends throughout all
the available bandwidth. Actually, the modulator lets the noise \ac{PDS}
concentrate more in certain regions than in others. If these regions do not
overlap with $\mathcal{B}$, then a filter can be used to get rid of (most of
the) noise component without affecting the signal one. These considerations
make evident how an output or reconstruction filter is mandatory. Indeed, the
modulator role is precisely to \emph{shape the noise} so that it can be made
\emph{orthogonal} to the signal (and thus linearly separable).
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.52]{block_dia_generic}
\end{center}
\caption{Typical deployment of a ΔΣ modulator.}
\label{fig:block-dia-generic}
\end{figure}%
Fig.~\ref{fig:sample-spectra} shows the typical behavior in the frequency
domain for a modulator suitable for \ac{LP} signals ($f_\Phi$ indicates
the modulator output rate). Fig.~\ref{fig:block-dia-specialized} shows
some specializations of the generic architecture to binary \ac{A/D} conversion,
\ac{D/A} conversion and switched-mode power amplification.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\hlw]{sample_spectrum}
\end{center}\vskip -1ex
\caption{Typical noise shaping and noise removal in a \ac{LP} ΔΣ modulator.}
\label{fig:sample-spectra}
\end{figure}
The above premises explain why typical design flows \cite{Schreier:UDSDC-2004,
Schreier:DELSIG, Nagahara:TSP-60-6} want the modulator noise shaping
properties to be based only on the signal features (and notably the width of
$\mathcal{B}$). To be separable from the signal, the quantization noise needs
just to be as \emph{orthogonal} as possible to it. Hence, typical flows let the
modulator shape its noise \ac{PDS} so that it is as low as possible in the
signal band and (consequently) high elsewhere, with a transition between the
two regions as steep as possible to prevent superposition. Indeed, this is what
\emph{theoretically} enables the most thorough noise separation.
However, the fact that two items are \emph{theoretically} well separable does
not mean that they necessarily get well separated \emph{in practice}. Typical
design flows assure that a linear filter exists capable of guaranteeing an
almost perfect noise removal. Nevertheless, they cannot assure that such filter
is actually deployed. As a matter of fact, there are favourable situations
where the designer has very good control over the output filter. In this case,
conventional design flows are probably optimal. For instance, in \ac{A/D}
conversion (Fig.~\ref{sfig:block-dia-adc}) the output filter is digital so that
a good filter can be implemented without excessive cost. In other cases, the
designer has only a limited control over the output filter. For instance, in
\ac{D/A} conversion (Fig.~\ref{sfig:block-dia-dac}), one has an analog
reconstruction filter whose cost may rapidly grow with its specification (in
particular with roll-off). This situation may also arise in signal synthesis
\cite{Bizzarri:ECCTD-2009}. Even worse, there may be cases where the filter is
in part pre-assigned leaving the designer with extremely limited or no control
at all over it. For instance, in actuation (Fig.~\ref{sfig:block-dia-ampli})
the filter is often partially (if not completely) provided by the electric
machine used for the actuation itself. As an example, consider that the popular
Texas Instruments LM4670 switching audio amplifier is marketed as a
\emph{filterless} solution where the \ac{LP} filter is provided by the speaker
parasitic inductance and inertia (and possibly by the listener's ear)
\cite{TI:LM4670}. A similar situation may arise in ac motor drives
\cite{Bizzarri:ISCAS-2012, Callegari:ICECS-2012}.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:block-dia-adc}]{%
\includegraphics[scale=0.52]{block_dia_adc}}\\
\subfloat[\label{sfig:block-dia-dac}]{%
\includegraphics[scale=0.52]{block_dia_dac}}\\
\subfloat[\label{sfig:block-dia-ampli}]{%
\includegraphics[scale=0.52]{block_dia_ampli}}
\end{center}
\caption{Specializations of the architecture in
Fig.~\ref{fig:block-dia-generic} for \ac{LP} signals and a binary-output ΔΣ
modulator: \ac{A/D} converter with \acs{PCM}
output~\protect\subref{sfig:block-dia-adc}; \ac{D/A} converter with
\acs{PCM} input~\protect\subref{sfig:block-dia-dac}; switched-mode
amplifier with \acs{PCM} input~\protect\subref{sfig:block-dia-ampli}.}
\label{fig:block-dia-specialized}
\end{figure}
We claim that whenever the designer has limited or no control over the output
filter, the noise shaping properties of the ΔΣ modulator should not be designed
after the signal properties alone. Conversely, the designer, aware of the
limitations induced by a sub-optimal output filter, should explicitly consider
them to pursue the best possible reduction of the quantization noise. In the
following Sections, we formalize this claim providing a novel design flow for
ΔΣ modulators, based on the output filter features. Figure~\ref{fig:flows}
graphically summarizes the differences between a traditional flow and ours. In
the former \subref{sfig:flow-traditional}, the modulator noise shaping features
are designed after the signal properties alone. When these features are
obtained, an \emph{optimum} output filter is designed to take the best possible
advantage of them. In other words, the output filter is not a constraint, but a
degree of freedom to be exploited to make the most of the modulator noise
shaping profile. In our flow \subref{sfig:flow-new}, the first thing being
assigned is the output filter, for which not just the signal properties but
also other factors related to the context where the modulator is applied must
be considered (as it happens in the examples in Figs.~\ref{sfig:block-dia-dac}
and~\ref{sfig:block-dia-ampli}). Then, the modulator noise shaping features are
designed to cope at best with the filter. Thus, for us the output filter is a
constraint to be managed.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:flow-traditional}]{%
\includegraphics[scale=0.52]{flow_traditional}}\qquad
\subfloat[\label{sfig:flow-new}]{%
\includegraphics[scale=0.52]{flow_new}}
\end{center}
\caption{Traditional \protect\subref{sfig:flow-traditional} and proposed
\protect\subref{sfig:flow-new} design flows.}
\label{fig:flows}
\end{figure}
The proposed approach stems from interpreting the modulator as a heuristic
solver for a \ac{FA} problem \cite{Callegari:TSP-58-12}. It results in a
\ac{FIR} \ac{NTF}, obtained via \ac{SDP} \cite{Boyd:CO-2009}. The restriction
to \ac{FIR} \acp{NTF} often results in higher order modulators than in
conventional flows, but in many applications this is not an issue, as discussed
in \cite{Nagahara:TSP-60-6}. The minimization used to define the filter
coefficients respects the most important design constraints for ΔΣ modulators
\cite{Schreier:UDSDC-2004, Lee:Thesis-1987, Chao:TCAS-37-7} and thus enables a
robust design. It is formalized taking advantage of the \ac{KYP} lemma
\cite{Rantzer:SCL-28-1, Iwasaki:TAC-50-1}. This is not the first time that the
\ac{KYP} lemma is applied to the optimization of ΔΣ modulators, yet in
precedent cases the goals of the optimization were quite different
\cite{Nagahara:TSP-60-6, Osqui:ACC-2007}. Furthermore, previous attempts at
considering the output filter features in the design of ΔΣ modulators are
scarce and followed different strategies \cite{Gustavsson:TCAS1-57-12}.
In the last part of the paper, we provide extensive design examples, showing
how the proposed design strategy can consistently outperform conventional ones
and is also much more flexible, being capable of managing all kinds of
modulators, including multi-band ones, in a completely homogeneous
way. Furthermore, the strategy is often more robust. The examples show that, in
conjunction with output filters lacking too steep features, it results in
\acp{NTF} lacking steep features too. Consequently, as a positive side effect,
one often gets less extreme modulators that tend to be less prone to
misbehavior and deviation from expected performance.
\section{Notation}
For the sake of clarity and compactness, we make use of specific notations
relative to matrices and dynamical systems.
Matrices and vectors are generally indicated by capital italic letters in a
bold font as in $\mat A$, although for homogeneity with previous publications
and the Literature some vectors may be uncapitalized as in $\vec x$. When it
is necessary to extract a sub-matrix from a matrix, the following notation
applies: $\mat A_{1:3,4:5}$ is the sub-matrix obtained from entries that belong
to the rows from 1 to 3 and the columns from 4 to 5 in $\mat A$. The colon is
saved if the two values on its sides are the same. For instance, $\mat
A_{1,4:5}$ is the sub-matrix (row-vector) obtained from values in the first row
and in columns from 4 to 5 in $\mat A$. A thick dot $\bullet$ can used as a
shorthand for \emph{beginning} or \emph{end} depending on the side of the colon
where it appears. For instance $\vec a_{2:\bullet}$ is a sub-vector containing
entries from the second to the last one in $\vec a$. Coherently with the
notation concepts illustrated above, $\bullet:\bullet$ can be replaced by a
single thick dot to be interpreted as a shorthand for \emph{all}. For instance,
$\mat A_{1,\bullet}$ is the first row of $\mat A$. The same notation can be
used to indicate matrix and vector elements. For instance, $\mat A_{2,1}$ is
the element at the second row, first column in matrix $\mat A$. Note that when
this kind of indexing is applied, indexes always start at 1. In cases where
matrices or vectors need to be filled with values taken from other sequences,
parentheses are used as in the following example: $\mat A = (a_{i,j})$ for $i
\in (0, \dots, 7)$ and $j \in (-1, \dots, 3)$ means that $\mat A$ is an
$8\times 5$ matrix, where $\mat A_{1,1} = a_{0,-1}$, and so on. When it is not
otherwise declared, vectors are \emph{column} vectors. The transposition
operator $\transposed$ is often used to more compactly enumerate their entries
in a row, as in $\vec a = (a_1, a_2, a_3)\transposed$.
With respect to dynamical systems, a compact matrix notation is often used for
their state space model. For instance, if one has a discrete-time system
$\mathcal{G}$ with model
\[
\begin{cases}
\vec x(k+1) = \mat A \vec x(k) + \mat B \vec u(k)\\
\vec y(k) = \mat C \vec x(k) + \mat D \vec u(k)
\end{cases}
\]
one may write
\[
\mathcal{G}=
\left(\hspace{-0.5ex}
\begin{array}{c|c}
\mat A & \mat B\\[-0.5ex]
\hlx{hv}
\mat C & \mat D
\end{array}
\hspace{-0.5ex}\right)(z)
\]
where $(z)$ recalls the nature of the equations, based on time differences.
\section{Background}
\subsection{The ΔΣ modulator architecture and design constraints}
\label{ssec:design-constraints}
Fig.~\ref{sfig:ds-real} represents a generic architecture for a ΔΣ modulator
including a \emph{feedforward} filter $\FF(z)$, a \emph{feedback} filter
$\FB(z)$ and a quantizer. All signals are assumed discrete-time and the
operation is timed by a fast clock with frequency $f_\Phi$.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:ds-real}]{%
\includegraphics[scale=0.55]{ds}
}\qquad
\subfloat[\label{sfig:ds-linearised}]{%
\includegraphics[scale=0.55]{ds-eq}
}%
\end{center}
\caption{General ΔΣ architecture and approximate linear model.}
\label{fig:ds-generic}
\end{figure}
Owing to the quantizer, the modulator is hard to tackle formally. A common
approach is to approximate it by the ``linearized'' architecture
in~\subref{sfig:ds-linearised}, where the quantisation noise $\epsilon(n)$ is
assumed to be white and independent of the input signal. The quantity $c$
models the \emph{average} quantizer gain. When the loop operates correctly
(namely, when the quantizer is not \emph{overloaded}), $c$ can be
assumed approximately unitary \cite{Schreier:UDSDC-2004, Nagahara:TSP-60-6} as
we indeed assume in our discussion. In the same conditions, the quantization
noise amplitude can be assumed to be bounded by half the quantization step
$\Delta$. Typically, the quantization noise distributes approximately uniformly
in value, leading to a \ac{PDF} $\rho_\epsilon(x)$ approximately equal to
$\nicefrac{1}{\Delta}$ as long as $\nicefrac{-\Delta}{2} \le x \le
\nicefrac{\Delta}{2}$ and approximately null otherwise. This consideration
enables a first, rough estimation of the input noise power in the linearized
modulator, setting it at $\sigma^2_\epsilon = \int_{-\infty}^{\infty} x^2
\rho_\epsilon(x) \; d\!x = \nicefrac{\Delta^2}{12}$. This is the average
quantization noise energy per sample. With it, the whiteness assumption on the
quantization noise lets one easily express the quantization noise \ac{PDS} over
the normalized angular frequency axis $\omega$ as
$E(\omega)=\nicefrac{\Delta^2}{12\pi}$. Note that we use single sided spectra
and $\omega\in[0,\pi]$.
The linearity of the approximate model is further exploited to decompose the
output in the contributions due to the input and to the quantisation noise
yielding $X(z) = \STF(z)\, W(z) + \NTF(z)\, E(z)$ where
$\STF(z)=\nicefrac{\FF(z)}{(1+ \FF(z)\, \FB(z))}$ and $\NTF(z)=\nicefrac{1}{(1+
\FF(z)\, \FB(z))}$. Alternatively, if $\STF(z)$ and $\NTF(z)$ are assigned,
one has
\begin{equation}
\begin{cases}
\FF(z)=\frac{\STF(z)}{\NTF(z)}\\
\FB(z)=\frac{1-\NTF(z)}{\STF(z)}.
\end{cases}
\label{eq:FF+FB}
\end{equation}
In typical applications, one wants the signal component to be passed from input
to output without alteration, so the \ac{STF} is unitary, and linear phase
(e.g., $\STF(z)=z^{-d}$, where $d$ is an integer delay).
It is now possible to discuss some constraints posed by the need to practically
implement the loop.
\begin{asparaenum}
\item For the stability of the linearized model, one needs the \ac{NTF} to be
stable (the stability of the \ac{STF} is automatically guaranteed by taking
it to be $z^{-d}$).
\item The \ac{NTF} needs to be causal (the causality of the \ac{STF} is
automatically guaranteed by taking it to be $z^{-d}$).
\item $\FF(z)$ and $\FB(z)$ need to be causal.
\item The loop cannot be algebraic. This means that $\FF(z) \FB(z) =
\nicefrac{(1-\NTF(z))}{\NTF(z)}$, which represents the loop transfer
function, needs to be strictly causal.
\setcounter{savecounterI}{\theenumi}
\end{asparaenum}
Furthermore, it is necessary to observe that the conditions at point 1 are
necessary and sufficient for the stability of the linearized loop, but not
sufficient for the stability of the real loop, because the quantizer is
strongly non-linear. Determining strict conditions for the stability of the
non-linear loop is still an impossible task in general terms. However, the
analysis in \cite{Schreier:UDSDC-2004} and \cite[Sec.~IV-A]{Nagahara:TSP-60-6}
clearly indicate that such stability cannot be given just by the properties of
the loop filters, but must necessarily depend also on the maximum amplitude
that the input signal $w(n)$ takes over time, namely $\norm{w}_\infty =
\max_{n\in\Nset{N}} \abs{w(n)}$, so that the stability is often more difficult to
achieve at large inputs. The same analysis ties the stability to the peak of
the amplitude response of the \ac{NTF}, indicating that the higher the peak
value, the more critical the stability. Following this consideration, practical
designs tend to rely on an empirical rule based on the limitation of the
\ac{NTF} gain (Lee criterion) \cite{Schreier:UDSDC-2004,
Chao:TCAS-37-7}. Hence, one has a further constraint in addition to those
above:
\begin{asparaenum}
\setcounter{enumi}{\thesavecounterI}
\item The peak of the \ac{NTF} amplitude response must be bounded to
a low value, namely
\begin{equation}
\max_{\omega \in [0,\pi]} \abs{\NTF\left(\ee^{j\omega}\right)} < \gamma
\label{eq:lee-constraint}
\end{equation}
\end{asparaenum}
where the constant $\gamma$ depends on the number of quantization levels. For
binary quantizers, $\gamma$ should be less than $2$ and is typically set at
$1.5$. Incidentally, note that when a modulator turns out to overload its
quantizer, it is often possible to retry its design with a lower $\gamma$. For
modulators where the \ac{NTF} is high-order, which are more subject to
misbehavior, it is frequent to reduce $\gamma$ to $1.4$ or even slightly lower
values. However, having to reduce $\gamma$ too much also reduces the
effectiveness of the \ac{NTF}.
From the discussion about point 5, it should be clear that, differently from the
previous four, this is neither a necessary nor a sufficient condition. Rather, it
is just a requirement capable of making the ΔΣ modulator more \emph{likely} to
operate correctly in a wide range of practical cases.
Before proceeding to introduce design flows, it is worth mentioning that it can
be convenient to slightly reword the criteria 1-4. With reference to point 4,
say that the \ac{NTF} is $\nicefrac{B_\NTF(z)}{A_\NTF(z)}$. The loop function
is thus $\nicefrac{(A_\NTF(z)-B_\NTF(z))}{A_\NTF(z)}$. To have it strictly
causal, the order of $A_\NTF(z)$ must be the same as the order of
$B_\NTF(z)$. Hence, one has
\begin{asparaenum}
\item[2a)] The \ac{NTF} needs to be causal but not strictly causal.
\end{asparaenum}
Furthermore, there must be cancellations (at least one) in
$A_\NTF(z)-B_\NTF(z)$. The first cancellation can only happen if
\begin{asparaenum}
\item[4a)] The first coefficient in the impulse response of the \ac{NTF} is
unitary.
\end{asparaenum}
If 2a is satisfied, the causality of $\FF(z)$ is always guaranteed, while the
causality of $\FB(z)$ can certainly be guaranteed by taking $\STF(z)=1$. Hence,
one can, with no loss of generality, always take $\STF(z)=1$ and consider
conditions 1, 2a, 4a, 5 instead of 1-5. This makes the modulator design
completely determined after the design of the \ac{NTF}.
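As a minimal sketch of this last remark (the function and its naming are ours),
with $\STF(z)=1$ Eqn.~\eqref{eq:FF+FB} maps any \ac{NTF}, e.g., a \ac{FIR} one
with coefficients $a_k$ and $a_0 = 1$, directly to the loop filters:
\begin{verbatim}
import numpy as np

def loop_filters_from_ntf(a):
    # With STF(z) = 1: FF = 1/NTF, FB = 1 - NTF.
    # For a FIR NTF with a[0] = 1, FB is FIR and
    # strictly causal (first coefficient is 0).
    a = np.asarray(a, dtype=float)
    ff_num, ff_den = np.array([1.0]), a
    fb = -a
    fb[0] = 1.0 - a[0]
    return (ff_num, ff_den), fb
\end{verbatim}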
\subsection{Conventional design flows}
From the Introduction, it should be clear that common design flows pay great
attention to the noise present at the output of the modulator in the signal
band $\set{B}$. Note that $\set{B}$ is a subset of the normalized angular
frequency interval $[0,\pi]$ that may contain multiple sub-bands. The in-band
noise power at the output of the modulator is
\begin{multline}
\sigma^2_{\set{B}} = \int_{\set{B}}
E(\omega) \abs{\NTF\left(\ee^{\ii\omega}\right)}^2 \, d\omega = \\
\frac{\Delta^2}{12\pi} \int_{\set{B}}
\abs{\NTF\left(\ee^{\ii\omega}\right)}^2 \, d\omega .
\label{eq:inbandnoise}
\end{multline}
For instance, in the renowned case of a first-order, binary \ac{LP} modulator
with $\FF(z)=\nicefrac{1}{z-1}$ (the feedforward path is an accumulator) and
$\FB(z)=1$, one has $\STF(z)=z^{-1}$ and $\NTF(z)=1-z^{-1}$. The \ac{NTF} is a
first order differentiator having the \ac{HP} response
\begin{equation}
\abs{\NTF\left(\ee^{\ii\omega}\right)}=
\abs{1-\ee^{-\ii\omega}}=
2 \sin(\nicefrac{\omega}{2}) .
\label{eq:ntf-response-1}
\end{equation}
Letting $B$ be the overall width of set $\mathcal{B}$ and defining the \ac{OSR}
as $\OSR=\nicefrac{f_\Phi}{2B}$, in this \ac{LP} case the integral in
Eqn.~\eqref{eq:inbandnoise} turns out to be computed on
$[0,\nicefrac{\pi}{\OSR}]$ and thus reduces to the well known result
\begin{equation}
\sigma^2_{\mathcal{B}} = \frac{\Delta^2}{12\pi}
\int_{0}^{\nicefrac{\pi}{\OSR}} 4 \sin^2(\nicefrac{\omega}{2})\,
d\omega \approx \frac{\Delta^2}{12} \frac{\pi^2}{3 \OSR^3}
\label{eq:noise-diff-1}
\end{equation}%
where the approximation follows from $\sin(\nicefrac{\omega}{2}) \approx \nicefrac{\omega}{2}$ and is valid when the \ac{OSR} is large enough. The result
above can be generalized to higher order \acp{NTF} taking
\begin{equation}
\NTF(z)=(1-z^{-1})^P=\frac{(z-1)^P}{z^P}
\label{eq:ntf-differentiator-P}
\end{equation}
i.e., making the \ac{NTF} a $P$ order differentiator (all poles in $0$ and all
zeros in $1$). This changes Eqn.~\eqref{eq:noise-diff-1} into the more general
\begin{equation}
\sigma^2_{\mathcal{B}} \approx \frac{\Delta^2}{12} \frac{\pi^{2P}}{(2P+1)
\OSR^{2P+1}} .
\label{eq:noise-diff-P}
\end{equation}%
Three things are worth noticing: (i) the \ac{NTF} in
Eqn.~\eqref{eq:ntf-differentiator-P} is only suitable for \ac{LP} modulators;
(ii) it does not respect criterion 5 (for instance the amplitude response in
Eqn.~\eqref{eq:ntf-response-1} peaks at 4); and (iii) it does not minimize
$\sigma^2_{\set{B}}$.
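As a quick numerical cross-check (the script and its naming are ours), one can
compare the exact integral in Eqn.~\eqref{eq:inbandnoise} for the \ac{NTF} in
Eqn.~\eqref{eq:ntf-differentiator-P} with the approximation in
Eqn.~\eqref{eq:noise-diff-P}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def inband_noise(P, osr, delta=2.0):
    # For the P order differentiator NTF,
    # |NTF(exp(jw))|^2 = (2 sin(w/2))^(2P):
    # exact in-band noise vs large-OSR formula.
    exact = delta**2 / (12 * np.pi) * quad(
        lambda w: (2 * np.sin(w / 2))**(2 * P),
        0, np.pi / osr)[0]
    approx = (delta**2 / 12) * np.pi**(2 * P) / (
        (2 * P + 1) * osr**(2 * P + 1))
    return exact, approx

for P in (1, 2, 3):
    print(P, inband_noise(P, osr=32))
\end{verbatim}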
Conventional design flows consequently aim at choosing a $P$ order \ac{NTF}
minimizing the expression in \eqref{eq:inbandnoise} while respecting
requirements 1, 2a, 4a, 5. Such a minimization is by no means easy and is
generally tackled with approximate and iterative methods. The Literature
proposes different variants. Notable ones are thoroughly described in
\cite{Schreier:UDSDC-2004} and effectively implemented in
\cite{Schreier:DELSIG}. An alternative strategy still based on the signal
features alone aims at a \emph{min-max} optimization of the quantization noise,
i.e., at minimizing the peak of the integrand in \eqref{eq:inbandnoise}, rather
than the integral itself\cite{Nagahara:TSP-60-6}.
Often, a key idea is to initially focus on a \ac{LP} modulator (namely, a
\ac{HP} \ac{NTF}), which can later be mutated into a \ac{BP} modulator if
needed (namely, transforming the \ac{NTF} into a \ac{BS} transfer function).
Intuitively, a starting point can be Eqn.~\eqref{eq:ntf-differentiator-P},
which satisfies requirement 4a. The zeros can then be moved away from $z=1$
(dc) to spread them onto the portion of the unit circle corresponding to
frequency values from $0$ to $\nicefrac{\pi}{\OSR}$. This guarantees that the
\ac{NTF} can remain more uniformly low in the signal band. An optimal zero
placement can be obtained by considering \eqref{eq:inbandnoise} for an \ac{NTF}
with $P$ zeros placed onto the unit circle and by nulling its gradient taken
with respect to such zeros. In \cite{Schreier:UDSDC-2004} optimal values are
tabled for $P=1,\dots, 8$. A second step is to push the poles away from $0$
closer to $z=1$, letting them lie within the unit circle on a curve
surrounding $z=1$ and confining the portion of the unit circle corresponding to
the signal bandwidth. This has the effect of limiting the gain of the \ac{NTF}
out of the signal band and is instrumental in respecting requirement 5. Some
common assumptions used in this optimization are that: (i) the zeros of the
\ac{NTF} can be assigned (almost) independently of the poles (namely, the poles
have negligible effect on the in-band noise or, alternatively, the denominator
of the \ac{NTF} has an almost flat response in the signal band); and (ii)
requirement 5 can be verified by assuming that the \ac{NTF} peaks at
$\omega=\pi$.
As an example, the approach above is implemented in the \texttt{synthesizeNTF}
function in the well known DELSIG toolbox \cite{Schreier:DELSIG}. The idea is
to take the \ac{NTF} zeros in $z=1$ or to spread them according to the
minimization procedure described above and then to take the poles of the
\ac{NTF} so that they correspond to a maximally flat \ac{LP} filter. The
bandwidth of the filter implied by the poles is then adjusted until requirement
5 is satisfied. In practice an independent optimization of the zeros and the
poles of the \ac{NTF} is practiced, which may lead to some issues for
particular design specifications \cite[Sec.~8.1]{Schreier:UDSDC-2004}.
A slight generalization of this procedure consists in choosing an \ac{NTF}
approximation type (e.g., Butterworth, inverse Chebyshev, etc.), and designing
a \ac{HP} \ac{NTF} so that it is in the form $\prod_{i=1}^{P}
\nicefrac{(z-z_i)}{(z-p_i)}$, where $z_i$ and $p_i$ are the zeros and poles (so
as to satisfy requirements 2a and 4a), and it has a cut-off angular frequency
$\omega_t$ \cite[Sec.~4.4.1]{Norsworthy:DSDC-1996}. Initially $\omega_t$ is set
just slightly above the upper edge of the signal bandwidth. Then, the value of
$\abs{\NTF(-1)}$ is verified and the filter is iteratively re-designed with
different values of $\omega_t$, until condition 5 is fulfilled. This means
reducing $\omega_t$ if the \ac{NTF} peaks at too high values and enlarging it
otherwise. For some filter forms, alternatively to (or together with)
$\omega_t$, the stop-band gain (or ripple) can be used as a degree of freedom
to satisfy condition 5. For instance, the DELSIG
\texttt{synthesizeChebyshevNTF} uses an inverse Chebyshev form (i.e., Chebyshev
filter with ripple in the stopband) \cite{Schreier:DELSIG}.
As a final remark, note that the most recent design flows may rely on more
sophisticated optimization strategies \cite{Nagahara:TSP-60-6}, but still base
them on the signal features (namely, on $\mathcal{B}$) alone.%
\subsection{Criticism of conventional design flows}
\label{sec:issues}
Here we briefly summarize some potential issues with conventional design flows.
\begin{enumerate}[a)]
\item Conventional flows share a major trait in assuming that a modulator would
be \emph{perfect} if it could push all the quantization noise away from the
signal band. However, ΔΣ modulators are almost always used together with
output/reconstructions filters as in Figs.~\ref{fig:block-dia-generic}
and~\ref{fig:block-dia-specialized}. Thus, this view of perfection in the
modulator implies another assumption of perfection on the filter side. For a
perfect modulator, the modulator+filter ensemble can work optimally only if
the filter can let through all that is in the signal band and reject all that
is outside. Unfortunately, this on-off behavior is impossible to
implement. When the output filter is imperfect, a modulator that is perfect
under this standard can lead to an overall modulator+filter behavior worse
than that of an imperfect modulator. Thus, in many cases one is better off
adopting a different view of perfection, taking into account the features of
the output filter from the very start. This is particularly important in
cases like those in Figs.~\ref{sfig:block-dia-dac}
and~\ref{sfig:block-dia-ampli} where the output filter is analog and its
specifications cannot be made too strict without incurring high
costs. Nonetheless, even in cases like Fig.~\ref{sfig:block-dia-adc}, where
the output filter is digital, taking its response into account can be
beneficial. In fact, the filter is functional to decimation and the
out-of-band noise that may leak out of it is no less important than the
in-band noise since the decimator/resampler aliases it onto the signal band.
\item Many conventional design flows, where the location of the \ac{NTF} zeros
is decided independently from the \ac{NTF} poles, provide good results only
if the denominator polynomial of the \ac{NTF} turns out to be almost constant
in the signal band. In some cases, and particularly when the \ac{OSR} is
relatively low, this assumption may prove untrue, leading to a sub-optimal
design.
\item In some design flows, the cut-off frequency of the \ac{NTF} is used as a
degree of freedom to satisfy the constraint $\norm{\NTF}_\infty<\gamma$. In
some cases, particularly when $\gamma$ is close to $1$ or the \ac{OSR} is
low, this may result in some noise leaking into the signal band.
\item Most conventional design flows result in \acp{NTF} with steep transitions
between the pass band and the stop band, particularly when the \ac{NTF} is
high order. This behavior is obviously inherent in the minimization of
$\sigma^2_{\set{B}}$. However, it may exacerbate the differences between the
linearized and real modulator model. In turn, this may lead to inaccurate SNR
predictions and in extreme cases to a lower robustness against
instability.\footnote{%
Even if we cannot provide a formal proof of this phenomenon, we have
observed it in many cases and it has intuitive explanations. We report
one. If a modulator could be implemented \emph{fully respecting} the
specification of an extremely steep NTF (e.g., brick-wall), then violations
of information theory principles could occur. In fact, one could recover
the modulator input information \emph{without any loss due to quantization}
by using a brick-wall output filter, thus obtaining an information rate at
the modulator output equal to that at the input. Since the latter can be
arbitrarily large, this could imply an output information rate higher than
the bit-rate, which is absurd. Thus, one can expect the conformance between
the approximated linear model and the actual nonlinear model to deteriorate
as the modulator is designed to have steeper NTFs, bringing in unexpected
effects potentially including instability.}
\item Many conventional design flows start with an \ac{LP} modulator and obtain
other modulator types (e.g. \ac{BP}) via transformations. This makes it
extremely hard if not impossible to cope with unusual modulator types (e.g.,
multi-band) that may be required by some applications.
\end{enumerate}
\section{The proposed \ac{NTF} optimization}
In this Section, we mainly deal with point a) in the list above. Nonetheless,
as a side effect, the proposed solution also addresses all the other points.
It is worth anticipating that our design strategy assumes and requires the
\ac{NTF} to be \ac{FIR}. In practice, this is not a severe limitation. As a
matter of fact, this choice is perfectly in line with the elementary, original
high-order \ac{NTF} form in Eqn.~\eqref{eq:ntf-differentiator-P}, where all the
poles fall in the origin. With respect to this, we merely move the
zeros. Consequently, our proposal can be seen as a strategy where only the
\ac{NTF} zeros are optimized. Lack of optimization for the poles means that
results comparable to strategies which optimize the poles can only be achieved
at a higher filter order. Indeed, this is the case. Yet, taking a higher filter
order is not a problem since, contrary to conventional designs, we can
synthesize high order \acp{NTF} (even 30-50 or more) without hindering the
modulator stability. Furthermore, other \ac{FIR} based strategies exist
\cite{Nagahara:TSP-60-6}, also requiring large modulator orders.
\subsection{Form to be optimized}
As hinted in \cite{Callegari:TSP-58-12,Bizzarri:ECCTD-2009}, in order to deal
with the output filter, one should interpret the ΔΣ modulator as a heuristic
solver for an \ac{FA} problem. Fig.~\ref{fig:filtered-approx} illustrates the
problem nature. This consists in finding a discrete sequence $x(n)$ such that
it is as similar as possible to a high-resolution or continuous valued input
sequence $w(n)$, once passed through a filter $H(z)$. Clearly, $w(n)$ plays the
role of the modulator input, $x(n)$ of the modulator output and $H(z)$ is the
output filter. As shown in the figure, the concept of ``similarity'' can be
formalized as a minimization of the average power at the output of the filter
fed by $w(n)-x(n)$.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{fa}
\end{center}
\caption{Interpretation of the modulator operation as the solution of an
\ac{FA} problem.}
\label{fig:filtered-approx}
\end{figure}
To solve the \ac{FA} problem, the ΔΣ modulator, rather than being designed after
the minimization of $\sigma^2_{\set{B}}$ in Eqn.~\eqref{eq:inbandnoise}, needs
to be designed after the minimization of
\begin{multline}
\sigma^2_{H} = \int_{0}^\pi E(\omega)
\abs{\NTF\left(\ee^{\ii\omega}\right)}^2 \,
\abs{H \left(\ee^{\ii\omega}\right)}^2 \, d\omega = \\
\frac{\Delta^2}{12\pi} \int_{0}^{\pi}
\abs{\NTF\left(\ee^{\ii\omega}\right)}^2 \, \abs{H
\left(\ee^{\ii\omega}\right)}^2\, d\omega .
\label{eq:filterednoise}
\end{multline}
Let us assume that $\NTF(z)$ is achieved by a $P$ order FIR filter, with
coefficients $a_i$, collectable in a vector $\vec a = (a_0, \dots,
a_{P})\transposed$. Namely,
\begin{equation}
\NTF(z)=\sum_{k=0}^{P} a_k z^{-k} .
\end{equation}
Let us also assume that $H(z)$ corresponds to a filter whose impulse response
can safely be truncated to a finite number of samples collectable in a vector
$\vec h=(h_0,\dots,h_{M})\transposed$.
Let us finally define $G(z)=\NTF(z) H(z)$, so that the quantity to be
minimized can be written as $\int_{0}^{\pi} \abs{G(\ee^{\ii\omega})}^2
\:d\!\omega$. The impulse response corresponding to $G(z)$ can obviously be
obtained as the convolution of $a_i$ and $h_i$, as in
\begin{equation}
g_i=[a * h]_i = \sum_{j=-\infty}^{\infty} a_j\, h_{i-j}
\end{equation}
where $*$ is the convolution operator and we take $a_i=0$ for $i<0$ or $i>P$
and $h_i=0$ for $i<0$ or $i>M$. Clearly, there are only a finite number of
non-null entries in $g_i$, for $i=0,\dots,M+P$.
Finally, recall the discrete form of Parseval's theorem, applied to $G(z)$
\begin{equation}
\frac{1}{2\pi}\int_{-\pi}^{\pi} \abs{G(\ee^{\ii\omega})}^2 \:d\!\omega =
\sum_{i=-\infty}^{\infty} |g_i|^2 .
\end{equation}
By substitution, neglecting all multiplicative constant terms, the quantity
to be minimized can be expressed as
\begin{equation}
\sum_{i=0}^{M+P} \left( \sum_{j=0}^{P} a_j\, h_{i-j} \right)^2
\end{equation}
that can be further expanded into
\begin{equation}
\sum_{i=0}^{M+P}\; \sum_{j=0}^{P}\; \sum_{k=0}^{P}
a_j\, a_k\, h_{i-j}\, h_{i-k} .
\end{equation}
By swapping the sums, one gets
\begin{equation}
\sum_{j=0}^{P}\; \sum_{k=0}^{P}
a_j\, \left(\sum_{i=0}^{M+P}h_{i-j}\, h_{i-k}\right)\; a_k .
\end{equation}%
that can be put in a more compact form exploiting a $(P+1)\times (P+1)$ matrix
$\mat Q$ defined as
\begin{equation}
\mat Q = (q_{j,k}) ~~ \text{where} ~~
q_{j,k} = \sum_{i=0}^{M+P} h_{i-j}\, h_{i-k}
\end{equation}
for $j, k \in \{0,\dots, P\}$. With this, the form to be minimized becomes
\begin{equation}
\vec a\transposed\; \mat Q\; \vec a .
\label{eq:quadratic-minimization}
\end{equation}
Hence it is evident that one has a \emph{quadratic optimization problem}
defined over the filter coefficients. Clearly, $\mat Q$ must be positive
semi-definite, since $\sigma^2_H$ in~\eqref{eq:filterednoise} cannot be
negative.
It is worth observing that in the definition of $\mat Q$ the summation can be
extended to infinity as in
\begin{equation}
q_{j,k} = \sum_{i=-\infty}^{\infty} h_{i-j}\, h_{i-k}
\end{equation}
to make evident that $\mat Q$ is Toeplitz \cite{Gray:FTCIT-2-3} symmetric (it
is in fact an auto-covariance matrix). This is interesting not just as a
structural property, but also to reduce the computational burden of $\mat Q$ to
the mere computation of its first row or first column. Focusing on the first
row $q_{0,k} = \sum_{i=-\infty}^{\infty} h_{i}\, h_{i-k}$ one can also notice
that to further reduce the computation burden the summation bounds can be
restricted to $q_{0,k} = \sum_{i=k}^{M} h_{i}\, h_{i-k}$.
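For illustration (a sketch; the naming is ours), $\mat Q$ can be assembled from
the autocorrelation of $\vec h$ as follows:
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def build_Q(h, P):
    # Q is symmetric Toeplitz; its first row is
    # q_{0,k} = sum_i h_i h_{i-k}, i.e. the
    # autocorrelation of h at lags 0..P.
    h = np.asarray(h, dtype=float)
    acorr = np.correlate(h, h, mode="full")
    mid = len(h) - 1  # zero-lag index
    row = np.zeros(P + 1)
    n = min(P + 1, len(h))
    row[:n] = acorr[mid:mid + n]
    return toeplitz(row)
\end{verbatim}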
\subsection{Application of the design constraints}
The particular choice of a \ac{FIR} structure for the \ac{NTF} makes its
stability (point 1 in Sec.~\ref{ssec:design-constraints}) and causality (point
2a) inherent. Thus, only two constraints remain: the need for a unitary first
coefficient in the impulse response (point 4a); and the containment of
$\norm{\NTF}_\infty$ (point 5, Lee criterion). \smallskip
\subsubsection{Unitary first coefficient of the impulse
  response}
Thanks to the \ac{FIR} nature of the \ac{NTF}, this merely requires fixing $\vec
a_1 = a_0 = 1$. While this equality can be taken as a constraint for the
minimization of \eqref{eq:quadratic-minimization}, it is actually more
convenient to use it to reduce the problem size. To this aim, observe that
\begin{multline}
\vec a\transposed\;\mat Q\;\vec a =\\
\begin{pmatrix}
\vec a_1 & \vec a_{2:\bullet}\transposed
\end{pmatrix}\;
\begin{pmatrix}
\mat Q_{1,1} & \vec Q_{1,2:\bullet}\\
\vec Q_{2:\bullet,1} & \mat Q_{2:\bullet,2:\bullet}
\end{pmatrix}\;
\begin{pmatrix}
\vec a_1\\\vec a_{2:\bullet}
\end{pmatrix}=
a_0^2\, q_{0,0} +\\
a_0\, \vec a_{2:\bullet}\transposed\;\vec Q_{2:\bullet,1}+
a_0\, \vec Q_{1,2:\bullet}\; \vec a_{2:\bullet}+
\vec a_{2:\bullet}\transposed\;\mat Q_{2:\bullet,2:\bullet}\;\vec
a_{2:\bullet} .
\end{multline}%
Thanks to the symmetry of $\mat Q$, the two central entries can be rewritten as
$2 a_0\; \mat Q_{1,2:\bullet}\; \vec a_{2:\bullet}$. Since $q_{0,0}$ is constant
and $a_0=1$, the quantity to be minimized eventually
reduces to
\begin{equation}
\vec a_{2:\bullet}\transposed\;
\mat Q_{2:\bullet,2:\bullet}\;
\vec a_{2:\bullet} +
2\; \mat Q_{1,2:\bullet}\; \vec a_{2:\bullet} .
\label{eq:objective}
\end{equation}
In other words, the constraint can be exploited for the reduction of the
problem size at the mere cost of augmenting the minimization form by a linear
term.
\subsubsection{Lee criterion}
Eqn.~\eqref{eq:lee-constraint} represents an extremely complicated constraint.
Indeed, it can be recast based on a \emph{universal quantification}
\begin{equation}
\forall \omega \in [0,\pi] \quad \abs{\NTF\left(\ee^{\ii\omega}\right)}
< \gamma
\end{equation}
making evident that it summarizes an \emph{infinitely large} set of
inequalities in the frequency domain. Furthermore, the filter coefficients
appear in $\abs{\NTF\left(\ee^{\ii\omega}\right)}$ in a nonlinear fashion.
Fortunately, the \ac{KYP} lemma provides an extremely efficient way to convert
universally quantified frequency domain inequalities of this sort into an
alternative formulation based on the dual \emph{existential quantifier}. This
is extremely convenient since a minimization problem can often deal with
existential quantifiers with the mere introduction of dummy
variables. Furthermore, the \ac{KYP} lemma lets the frequency domain inequality
be expressed through an arbitrary realization of a dynamical system providing the
frequency domain behavior. This is also very convenient since it means that an
inequality over the \ac{NTF} can be directly expressed via the \ac{NTF}
coefficients.
From the \ac{KYP} lemma, the following property holds.
\begin{property}[Bounded real lemma]
\label{prop:bounded-real}\hnull\\
If a transfer function $T(z)$ admits a state space representation
$\mathcal{T}$ such that
\begin{equation}
\mathcal{T}=
\left(\hspace{-0.5ex}
\begin{array}{c|c}
\mat A & \mat B\\[-0.5ex]
\hlx{hv}
\mat C & \mat D
\end{array}
\hspace{-0.5ex}\right)(z)
\end{equation}
and $\mathcal{T}$ is stable and controllable, then an inequality such as
\begin{equation}
\norm{T}_\infty \leq \gamma
\end{equation}
can be recast in terms of the coefficients in $\mat A$, $\mat B$, $\mat C$
and $\mat D$ by asserting that
\begin{multline}
\exists \mat P
\text{ square symmetric positive definite matrix such that}\\
\begin{pmatrix}
\mat A\transposed\,\mat P\,\mat A - \mat P &
\mat A\transposed\,\mat P\,\mat B &
\mat C\transposed\\
\mat B\transposed\,\mat P\,\mat A &
\mat B\transposed\,\mat P\,\mat B - \gamma^2&
\mat D\\
\mat C & \mat D & -1
\end{pmatrix} \le 0
\label{eq:matrix-inequality}
\end{multline}
where the $\le$ sign is here used to denote a generalized inequality stating
negative semi-definiteness.
\end{property}
For an informal discussion of this property, see Appendix~\ref{app:lemma}.
Now, let $\mathcal{T}$ be a realization of the \ac{NTF} such that
\begin{multline}
\left(\hspace{-0.5ex}
\begin{array}{c|c}
\mat A & \mat B\\[-0.5ex]
\hlx{hv}
\mat C & \mat D
\end{array}
\hspace{-0.5ex}\right)(z)=\\
\left(\hspace{-0.5ex}
\begin{array}{ccccc|c}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 \\
0 & 0 & 0 & \cdots & 0 & 1 \\[-0.5ex]
\hlx{hv}
a_P & a_{P-1} & a_{P-2} & \cdots & a_1 & a_0
\end{array}
\hspace{-0.5ex}\right)(z) .
\end{multline}
This is a canonical realization, where $\mat A$ is responsible for making each
state variable a delayed version of the preceding one, so that the state
variables end up being a memory of the last $P$ input samples
\cite{Antoniou:DFAD-2000}. Evidently, this realization is minimal (thus
controllable) and only $\mat C$ depends on the filter coefficients being
optimized. Hence, the left hand side of
inequality~\eqref{eq:matrix-inequality} is affine in those coefficients.
Furthermore, it is \emph{affine} in the entries of $\mat P$. Hence, collecting
in a vector $\vec \xi$ all the filter coefficients $a_1, \dots, a_P$ and all
the independent entries of $\mat P$, so as to get an $L$-entry vector
$(\xi_1, \dots, \xi_L)\transposed$,
inequality~\eqref{eq:matrix-inequality} can be expressed as
$\mat M(\vec \xi) \leq 0$ with
\begin{equation}
\mat M(\vec \xi) = \mat M_0 + \sum_{i=1}^L \mat M_i\; \xi_i
\end{equation}
where $\mat M_0, \dots, \mat M_L$ are symmetric matrices. Regardless of the
entries of the $\mat M_i$ matrices (which are unimportant here), this shows
that~\eqref{eq:matrix-inequality} is an \ac{LMI}. This property is quite
important, as it states that~\eqref{eq:matrix-inequality} is a convex
constraint.
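For concreteness, the \ac{LMI} above can be assembled almost verbatim with
CVXPY. The following is a minimal sketch (not the actual code distributed with
the paper), assuming \texttt{P} holds the \ac{FIR} order and \texttt{gamma}
the Lee bound:
\begin{verbatim}
import numpy as np
import cvxpy as cp

A = np.eye(P, k=1)                    # delay-line shift matrix
B = np.zeros((P, 1)); B[-1, 0] = 1.0
c = cp.Variable((1, P))               # row (a_P, ..., a_1); a_0 = D = 1
D = np.ones((1, 1))
Pm = cp.Variable((P, P), symmetric=True)

LMI = cp.bmat([[A.T @ Pm @ A - Pm, A.T @ Pm @ B,            c.T],
               [B.T @ Pm @ A,      B.T @ Pm @ B - gamma**2, D],
               [c,                 D,                       -np.ones((1, 1))]])
# Symmetrize explicitly so the solver accepts the semidefinite constraints;
# the small epsilon stands in for strict positive definiteness of P.
constraints = [Pm >> 1e-9 * np.eye(P), (LMI + LMI.T) / 2 << 0]
\end{verbatim}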
\subsection{Summary of the optimization problem}
It is now possible to summarize the optimization problem. To find $P$ filter
coefficients $a_1, \dots, a_P$, one needs to build a problem with $L$ variables
$\xi_1, \dots, \xi_L$. The first $P$ of them are the filter coefficients
themselves, while the last $L-P$ are entries of matrix $\mat P$, instrumental
to the solution of the problem, but uninteresting and eventually to be
discarded.
The problem consists in the minimization of the convex quadratic form in
Eqn.~\eqref{eq:objective}, which is also a convex quadratic form in $\vec
\xi$. One has the constraint $\mat P>0$, which is an \ac{LMI} in $\vec \xi$ and
the constraint in Eqn.~\eqref{eq:matrix-inequality}, which has just been shown
to be an \ac{LMI} in $\vec \xi$. Altogether, one has a problem that can be
tackled by \ac{SDP}. In recent times, \emph{interior point methods}
\cite{Boyd:CO-2009,DeKlerk:ASP-2002} allow problems of this sort to be solved
in polynomial time with respect to the problem size. On commodity hardware,
problems with a thousand variables or more can be solved in a few
seconds. In our case the number of variables is dominated by the number of
independent entries in $\mat P$, which is a $P\times P$ matrix. Since $\mat P$ is
symmetric, this means having $\nicefrac{P}{2}\cdot(P+1)$ independent entries
and $\nicefrac{P}{2}\cdot(P+2)$ variables overall. Consequently, one can easily
go up to filter orders of $50$ or more. In practice, as
Section~\ref{sec:examples} shows, there is hardly a need to reach such high
orders. That section provides practical examples obtained with an \ac{SDP} code
distributed under a free, open source licence, proving that tackling our
optimization problem is not just possible, but also quite affordable.
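Continuing the sketch from the previous subsection, the whole problem can then
be stated and solved in a handful of CVXPY lines (\texttt{Q22} and
\texttt{q12} being the blocks from the size-reduction step; note that the
hypothetical variable \texttt{c} stores the coefficients in reverse order, and
that $\mat Q_{2:\bullet,2:\bullet}$ is positive semidefinite, being a block of
an auto-covariance matrix):
\begin{verbatim}
a_tail = cp.hstack([c[0, P - k] for k in range(1, P + 1)])  # (a_1,...,a_P)
objective = cp.Minimize(cp.quad_form(a_tail, Q22) + 2 * q12 @ a_tail)
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.CVXOPT)                  # CVXOPT as the SDP backend
a = np.concatenate(([1.0], a_tail.value))     # full NTF impulse response
\end{verbatim}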
\subsection{Positive side effects of the proposed optimization}
After having illustrated how the proposed \ac{NTF} optimization deals with
issue a) in Sec.~\ref{sec:issues}, it is worth considering also the other
items. Points b) and c) are automatically eliminated thanks to the different
design strategy. Particularly, the proposed methodology is completely agnostic
of the \ac{OSR}.
With respect to point d), our strategy tends to provide \acp{NTF} that are
\emph{only as steep as needed} and typically just as
steep as the reconstruction filter is. Eventually, with
respect to point e), our strategy can deal with any kind of modulator, even the
most unusual one, with no need for frequency transformations. For instance, a
multi-band modulator will obviously have a multi-band output filter. Feeding
such filter into the our design procedure, automatically leads to the required
\ac{NTF}.
\section{Design examples and comparison to conventional design flows}
\label{sec:examples}
The design examples proposed in this Section have been tested by coding our
strategy in the Python programming language taking advantage of the Numpy and
Scipy packages \cite{Oliphant:CS+E-9-3}. Python is a modern general purpose
programming language renowned for its conciseness and extensibility. Numpy and
Scipy expand it into a powerful, matrix-oriented numerical computing
environment that can be freely deployed on all major computing platforms.
\ac{SDP} has been addressed using a further Python module, the CVXOPT free
software package for convex optimization
\cite{Andersen:OML-3-2011}. Specifically, CVXOPT has been used as a backend
solver, while the CVXPY package \cite{DeRubira:CVXPY-2012} has been employed as
a modeling framework to express the optimization problem in a more natural form
under the rules of \ac{DCP} \cite{Grant:GOTI-7-2006}. Comparison to
conventional ΔΣ modulator design flows has been practiced by the DELSIG toolbox
\cite{Schreier:DELSIG} and the code provided with
\cite{Nagahara:TSP-60-6}. DELSIG has also been used for the time domain
simulation of the modulators. A sample of our code is available. Please refer
to Appendix~\ref{app:code} for information on how to obtain it.
\subsection{\ac{LP} modulator with first order output filter}
\label{sec:lp1}
The first design case regards a binary \ac{LP} modulator, targeting signal
synthesis and coupled with a 1\Us{st} order reconstruction filter. The signal
band extends from dc to \unit[1]{kHz} and the \ac{OSR} is $1024$ (i.e., the
modulator operates at approximately \unit[2]{MHz}). To avoid spurious
attenuation at the filter output, the reconstruction filter is designed with
its cut-off frequency set at \unit[2]{kHz}. The Lee coefficient $\gamma$ is set
at $1.5$. Fig.~\ref{fig:lp1-filters} shows the output filter profile and the
\acp{NTF} obtained with the \texttt{synthesizeNTF} function in DELSIG (for a
4\Us{th} order modulator with optimized zeros) and our approach (for a
12\Us{th} order \ac{FIR} \ac{NTF}).\footnote{%
We also tried to compare to the method in \cite{Nagahara:TSP-60-6}, but the
provided code seems to run into numerical stability problems for this very
large \ac{OSR}.}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{lp1-filters}%
\end{center}%
\caption{Output filter, proposed \ac{NTF} and \ac{NTF} obtained by a
conventional design flow, for the test case in Section~\ref{sec:lp1}.}
\label{fig:lp1-filters}
\end{figure}
The choice of a 12\Us{th} order for the proposed strategy is not accidental. As a
matter of fact, in the proposed approach the performance improves (i.e.,
$\sigma_H$ reduces) as the order is increased. However, the improvement is
initially very rapid, then it slows down. This is well evident in
Fig.~\ref{fig:lp1-convergence}, which shows the convergence to the optimal \ac{NTF}
shape. For orders higher than 6, the \ac{NTF} shape is almost
invariant. Clearly, it is convenient to stop increasing the order as soon as
$\sigma_H$ levels off, which happens slightly above 10. Interestingly, a
similar convergence is not experienced in other design strategies, which keep
delivering different (and improved, according to their merit factors) \acp{NTF}
as the order is increased, so that the limit is the loss of robust behavior or
the loss of numerical accuracy in the optimizer. Incidentally, this is the
reason why we can compare modulators having different orders. Indeed, we
compare the best modulator designed with the proposed strategy to modulators
designed with other strategies at a reasonable trade-off between quality and
robustness.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:lp1-conv-shapes}]{%
\includegraphics[scale=0.65]{lp1-convergence}}\\
\subfloat[\label{sfig:lp1-conv-perf}]{%
\includegraphics[scale=0.65]{lp1-convergence2}}
\end{center}
\caption{Convergence of the \ac{NTF} to its optimal shape for increasing
filter orders $5, 6, 9, 13, 18, 25$ \protect\subref{sfig:lp1-conv-shapes}
and improvement in minimization goal as the \ac{FIR} order is increased
\protect\subref{sfig:lp1-conv-perf}. In
\protect\subref{sfig:lp1-conv-shapes}, the six curves superimpose almost
perfectly. Data for the test case in Section~\ref{sec:lp1}.}
\label{fig:lp1-convergence}
\end{figure}%
Interestingly, the \ac{NTF} obtained by our design flow turns out to be by far
less aggressive than the conventional one, exhibiting a much lower attenuation
in the signal band. Nonetheless, its performance is better. An estimation of
the output SNR, for an input sinusoid with amplitude $A=0.4$ (normalized to the
quantization levels set at $\pm 1$) can be obtained as
\begin{equation}
\mathit{SNR}_{\text{expected}}=\frac{A^2}{2\sigma^2_H}
\end{equation}
and gives \unit[42.9]{dB} for the proposed approach and \unit[38.4]{dB} for the
conventional (\texttt{synthesizeNTF}) \ac{NTF}, a difference of
\unit[4.5]{dB}. Computing the SNR by time domain simulation (which lets one use
the actual nonlinear modulator model) returns \unit[42.4]{dB} for the proposed
technique and \unit[40.3]{dB} for the conventional one, so that the advantage
reduces to about \unit[2]{dB}, still being well perceivable. The SNR numbers
have been obtained by replicating in software the architecture in
Fig.~\ref{fig:filtered-approx}. Excited with the modulator input alone as
$w(n)$, it lets one measure the signal power at $e(n)$, while excited with both
the modulator input at $w(n)$ and output at $x(n)$ it lets one measure the
noise power at $e(n)$.
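In code, the expected-SNR estimate above is a one-liner (\texttt{sigma2\_H}
being assumed to hold the value of $\sigma^2_H$ computed from the optimized
\ac{NTF}):
\begin{verbatim}
import numpy as np
A = 0.4                                        # input amplitude
snr_db = 10 * np.log10(A**2 / (2 * sigma2_H))  # expected SNR in dB
\end{verbatim}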
A justification for the apparent paradox of having a better behavior with a
less aggressive \ac{NTF} comes from Fig.~\ref{fig:lp1-ortho}, which shows
$\abs{H\left(\ee^{\ii\omega}\right)}^2\abs{\NTF\left(\ee^{\ii\omega}\right)}^2$
in the two cases. Here, thanks to the linear scale, it is well evident that
the advantage of the conventional \ac{NTF} within the signal band is more than
compensated by the advantage of the proposed \ac{NTF} out of it.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{lp1-ortho}
\end{center}%
\caption{Integrand appearing in the definition of $\sigma^2_H$ for the
    proposed \ac{NTF} and the \ac{NTF} obtained by a conventional design
flow. Data for the test case in Section~\ref{sec:lp1}.}
\label{fig:lp1-ortho}
\end{figure}
It is even more interesting to note that a ΔΣ modulator based on the proposed
\ac{NTF} can behave much more robustly than a conventional one. Even with large
input signals, it can operate correctly, without overloading its
quantizer. Conversely, the 4\Us{th} order modulator obtained by the
\texttt{synthesizeNTF} design flow is much more fragile due to its
steepness. For instance, at a signal amplitude $A=0.7$ it already breaks,
unless the Lee coefficient $\gamma$ is lowered to $1.4$. By contrast, the
proposed \ac{NTF} makes the modulator work correctly up to $A=1$. Furthermore,
for $A$ values slightly in excess of $1$, where \emph{by definition} the modulator
is not meant to operate, one initially sees a graceful degradation of
performance rather than a full breakage. For instance, at $A=1.1$ one sees the
SNR reducing to \unit[30]{dB}. Conversely, this increased robustness
can be used to bring the Lee coefficient to higher values without breaking the
modulator operation, gaining a further small advantage in terms of SNR. For
instance, for the output filter under exam, the proposed design technique lets
a modulator be designed with $\gamma=2$, gaining $1$ further \unit{dB} in
SNR. As a matter of fact, we have verified that even $\gamma=4$ is tolerated,
although without any SNR advantage.
Intuitively, the increased robustness is a consequence of the fact that the
proposed technique makes the \ac{NTF} no steeper than really needed,
matching the steepness of the output filter. Incidentally, this explains the
convergence in Fig.~\ref{fig:lp1-convergence}. Once the required steepness is
reached, there is no need to raise the order any further.
Fig.~\ref{fig:lp1-pz} compares our pole/zero positioning to
conventional ones.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:lp1-pz-opti}]{%
\includegraphics[scale=0.65]{lp1-pz_opti}}\quad
\subfloat[\label{sfig:lp1-pz-delsig}]{%
\includegraphics[scale=0.65]{lp1-pz_delsig}}%
\end{center}%
\caption{Comparison of pole-zero placement for the proposed design
    strategy~\protect\subref{sfig:lp1-pz-opti} and a conventional one,
namely \texttt{synthesizeNTF} from
DELSIG~\protect\subref{sfig:lp1-pz-delsig}. Data from the
test case in Section~\ref{sec:lp1}.}
\label{fig:lp1-pz}
\end{figure}
A final remark can be dedicated to the computational resources needed by the
optimization. For \ac{FIR} order $P=12$, a business laptop with a Core
2 Duo (Penryn, 2009) CPU and \unit[4]{GB} of RAM shared with the video card
can perform the \ac{SDP} almost instantaneously.
\subsection{\ac{BP} modulator with output filter with steep
features}\label{sec:bp1}
The second test case regards again a binary modulator, this time for \ac{BP}
signals. The signal band is centered at $\unit[1]{kHz}$ and extends for
$\unit[400]{Hz}$. The \ac{OSR} is set at 64. This time, the output filter is
much steeper than in the previous example, consisting of an 8\Us{th} order
Butterworth filter. The Lee coefficient $\gamma$ is set at
$1.5$. Fig.~\ref{fig:bp1-filters} shows the output filter profile and the
\acp{NTF} obtained with: the \texttt{synthesizeNTF} function in DELSIG (for a
4\Us{th} order modulator with optimized zeros); the method in
\cite{Nagahara:TSP-60-6} (for a 49\Us{th} order modulator); and our approach
(for a 49\Us{th} order \ac{FIR} \ac{NTF}).
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{bp1-filters}
\end{center}
\caption{Output filter, proposed \ac{NTF} and \acp{NTF} obtained by two
    conventional design flows, for the test case in Section~\ref{sec:bp1}.}
\label{fig:bp1-filters}
\end{figure}
In this case, the higher roll-off of the output filter requires a higher
\ac{NTF} order for our methodology, as evident from the convergence analysis in
Fig.~\ref{fig:bp1-convergence}. From the second plot, a 32\Us{th} order
\ac{NTF} would already give good results. Note that the 49\Us{th} order
\ac{FIR} takes a couple of minutes to compute via \ac{SDP} on the same laptop
used for the previous test case.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:bp1-conv-shapes}]{%
\includegraphics[scale=0.65]{bp1-convergence}}\\
\subfloat[\label{sfig:bp1-conv-perf}]{%
\includegraphics[scale=0.65]{bp1-convergence2}}
\end{center}
\caption{Convergence of the \ac{NTF} to its optimal shape for increasing
filter orders $5, 8, 11, 15, 21, 28, 37, 49$
\protect\subref{sfig:bp1-conv-shapes} and improvement in minimization goal
as the \ac{FIR} order is increased \protect\subref{sfig:bp1-conv-perf}. In
\protect\subref{sfig:bp1-conv-shapes}, the last 2 curves superimpose almost
perfectly. Data for the test case in Section~\ref{sec:bp1}.}
\label{fig:bp1-convergence}
\end{figure}%
As for the previous test case, it is interesting to observe the integrand
appearing in the definition of $\sigma^2_H$. This is plotted in
Fig.~\ref{fig:bp1-ortho}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{bp1-ortho}%
\end{center}%
\caption{Integrand appearing in the definition of $\sigma^2_H$ for the
proposed \ac{NTF} and the \acp{NTF} obtained by two conventional design
flows. Data for the test case in Section~\ref{sec:bp1}.}
\label{fig:bp1-ortho}
\end{figure}
In this case, for a sinusoidal input with $A=0.75$ we get an SNR (from time
domain simulation) after the output filter of \unit[69.2]{dB} for our design
approach, \unit[67.0]{dB} for the \texttt{synthesizeNTF} design flow, and
\unit[68.1]{dB} for the method in \cite{Nagahara:TSP-60-6}. Note that trying to
pick a 6\Us{th} order \ac{NTF} with the \texttt{synthesizeNTF} design flow
would lead to a misbehaving modulator, while our approach enables increasing
the \ac{FIR} order even above $49$.
Fig.~\ref{fig:bp1-pz} compares our pole/zero positioning to the conventional
ones for this test case.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:bp1-pz-opti}]{%
\includegraphics[scale=0.65]{bp1-pz_opti}}\quad
\subfloat[\label{sfig:bp1-pz-delsig}]{%
\includegraphics[scale=0.65]{bp1-pz_delsig}}\\[-1cm]
\subfloat[\label{sfig:bp1-pz-naga}]{%
\includegraphics[scale=0.65]{bp1-pz_naga}}
\end{center}
\caption{Comparison of pole-zero placement for the proposed design
strategy~\protect\subref{sfig:bp1-pz-opti} and two conventional ones,
namely \texttt{synthesizeNTF} from
DELSIG~\protect\subref{sfig:bp1-pz-delsig} and Nagahara's strategy in
\cite{Nagahara:TSP-60-6}~\protect\subref{sfig:bp1-pz-naga}. Data from the
test case in Section~\ref{sec:bp1}.}
\label{fig:bp1-pz}
\end{figure}
This test case shows that the advantage of the proposed approach may fade a
little when the output filter has steep cutoff characteristics close to an
ideal on-off behavior. This is quite reasonable, since an on-off filter
behavior is exactly the premise on which conventional design flows are
founded. Yet, even in this case some advantages remain evident, including those
in terms of SNR.
\subsection{Multi-band modulator}
\label{sec:bpm}%
The last case that we consider is that of a multi-band modulator. This is
intractable in many conventional design flows \cite{Schreier:DELSIG}, although
it can be managed by the recent methodology in
\cite{Nagahara:TSP-60-6}.\footnote{Note that the method in
\cite{Nagahara:TSP-60-6} targets a slightly different goal for the single and
the multi-band case. Furthermore, it requires a new matrix inequality for
each band so that it can become increasingly demanding in terms of
computational power when their number is increased. Finally, the sample code
delivered with the paper cannot deal with cases where the signal bands have
  different widths, although it can be easily extended to do so.} %
Assume that the input signal has two bands, one centered at \unit[1]{kHz} and
\unit[400]{Hz} wide and the other centered at \unit[10]{kHz} and \unit[4]{kHz}
wide. Let the \ac{OSR} be 64 (i.e., $f_\Phi=\unit[2\cdot 64\cdot
(4000+400)]{Hz}$). Consider the case of a 2-band 8\Us{th} order Butterworth
filter at the output. As usual, for the modulator design consider the binary
case, with $\gamma=1.5$. Fig.~\ref{fig:bpm-filters} shows the output filter
profile and the \ac{NTF} obtained with our approach (for a 50\Us{th} order
\ac{FIR} \ac{NTF}). Obviously, it is not possible to design an \ac{NTF} for
this case using \texttt{synthesizeNTF}, so we provide comparison only to
the method in \cite{Nagahara:TSP-60-6} for the same \ac{FIR} order.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{bpm-filters}
\end{center}
\caption{Output filter, proposed \ac{NTF}, and conventional \ac{NTF}
synthesized by the method in \cite{Nagahara:TSP-60-6} for the test case
in Section~\ref{sec:bpm}.}%
\label{fig:bpm-filters}
\end{figure}
As for the previous test cases, Fig.~\ref{fig:bpm-convergence} shows the
\ac{NTF} convergence to the optimal shape as the \ac{FIR} order $P$ is
increased. Evidently, an \ac{FIR} order of 16 would already be enough to achieve
a good SNR.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:bpm-conv-shapes}]{%
\includegraphics[scale=0.65]{bpm-convergence}}\\
\subfloat[\label{sfig:bpm-conv-perf}]{%
\includegraphics[scale=0.65]{bpm-convergence2}}
\end{center}
\caption{Convergence of the \ac{NTF} to its optimal shape for increasing
filter orders $5, 8, 11, 15, 21, 28, 37, 49$
\protect\subref{sfig:bpm-conv-shapes} and improvement in minimization goal
    as the \ac{FIR} order is increased \protect\subref{sfig:bpm-conv-perf}. In
\protect\subref{sfig:bpm-conv-shapes}, the last 3 curves superimpose very
well. Data for the test case in Section~\ref{sec:bpm}.}
\label{fig:bpm-convergence}
\end{figure}
In this case, simulating the modulator for an input signal given by the
superposition of two tones at 1 and 10 \unit{kHz}, with amplitude $A=0.45$ for
both of them, gives an SNR of over \unit[46]{dB} at the filter output. The
modulator based on the method in \cite{Nagahara:TSP-60-6} is unstable at this
large input. At $A=0.40$ it operates correctly, though, and it can be taken as
a reference. In this condition, our method delivers a \unit[48.2]{dB} SNR versus a
\unit[42.3]{dB} SNR for the reference algorithm, i.e., an almost
\unit[6]{dB} advantage. This large advantage should be no surprise, since we
explicitly optimize for the SNR on the filtered output, while
\cite{Nagahara:TSP-60-6} uses a different merit factor.
What is interesting about this multi-band case is that it shows a rather
counter-intuitive behaviour. Looking at the input signal structure, one would
probably think that the modulator should put its quantization noise in 3
regions: at low frequencies, before the first signal band; at intermediate
frequencies, between the signal bands; and at high frequencies, above the second
signal band. Furthermore, one would think that the modulator should have zones
of very high attenuation for the noise in the two signal bands. Conversely,
our design approach shows that it is more convenient to use all the available
degrees of freedom of the \ac{NTF} to optimize the noise shaping at the high
frequencies, completely ignoring the two lower regions that are anyway extremely
thin and thus incapable of containing much noise. Additionally, it shows that it
would be a waste to strive to remove too much noise from the first signal band,
which is anyway very thin and thus incapable of contributing much to the overall
SNR. This is very well evident from the graph in Fig.~\ref{fig:bpm-ortho}
which shows
$\abs{H\left(\ee^{\ii\omega}\right)}^2\abs{\NTF\left(\ee^{\ii\omega}\right)}^2$,
namely the integrand in the expression of $\sigma^2_H$, both for our \ac{NTF}
and the one obtained following \cite{Nagahara:TSP-60-6}. Thanks to the linear
scale, it is apparent that it is much more important to achieve a good noise
allocation at the high frequencies above \unit[12]{kHz} than in the thin bands
between dc and \unit[800]{Hz} and between 1200 and
\unit[8000]{Hz}. Furthermore, it is interesting to look at the first peak in
the plot. This is due to the fact that the \ac{NTF} attenuates much less in the
first signal band than in the second. However, the linear scale makes this peak
appear as it really is: so thin that its \emph{mass} and thus its contribution
to the overall SNR is anyway very modest. Indeed, the \ac{NTF} based on
\cite{Nagahara:TSP-60-6}, which strives to remove a lot of noise also from the
first signal band, lacks this peak, but pays for it with a much higher integrand
right above the second band.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{bpm-ortho}%
\end{center}%
\caption{Integrand appearing in the definition of $\sigma^2_H$ for the
proposed \ac{NTF} and a conventional \ac{NTF} designed following the
method in \cite{Nagahara:TSP-60-6}. Data for the test case in
Section~\ref{sec:bpm}.}%
\label{fig:bpm-ortho}
\end{figure}
Finally, Fig.~\ref{fig:bpm-pz} shows the pole-zero placement, which is somewhat
similar to that in Fig.~\ref{fig:bp1-pz}, given that also in this case we end
up with a \ac{BS} \ac{NTF}.
\begin{figure}[t]
\begin{center}
\subfloat[\label{sfig:bpm-pz-opti}]{%
\includegraphics[scale=0.65]{bpm-pz_opti}}\quad
\subfloat[\label{sfig:bpm-pz-naga}]{%
\includegraphics[scale=0.65]{bpm-pz_naga}}
\end{center}
\caption{Pole-zero placement for the proposed design
strategy~\protect\subref{sfig:bpm-pz-opti} and the conventional one in
\cite{Nagahara:TSP-60-6}~\protect\subref{sfig:bpm-pz-naga} for the test
case in Section~\ref{sec:bpm}.}
\label{fig:bpm-pz}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have proposed a new design flow for ΔΣ modulators. Contrary
to conventional strategies, our methodology is aware of the output filter that
in most practical applications is placed at the modulator output. This is an
important difference, since common strategies assume that an ideal filter is
available, with a steep on-off behavior between the signal band and the noise
band. Consequently, they only strive to put most of the noise out of the
signal band. Conversely, our strategy strives to shape the quantization noise
so that the overall noise after the output filter can be minimized.
In practice the two approaches may become similar when a steep filter is indeed
available. Yet, they significantly differ when one can only count on a filter
with a non-ideal roll-off. In this latter case, our strategy consistently
provides a better behavior. It is worth underlining that this situation emerges
in a large number of applications. Specifically, whenever one uses ΔΣ
modulation for signal synthesis, power conversion or actuation, the output
filter is analog and often lacks aggressive specifications. As a matter of
fact, when one talks about actuation (ac motor drives, audio amplification), it
frequently happens that the filter is realized taking advantage of the actuator
input impedance. In this case, the filter designer may be quite constrained in
his choices. Extensive simulations (some of which are presented in the paper) have
shown evident SNR improvements in cases where the modulator works in
conjunction with not-so-good output filters. However, even when output filters
with a good roll-off are available there might be minor SNR increases.
Another distinctive advantage of the proposed approach is that it never
leads to an \ac{NTF} that is steeper than strictly needed. This helps
keep the modulator stable and resilient to input conditions that could
otherwise undermine its behavior (e.g., too large input signals).
Finally, the proposed methodology lets one tackle design cases that are often
unmanageable by conventional flows. As an example, we have proposed the case of
a modulator for multi-band signals.
The approach that we have described is based on the \ac{KYP} lemma and obtains
the \ac{NTF} through an optimization process exploiting \ac{SDP}. The resulting
\acp{NTF} are in the \ac{FIR} class. Current day algorithms enable a very rapid
design process even for high-order modulators. Typical computation times range
from a few seconds to a few minutes even on a standard laptop. In practice,
our simulations show that, in most cases, \ac{FIR} orders can be kept
relatively low.
As a final remark, note that, being based on a formal process, our strategy can
return truly optimal \ac{NTF} shapes. Interestingly, there are cases where these
may at first appear counter-intuitive, so that only a more thorough examination
reveals their justification.
A crucial step in the history of General Relativity (GR) was Einstein's adoption of the principle of general covariance\footnote{The terms general covariance, diffeomorphism invariance, and background independence are sometimes used interchangeably in the physics literature. In this paper, I understand these terms as distinct as laid out in~\cite{Pooley2015}. In Sec.~\ref{SecGenCov}, I will give explicit examples which separate these three concepts.} which states that the form of our physical laws should be independent of any choice of coordinate systems. At first, Einstein thought this property was unique to GR and that this is what set his theory apart from all of its predecessors. However, in 1917 Kretschmann pointed out that any physical theory can be written in a generally covariant form (i.e., in a coordinate-independent way). See~\cite{Norton1993} for a historical review of this point.
The modern understanding of the principle of general covariance is best summarized by Friedman~\cite{Friedman1983}:
\begin{quote}
\dots the principle of general covariance has no physical content whatever: it specifies no particular physical theory; rather it merely expresses our commitment to a certain style of formulating physical theories.
\end{quote}
However, despite this lack of physical content\footnote{Some argue that this principle does, in fact, have physical content at least when it is applied to isolated subsystems~\cite{TehFreidel}. E.g., in Galileo's thought experiment when only the ship subsystem is boosted relative to the un-boosted shore.}, the conceptual benefits of writing a theory in a coordinate-free way are immense. A generally covariant formulation of a theory has at least two major benefits: 1) it more clearly exposes the theory's geometric background structure, and 2) it thereby helps clarify our understanding of the theory's symmetries (i.e., its structure/solution preserving transformations). It does both of these by disentangling the theory's substantive content from representational artifacts which arise in particular coordinate representations~\cite{Pooley2015,Norton1993,EarmanJohn1989Weas}. Thus, general covariance is an indispensable tool for a modern understanding of spacetime theories. This paper seeks to extend this tool to discrete spacetime theories (i.e., lattice theories\footnote{As I will discuss later, calling these ``lattice theories'' is a bit of a misnomer. This would be analogous to referring to classical spacetime theories as ``coordinate theories''. As I will discuss, in both cases the coordinate systems/lattice structure are mere representational artifacts and so do not deserve ``first billing'' so to speak. Hence, I prefer to call them ``discrete spacetime theories''. Although I am not quite satisfied with this term either. Indeed, as I will discuss, taking the spacetime itself to be discrete also causes issues: systematically under-predicting symmetries. Presently, I think ``discretely representable spacetime theories'' is the most apt term for them.}). One aim of this extension is to (albeit indirectly) contribute to the debate regarding background structures outlined below with possible applications towards quantum gravity.
It is now widely believed that the key conceptual shift which sets GR apart from its predecessors is not its general covariance as Einstein thought but rather its background independence, i.e., its complete lack of background structure. This consensus, however, is largely only verbal as there is still substantial debate about what exactly should and should not count as background structure~\cite{Norton1993,Pitts2006,Pooley2010,ReadThesis,Pooley2015,Teitel2019,Belot2011}. The central goal in this debate is to find a relatively simple notion of background structure such that all intuitively GR-like spacetime theories lack background structure while all intuitively GR-unlike spacetime theories (e.g., special relativity, Newtonian gravity, etc.) have background structure. That is, the task is to populate this landscape with interesting theories and then compare different ways of carving this space up\footnote{This runs parallel in many ways to recent efforts to extend, populate, and carve the quantum-classical landscape~\cite{Janotta_2014,pub.1136830387,pub.1143048427,pub.1135756008,pub.1105420091,pub.1052891771,pub.1104045230}.}.
In addition to its foundational importance, progress in this debate is also expected to help guide us in our future theorizing. Indeed, it is often claimed that any successor theory to GR (e.g., quantum gravity) should follow GR's lead and be similarly background independent~\cite{rovelli_2001,rovelli_2004,SmolinLee2006TCfB,SmolinLee2008Ttwp,deHaroSebastian2017Daeg,DeHaroSebastian2017TIoD} (whatever this ends up meaning~\cite{ReadThesis}). One may think that this claim is frustratingly aspirational in two ways: If we cannot agree on what precisely ``background structure'' means in the context of classical spacetime theories, what hope do we have of extending this notion to quantum spacetime theories (whatever those are exactly)? Isn't it too soon to try to extend these unclear ideas into new territory?
This sort of thinking goes wrong in two ways: firstly, thinking that conceptual clarity must precede extension, and secondly, thinking that consequently clarity can help with extension but not vice-versa. However, a brief review of the history of science suggests otherwise: the puzzles and uncertainties of old concepts are often solved or clarified by extending them into new domains. While extending the scope of our unsettled debates into new territory might ultimately only add to our confusion, it might also give us exactly what we need: new clarifying examples and intuitions. Extending the landscape gives us both new room to populate and new ways to carve. In this light, our revised task is to extend, populate, and carve the landscape.
An extension of the background structure debate towards various approaches to quantum gravity has been carried out in~\cite{ReadThesis}. From such an analysis one can expect to achieve two things: 1) directly, a better understanding of what kinds of background structures and symmetries different theories of quantum gravity might have, and 2) indirectly, a better understanding of how our different notions of background structure relate to one another.
This paper develops another extension, not towards quantum spacetime theories but towards discrete spacetime theories (i.e., lattice theories). These extensions are not unrelated: quantization literally means discretization. Many approaches to quantum gravity assume some kind of discrete spacetime: causal sets, cellular automata, loop quantum gravity, spin foams, etc. Indeed, we have good reason to believe that a full non-perturbative theory of quantum gravity will have something like a finite density of degrees of freedom\footnote{Firstly, basic thought experiments in quantum gravity suggest the existence of something like a minimum possible length scale at approximately the Planck length. Measurements which resolve things at this scale, or attempts to store information at this scale both seem to lead to the creation of black holes. Relatedly, the Bekenstein bound suggests that only a finite amount of information can be stored in any given volume (with this bound scaling with the region's area) before a black hole forms.} to borrow a phrase from Achim Kempf~\cite{UnsharpKempf,Kempf2003,Kempf2004a,Kempf2004b,Kempf2006}. From such an analysis one can expect to achieve two things: 1) directly, a better understanding of what kinds of background structures and symmetries our discrete spacetime theories might have, and 2) indirectly, a better understanding of how our different notions of background structure relate to one another.
\subsection*{Central Claims}
\begin{figure}[t!]
\includegraphics[width=0.4\textwidth]{PaperFigures/FigLat.pdf}
\caption{Two of the lattice structures considered throughout this paper. The arrows indicate the indexing conventions for the lattice sites.}\label{FigLat}
\end{figure}
Why do we need a discrete analog of general covariance? Intuitively, the world might be fundamentally set on a lattice. Indeed, as discussed above, quantum gravity seems to point towards this possibility.
Let's consider the following empirical situation and follow our first intuitions in interpreting it. Suppose that by empirically investigating the world on the smallest scales we discover a microscopic symmetry restriction. Namely, we find that only quarter turns or one-sixth turns (i.e., not continuous rotations) preserve the dynamics.
Intuitively, these are ``lattice artifacts'' which reflect the symmetries of the underlying lattice structure. For instance, Fig.~\ref{FigLat} shows a square lattice and a hexagonal lattice. Intuitively, a theory set on a square lattice can only have the symmetries of that lattice (i.e., discrete shifts, quarter rotations, and certain mirror reflections). Similarly for a hexagonal lattice but with one-sixth rotations. Even with an unstructured lattice we would still be restricted to discrete symmetries (i.e., permutations of lattice sites).
Thus, assuming that the world has an underlying lattice structure would explain our restricted rotation symmetries we found at a microscopic scale. Moreover, under this assumption we could discover which kind of lattice structure the world has (e.g., square vs hexagonal) by investigating the theory's dynamical symmetries and finding the matching lattice structure.
Buying into this underlying-lattice-assumption, one might investigate further and try to discover what kind of interactions there are on this lattice: nearest neighbor, next-to-nearest neighbor, infinite-range, etc. Intuitively, one could discover this sort of thing through microscopic investigations.
Suppose that after substantial empirical investigation we find such ``lattice artifacts'' and moreover we have great predictive success when modeling the world as being set on (for instance) a square lattice with next-to-nearest neighbor interactions. Does this really prove that the world is fundamentally set on such a lattice? No, all this would prove is: The dynamics of the world can be \textit{faithfully represented} on such a lattice with such interactions, at least empirically. Anything can be faithfully represented in any number of ways. Some extra-empirical work must be done to know which of these representations we should take seriously. That is, we must ask: which parts of the theory are substantive and which parts are representational? We need a discrete analog of general covariance.
Proceeding without one for the moment, there are some intuitive reasons for taking such lattice structures seriously. One might have the following three interconnected first intuitions regarding the substantive role that the lattice and lattice structure play in discrete spacetime theories:
\begin{itemize}
\item[1.] They restrict our possible symmetries. Taking the lattice structure to be a part of the theory's fixed background structure, our possible symmetries are limited to those which preserve this fixed structure. As discussed above, intuitively a theory set on a square lattice can only have the symmetries of that lattice. Similarly for a hexagonal lattice, or even an unstructured lattice.
\item[2.] Differing lattice structures distinguish our theories. Two theories with different lattice structures (e.g., square vs hexagonal) cannot be identical. They have different fixed background structures and therefore (as suggested above) have different symmetries.
\item[3.] The lattice is fundamentally ``baked-into'' the theory. Firstly, it is what the fundamental fields are defined over: they map lattice sites (and possibly times) into some value space. Secondly, the lattice is what the lattice structure structures. Thirdly, it is what limits us to discrete permutation symmetries in advance of further limitations from the lattice structure.
\end{itemize}
However, as this paper demonstrates, each of the above intuitions is doubly wrong and overhasty.
What goes wrong with the above intuitions is that we attempted to directly transplant our notions of background structure and symmetry from continuous to discrete spacetime theories. This is an incautious way to proceed and is apt to lead us astray. Recall that, as discussed above, our notions of background structure and symmetry are best understood in light of general covariance. It is only once we understand what is substantial and what is representational in our theories, that we have any hope of understanding them. Therefore, we ought to instead first transplant a notion of general covariance into our discrete spacetime theories and then see what conclusions we are led to regarding the role that the lattice and lattice structure play in our discrete spacetime theories. This paper does just that.
As my discrete analog of general covariance will reveal: lattice structure is rather less like a fixed background structure and rather more like a coordinate system, i.e., merely a representational artifact. Indeed, this paper develops a rich analogy between the lattice structures which appear in our discrete spacetime theories and the coordinate systems which appear in our continuum spacetime theories. Three lessons learned throughout this paper\footnote{These lessons are also visible in some corners of the physics literature, particularly in the work of Achim Kempf~\cite{UnsharpKempf,Kempf2000b,Kempf2003,Kempf2004a,Kempf2004b,Kempf2006,Martin2008,Kempf_2010,Kempf2013,Pye2015,Kempf2018} among others~\cite{PyeThesis,Pye2022,BEH_2020}. For an overview see~\cite{Kempf2018}.} point us in this direction. Each of these lessons negates one of the above discussed intuitions.
Firstly, as I will show, taking any lattice structure seriously as a fixed background structure systematically under-predicts the symmetries that discrete theories can and do have. Indeed, as I will show, neither the lattice itself nor the lattice structure in any way restricts a theory's possible symmetries. In fact, there is no conceptual barrier to having a theory with continuous translation and rotation symmetries formulated on a discrete lattice. As I will discuss, this is analogous to the familiar fact that there is no conceptual barrier to having a continuum theory with rotational symmetry formulated on a Cartesian coordinate system.
Secondly, as I will show, discrete theories which are presented to us with very different lattice structures (i.e., square vs. hexagonal) may nonetheless turn out to be completely equivalent theories. Moreover, given any discrete theory with some lattice structure we can always\footnote{There is some subtlety here which will be discussed in Sec.~\ref{SecHeat3}.} re-describe it using a different lattice structure. As I will discuss, this is analogous to the familiar fact that our continuum theories can be described in different coordinates, and moreover we can switch between these coordinate systems freely.
Thirdly, as I will show, in addition to being able to switch between lattice structures, we can also reformulate any discrete theory in such a way that it has no lattice structure whatsoever. Indeed, we can always do away with the lattice altogether. As I will discuss, this is analogous to the familiar fact that any continuum theory can be written in a generally covariant (i.e., coordinate-free) way.
These three lessons combine to give us a rich analogy between lattice structures and coordinate systems. As I will discuss, there are actually two ways of fleshing out this analogy. Thus, in actuality, we have two discrete analogs to general covariance. These two approaches differ in how they treat lattice structure once it has been revealed to be coordinate-like and so merely representational. In light of this difference I shall call them internal and external, see Sec.~\ref{SecDisGenCov}. I find reason to favor the external approach, but this will be discussed later. In either case, as one would hope, these discrete analogs of general covariance help us disentangle a discrete theory's substantive content from its merely representational artifacts.
Having exposed lattice structure as a merely representational artifact, it becomes clear that the lattice structure supposedly underlying any discrete ``lattice'' theory has the same level of physical import as coordinates do, i.e., none at all. Thus, the world cannot be ``fundamentally set on a square lattice'' (or any other lattice) any more than it could be ``fundamentally set in a certain coordinate system''. Lattice structures are just not the sort of thing that can be fundamental; they are thoroughly representational. Spacetime cannot be discrete (even when it might be representable as such).
\subsection*{Outline of the Paper}
In Sec.~\ref{SecGenCov}, I will follow~\cite{Pooley2015} in overviewing the differences between the concepts of general covariance, diffeomorphism invariance, and background independence. To demonstrate these ideas and to lay the groundwork needed to extend them to discrete spacetime theories, I will work through several example theories. Namely, I will consider the Klein Gordon equation and the heat equation. This section will also make an (ultimately wrong) attempt at transferring some of these ideas to discrete spacetime theories generally.
To make this more concrete, in Sec.~\ref{SecSevenHeat} I will introduce seven discrete heat equations in an interpretation-neutral way and solve their dynamics. Then, in Sec.~\ref{SecHeat1}, I will make a first attempt at interpreting these theories. I will (ultimately wrongly) identify their underlying manifold, locality properties, and symmetries. Among other issues, a central problem with this first attempt is that it takes the lattice to be a fundamental part of the underlying manifold and thereby unequivocally cannot support continuous translation and rotation symmetries. This systematically under-predicts the symmetries that these theories can and do have.
In Sec.~\ref{SecHeat2} I will provide a second attempt at interpretation which fixes this issue (albeit in a less than satisfying way). In particular, in this second attempt I deny that the lattice is a fundamental part of the underlying manifold. Instead I ``internalize'' it. That is, in this second attempt the lattice is associated with the theory's value space and not the underlying manifold. Fruitfully, this second interpretation does allow for continuous translation and rotation symmetries. Indeed, it exposes such hidden symmetries in our seven discrete heat theories. However, the key move here of ``internalization'' has several unsatisfying consequences. For instance, the continuous translation and rotation symmetries we find are here classified as internal (i.e., associated with the value space as opposed to the manifold) whereas intuitively they ought to be external.
We will thus need a third attempt at interpreting these theories. However, before that, in Sec.~\ref{SecSamplingTheory} I will provide an informal overview of the primary mathematical tools used in this paper. Namely, I will review the basics of Nyquist-Shannon sampling theory and bandlimited functions.
With this review complete, in Sec.~\ref{SecHeat3} I will use these tools to provide a third and final attempt at interpreting these theories. A perspective similar to this third interpretation has been put forward in the physics literature by Achim Kempf~\cite{UnsharpKempf,Kempf2000b,Kempf2003,Kempf2004a,Kempf2004b,Kempf2006,Martin2008,Kempf_2010,Kempf2013,Pye2015,Kempf2018} among others~\cite{PyeThesis,Pye2022,BEH_2020}. For an overview see~\cite{Kempf2018}. Like my second attempt, this third interpretation can support continuous translation and rotation symmetries. However, unlike the second attempt it realizes them as external symmetries (i.e., associated with the underlying manifold, not the theory's value space). Roughly, this is accomplished by (in a principled way) inventing a continuous manifold for the fields to live on. The discrete theory is then embedded onto this manifold as a bandlimited function.
In Sec.~\ref{SecDisGenCov}, I will review the lessons learned in these three attempts at interpretation. As I will discuss, the lessons learned combine to give us a rich analogy between lattice structures and coordinate systems. As I will discuss, there are actually two ways of fleshing out this analogy: one internal and one external. This section spells out these analogies in detail, each of which gives us a discrete analog of general covariance. I find reason to prefer the external notion, but either is likely to be fruitful for further investigation/use.
Finally, Sec.~\ref{SecConclusion} and Sec.~\ref{SecOutlook} will summarize the results of this paper and provide an outlook on future work.
\section{Brief Review of General Covariance, Diffeomorphism Invariance, and Background Independence}\label{SecGenCov}
As discussed in the introduction, a crucial step in the history of GR was Einstein's adoption of the principle of general covariance. While, ultimately, this principle is merely stylistic (with no physical content per se), it nonetheless commits us to a good and useful style of theorizing. As discussed above, a generally covariant formulation of a theory disentangles its substantive content from its merely representational artifacts. In particular, reformulating in this way more clearly exposes the theory's geometric background structure, and thereby helps clarify our understanding of the theory's symmetries.
In the physics literature, three closely related concepts are often confused: general covariance, diffeomorphism invariance, and background independence (i.e., a complete lack of background structure). To demonstrate these ideas and to lay the groundwork needed to extend them to discrete spacetime theories, let's go through some examples.
\subsection{Klein Gordon Equation}
Consider a real scalar field $\phi:\mathcal{M}\to\mathbb{R}$ with mass $M$, satisfying the Klein Gordon equation,
\begin{align}\label{KG0}
\partial_{t}^2\phi(t,x,y,z) = (\partial_x^2+\partial_y^2+\partial_z^2-M^2) \, \phi(t,x,y,z).
\end{align}
This formulation is not generally covariant since when it is rewritten in different coordinates it changes form. For instance, in the coordinates $t'=t$, $x'=x$, $y'=y$ and $z'=z+\frac{1}{2}a\,t^2$, we have,
\begin{align}
\nonumber
\partial_{t'}^2\phi(t',x',y',z')
&= (\partial_{x'}^2+\partial_{y'}^2+\partial_{z'}^2-M^2) \, \phi(t',x',y',z')\\
&- a \, \partial_{z'} \, \phi(t',x',y',z').
\end{align}
An extra term shows up when we move into a non-inertial coordinate system. Let's fix this. Introducing a fixed Lorentzian metric field, \mbox{$\eta^\text{ab}=\text{diag}(-1,1,1,1)$} we can rewrite Eq.~\eqref{KG0} as,
\begin{align}\label{KG00}
\qquad(\eta^{\mu\nu}\partial_\mu\partial_\nu-M^2) \, \phi = 0,
\end{align}
where $x^\mu=(t,x,y,z)$ and $\partial_\mu=(\partial_t,\partial_x,\partial_y,\partial_z)$. Unfortunately this is still not generally covariant. If we rewrite Eq.~\eqref{KG00} in arbitrary coordinates, $x'^\mu$, we find,
\begin{align}
\nonumber
\left(\eta^{\sigma\rho}\frac{\partial x'^\mu}{\partial x^\sigma}\frac{\partial x'^\nu}{\partial x^\rho}\partial_\mu\partial_\nu-M^2\right) \, \phi +
\eta^{\sigma\rho}\frac{\partial^2 x'^\mu}{\partial x^\sigma\,\partial x^\rho}\,\partial_\mu\phi
= 0.
\end{align}
This formulation, however, is coordinate-invariant. If we change coordinates again to $x''^\mu$ the equation keeps the same form except with $x'^\mu\to x''^\mu$.
This demonstrates an ambiguity in the usage of the term generally covariant above~\cite{WallaceAfraid}: coordinate-independent can mean coordinate-invariant (but still written in terms of coordinates) or coordinate-free (written without any reference to coordinates at all). The real benefits of general covariance come from having a coordinate-free representation. This is the notion of general covariance relevant throughout this paper. To achieve general covariance we need to reformulate Eq.~\eqref{KG00} in the coordinate-free language of differential geometry.
Before this, however, I need to introduce some terminology. Throughout this paper I will associate with any classical spacetime theory two spaces of models~\cite{ThorneLeeLightman,Pooley2015}: kinematically possible models (KPMs) and dynamically possible models (DPMs). Roughly, these are off-shell and on-shell solutions.
KPMs are all of the mathematical objects which have the right sort of structures to make sense as models of our theory (regardless of whether they satisfy the dynamics). These are represented as an ordered collection of the theory's manifold together with its geometric fields and matter fields. For our Klein Gordon example, the KPMs are given by\footnote{The name SR1 is picked to follow the notation set in~\cite{Pooley2015}.},
\begin{align}
\text{SR1:}\qquad\text{KPMs:}\quad&\langle\mathcal{M},\eta_\text{ab},\phi\rangle
\end{align}
where $\mathcal{M}$ is a differentiable (3+1)-manifold, $\eta_\text{ab}$, is a fixed Lorentzian metric field\footnote{It's important to note the difference between $\eta_{\mu\nu}$ and $\eta_\text{ab}$. Any tensor object with greek indices (e.g., $\eta_{\mu\nu}$) is to be understood as the components of a certain tensor in a particular coordinate system. By contrast, any tensor object with roman indices (e.g., $\eta_\text{ab}$) is coordinate-free, its ``indices'' are merely there to remind us of the rank of this tensor and to help us see how it interacts with the other tensor objects.}, and $\phi:\mathcal{M}\to\mathbb{R}$ is a dynamical real scalar field.
By contrast, a theory's dynamically possible models (DPMs) are the subset of the KPMs which obey the theory's dynamical equations. For SR1 the DPMs are picked out by,
\begin{align}
\text{SR1:}\qquad\text{KPMs:}\quad&\langle\mathcal{M},\eta_\text{ab},\phi\rangle\\
\nonumber
\text{DPMs:}\quad&(\eta^\text{ab}\nabla_\text{a}\nabla_\text{b}-M^2)\,\phi = 0,
\end{align}
where $\nabla_\text{a}$ is the unique covariant derivative operator compatible with the metric (i.e., with $\nabla_\text{c}\,\eta_\text{ab}=0$). This formulation of the Klein Gordon equation is now generally covariant (in the coordinate-free sense).
\subsubsection*{Klein Gordon - Symmetries}
Let us now use this generally covariant formulation to help us understand this theory's symmetries. It is important to distinguish two kinds of symmetry~\cite{EarmanJohn1989Weas} (spacetime symmetries and dynamical symmetries) related to two different kinds of fields~\cite{Pooley2015} (fixed fields and dynamical fields). The latter distinction is that fixed fields are fixed by fiat to be the same in every model. By contrast, dynamical fields can vary from model to model. In SR1, $\eta_\text{ab}$ is fixed whereas $\phi$ is dynamical.
The distinction regarding symmetries is as follows. Spacetime symmetries are those diffeomorphisms, \mbox{$d\in\text{Diff}(\mathcal{M})$}, which preserve the theory's fixed fields (regardless of the dynamical equations). Dynamical symmetries are those diffeomorphisms, \mbox{$d\in\text{Diff}(\mathcal{M})$}, which map solutions to solutions when applied to the dynamical fields of our models (leaving the fixed fields fixed). In either case, let us call these external symmetries.
For SR1 our only fixed field is $\eta_\text{ab}$. Thus SR1's spacetime symmetries are those diffeomorphisms\footnote{In this paper for simplicity I will not differentiate between the pull back and push forward of $d$. Both of these will be referred to as $d^*$ which will be called the drag along map. $d^*$ is whatever modification of $d$ is demanded by the context.} with $d^*\eta_\text{ab}=\eta_\text{ab}$. Only a small subset of $\text{Diff}(\mathcal{M})$ has this property, namely the Poincar\'e group.
For SR1, given a generic DPM, $\langle\mathcal{M},\eta_\text{ab},\phi\rangle$ we can apply a generic diffeomorphism \mbox{$d\in\text{Diff}(\mathcal{M})$} to its dynamical fields to get some KPM, \mbox{$\langle\mathcal{M},\eta_\text{ab},d^*\phi\rangle$}. This diffeomorphism $d$ is a dynamical symmetry when for every input DPM this output KPM is a solution to the dynamics (i.e., is also a DPM). It turns out that all and only $d$ in the Poincar\'e group map solutions to solutions. Thus the dynamical symmetry group of SR1 is the Poincar\'e group.
In the above example, the theory's spacetime symmetries and its dynamical symmetries match. There are good reasons\footnote{If there are more dynamical symmetries than spacetime symmetries, then some of the theory's fixed fields are not being used by any of the dynamics, in which case why are they there? Conversely, if there are more spacetime symmetries than dynamical symmetries then it appears some necessary piece of spacetime structure is missing. E.g., consider a case where the dynamics are not boost invariant and so implicitly pick out a rest frame, but somehow the spacetime comes equipped with no rest frame.} for these to match in general~\cite{EarmanJohn1989Weas}, but they won't always\footnote{They would not match, for instance, if, as a piece of fixed field, we had included a time orientation field, $\chi$, which distinguishes the future light cone from the past light cone at each event. In this case, the dynamical symmetries would still be the Poincar\'e group, but the spacetime symmetries would only be the time-orientation preserving subset of these. In general, the spacetime symmetries can be smaller than the dynamical symmetries if there is some piece of spacetime structure which is unused by the dynamics.}. In any case, it is important to keep them separate conceptually. Unless otherwise specified, throughout this paper unqualified references to symmetries should be understood as meaning dynamical symmetries.
For our later reasoning, it is important to stress two points here. Firstly, it's important to note that neither of these notions of symmetry has anything to do with coordinate representations. Recall from Sec.~\ref{SecIntro} Kretschmann's point that any theory can be represented in terms of any coordinates (or, indeed, in terms of no coordinates at all). An immediate consequence of this is the familiar fact that coordinate systems do not in any way restrict a theory's possible symmetries: there is no conceptual barrier, for instance, to having a continuum theory with rotational symmetry formulated on a Cartesian coordinate system.
Secondly, it is important to note that both types of external symmetry discussed above (dynamical and spacetime) are restricted to be within $\text{Diff}(\mathcal{M})$. But why the $\mathcal{M}$ in $\text{Diff}(\mathcal{M})$? Because this is the place where the theory's fundamental fields map from. But why the $\text{Diff}$ in $\text{Diff}(\mathcal{M})$? Because this is the relevant class of automorphisms for differentiable manifolds. Of course, more can be said about each of these points, however here it suffices to note that if our fields are mapped out of some manifold-like space $Q$ then we would expect to find our symmetries only within $\text{Auto}(Q)$ for some relevant notion of automorphism.
For completeness I ought to mention another type of symmetry that a theory might have: internal symmetries and, relatedly, gauge symmetries. In any given theory, the dynamical fields will map\footnote{More generally, the fields might be defined as sections of a fiber bundle of $\mathcal{V}$ over $\mathcal{M}$, but let's neglect that complication here.} from the manifold $\mathcal{M}$ into some value space $\mathcal{V}$ as $\phi:\mathcal{M}\to \mathcal{V}$. We may find additional internal symmetries of our theory within the automorphisms of this value space, $\text{Auto}(\mathcal{V})$. We might also find that our theory has gauge symmetries by allowing these internal automorphisms to vary smoothly across the manifold.
For SR1, the value space of our dynamical field $\phi$ is the real numbers, $\mathcal{V}=\mathbb{R}$. Taking $\mathbb{R}$ to be a vector space here, the relevant automorphisms are $\text{Aff}(\mathbb{R})$, consisting of linear-affine transformations. Among these, SR1's internal symmetries are only global re-scalings of $\phi$ as $\phi\to A\,\phi$ for some nonzero $A\in\mathbb{R}$. Attempting to localize this internal symmetry yields no gauge symmetry.
\subsubsection*{Klein Gordon - Background Structure}
Let us now use this generally covariant formulation to help us understand this theory's background structure. As mentioned in the introduction, there is ongoing debate~\cite{Norton1993,Pitts2006,Pooley2010,ReadThesis,Pooley2015,Teitel2019,Belot2011} about what exactly should and should not count as background structure. However, it is widely agreed that any fixed field ought to count as background structure. Thus, for SR1 the fixed Lorentzian metric $\eta_\text{ab}$ counts as background structure.
From this one may be tempted to reason (poorly) as follows. A theory's spacetime symmetries are just those transformations which preserve its fixed fields. Any fixed fields count as background structures. As such, minimizing background structure is the same as maximizing spacetime symmetry. Therefore, background independence is the same concept as diffeomorphism invariance.
However, this reasoning and its conclusion are in error. There can be other kinds of background structure than fixed fields. Indeed, following~\cite{Pooley2015}, we can reformulate SR1 as,
\begin{align}
\text{SR2:}\qquad\text{KPMs:}\quad&\langle\mathcal{M},g_\text{ab},\phi\rangle,\\
\nonumber
\text{DPMs:}\quad&(g^\text{ab}\nabla_\text{a}\nabla_\text{b}-M^2)\,\phi = 0\\
\nonumber
&R^\text{a}{}_\text{bcd}=0.
\end{align}
Here the fixed Lorentzian metric field, $\eta_\text{ab}$, has been replaced with a dynamical metric field, $g_\text{ab}$, with signature $(-1,1,1,1)$. The dynamical metric field varies from model to model and obeys a new dynamical equation, $R^\text{a}{}_\text{bcd}=0$, where $R^\text{a}{}_\text{bcd}$ is the Riemann tensor associated with $\nabla_\text{a}$. Note that SR2 has the same KPMs as the Klein Gordon equation in GR would:
\begin{align}
\text{GR:}\qquad\text{KPMs:}\quad&\langle\mathcal{M},g_\text{ab},\phi\rangle,\\
\nonumber
\text{DPMs:}\quad&(g^\text{ab}\nabla_\text{a}\nabla_\text{b}-M^2)\,\phi = 0\\
\nonumber
&G_\text{ab}=8\pi \, T_\text{ab}.
\end{align}
Indeed, SR2 and GR only differ in their dynamical equations. One may favor SR2 over SR1 on these grounds.
What are SR2's symmetries? SR2 has no fixed fields. As such, its spacetime symmetries are the full diffeomorphism group, $\text{Diff}(\mathcal{M})$. This theory's dynamical symmetries are also the full diffeomorphism group, $\text{Diff}(\mathcal{M})$. Thus, SR2 is diffeomorphism invariant.
Ought we to conclude (using the above erroneous logic) that because SR2 is diffeomorphism invariant it is also background independent? Clearly not. Intuitively SR1 and SR2 should have the same background structures. The difference is that in SR2 this background structure is hidden whereas SR1 is in a sense more honest, declaring its background structure up front as a fixed field. For this reason one may prefer SR1 to SR2.
However, as SR2 clearly demonstrates, we cannot in general expect our theories to be so honest about their background structures. There can be other sorts of background structure in our theories than those which are declared upfront as fixed fields. There is a wide literature attempting to find a way of systematically identifying these hidden background structures~\cite{Norton1993,Pitts2006,Pooley2010,ReadThesis,Pooley2015,Teitel2019,Belot2011}.
To demonstrate these ideas further and lay the groundwork for what is to come, let's consider one more example.
\subsection{Heat Equation}\label{HeatEqGenCov}
Let us next consider a real scalar field $\psi:\mathcal{M}\to\mathbb{R}$ satisfying the following one- and two-dimensional heat equations:
\begin{align}\label{HeatEq00}
&\text{\bf Heat Equation 00 (H00):}\\
\nonumber
&\partial_t \psi(t,x) = \alpha_0\, \partial_x^2\,\psi(t,x)\\
\label{HeatEq0}
&\text{\bf Heat Equation 0 (H0):}\\
\nonumber
&\partial_t \psi(t,x,y) = \frac{\alpha_0}{2}\, (\partial_x^2+\partial_y^2)\,\psi(t,x,y)
\end{align}
with some diffusion rate $\alpha_0\geq0$. Focusing on the two-dimensional case, after substantial work, one can rewrite this equation in the coordinate-free language of differential geometry as follows:
\begin{align}\label{H0GenCov}
\text{H0}:\qquad\text{KPMs:}\quad&\langle \mathcal{M}, t_\text{ab}, h^\text{ab}, \nabla_\text{a},T^\text{a},\psi\rangle\\
\nonumber
\text{DPMs:}\quad&
T^\text{a}\,\nabla_\text{a}\psi
=\frac{\alpha_0}{2} \, h^\text{bc} \nabla_\text{b}\nabla_\text{c}\psi.
\end{align}
The geometric objects used in this formulation are as follows. $\mathcal{M}$ is a (2+1)-dimensional differentiable manifold. $h^\text{ab}$ and $t_\text{ab}$ are space and time metric fields. They are each symmetric with signatures $(0,1,1)$ and $(1,0,0)$ respectively. Moreover, these metrics are orthogonal to each other (i.e., with $h^\text{ab} t_\text{bc}=0$). These metrics allow us to compute lengths and durations. $\nabla_\text{a}$ is a covariant derivative operator which is compatible with these metrics (i.e., $\nabla_\text{a}h^\text{bc}=0$ and $\nabla_\text{a}t_\text{bc}=0$). Note that $\nabla_\text{a}$ is not uniquely determined by the metrics in this case because neither $h^\text{ab}$ nor $t_\text{ab}$ is invertible. We take $\nabla_\text{a}$ to be flat, $R^\text{a}{}_\text{bcd}=0$. The covariant derivative operator $\nabla_\text{a}$ allows us to do parallel transport and compute derivatives of non-scalar fields.
The quadruple, $\langle \mathcal{M}, t_\text{ab}, h^\text{ab}, \nabla_\text{a}\rangle$, picked out by the above-discussed objects is a Galilean spacetime~\cite{ReadThesis}. However, in addition to these structures, our spacetime also has a constant unit time-like vector field $T^\text{a}$ with \mbox{$\nabla_\text{a}T^\text{b}=0$} and \mbox{$t_\text{ab}T^\text{a}T^\text{b}=1$}. This vector field picks out a standardized way of moving forward in time (i.e., translation generated by $T^\text{a}\nabla_\text{a}$). That is, $T^\text{a}$ provides a rest frame.
Each of the above-discussed objects is considered fixed in this formulation: they do not vary from model to model and do not evolve dynamically. The only dynamical field in this formulation is the real scalar field $\psi:\mathcal{M}\to\mathbb{R}$ representing the temperature field. Mathematical structures satisfying the above conditions (independent of whether they satisfy the dynamics) are the theory's KPMs. This theory's DPMs are the subset of the KPMs which additionally satisfy the theory's dynamics. The dynamical equation in Eq.~\eqref{H0GenCov} says that the derivative of the temperature field in the $T^\text{a}$ direction is proportional to its second derivative in space.
What are the heat equation's symmetries? In contrast with SR1, H0 has many more fixed fields: $t_\text{ab}$, $h^\text{ab}$, $\nabla_\text{a}$, and $T^\text{a}$ are all fixed. Ultimately, this restricts the theory's spacetime symmetries down to the two-dimensional Euclidean group (spatial translations, rotations and reflections) plus constant time shifts. Note that time inversions and Galilean boosts are not symmetries because each of these fails to preserve our rest frame $T^\text{a}$. This theory's dynamical symmetries match its spacetime symmetries.
And what about internal symmetries? Taking $\mathcal{V}=\mathbb{R}$ to be a vector space here, the relevant automorphisms are $\text{Aff}(\mathbb{R})$, consisting of linear-affine transformations. Within $\text{Aff}(\mathbb{R})$ every transformation \mbox{$\psi\to A\psi+b$} with $A,b\in\mathbb{R}$ (and $A\neq0$ for invertibility) preserves solution-hood. Localizing this internal symmetry does not lead to any gauge symmetries.
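This is immediate from the structure of Eq.~\eqref{H0GenCov}: the dynamics is linear and both sides annihilate constants,
\begin{align}
T^\text{a}\nabla_\text{a}(A\psi+b)
=A\,T^\text{a}\nabla_\text{a}\psi
=\frac{\alpha_0}{2}\,A\,h^\text{bc}\nabla_\text{b}\nabla_\text{c}\psi
=\frac{\alpha_0}{2}\,h^\text{bc}\nabla_\text{b}\nabla_\text{c}(A\psi+b),
\end{align}
for any solution $\psi$. Note the contrast with SR1: there the mass term spoils the constant shift, since $(\eta^\text{ab}\nabla_\text{a}\nabla_\text{b}-M^2)(\phi+b)=-M^2\,b\neq0$ for $b\neq0$.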
What are the heat equation's background structures? The above discussed fixed fields will surely count as background structures. While conceivably there could be more background structures than just these, given the simplicity of this theory that seems unlikely.
\subsection{First (Naive) Intuitions about Discrete Spacetime Theories}\label{NaiveIntuitions}
Let us now attempt (naively) to apply some of the above discussion to discrete theories. Ultimately, these first intuitions are naive because we have not yet developed a discrete analog of general covariance. Thus, we have little clarity as to what parts of these theories are substantive as opposed to merely representational. Proceeding anyway despite this, we are apt to get things wrong.
This subsection has two purposes. The first is to informally introduce a few discrete spacetime theories (i.e., lattice theories) before giving a more general characterization. Secondly, I would like to give some (faulty) support for the (ultimately wrong) first intuitions given in the introduction preceding the central claims. The following two sections will then repeat these objectives in much more detail applied to several discrete heat equations.
Let's begin with three such discrete heat equations: one set on a uniform 1D lattice, one set on a square lattice and one set on a hexagonal lattice. Each of the following has only nearest-neighbor (N.N.) interactions.
\begin{align}
\label{H1Long}
&\text{\bf 1D N.N. Heat Equation (H1):}\\
\nonumber
&\frac{\d }{\d t}\phi_n(t)
=\alpha\,[\phi_{n-1}(t)-2\phi_n(t)+\phi_{n+1}(t)]\\
\label{H4Long}
&\text{\bf Square N.N. Heat Equation (H4):}\\
\nonumber
&\frac{\d }{\d t}\phi_{n,m}(t)
=\frac{\alpha}{2}\big[\phi_{n-1,m}(t) -2\phi_{n,m}(t)+\phi_{n+1,m}(t)\\
\nonumber
&\qquad\qquad\qquad+\phi_{n,m-1}(t) -2\phi_{n,m}(t)+\phi_{n,m+1}(t)\big]\\
\label{H5Long}
&\text{\bf Hexagonal N.N. Heat Equation (H5):}\\
\nonumber
&\frac{\d }{\d t}\phi_{n,m}(t)
=\frac{\alpha}{3}\big[\, \phi_{n-1,m}(t) -2\phi_{n,m}(t)+\phi_{n+1,m}(t)\\
\nonumber
&\qquad\qquad\qquad+\phi_{n,m-1}(t) -2\phi_{n,m}(t)+\phi_{n,m+1}(t)\\
\nonumber
&\qquad\qquad\qquad+\phi_{n+1,m-1}(t)-2\phi_{n,m}(t)+\phi_{n-1,m+1}(t)\big]
\end{align}
These theories are named H1, H4, and H5 in anticipation of their treatment in Sec.~\ref{SecSevenHeat}. In the latter two cases the lattice sites are indexed $(n,m)$ as shown in Fig.~\ref{FigLat}. In each case, the right-hand side is the best approximation of the second derivative possible using only nearest neighbor interactions.
Note that in each of the above theories time is still treated as continuous. This doesn't have to be the case as we could also consider the following example:
\begin{align}
&\text{\bf Square N.N. Klein Gordon Equation:}\\
\nonumber
&\phi_{j-1,n}\!-\!2\phi_{j,n}\!+\!\phi_{j+1,n}
=\phi_{j,n-1}\!-\!2\phi_{j,n}\!+\!\phi_{j,n+1}
\!-\!\mu^2 \phi_{j,n}
\end{align}
with $j$ indexing time and $n$ indexing space and $\mu$ playing the role of the field's mass.
Let's start (naively) interpreting these theories by taking their above formulations seriously, i.e. Eq.~\eqref{H1Long}, Eq.~\eqref{H4Long} and Eq.~\eqref{H5Long}. Taken literally what are these theories about?
Beginning with H1, H4 and H5, these theories are intuitively about a field $\phi_\ell(t)$ which maps lattice sites ($\ell\in L$, with $L\cong\mathbb{Z}$ for H1 and $L\cong\mathbb{Z}^2$ for H4 and H5) and times ($t\in\mathbb{R}$) into temperatures ($\phi_\ell(t)\in\mathbb{R}$). That is, a field $\phi:Q\to \mathcal{V}$ with manifold $Q=L\times\mathbb{R}$ and value space $\mathcal{V}=\mathbb{R}$.
Similarly, for the discrete Klein Gordon example the fundamental field seems to be $\phi:Q\to \mathcal{V}$ except that it has a manifold $Q=L$ with no times. While this paper will only explicitly deal with theories with $Q=L\times\mathbb{R}$, I expect that the central claims hold true in either case.
In either case, taking $\phi:Q\to \mathcal{V}$ seriously as a fundamental field leads us to thinking of $Q$ as the theory's underlying manifold and $\mathcal{V}=\mathbb{R}$ as the theory's value space. It is important to note that in this interpretation $Q$ is the entire manifold; it is not being thought of as embedded in some larger manifold. (However, a view like this will be considered in our third interpretation in Sec.~\ref{SecHeat3}.)
With this manifold and value space picked out, what can we expect of these theories' symmetries? As discussed earlier in this section, if $Q$ is a theory's underlying manifold then its external symmetries (either spacetime or dynamical) are restricted to $\text{Auto}(Q)$ for some relevant notion of automorphism. Similarly, the theory's internal symmetries are restricted to $\text{Auto}(\mathcal{V})$. We might also have gauge symmetries which mix these two. Returning to $\text{Auto}(Q)$, however: what is the relevant notion of automorphism?
Answering this question will require us to distinguish what structures are ``built into'' $Q$ and what are ``built on top of'' $Q$. The analogous distinction in the continuum case is that we generally take the manifold's differentiable structure to be built into it while the Minkowski metric, for instance, is something additional built on top of the manifold. In this paper, I am officially agnostic on where we draw this line in the discrete case. However, for didactic purposes I will here be as conservative as possible giving $Q$ as little structure as is sensible. Note that the more structure we associate with $Q$ the smaller the class of relevant automorphisms $\text{Auto}(Q)$. Thus, I am taking $\text{Auto}(Q)$ to be as large as it can reasonably be.
For the second kind of discrete theory (i.e., those with both space and time discrete) the minimal structure we can reasonably associate with $Q=L$ is that of a set. As such the largest $\text{Auto}(Q)$ could reasonably be is permutations of the lattice sites, \mbox{$\text{Auto}(Q)=\text{Perm}(L)$}.
For the first kind of discrete theory (i.e., those with continuous time) the minimal structure we can reasonably associate with $Q=L\times\mathbb{R}$ is that of a set times a differentiable manifold. In this case, the largest $\text{Auto}(Q)$ could reasonably be is permutations of the lattice sites together with time reparametrizations, \mbox{$\text{Auto}(Q)=\text{Perm}(L)\times\text{Diff}(\mathbb{R})$}.
Recall that in addition to $\text{Auto}(Q)$ we might also have internal or gauge symmetries. While in general there may be abundant internal or gauge symmetries, for the present cases there are not many. In particular, for all of the above-mentioned theories we only have $\mathcal{V}=\mathbb{R}$. Moreover, these theories are all linear (solutions sum to solutions) such that it makes sense to structure $\mathbb{R}$ as an affine vector space. Thus, $\text{Auto}(\mathcal{V})=\text{Aff}(\mathbb{R})$ such that our internal symmetries are linear-affine rescalings of $\phi$. Moreover, localizing these internal symmetries reveals no gauge symmetries in these examples.
Thus, in total, for H1, H4 and H5 the widest scope of symmetry transformations available to us is:
\begin{align}\label{PermutationLong}
s:\quad \phi_\ell(t)\mapsto c_1 \phi_{P(\ell)}(\tau(t))+c_2
\end{align}
for some $c_1,c_2\in\mathbb{R}$, some smooth $\tau(t)$, and some permutation $P:L\to L$. Similarly, for the discrete Klein Gordon example we have
\begin{align}
s:\quad \phi_\ell\mapsto c_1 \phi_{P(\ell)}+c_2
\end{align}
dropping the time label.
In either case, however, it should be clear that our theory cannot have continuous spatial translations and rotations (at least not while interpreting $\phi:Q\to\mathcal{V}$, and consequently $Q$, as fundamental as we are here). I have thus spelled out some (ultimately faulty) support for the (ultimately wrong) first intuitions put forward in Sec.~\ref{SecIntro}. To summarize, taking the initial presentation of these theories literally, we are led to think of $\phi:Q\to\mathcal{V}$ as fundamental with manifold $Q$ and value space $\mathcal{V}$. Reasoning from here we found that the lattice itself (in addition to further lattice structure) restricts our theories' symmetries to be discrete: it appears we cannot have continuous spatial translation and rotation symmetries.
As I will show in Sec.~\ref{SecHeat2}, this systematically under-predicts the symmetries that discrete spacetime theories can and do have. Fixing this issue will lead us to develop two discrete analogs of general covariance. However, before this let me introduce several discrete heat equations around which the rest of the paper will be framed.
\section{Seven Discrete Heat Equations}\label{SecSevenHeat}
In this section I will introduce seven discrete heat equations (H1-H7) in an interpretation-neutral way and solve their dynamics. In the previous section H1, H4 and H5 were already introduced and analyzed somewhat. Here we start afresh.
In particular, in the previous section casual comparison was made between parts of these theories' dynamical equations and various approximations of the second derivative. While, as I will discuss, such comparisons can be made, to do so immediately is unearned. It comes dangerously close to imagining the lattices shown in Fig.~\ref{FigLat} as being embedded in a continuous manifold. This may be something we want to do later, but it is a non-trivial interpretational move which ought not be done so casually.
Crucially, in this section I will be analyzing these theories as discrete-native theories. As such, it's important to think of the following discrete spacetime theories as self-sufficient theories in their own right. We must not begin by thinking of them as various discretizations or bandlimitations of the continuum theories. While, as I will discuss, these discrete theories have some notable relationships to various continuum theories it is important to resist any temptation to see these continuum theories as ``where they came from''.
Moreover, the previous section casually associated these theories with the lattice structures shown in Fig.~\ref{FigLat}. Namely, H4 was associated with a square lattice and H5 with a hexagonal lattice. Making such associations ab initio is unwarranted. While we may eventually associate these theories with those lattice structures we cannot do so immediately. Such an association would need to be made following careful consideration of the dynamics. Beginning here in an interpretation-neutral way, these theories ought to be seen as being defined over a completely unstructured lattice. That is, at this point the set of labels for the lattice sites, $L$, is just that, an unstructured set.
With these words of caution in mind, let's introduce some dynamics. All seven theories consider a field \mbox{$\phi:Q\to\mathbb{R}$} with $Q=L\times\mathbb{R}$. That is, all seven theories consider a real scalar field $\phi_\ell(t)$.
As a first example consider a field $\phi_\ell(t)$ which under some convenient relabeling of the lattice sites, $\ell\mapsto n$, satisfies Eq.~\eqref{H1Long}, namely,
\begin{align}
\text{H1}:\quad&\frac{\d }{\d t}\phi_n(t)
=\alpha\,[\phi_{n-1}(t)-2\phi_n(t)+\phi_{n+1}(t)].
\end{align}
At the risk of repeating myself, these $n\in\mathbb{Z}$ are just labels. The fact that our labels $n\in\mathbb{Z}$ are in a sense equidistant from each other does not force us to think of the lattice sites as being equidistant from each other or on a uniform grid. Nor are we forced to take ``the distance between lattice sites'' to be meaningful at all. Dynamical considerations may later push us in this direction, but the mere convenience of this labeling should not.
For practical applications it is convenient to rewrite the dynamics by collecting these field values $\phi_n(t)$ into an infinite dimensional vector as
\begin{align}\label{PhiDef}
\bm{\phi}(t)\coloneqq (\dots,\phi_{-1}(t),\phi_0(t),\phi_1(t),\dots)^\intercal\in \mathbb{R}^L.
\end{align}
In these terms the dynamics of H1 is given by,
\begin{align}
\label{DH1}
&\text{\bf Heat Equation 1 (H1):}\\
\nonumber
&\frac{\d }{\d t}\bm{\phi}(t)=\alpha \, \Delta_{(1)}^2 \bm{\phi}(t)
\end{align}
where $\Delta_{(1)}^2$ is the following bi-infinite Toeplitz matrix:
\begin{align}\label{Delta12}
\Delta_{(1)}^2=\frac{1}{2}\{\Delta_{(1)}^+,\Delta_{(1)}^-\}
&=\text{Toeplitz}(1,\,-2,\,1)\\
\label{Delta1p}
\Delta_{(1)}^+&=\text{Toeplitz}(0,-1,\,1)\\
\label{Delta1m}
\Delta_{(1)}^-&=\text{Toeplitz}(-1,\,1,\,0)
\end{align}
Recall that Toeplitz matrices are so-called diagonal-constant matrices, with $[A]_{i,j}=[A]_{i+1,j+1}$. Thus, the values in the above expression give the matrix's values on either side of the main diagonal.
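For readers who want to experiment with these operators numerically, the following minimal sketch (in Python with NumPy, used here purely for illustration) builds finite truncations of them and checks the identity $\Delta_{(1)}^2=\frac{1}{2}\{\Delta_{(1)}^+,\Delta_{(1)}^-\}$ away from the rows affected by the artificial truncation boundary:
\begin{verbatim}
import numpy as np

N = 50  # truncation size; the operators in the text are bi-infinite
# Toeplitz(0,-1,1): forward difference, (Dp phi)_n = phi_{n+1} - phi_n
Dp = -np.eye(N) + np.eye(N, k=1)
# Toeplitz(-1,1,0): backward difference, (Dm phi)_n = phi_n - phi_{n-1}
Dm = np.eye(N) - np.eye(N, k=-1)
# Toeplitz(1,-2,1): nearest-neighbor second difference
D2 = np.eye(N, k=-1) - 2 * np.eye(N) + np.eye(N, k=1)

anti = 0.5 * (Dp @ Dm + Dm @ Dp)  # (1/2){Delta+, Delta-}
# exact agreement on all rows untouched by the truncation
assert np.allclose(anti[1:-1, :], D2[1:-1, :])
\end{verbatim}
Truncation is, of course, an interpretive intrusion of its own (the text's operators act on all of $\mathbb{R}^L$), which is why the check is restricted to interior rows.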
In addition to H1, I will also consider two more theories with the following dynamics:
\begin{align}
\label{DH2}
&\text{\bf Heat Equation 2 (H2):}\\
\nonumber
&\frac{\d }{\d t}\bm{\phi}(t)=\alpha \, \Delta_{(2)}^2 \bm{\phi}(t)\\
\label{DH3}
&\text{\bf Heat Equation 3 (H3):}\\
\nonumber
&\frac{\d }{\d t}\bm{\phi}(t)=\alpha \, D^2 \bm{\phi}(t),
\end{align}
where
\begin{align}\label{BigToeplitz}
\Delta_{(2)}^2&=\text{Toeplitz}(\frac{-1}{12},\,\frac{4}{3},\,\frac{-5}{2},\,\frac{4}{3},\,\frac{-1}{12})\\
\nonumber
D&=\text{Toeplitz}(\dots,\!\frac{-1}{5},\!\frac{1}{4},\!\frac{-1}{3},\!\frac{1}{2},\!-1,\!0,\!1,\!\frac{-1}{2},\!\frac{1}{3},\!\frac{-1}{4},\!\frac{1}{5},\!\dots)\\
\nonumber
D^2&=\text{Toeplitz}(\dots,\!\frac{-2}{16},\!\frac{2}{9},\!\frac{-2}{4},\!\frac{2}{1},\!\frac{-2\pi^2}{6},\!\frac{2}{1},\!\frac{-2}{4},\!\frac{2}{9},\!\frac{-2}{16},\!\dots).
\end{align}
Although above I warned about thinking in terms of derivative approximations prematurely, a few comments are here warranted. Suppose we were to somehow imagine these $\phi_n(t)$ values as coming from some continuous function $\phi(t,x)$ by either sampling it or taking local averages on/around some uniform lattice $x_n=n\,a$. Vectorizing these values as in Eq.~\eqref{PhiDef} and applying any of the above defined Toeplitz matrices would give us approximations to the derivative of $\phi(t,x)$. Namely, $\Delta_{(1)}^+/a$ would be associated with the forward derivative approximation, $\Delta_{(1)}^-/a$ would be associated with the backwards derivative approximation, and $\Delta_{(1)}^2/a^2$ would be associated with the nearest neighbor second derivative approximation,
\begin{align}
\Delta_{(1)}^2/a^2:\quad\partial_x^2 f(x)
&\approx\frac{f(x-a)-2 f(x)+f(x+a)}{a^2}.
\end{align}
Similarly $\Delta_{(2)}^2/a^2$ is related to the next-to-nearest-neighbor approximation to the second derivative.
Notice that the longer range we make our derivative approximations the more accurate they can be. The operator $D$ (and its square $D^2$) is, in some sense, the best discrete approximation to the derivative (and second derivative) possible. The defining property of $D$ is that it is diagonal in the (discrete) Fourier basis with spectrum,
\begin{align}\label{LambdaD}
\lambda_D(k)=\mathrm{i}\,k
\end{align}
for $k\in[-\pi,\pi]$. This is in tight connection with the continuum derivative operator $\partial_x$ which is diagonal in the (continuum) Fourier basis with spectrum \mbox{$\lambda_{\partial_x}(k)=\mathrm{i} k$} for $k\in(-\infty,\infty)$.
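This defining property is easy to probe numerically. In the following minimal sketch (Python with NumPy/SciPy, assuming a large finite truncation is an acceptable stand-in for the bi-infinite operator), $D$ is built from the Toeplitz values listed in Eq.~\eqref{BigToeplitz} and applied to samples of $\sin(k\,n)$; for $|k|<\pi$ it returns (approximately) samples of the exact derivative $k\cos(k\,n)$:
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

N = 2001                     # large odd truncation of the bi-infinite D
j = np.arange(1, N)
first_row = np.zeros(N)
first_row[1:] = (-1.0) ** (j + 1) / j   # 1, -1/2, 1/3, -1/4, ...
first_col = np.zeros(N)
first_col[1:] = -first_row[1:]          # D is antisymmetric
D = toeplitz(first_col, first_row)

k = 1.0                                 # any wavenumber with |k| < pi
n = np.arange(N) - N // 2
err = D @ np.sin(k * n) - k * np.cos(k * n)
assert abs(err[N // 2]) < 1e-2          # central error decays with N
\end{verbatim}
No such exactness is available for $|k|>\pi$: those wavenumbers alias back into $[-\pi,\pi]$, a point which will matter below.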
Alternatively, one can construct $D^2$ in the following way: generalize $\Delta_{(1)}^2/a^2$ and $\Delta_{(2)}^2/a^2$ to $\Delta_{(n)}^2/a^2$, namely the best second derivative approximation which considers $n^\text{th}$ neighbors to either side. Taking the limit $n\to\infty$ gives $D^2=\lim_{n\to\infty}\Delta_{(n)}^2$. Other aspects of $D$ will be discussed in Sec.~\ref{SecSamplingTheory}, including the related derivative approximation Eq.~\eqref{ExactDerivative}, but enough has been said for now.
While these connections to derivative approximations allow us to export some intuitions from the continuum theories into these discrete theories, we must resist this (at least for now). In particular, I should stress again that we should not be thinking of any of H1, H2 and H3 as coming from the continuum theory under some approximation of the derivative. Rather, let us pretend these theories ``came from nowhere'' and let us see what sense we can make of them.
In addition to H1-H3, I will consider four more discrete heat equations (two of these have already been introduced above).
First, consider a field $\phi_\ell(t)$ which under some convenient relabeling of the lattice sites, $\ell\mapsto (n,m)$, satisfies Eq.~\eqref{H4Long}, namely,
\begin{align}
\nonumber
\text{H4}: \ &\frac{\d }{\d t}\phi_{n,m}(t)
=\frac{\alpha}{2}\big[\phi_{n-1,m}(t) -2\phi_{n,m}(t)+\phi_{n+1,m}(t)\\
\nonumber
&\qquad\qquad\qquad+\phi_{n,m-1}(t) -2\phi_{n,m}(t)+\phi_{n,m+1}(t)\big]
\end{align}
As before, it will be convenient to organize this theory's field values into a vector. We can handle the two indices by introducing a tensor product structure into the vector space, $\bm{\phi}(t)\in\mathbb{R}^L=\mathbb{R}^\mathbb{Z}\otimes\mathbb{R}^\mathbb{Z}$, with the first tensor factor corresponding to the first index $n$ and the second tensor factor corresponding to the second index $m$. In these terms, the dynamics of H4 is given by,
\begin{align}\label{DH4}
&\text{\bf Heat Equation 4 (H4):}\\
\nonumber
&\frac{\d}{\d t}\bm{\phi}(t) =\frac{\alpha}{2} \, (\Delta^2_{(1),\text{n}}+ \Delta^2_{(1),\text{m}}) \, \bm{\phi}(t),
\end{align}
where the notation $A_\text{n}\coloneqq A\otimes\openone$ and $A_\text{m}\coloneqq\openone\otimes A$ means that $A$ acts only on the first or second tensor factor respectively. Thus, $\Delta^2_{(1),\text{n}}$ is just $\Delta^2_{(1)}$ applied to the first index, $n$. Likewise $\Delta^2_{(1),\text{m}}$ is just $\Delta^2_{(1)}$ applied to the second index, $m$.
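On a finite truncation this tensor-product bookkeeping is just a Kronecker product, as the following minimal sketch illustrates (Python/NumPy, with the field's two indices flattened in row-major order so that the first Kronecker factor acts on the index $n$):
\begin{verbatim}
import numpy as np

N = 20
I = np.eye(N)
D2 = np.eye(N, k=-1) - 2 * np.eye(N) + np.eye(N, k=1)  # Toeplitz(1,-2,1)

# (alpha/2)(Delta2_n + Delta2_m) with alpha = 1:
L_H4 = 0.5 * (np.kron(D2, I) + np.kron(I, D2))

phi = np.random.default_rng(1).standard_normal((N, N))
out = (L_H4 @ phi.ravel()).reshape(N, N)

n = m = N // 2  # an interior site, away from the truncation boundary
stencil = 0.5 * (phi[n-1, m] - 2*phi[n, m] + phi[n+1, m]
                 + phi[n, m-1] - 2*phi[n, m] + phi[n, m+1])
assert np.isclose(out[n, m], stencil)   # reproduces Eq. (H4Long)
\end{verbatim}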
A similar treatment of Eq.~\eqref{H5Long} gives us,
\begin{align}\label{DH5}
&\text{\bf Heat Equation 5 (H5):}\\
\nonumber
&\frac{\d}{\d t}\bm{\phi}(t) =\frac{\alpha}{3} \, \Big[\Delta^2_{(1),\text{n}}+ \Delta^2_{(1),\text{m}}\\
\nonumber
&\qquad\qquad\quad \ +\frac{1}{2}\big\{\Delta^+_\text{m}-\Delta^+_\text{n},\Delta^-_\text{m}-\Delta^-_\text{n}\big\}\Big] \, \bm{\phi}(t),
\end{align}
where the curly brackets $\{A,B\}= A\,B + B\,A$ indicate the anticommutator. While the third term looks complicated, it is just the analog of $\Delta^2_{(1),\text{n}}$ and $\Delta^2_{(1),\text{m}}$ but in the $n-m$ direction.
At the risk of repeating myself endlessly, in both H4 and H5 these \mbox{$n,m\in\mathbb{Z}$} are just labels of lattice sites. The fact that our labels $n,m\in\mathbb{Z}$ can be thought of as a square lattice does not force us to think of the lattice sites as being arranged in a square lattice. If we come to think anything like this, it should be by investigating the dynamics of these theories. Indeed, for H5 we will find this is not the case.
Finally, in addition to H4 and H5 I consider the following two theories:
\begin{align}
\label{DH6}
&\text{\bf Heat Equation 6 (H6):}\\
\nonumber
&\frac{\d}{\d t}\bm{\phi}(t) =\frac{\alpha}{2} \, (D^2_\text{n}+ D^2_\text{m}) \, \bm{\phi}(t)\\
\label{DH7}
&\text{\bf Heat Equation 7 (H7):}\\
\nonumber
&\frac{\d}{\d t}\bm{\phi}(t) =\frac{\alpha}{3}\left(D^2_\text{n}+D^2_\text{m}+(D_\text{m}-D_\text{n})^2\right) \, \bm{\phi}(t).
\end{align}
Having introduced these seven theories, let us next solve their dynamics.
\subsection{Solving Their Dynamics}
To gain some intuition about H1-H7, let us next solve their dynamics. Conveniently, each of H1-H7 admits planewave solutions. Moreover, in each case these planewave solutions form a complete basis of solutions.
Considering first H1-H3 we have solutions,
\begin{align}\label{PlaneWave123}
\phi_n(t;k)=e^{-\mathrm{i} k \, n} \,e^{-\Gamma(k)\,t},
\end{align}
with $k\in[-\pi,\pi]$. Note that extending $k$ outside of this range does not yield new planewave solutions since $\exp(2\pi\mathrm{i})=1$. This is related to the phenomenon of aliasing in digital image processing. For H1-H3 the wavenumber-dependent decay rate, $\Gamma(k)$, for each theory is given by:
\begin{align}
\text{H1}:& \quad \!\! \Gamma(k)= \alpha \, (2-2\text{cos}(k))\\
\nonumber
\text{H2}:& \quad \!\! \Gamma(k)= \frac{\alpha}{6}\,(\text{cos}(2\,k)-16\,\text{cos}(k)+15)\\
\nonumber
\text{H3}:& \quad \!\! \Gamma(k)= \alpha \, k^2.
\end{align}
Note that $\Gamma(k)$ for H3 follows from Eq.~\eqref{LambdaD}, essentially from the definition of $D$.
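To see where these expressions come from, substitute the planewave ansatz Eq.~\eqref{PlaneWave123} into the dynamics. For H1, using $\phi_{n\pm1}(t;k)=e^{\mp\mathrm{i}k}\,\phi_n(t;k)$, we have
\begin{align}
\frac{\d}{\d t}\phi_n(t;k)
=\alpha\,(e^{\mathrm{i}k}-2+e^{-\mathrm{i}k})\,\phi_n(t;k)
=-\alpha\,(2-2\text{cos}(k))\,\phi_n(t;k),
\end{align}
so that $\Gamma(k)=\alpha\,(2-2\text{cos}(k))$ as claimed. The rate for H2 follows in the same way from the Toeplitz values in Eq.~\eqref{BigToeplitz}.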
\begin{figure}[t!]
\includegraphics[width=0.4\textwidth]{PaperFigures/FigHeatDecay.pdf}
\caption{The decay rates for the planewave solutions to the discrete heat equations are plotted as a function of wavenumber for H1, H2 and H3 (bottom to top).}\label{FigHeatDecay}
\end{figure}
Fig.~\ref{FigHeatDecay} shows these decay rates as a function of wavenumber. Thus, what sets these theories apart is the rate at which high frequency planewaves decay. H1-H3 more-or-less agree at low frequencies, with each saying that if $k \ll \pi$ then $\Gamma(k)\approx \alpha \, k^2$. If we consider only solutions with all or most of their wavenumber support with $k \ll \pi$, we have an approximate one-to-one correspondence between the solutions to these theories. This is roughly why each of these theories has the same continuum limit, namely H00 defined in Sec.~\ref{SecGenCov}.
Note that the decay rate $\Gamma(k)$ for H3 exactly matches the decay rate of the continuum theory, at least for $k\in[-\pi,\pi]$. Note also that H2 is a nearer match to the continuum theory for $k\ll\pi$ than H1 is, as the following expansions show:
\begin{align}
\text{H1}:& \quad \!\! \Gamma(k)= \alpha \, \left(k^2-\frac{k^4}{12}+\mathcal{O}(k^6)\right)\\
\nonumber
\text{H2}:& \quad \!\! \Gamma(k)= \alpha \left(k^2-\frac{k^6}{90}+\mathcal{O}(k^8)\right)\\
\nonumber
\text{H3}:& \quad \!\! \Gamma(k)= \alpha \, k^2.
\end{align}
This is because H2's longer-range coupling gives a better approximation of the derivative. In terms of converging to the continuum limit, one can expect H3 to outpace H2, which outpaces H1.
While interesting in their own right, these relationships with the continuum theory do not directly help us understand H1-H3 as discrete-native theories.
Before discussing the dynamics of H4-H7, another word of warning. As mentioned above, since $\exp(2\pi\mathrm{i})=1$, the discrete planewave solutions with wavenumbers $k$ and \mbox{$k+2\pi$} are identical. Thus, it is best not to think of the x-axis of Fig.~\ref{FigHeatDecay} abruptly ending at $k=\pm\pi$ but rather as wrapping around on itself.
The planewave solutions to H4-H7 are,
\begin{align}
\phi_{n,m}(t;k_1,k_2)=e^{-\mathrm{i} k_1 n-\mathrm{i} k_2 m} \,e^{-\Gamma(k_1,k_2)\,t}
\end{align}
with $k_1,k_2\in[-\pi,\pi]$. As before, extending $k_1$ and $k_2$ outside of this range does not yield new planewave solutions since $\exp(2\pi\mathrm{i})=1$. The wavenumber-dependent decay rate $\Gamma(k_1,k_2)$ for each theory is given by:
\begin{align}
\text{H4}:& \ \Gamma(k_1,k_2)= \alpha \left(2\!-\!\text{cos}(k_1)\!-\!\text{cos}(k_2)\right)\\
\nonumber
\text{H5}:& \ \Gamma(k_1,k_2)= \frac{2\alpha}{3} \left(3\!-\!\text{cos}(k_1)\!-\!\text{cos}(k_2)\!-\!\text{cos}(k_2-k_1)\right)\\
\nonumber
\text{H6}:& \ \Gamma(k_1,k_2)= \frac{\alpha}{2} \, (k_1^2+k_2^2)\\
\nonumber
\text{H7}:& \ \Gamma(k_1,k_2)=
\frac{\alpha}{3}\left(k_1^2+k_2^2+(k_2-k_1)^2\right).
\end{align}
Note that $\Gamma(k_1,k_2)$ for H6 and H7 follow from Eq.~\eqref{LambdaD}, essentially from the definition of $D$.
Unlike H1-H3, these theories do not all agree with each other in the small $k_1$ and $k_2$ regime. H4 and H6 agree that for $k_1,k_2 \ll \pi$ we have \mbox{$\Gamma(k_1,k_2)\approx \frac{\alpha}{2} (k_1^2+k_2^2)$}. Moreover, H5 and H7 agree with each other in this regime, but not with H4 and H6. Do we have two different results in the continuum limit here?
Closer examination reveals that we do not. The key to realizing this is to note that under the transformation,
\begin{align}\label{SkewH6H7}
k_1&\mapsto k_1\\
\nonumber
k_2&\mapsto \frac{1}{2}\, k_1+\frac{\sqrt{3}}{2}\,k_2,
\end{align}
we have $\Gamma(k_1,k_2)$ for H7 mapping onto $\Gamma(k_1,k_2)$ for H6. Applying this transformation to H5 does not map H5 onto H4, but it does bring their $k_1,k_2 \ll \pi$ behavior into agreement.
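Explicitly, writing $k_2'\coloneqq\frac{1}{2}k_1+\frac{\sqrt{3}}{2}k_2$, a short computation gives
\begin{align}
k_1^2+k_2'^2+(k_2'-k_1)^2
=k_1^2+\frac{(k_1+\sqrt{3}\,k_2)^2+(\sqrt{3}\,k_2-k_1)^2}{4}
=\frac{3}{2}\,(k_1^2+k_2^2),
\end{align}
so that $\frac{\alpha}{3}$ times the left-hand side (H7's decay rate at the transformed wavenumbers) equals $\frac{\alpha}{2}(k_1^2+k_2^2)$ (H6's decay rate at the original wavenumbers).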
Thus, if we consider only solutions with all or most of their wavenumber support with $k_1,k_2 \ll \pi$ (or the appropriately transformed regime for H5 and H7) we have an approximate one-to-one correspondence between the solutions to these theories. Within this regime we can define their common continuum limit. Repeating our analysis of H1-H3 here, we expect H6 and H7 to converge in the continuum limit faster than H4 and H5.
This paper will make three attempts at interpreting these discrete heat equations. A first important point of comparison between these interpretations is what sense they make of these different convergence rates in the continuum limit.
Note also that for H6 and H7 the above discussed one-to-one correspondence is not approximate, nor is it restricted to the small $k_1$ and $k_2$ regime. We have an exact unrestricted one-to-one correspondence between the solutions of H6 and H7. Given any solution to H7 we can decompose it into planewaves, map these onto new planewaves using an invertible transformation (i.e., Eq.~\eqref{SkewH6H7}) and then add them up to get a corresponding solution to H6. Similarly vice versa from H6 to H7.
A second important point of comparison between these three interpretations put forward in this paper is what sense they make of this one-to-one correspondence between H6 and H7. Such a correspondence does not automatically mean that these theories are identical or even equivalent.
Next consider the fact that H6 is manifestly rotation invariant in the $k_1$, $k_2$ plane whereas H4, H5 and H7 are not. However, given the one-to-one correspondence between the solutions of H6 and H7, there is some (skewed) sense in which H7 is rotation invariant as well. A third important point of comparison between the coming interpretations is what sense they make of H6 and H7 being rotation invariant (at least in Fourier space).
Having introduced these theories and solved their dynamics in an interpretation-neutral way, we can now make a first (ultimately misled) attempt at interpreting them.
\section{A First Attempt at Interpreting some Discrete Spacetime Theories}\label{SecHeat1}
Now that we have introduced these seven discrete theories and solved their dynamics, let's get on to interpreting them. Let us begin by following our first intuitions and analyze these seven theories concerning their underlying manifold, locality properties, and symmetries. Ultimately however, as I will discuss later, much of the following is misled and will need to be revisited and revised later. Luckily, retracing where we went wrong here will be instructive later.
Let's start by taking the initial formulation of the above theories seriously, i.e. Eq.~\eqref{H1Long}, Eq.~\eqref{H4Long} and Eq.~\eqref{H5Long}. Taken literally what are these theories about? These theories are intuitively about a field $\phi_\ell(t)$ which maps lattice sites ($\ell\in L$, with $L\cong\mathbb{Z}$ or $L\cong\mathbb{Z}^2$ as appropriate) and times ($t\in\mathbb{R}$) into temperatures ($\phi_\ell(t)\in\mathbb{R}$). That is, a field $\phi:Q\to \mathcal{V}$ with manifold $Q=L\times\mathbb{R}$ and value space $\mathcal{V}=\mathbb{R}$. Taking $\phi:Q\to \mathcal{V}$ seriously as a fundamental field leads us to thinking of $Q=L\times\mathbb{R}$ as the theory's underlying manifold. It is important to note that in this interpretation $Q$ is the entire manifold; it is not being thought of as embedded in some larger manifold. (However, a view like this will be considered in our third interpretation in Sec.~\ref{SecHeat3}.)
Let's see what consequences these interpretive moves have for locality and symmetry.
\subsection{Intuitive Locality}\label{SecIntuitiveLocality}
Firstly, let's develop a sense of comparative locality for H1, H2, and H3, taking $Q$ to be the underlying manifold. In a highly intuitive sense, theory H1 is the most local in that it couples together the fewest lattice sites: the instantaneous rate of change of $\phi_n(t)$ only depends on itself, $\phi_{n-1}(t)$, and $\phi_{n+1}(t)$. It is this sense of locality which justifies us calling these sites its ``nearest neighbors''. After this, H2 is the next most local in the same sense: it couples next-to-nearest neighbors. Finally, in this sense H3 is the least local; it has an infinite range coupling: the instantaneous rate of change of $\phi_n(t)$ depends on the current value at every lattice site. Thus, at least on this intuitive notion of locality, $\text{H1} > \text{H2} > \text{H3}$, with higher-rated theories being more local. Similarly, assessing H4-H7 on this intuitive notion of locality gives the ratings,
$\text{H4},\text{H5} > \text{H6},\text{H7}$.
There is some tension however with these intuitive locality ratings and the rate we expect each theory to converge at in the continuum limit. For H1-H3 our intuitive locality ratings are $\text{H1} > \text{H2} > \text{H3}$ but we expect convergence speeds in the continuum limit to be $\text{H3} > \text{H2} > \text{H1}$. Similar tension exists for H4-H7. How is it that our most non-local theory is somehow the nearest to our perfectly local continuum theory?
In one sense there is no mystery here: when we make our derivative approximations longer range (more non-local) they can clearly get more accurate. But the question remains: how exactly does an increasingly non-local operation give us an increasingly good approximation of a perfectly local operation (differentiation)?
This tension will be dissolved and resolved in our second and third interpretations respectively. In particular, as I will discuss, these later interpretations negate or reverse all of the above intuitive locality judgements.
\subsection{Intuitive Symmetries}
What discrete spacetime manifold can we intuitively read off of H1-H7? As discussed in Sec.~\ref{NaiveIntuitions}, intuitively the manifold underlying each of these theories is $Q\coloneqq L\times\mathbb{R}$. As discussed there, taking $Q$ to be the theory's underlying manifold limits our theories' possible symmetries. Accounting for both external and internal symmetries (gauge symmetries are not relevant here) we found that the widest scope of symmetry transformations possible was that given in Eq.~\eqref{PermutationLong}. Namely, we found the possibilities are permutations of the lattice sites, time reparametrizations, and linear-affine rescalings. It is convenient to translate this in terms of the vector, $\bm{\phi}(t)$, as
\begin{align}\label{Permutation}
s:\quad\bm{\phi}(t)\mapsto c_1\,P\,\bm{\phi}(\tau(t))+c_2\,\bm{1},
\end{align}
for some permutation matrix $P$, some monotone smooth function $\tau(t)$, and some $c_1,c_2\in\mathbb{R}$, where $\bm{1}=(\dots,1,1,1,\dots)^\intercal$ is the constant vector. (Note that the permutation $P$ cannot depend on time or else this transformation will be discontinuous.)
It should be stressed that according to this interpretation this is the largest space of symmetries that H1-H7 could possibly have. Indeed, I have been charitable in considering the lattice sites as structured only as a set, (perhaps artificially) increasing the size of $\text{Auto}(Q)$. Given this, it would be highly surprising if we found H1-H7 to have symmetries outside of this set. (Indeed, such a surprise is coming in the next section.)
\subsubsection*{Symmetries of H1-H7: First Attempt}
What then are the symmetries of H1-H7 according to this interpretation? The technical details of this evaluation are in Appendix \ref{AppA}, but ultimately for H1-H3 the symmetries are:
\begin{enumerate}
\item[1)] discrete shifts which map lattice site $n\mapsto n+d_1$ for some integer $d_1\in\mathbb{Z}$
\item[2)] negation symmetry which maps lattice site \mbox{$n\mapsto -n$},
\item[3)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$
\item[4)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$
\end{enumerate}
These are the symmetries of a uniform one-dimensional lattice $x_n=n\in\mathbb{R}$ (plus time shifts and linear-affine rescaling). The above negation symmetry corresponds to mirror reflection. Previously I had warned against prematurely interpreting the lattice sites underlying H1-H3 as being organized into a uniform grid. Now, however, having investigated this theory's dynamical symmetries we have some motivation to do so.
What about H4 and H6? The technical details of this evaluation are in Appendix \ref{AppA}, but ultimately for H4 and H6 the symmetries are:
\begin{enumerate}
\item[1)] discrete shifts which map lattice site \mbox{$(n,m)\mapsto (n+d_2,m+d_3)$} for some integers $d_2,d_3\in\mathbb{Z}$
\item[2)] two negation symmetries which map lattice site $(n,m)\mapsto (-n,m)$ and $(n,m)\mapsto (n,-m)$ respectively.
\item[3)] a 4-fold symmetry which maps lattice site \mbox{$(n,m)\mapsto (m,-n)$}
\item[4)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$
\item[5)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$
\end{enumerate}
These are the symmetries of a square lattice \mbox{$z_{n,m}=(n,m)\in\mathbb{R}^2$} (plus time shifts and linear-affine rescaling). The above 4-fold symmetry corresponds to quarter rotation. Previously I had warned against prematurely interpreting the lattice sites underlying H4-H7 as being organized into a square lattice. Now, however, having investigated these theories' dynamical symmetries we have some motivation to do so at least for H4 and H6.
What about H5 and H7? The technical details of this evaluation are in Appendix \ref{AppA}, but ultimately for H5 and H7 the symmetries are:
\begin{enumerate}
\item[1)] discrete shifts which map lattice site \mbox{$(n,m)\mapsto (n+d_2,m+d_3)$} for some integers $d_2,d_3\in\mathbb{Z}$
\item[2)] an exchange symmetry which maps lattice site $(n,m)\mapsto (m,n)$.
\item[3)] a 6-fold symmetry which maps lattice site \mbox{$(n,m)\mapsto (-m,n+m)$}. (Roughly, this permutes the three terms in Eq.~\eqref{DH5} for H5 and Eq.~\eqref{DH7} for H7.)
\item[4)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$.
\item[5)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$
\end{enumerate}
These are the symmetries of a hexagonal lattice \mbox{$z_{n,m}=(n+m/2,\sqrt{3}m/2)\in\mathbb{R}^2$} (plus time shifts and linear-affine rescaling). The above 6-fold symmetry corresponds to one-sixth rotation. Previously I had warned against prematurely interpreting the lattice sites underlying H4-H7 as being organized into a square lattice. Indeed, as we can now see for H5 and H7 this would have been faulty.
Thus, by investigating these theories' dynamical symmetries we were able to find what sort of lattice structure the bare unstructured lattice $L$ has for each theory (e.g. a uniform grid, square lattice and hexagonal lattice). Recall the discussion of matching dynamical symmetries with spacetime symmetries in Sec.~\ref{SecGenCov}. For continuum spacetime theories, we can only discover their fixed spacetime structures by investigating the dynamics. (Of course, we have no hope of discovering them directly through dynamical means; they are dynamically fixed.) The same is true here: we started with a bare lattice, $L$, investigated the dynamics, and now we have good candidates for what lattice structures each of these theories has in addition to $L$.
\vspace{0.5cm}
Finally, in this interpretation what sense can be made of the one-to-one correspondence between the solutions of H6 and H7 discussed after Eq.~\eqref{SkewH6H7}? While this correspondence between solutions certainly exists, little sense can be made of it here. As the above discussion has revealed, this interpretation associates very different symmetries to H6 and H7 and correspondingly very different lattice structures. While there is nothing wrong per se with this assessment, our later interpretations will make better sense of this correspondence.
\vspace{0.5cm}
To summarize, this interpretation has the benefit of being highly intuitive. Taking the fields given to us, $\phi:Q\to\mathbb{R}$, seriously we identified the underlying manifold as $Q$. From this we got some intuitive notions of locality. Moreover, by finding these theories' dynamical symmetries we were able to grant some more structure to their lattice sites (e.g. a uniform grid, square lattice and hexagonal lattice).
However, there are three issues with this interpretation which will become clear in light of our later interpretations. Firstly, our locality assessments are in tension with the rates at which these theories converge to the (perfectly local) continuum theory in the continuum limit. Secondly, despite the one-to-one correspondence between the solutions to H6 and H7, this interpretation regards them as significantly different theories: with different lattice structures and (here consequently) different symmetries. The final issue (which will become clear in the next section) is that this interpretation drastically under-predicts the kinds of symmetries which H1-H7 can and do have. In fact, each of H1-H7 has a hidden continuous translation symmetry. Moreover, H6 and H7 have a hidden continuous rotation symmetry.
As I will discuss, the root of all of these issues is taking the theory's lattice structure to be a piece of fixed background structure and moreover taking the lattice itself to be a fundamental part of the underlying manifold. Our second attempt at interpreting these theories will fix these issues.
\section{A Second Attempt at Interpreting some Discrete Spacetime Theories}\label{SecHeat2}
In the previous section, I claimed that H1-H7 have hidden continuous translation and rotation symmetries. But how can this be? How can discrete spacetime theories have such continuous symmetries? As discussed above, if we take our underlying manifold to be $Q$ then our symmetries are limited to $\text{Auto}(Q)$ which clearly cannot support continuous translation and rotation symmetries.
In order to avoid this conclusion we must deny the premise: $Q$ must not be the underlying manifold. What led us to believe $Q$ was the underlying manifold? We arrived at this conclusion by taking the real scalar field $\phi:Q\to\mathcal{V}$ to be fundamental. $Q$ is the underlying manifold because it is where our fundamental field maps from. In order to avoid this conclusion we must deny the premise: the field $\phi:Q\to\mathcal{V}$ must not be fundamental.
But if $\phi:Q\to\mathcal{V}$ is not fundamental then what is? Fortunately, our above discussion has already provided us with another field which we might take as fundamental. Namely, $\bm{\phi}(t)$ defined in Eq.~\eqref{PhiDef}.
On this second interpretation I will be taking the formulations of H1-H7 in terms of $\bm{\phi}(t)$ seriously: namely Eq.~\eqref{DH1}, Eq.~\eqref{DH2}, Eq.~\eqref{DH3}, and Eqs.~\eqref{DH4}-\eqref{DH7}. Taken literally what are these theories about (in this formulation)? These theories are intuitively about a field
$\bm{\phi}:\mathbb{R}\to \mathbb{R}^L$ which maps times ($t\in\mathbb{R}$) into infinite-dimensional vectors ($\bm{\phi}\in\mathbb{R}^L$). That is, a field $\bm{\phi}:\mathcal{M}\to \mathcal{V}$ with manifold $\mathcal{M}=\mathbb{R}$ and value space $\mathcal{V}=\mathbb{R}^L$. Taking $\bm{\phi}:\mathcal{M}\to \mathcal{V}$ seriously as a fundamental field leads us to thinking of $\mathcal{M}=\mathbb{R}$ as the theory's underlying manifold.
Notice that in this interpretation the lattice sites, $L$, are no longer a part of our manifold. They have been ``internalized'' into the value space. Indeed, in this interpretation H1-H7 are classical continuum spacetime theories (albeit with an abnormally large value space).
Let's see what consequences these interpretive moves have for locality and symmetry. To preview: this second interpretation either dissolves or resolves all of our issues with the first interpretation. The tension between locality and convergence in the continuum limit is dissolved; H6 and H7 are seen to be equivalent in a stronger sense; and, perhaps most importantly, this interpretation reveals H1-H7's hidden continuous translation and rotation symmetries. However, as I will discuss, this interpretation has some issues of its own which will ultimately require us to make a third pass over these theories.
\subsection{Internalized Locality}
Before discussing the effect this move has on the theories' possible symmetries, let's think briefly about what it does to our sense of locality. I claimed above that this interpretation dissolves the tension between convergence in the continuum limit and the sense of locality developed in Sec.~\ref{SecIntuitiveLocality}. It does this by dissolving the possibility of any notion of locality stemming from the lattice sites.
In this interpretation the lattice sites have been internalized: they are no longer part of the manifold and therefore we no longer have a right to extract intuitions about locality from them. In this interpretation, the manifold consists only of times, $t\in\mathbb{R}$. Consequently, our only notion of locality is locality in time. The dynamics of each of H1-H7 are local in time and are therefore local in every possible sense. There is no longer any tension concerning how the differences in locality line up with the differences in continuum convergence rate; there simply are no differences in locality anymore.
If this seems unsatisfying to you, I agree. One might feel that internalization's ban on extracting notions of locality from the lattice sites is far too extreme. Intuitively, more strongly coupled lattice sites ought to be in some sense closer together. Moreover, one might rightly hope for an interpretation which not only dissolves the tension between a theory's locality and its convergence in the continuum limit, but resolves it by bringing them into harmony. Indeed, if we have no notion of locality between lattice sites it is difficult to see how we get a notion of locality in the continuum limit.
These are all valid complaints which will be addressed in Sec.~\ref{SecHeat3} as I make a third attempt at interpreting H1-H7.
\subsection{Internalized Symmetries}
But how does this internalization move affect a theory's capacity for symmetry? At first glance, this may appear to have made things worse. If our manifold is now just times $t\in\mathbb{R}$ then our only possible dynamical symmetries are time-reparametrizations (i.e., not continuous translations and rotations). However, while there are certainly fewer possible dynamical/spacetime symmetries, we are now open to a wider range of internal symmetries. It is among these internal symmetries that we will find H1-H7's hidden continuous translation and rotation transformations. As I will argue, these symmetries can reasonably be given these names despite being internal symmetries. (In the following section I will present a third attempt at interpreting these theories which ``externalizes'' these symmetries, making them genuinely spatial translations and rotations.)
With our focus now on $\bm{\phi}:\mathcal{M}\to \mathcal{V}$, let us consider its possibilities for symmetries. As discussed above, associated with the manifold we have only time reparametrizations, $\text{Diff}(\mathcal{M})=\text{Diff}(\mathbb{R})$. However, associated with the value space (i.e., an infinite dimensional vector space) we now have the full range of invertible linear-affine transformations over $\mathbb{R}^L$, namely $\text{Aff}(\mathbb{R}^L)$. (In principle these could be tied together into gauge transformations, but this is not relevant here.)
Thus, taken together, the possible symmetries for our theories are $r\in\text{Diff}(\mathbb{R})\times\text{Aff}(\mathbb{R}^L)$ which act on $\bm{\phi}(t)$ as,
\begin{align}\label{GaugeVR}
r:\quad\bm{\phi}(t)\mapsto \Lambda\,\bm{\phi}(\tau(t))+\bm{c}
\end{align}
for some monotone smooth function $\tau(t)$, any invertible linear transformation $\Lambda$, and some vector $\bm{c}\in \mathbb{R}^L$. Contrast this with the symmetries available to us on our first interpretation, namely Eq.~\eqref{Permutation}. The present interpretation has a significantly wider class of symmetries than before.
\subsubsection*{Symmetries of H1-H7: Second Attempt}
Which of the above transformations are symmetries for H1-H7? The technical details of this evaluation are in Appendix \ref{AppA}, but the results are the following. For H1-H3 the symmetries in this interpretation are:
\begin{enumerate}
\item[1)] Action by $T^\epsilon$ sending $\bm{\phi}(t)\mapsto T^\epsilon \bm{\phi}(t)$ where \mbox{$T^\epsilon$} is defined below.
\item[2)] negation symmetry which maps lattice site \mbox{$n\mapsto -n$},
\item[3)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$
\item[4)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$
\end{enumerate}
These are exactly the same symmetries that we found on the previous interpretation with one difference: discrete shifts have been replaced with action by
\begin{align}\label{TDef}
T^\epsilon\coloneqq\text{exp}(\epsilon D),
\end{align}
with $\epsilon\in\mathbb{R}$. As I will now discuss, $T^\epsilon$ can be thought of as a continuous translation operator despite it being here classified as an internal symmetry.
First note that $T^\epsilon$ is a generalization of the discrete shift operation in the sense that taking $\epsilon=d_1\in\mathbb{Z}$ reduces action by $T^\epsilon$ to the map $n\mapsto n+d_1$. Moreover, note that $T^\epsilon$ is additive in the sense that $T^{\epsilon_1}\,T^{\epsilon_2}=T^{\epsilon_1+\epsilon_2}$. In particular, this means $T^{1/2}\,T^{1/2}=T^1$: there is something we can do twice to shift forward by one lattice site. The same is true for all fractions adding to one. Finally, recall from the discussion following Eq.~\eqref{LambdaD} that $D$ is closely related to the continuum derivative operator, exactly matching its spectrum for $k\in[-\pi,\pi]$. Recall also that the derivative is the generator of translations, i.e., $h(x+\epsilon)=\text{exp}(\epsilon\, \partial_x) h(x)$. In this sense also $T^\epsilon=\text{exp}(\epsilon D)$ is a translation operator. More will be said about $T^\epsilon$ in Sec.~\ref{SecSamplingTheory}.
Thus we have our first big lesson: discrete spacetime theories can have continuous translation symmetries. The fact that our discrete theories at first appeared on some lattice with some lattice structure does nothing to forbid this.
Next let's consider H4-H7. Previously the symmetries of H4 and H6 matched and the symmetries of H5 and H7 also matched. Here however, these pairings are broken up and a new matching pair is formed between H6 and H7. More will be said about this momentarily.
Let's consider H4 first. For H4 the symmetries in this interpretation are:
\begin{enumerate}
\item[1)] Action by $T^\epsilon_\text{n}$ sending $\bm{\phi}(t)\mapsto T^\epsilon_\text{n} \bm{\phi}(t)$ where by convention $T^\epsilon_\text{n}=T^\epsilon\otimes\openone$. Similarly for $T^\epsilon_\text{m}=\openone\otimes T^\epsilon$.
\item[2)] two negation symmetries which map lattice site $(n,m)\mapsto (-n,m)$ and $(n,m)\mapsto (n,-m)$ respectively,
\item[3)] a 4-fold symmetry which maps lattice site \mbox{$(n,m)\mapsto (m,-n)$},
\item[4)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$,
\item[5)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$.
\end{enumerate}
These are exactly the symmetries which we found on our first interpretation but with action by $T^\epsilon_\text{n}$ and $T^\epsilon_\text{m}$ replacing the discrete shifts. The same discussion following Eq.~\eqref{TDef} applies here, justifying us calling $T^\epsilon_\text{n}$ and $T^\epsilon_\text{m}$ continuous translation operations.
For H5 the symmetries in this interpretation are:
\begin{enumerate}
\item[1)] Action by $T^\epsilon_\text{n}$ sending $\bm{\phi}(t)\mapsto T^\epsilon_\text{n} \bm{\phi}(t)$ where by convention $T^\epsilon_\text{n}=T^\epsilon\otimes\openone$. Similarly for $T^\epsilon_\text{m}=\openone\otimes T^\epsilon$.
\item[2)] an exchange symmetry which maps lattice site $(n,m)\mapsto (m,n)$.
\item[3)] a 6-fold symmetry which maps lattice site \mbox{$(n,m)\mapsto (-m,n+m)$}. (Roughly, this permutes the three terms in Eq.~\eqref{DH5} for H5 and Eq.~\eqref{DH7} for H7.)
\item[4)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$.
\item[5)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$.
\end{enumerate}
These are exactly the symmetries which we found on our first interpretation but with action by $T^\epsilon_\text{n}$ and $T^\epsilon_\text{m}$ replacing the discrete shifts. The same discussion following Eq.~\eqref{TDef} applies here, justifying us calling $T^\epsilon_\text{n}$ and $T^\epsilon_\text{m}$ continuous translation operations.
Before moving on to analyze the symmetries of H6 and H7, let's first see what this interpretation has to say about the one-to-one correspondence between them. As noted following Eq.~\eqref{SkewH6H7}, H6 and H7 have a one-to-one correspondence between their solutions. Given a solution to H7 I can decompose it into planewaves, map these onto new planewaves using the invertible transformation Eq.~\eqref{SkewH6H7} and then add them up to get a corresponding solution to H6. Similarly vice versa, from H6 to H7. In our first interpretation, despite this, H6 and H7 were judged to be different theories because they had different lattice structures and (consequently) different symmetries.
Things are substantially different in this interpretation. The transformation between solutions of H6 and H7 described above is a linear transformation on $\bm{\phi}$. Thus, in this interpretation the only difference between H6 and H7 is a change of basis for the vector space $\mathbb{R}^L$. Because this is a transformation of the form Eq.~\eqref{GaugeVR} (but notably not Eq.~\eqref{Permutation}), for any symmetry transformation for H6 there is a corresponding symmetry transformation for H7.
This is in strong contrast to the results of our previous analysis in Sec.~\ref{SecHeat1}. There H6 was seen to have the symmetries of a square lattice and H7 was seen to have the symmetries of a hexagonal lattice. By contrast, in the present interpretation H6 and H7 are thoroughly equivalent.
Thus we have our second big lesson: discrete theories which are presented to us with very different lattice structures (i.e., a square lattice versus a hexagonal lattice), may nonetheless turn out to be completely equivalent theories. This is so not only in terms of a one-to-one mapping between their solutions as we saw before, but in terms of their symmetries as well. In this interpretation, the process for switching between lattice structures is simply a change of basis in the value space.
In the rest of this subsection I will only discuss the symmetries of H6; analogous conclusions are true for H7 after applying Eq.~\eqref{SkewH6H7}. The symmetries for H6 in this interpretation are:
\begin{enumerate}
\item[1)] Action by $T^\epsilon_\text{n}$ sending $\bm{\phi}(t)\mapsto T^\epsilon_\text{n} \bm{\phi}(t)$ where by convention $T^\epsilon_\text{n}=T^\epsilon\otimes\openone$. Similarly for $T^\epsilon_\text{m}=\openone\otimes T^\epsilon$.
\item[2)] an exchange symmetry which maps lattice site $(n,m)\mapsto (m,n)$.
\item[3)] Action by $R^\theta$ sending $\bm{\phi}(t)\mapsto R^\theta \bm{\phi}(t)$ with $R^\theta$ defined below.
\item[4)] constant time shifts which map $t\mapsto t+\tau$ for some real $\tau\in\mathbb{R}$,
\item[5)] and linear-affine rescaling which maps \mbox{$\phi_\ell(t)\mapsto c_1\phi_\ell(t)+c_2$} for some $c_1,c_2\in\mathbb{R}$.
\end{enumerate}
As with H4, we have here action by $T^\epsilon_\text{n}$ and $T^\epsilon_\text{m}$ replacing the discrete shifts from before. However, additionally we have the quarter rotations replaced with action by $R^\theta$ where
\begin{align}\label{RthetaDef}
R^\theta = \exp(\theta (N D_\text{m} - M D_\text{n}))
\end{align}
with $\theta\in\mathbb{R}$ and where $N$ and $M$ are position operators which return the first and second lattice indices respectively.
As I will now discuss, $R^\theta$ can be thought of as a continuous rotation operator despite it being here an internal symmetry. First note that $R^\theta$ is a generalization of the quarter rotation operation in the sense that taking $\theta=\pi/2$ reduces action by $R^\theta$ to the map $(n,m)\mapsto (m,-n)$. Moreover, note that $R^\theta$ is cyclically additive in the sense that $R^{\theta_1}\,R^{\theta_2}=R^{\theta_1+\theta_2}$ with $R^{2\pi}=\openone$. In particular, this means $R^{\pi/4}\,R^{\pi/4}=R^{\pi/2}$: there is something we can do twice to make a quarter rotation. Similarly for all fractional rotations. Finally, recall from the discussion following Eq.~\eqref{LambdaD} that $D$ is closely related to the continuum derivative operator, exactly matching its spectrum for $k\in[-\pi,\pi]$. Recall also that rotations are generated through the derivative as \mbox{$h(R(x,y))= \exp(\theta (x \partial_y - y \partial_x))h(x,y)$}. In this sense also $R^\theta$ is a rotation operator. More will be said about $R^\theta$ in Sec.~\ref{SecSamplingTheory}.
This adds to our first big lesson: discrete spacetime theories can have not only continuous translation but also continuous rotation symmetries. The fact that our discrete theories at first appeared with a lattice structure does nothing to forbid this.
\vspace{0.5cm}
To summarize: this second attempt at interpreting H1-H7 has fixed all of the issues with our previous interpretation. Firstly, there is no longer any tension between these theories' differing locality properties and the rates at which they converge to the (perfectly local) continuum theory in the continuum limit. (There are no longer any differences in locality.) Secondly, the fact that we have a one-to-one correspondence between the solutions to H6 and H7 is properly reflected in their matching symmetries. Finally, this interpretation has exposed the fact that H1-H7 have hidden continuous translation and rotation symmetries.
These are all substantial improvements, but ultimately this interpretation has its own issues. The way that the tension is dissolved between locality and convergence in the continuum limit is unsatisfying. We ought to be able to extract intuitions about locality from the lattice sites. Moreover, while this interpretation has indeed exposed H1-H7's hidden continuous translation and rotation symmetries, the way it classifies them seems wrong. They are here classified as internal symmetries (i.e., symmetries on the value space) whereas intuitively they should be external symmetries (i.e., symmetries on the manifold).
The root of both of these issues is taking the theory's lattice structure to be internalized into the theory's value space. Our third attempt at interpreting these theories will fix this. Before making this third attempt, let me complete the current interpretation by applying our continuum notion of general covariance to H1-H7.
\subsection{Internalized General Covariance}
Rewriting H1-H7 in the coordinate-free language of differential geometry we have:
\begin{align}\label{IntGenCov}
\text{H1-H7:}\quad\text{KPMs:}\quad&\langle \mathcal{M},t_\text{ab},T^\text{a},\Delta_{H1-H7}^2,\bm{\phi}\rangle\\
\nonumber
\text{DPMs:}\quad&
T^\text{a}\,\nabla_\text{a}\bm{\phi}
=\alpha \, \Delta_{H1-H7}^2 \, \bm{\phi}
\end{align}
where $\mathcal{M}$ is a $0+1$ dimensional manifold, $t_\text{ab}$ is a fixed metric field with signature $(1)$, $\nabla_\text{a}$ is the unique covariant derivative operator compatible with $t_\text{ab}$, and $T^\text{a}$ is a fixed constant time-like unit vector field, $\nabla_\text{a}T^\text{b}=0$ and $t_\text{ab}T^\text{a}T^\text{b}=1$. Here $\bm{\phi}:\mathcal{M}\to \mathcal{V}$ is a dynamical infinite-dimensional vector field and $\Delta_{H1-H7}^2$ is whichever operator appears in the relevant dynamical equation: Eq.~\eqref{DH1}, Eq.~\eqref{DH2}, Eq.~\eqref{DH3} and Eqs.~\eqref{DH4}-\eqref{DH7}. Notice that on this second interpretation, the lattice has disappeared from the manifold, having been fully internalized.
Thus we have our third big lesson: given a discrete spacetime theory with some lattice structure we can always reformulate it in such a way that it has no lattice structure whatsoever. In fact, there is no longer even a lattice here. In this interpretation, this is done by internalizing the lattice structure.
Before making a third attempt at interpreting H1-H7, I need to introduce some mathematical tools regarding bandlimited functions and Nyquist-Shannon sampling theory.
\section{Brief Review of Bandlimited Functions and Nyquist-Shannon Sampling Theory}\label{SecSamplingTheory}
This section will provide a thorough but informal mathematical overview of the primary tools used in the third attempt to interpret H1-H7, namely bandlimited functions and Nyquist-Shannon sampling theory. My intention is not to prove these theorems of sampling theory in any technical sense but rather to make them intuitive. For a selection of introductory texts on sampling theory see~\cite{GARCIA200263,SamplingTutorial,UnserM2000SyaS}.
To introduce the topic I will restrict our attention to the one-dimensional case with a uniform sample lattice before generalizing to higher dimensions and non-uniform samplings later on.
\subsection{One Dimension Uniform Sample Lattices}\label{Sec1DUniform}
A bandlimited function is one whose Fourier transform has compact support. Consider a generic bandlimited function, $f_\text{B}(x)$, with a bandwidth of $K$. That is, a function $f_\text{B}(x)$ such that its Fourier transform,
\begin{align}
\mathcal{F}[f_\text{B}(x)](k)\coloneqq\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f_\text{B}(x) \, e^{-\mathrm{i} k x} \d x,
\end{align}
has support only for wavenumbers $\vert k\vert< K$.
Suppose that we know the value of $f_\text{B}(x)$ only at the regularly spaced sample points, $x_n=n\,a+b$, with some spacing, \mbox{$0< a\leq a^*\coloneqq\pi/K$}, and offset, $b\in\mathbb{R}$. Let \mbox{$f_n=f_\text{B}(x_n)$} be these sample values. Having only the discrete sample data, $\{(x_n,f_n)\}_{n\in\mathbb{Z}}$, how well can we approximate the function?
The Nyquist-Shannon sampling theorem~\cite{ShannonOriginal} tells us that from this data we can reconstruct $f_\text{B}$ exactly everywhere! That is, from this discrete data, $\{(x_n,f_n)\}_{n\in\mathbb{Z}}$, we can determine everything about the function $f_\text{B}$ everywhere. In particular, the following reconstruction is exact,
\begin{align}\label{SincRecon}
f_\text{B}(x)
&= \sum_{n=-\infty}^\infty S\!\left(\frac{x-x_n}{a}\right) \ f_n\\
\nonumber
&= \sum_{n=-\infty}^\infty S_n\!\left(\frac{x-b}{a}\right) \ f_n,
\end{align}
where
\begin{align}
S(y)=\frac{\sin(\pi y)}{\pi y}, \quad\text{and}\quad
S_n(y)=S(y-n),
\end{align}
are the normalized and shifted sinc functions. Note that $S_n(m)=\delta_{nm}$ for integers $n$ and $m$. Moreover, note that each $S_n(x)$ is both $L_1$ and $L_2$ normalized and that taken together the set $\{S_n(x)\}_{n\in\mathbb{Z}}$ forms an orthonormal basis with respect to the $L_2$ inner product. The fact that any bandlimited function can be reconstructed in this way is equivalent to the fact that this orthonormal basis spans the space of bandlimited functions with bandwidth of $K=\pi$.
\begin{figure}
\includegraphics[width=0.4\textwidth]{PaperFigures/Fig1DSamplesa.pdf}
\includegraphics[width=0.4\textwidth]{PaperFigures/Fig1DSamplesb.pdf}
\includegraphics[width=0.4\textwidth]{PaperFigures/Fig1DSamplesc.pdf}
\includegraphics[width=0.4\textwidth]{PaperFigures/Fig1DSamplesd.pdf}
\includegraphics[width=0.4\textwidth]{PaperFigures/Fig1DSamplese.pdf}
\caption{Several different (but completely equivalent) graphical representations of the bandlimited function \mbox{$f_\text{B}(x)=1+S(x-1/2)+x\,S(x/2)^2$} with bandwidth of $K=\pi$ and consequently a critical spacing of $a^*=\pi/K=1$. Subfigure a) shows the function values for all $x$. b) shows the values of $f_\text{B}$ at $x_n=n/2$. Since $1/2<a^*=1$ this is an instance of oversampling. c) shows the values of $f_\text{B}$ at $x_n=n$. This is an instance of critical sampling. d) shows the values of $f_\text{B}$ at $x_n=n+1/3$. This too is an instance of critical sampling. e) shows a non-uniform sampling of $f_\text{B}$. From any of these samplings we can recover the function $f_\text{B}$ exactly everywhere.}\label{Fig1DSamples}
\end{figure}
As a concrete example, let us consider the function $f_\text{B}(x)=1+S(x-1/2)+x\,S(x/2)^2$, shown in Fig.~\ref{Fig1DSamples}a. This function has a bandwidth of $K=\pi$ and so has a critical sample spacing of $a^*=\pi/K=1$. Thus, we can fully reconstruct $f_\text{B}(x)$ knowing only its values at $x_n=n\,a+b$ for any spacing $a\leq a^*=1$. In particular, the sample values at $x_n=n/2$ are sufficient to exactly reconstruct the function, see Fig.~\ref{Fig1DSamples}b. So too are the sample values at the integers $x_n=n$ and at $x_n=n+1/3$, see Fig.~\ref{Fig1DSamples}c and Fig.~\ref{Fig1DSamples}d. In each of these cases the reconstruction is given by Eq.~\eqref{SincRecon}.
Everything about this function can be reconstructed from any uniform sample lattice with $a\leq a^*=1$. In particular, the value of $f_\text{B}$ a third of the way between sample points, $f_\text{B}(2/3)$, is fixed by $\{(n,f_\text{B}(n))\}_{n\in\mathbb{Z}}$ even though we have no sample at or even near $x=2/3$. The derivative of $f_\text{B}$ at zero, $f_\text{B}'(0)$, is fixed by $\{(n,f_\text{B}(n))\}_{n\in\mathbb{Z}}$ even though the only sample point we have in this neighborhood is $f_\text{B}(0)$. Moreover, the derivative at $x=2/3$, namely $f_\text{B}'(2/3)$, is fixed by $\{(n,f_\text{B}(n))\}_{n\in\mathbb{Z}}$ even though we have no sample points in the neighborhood.
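For the numerically inclined reader, here is a minimal Python sketch of this reconstruction (the truncation of the infinite sum in Eq.~\eqref{SincRecon} is an illustrative choice). It recovers $f_\text{B}(2/3)$ from the integer samples alone:

\begin{verbatim}
import numpy as np

# The example function above: f_B(x) = 1 + S(x-1/2) + x S(x/2)^2.
# np.sinc(y) is exactly S(y) = sin(pi y)/(pi y). The bandwidth is
# K = pi, so the critical spacing is a* = 1: integer samples suffice.
def f_B(x):
    return 1 + np.sinc(x - 0.5) + x * np.sinc(x / 2)**2

n = np.arange(-100000, 100001)           # truncated sample lattice x_n = n
x = 2 / 3                                # no sample at or near this point
recon = np.sum(np.sinc(x - n) * f_B(n))  # Eq. (SincRecon) with a=1, b=0

print(recon, f_B(x))
# The two printed values agree up to the truncation of the sum,
# whose tail here shrinks roughly like 1/N.
\end{verbatim}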
On first exposure this may be shocking: how can a function's behavior everywhere be fixed by its value at a discrete set of points? When $f_\text{B}$ is represented discretely, where has all of the information gone? Where is the information about the derivative at $x=2/3$ stored in the discrete representation? One may feel that any such discretely-determined function must belong to a very restricted class.
Classes of functions fixed by their values at discrete points are not uncommon in mathematics. For instance, polynomials of finite degree are fixed knowing their values at only finitely many places. More surprisingly, the Identity Theorem of complex analysis tells us that any entire function, $g_\text{E}(x)$, is fixed by the values $\{g_\text{E}(1),\,g_\text{E}(1/2),\,g_\text{E}(1/3),\,\dots\}$. Recall that entire functions are those functions whose Taylor series based at any point converges everywhere. The class of entire functions includes all polynomials, all trig-functions, all hyperbolic trig-functions, and a great many elementary combinations of these.
Before discussing bandlimited functions further, it will be instructive to take a brief detour into the land of entire functions.
\subsubsection*{An Entire Detour}
A generic entire function, $g_\text{E}(x)$, is fully determined by knowing all of its derivatives at any given location. For instance, any $g_\text{E}(x)$ can be equivalently represented by the infinite-dimensional vector, $\bm{g}$, which collects together all of $g_\text{E}$'s derivatives at $x=0$. Namely, the $r^\text{th}$ entry of $\bm{g}$ is \mbox{$g_r=\partial_x^r g_\text{E}(0)$} for integers $r\geq0$.
Every property of $g_\text{E}(x)$ can be represented in terms of this vector. By construction, the derivatives of $g_\text{E}$ at $x=0$ are ``stored'' in this representation trivially as,
\begin{align}
\partial_x^r g_\text{E}(0)
&= g_r =\bm{v}_r^\intercal \bm{g},
\end{align}
where $\bm{v}_r=(0,\dots,0,1,0,\dots)^\intercal$ with the 1 in the $r^\text{th}$ place (counting from zero).
The values of $g_\text{E}$ away from $x=0$ can be recovered via $g_\text{E}$'s Taylor series,
\begin{align}
g_\text{E}(x)
&= \sum_{n=0}^\infty \frac{x^n}{n!} \ g_n.
\end{align}
The derivatives of $g_\text{E}$ away from $x=0$ can be recovered by taking derivatives of the above formula. Ultimately one finds,
\begin{align}
\partial_x^r g_\text{E}(a)
&= \sum_{n=r}^\infty \frac{a^{n-r}}{(n-r)!} \ g_n
=\bm{v}_r^\intercal T^a_\text{E} \, \bm{g},
\end{align}
where the entries of the matrix $T^a_\text{E}$ are \mbox{$[T^a_\text{E}]_{i,j}=a^{j-i}/(j-i)!$} if $j\geq i$ and 0 otherwise. This recovers the Taylor series above when $r=0$. Explicitly,
\begin{align}
T^a_\text{E}
=\begin{pmatrix}
1 & a & a^2/2! & a^3/3! & a^4/4! & \dots\\
0 & 1 & a & a^2/2! & a^3/3! & \dots\\
0 & 0 & 1 & a & a^2/2! & \dots\\
0 & 0 & 0 & 1 & a & \dots\\
0 & 0 & 0 & 0 & 1 & \dots\\
\dots & \dots & \dots & \dots & \dots & \dots\\
\end{pmatrix}.
\end{align}
$T^a_\text{E}$ acts as the translation operator for this representation of entire functions. Indeed, $T^a_\text{E}$ is a representation of the translation group, satisfying $T^a_\text{E} T^b_\text{E}=T^{a+b}_\text{E}$. If $\bm{g}$ represents $g_\text{E}(x)$ then $T^a_\text{E}\bm{g}$ represents $g_\text{E}(x+a)$.
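As a quick sanity check on this machinery, here is a small Python sketch (the truncation order is an illustrative choice) confirming both the translation property and the group law for truncated blocks of $T^a_\text{E}$. For $g_\text{E}(x)=e^x$ the derivative-vector is $\bm{g}=(1,1,1,\dots)^\intercal$, and $(T^a_\text{E}\,\bm{g})_0$ should return $e^a$:

\begin{verbatim}
import numpy as np
from math import factorial

def T_E(a, N):
    # Truncated N x N block of [T^a_E]_{i,j} = a^(j-i)/(j-i)!, j >= i
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(i, N):
            T[i, j] = a**(j - i) / factorial(j - i)
    return T

N = 60                     # truncation order (illustrative)
g = np.ones(N)             # g_E(x) = e^x: all derivatives at x=0 equal 1

print((T_E(0.7, N) @ g)[0], np.exp(0.7))   # g_E(0.7) = e^0.7

# The group law T^a T^b = T^{a+b} holds exactly even for the
# truncated blocks, since the matrices are upper triangular:
print(np.allclose(T_E(0.3, N) @ T_E(0.4, N), T_E(0.7, N)))
\end{verbatim}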
Thus, we have seen how any derivative of $g_\text{E}$ away from $x=0$ can be recovered when we represent $g_\text{E}$ with $\bm{g}$. In this representation, the derivatives at $x=0$ are stored ``primitively'' while everything else requires some unpacking. Of course, there is nothing special about this representation. We might alternatively choose to store the derivatives of $g_\text{E}(x)$ at $x=a$ primitively and recover everything else from there. More creatively, recalling the identity theorem, we could store the values of $g_\text{E}(x)$ at $x=1/n$ for integer $n$ and recover everything else from there. All of the above representations are related to each other by a simple change of basis.
Of course, one can go wrong in designing a representation for $g_\text{E}(x)$: knowing only the even derivatives at $x=0$ won't fix $g_\text{E}$. However, we still have great freedom in how to represent $\bm{g}$. Any derivative of $g_\text{E}$ at any location can appear ``primitively'' in our designer representation. Moreover, any wish-list of derivatives of $g_\text{E}$ can appear primitively in our designer representation. To see this, note that if this wish-list does not already give us a complete basis we can simply begin adding in derivatives of $g_\text{E}$ at $x=0$ until the space is spanned. Eventually this will complete the basis.
Consider the representation of $g_\text{E}$ given by,
\begin{align}
\tilde{\bm{g}}=(g_\text{E}(0),g_\text{E}(1),g_\text{E}'(0),g_\text{E}'(1),g_\text{E}''(0),g_\text{E}''(1),\dots)^\intercal.
\end{align}
This is obviously enough information to recover $g_\text{E}$; it contains all of the derivatives of $g_\text{E}$ at $x=0$ and $x=1$. However, it is an interesting question exactly how much of this information can be deleted without losing our ability to recover $g_\text{E}$ exactly. I have no answer to this question, but an analogous question will soon arise for bandlimited functions.
Let's now return to a discussion of bandlimited functions.
\subsubsection*{Return to One Dimensional Uniform Sample Lattices}
Much of the above discussion about entire functions carries over unchanged to bandlimited functions.
To begin, let's see how the values and derivatives of $f_\text{B}$ everywhere are encoded into its values at a sufficiently dense uniform sample lattice. The values of $f_\text{B}$ at the sample points $x=x_n$ are stored trivially as,
\begin{align}
f_\text{B}(x_n)
&= f_n
= \bm{w}_n^\intercal\bm{f},
\end{align}
where $\bm{w}_n=(\dots,0,1,0,\dots)^\intercal$ with the 1 in the $n^\text{th}$ position. Note that $\bm{w}_n$ is infinite in both directions.
The values of $f_\text{B}$ away from the sample points (at \mbox{$x=x_n+\epsilon\,a$} for $\epsilon\in\mathbb{R}$) can be reconstructed as,
\begin{align}\label{TBdef}
f_\text{B}(x_n+\epsilon\,a)
&= \sum_{m=-\infty}^\infty S_m(n+\epsilon) \ f_m
=\bm{w}_n^\intercal \, T_\text{B}^\epsilon \, \bm{f}
\end{align}
where the entries of the matrix $T^\epsilon_\text{B}$ are $[T^\epsilon_\text{B}]_{i,j}=S_j(i+\epsilon)$. Note $T_\text{B}^\epsilon$ acts as the translation operator for this representation of bandlimited functions. Indeed, $T^\epsilon_\text{B}$ is a representation of the translation group, satisfying $T^\alpha_\text{B} T^\beta_\text{B}=T^{\alpha+\beta}_\text{B}$. If $\bm{f}$ represents $f_\text{B}(x)$ then $T^\epsilon_\text{B}\bm{f}$ represents $f_\text{B}(x+\epsilon\,a)$.
From this translation operator we can identify the derivative operator for bandlimited functions, $D_\text{B}$, as
\begin{align}
D_\text{B}\coloneqq\lim_{\epsilon\to0} \frac{T^\epsilon_\text{B}-\openone}{\epsilon}.
\end{align}
It should be noted that $D_\text{B}$ and $T_\text{B}^\epsilon$ commute and moreover we have the usual relationship between derivatives and translations, $T_\text{B}^\epsilon=\exp(\epsilon\, D_\text{B})$.
From the above definition of $D_\text{B}$ one can easily work out its matrix entries as \mbox{$[D_\text{B}]_{i,j}=(-1)^{i-j}/(i-j)$} when $i\neq j$ and 0 when $i=j$. Note that $D_\text{B}$ acts as the derivative operator for this representation of bandlimited functions. If $\bm{f}$ represents $f_\text{B}(x)$ then $\frac{1}{a}D_\text{B}\bm{f}$ represents $f_\text{B}'(x)$.
\begin{align}\label{DBMatrix}
D_\text{B}&=\text{Toeplitz}(\dots,\!\frac{1}{4},\!\frac{-1}{3},\!\frac{1}{2},\!-1,\!0,\!1,\!\frac{-1}{2},\!\frac{1}{3},\!\frac{-1}{4},\!\dots)
\end{align}
Comparing this with the $D$ operator introduced in Eq.~\eqref{BigToeplitz} we see that they are numerically identical. Indeed, $D_\text{B}$ is diagonal in the Fourier basis with spectrum $\lambda_{D_\text{B}}(k)=\mathrm{i}\,k$ for $k\in[-\pi,\pi]$. This is exactly the defining property of the $D$ operator introduced earlier, see Eq.~\eqref{LambdaD}.
Indeed, $D_\text{B}=D$ and moreover $T_\text{B}=T$. If we were to extend our discussion to two-dimensional functions we could find a discrete representation of the rotation operator for bandlimited functions, $R_\text{B}$. This would come out numerically equal to the $R$ operator introduced earlier in Eq.~\eqref{RthetaDef}, namely $R_\text{B}=R$. Thus, the discrete notions of derivative, translation, and rotation that we have been using up until now are intimately connected with bandlimited functions.
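To make this identification concrete, here is a small Python sketch (the truncation size and test function are illustrative choices; a finite block only approximates the infinite matrix near its edges). It builds $D$ from its Toeplitz entries, exponentiates it, and checks that $T^1=\exp(D)$ shifts the samples of a bandlimited function by one lattice site, as claimed in Sec.~\ref{SecHeat2}:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm, toeplitz

N = 201                                  # truncation size (illustrative)
k = np.arange(1, N)
col = np.concatenate(([0.0], (-1.0)**k / k))   # [D]_{i,0} = (-1)^i/i
row = np.concatenate(([0.0], -(-1.0)**k / k))  # [D]_{0,j} = (-1)^j/(-j)
D = toeplitz(col, row)                   # [D]_{i,j} = (-1)^(i-j)/(i-j)

T1 = expm(D)                             # T^1 = exp(D)

n = np.arange(N) - N // 2                # lattice sites with a = 1
f = np.sinc(n / 2)**2                    # bandlimited samples, K = pi
mid = slice(N // 4, 3 * N // 4)          # stay away from the edges

print(np.max(np.abs((T1 @ f)[mid] - np.sinc((n[mid] + 1) / 2)**2)))
# Should be small: on the interior, exp(D) acts on the samples as
# the unit shift n -> n+1 (edge truncation limits the accuracy).
\end{verbatim}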
It should be noted that $D=D_\text{B}$ gives us the following remarkable derivative approximation (which is exact for bandlimited functions):
\begin{align}\label{ExactDerivative}
\partial_x f(x)
&\approx2\sum_{m=1}^\infty (-1)^{m+1} \frac{f(x+m\,a)-f(x-m\,a)}{2\,m\,a}.
\end{align}
Namely, when $f$ is bandlimited with a bandwidth of $K$ and $a\leq\pi/K$ then this formula is exact. Moreover, if the Fourier transform of $f$ is mostly supported in $[-K,K]$ with thin tails (e.g., Gaussian tails) outside this region, then this is a very good derivative approximation.
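A minimal numerical check of Eq.~\eqref{ExactDerivative} (the test function, spacing, evaluation point, and truncation below are all illustrative choices):

\begin{verbatim}
import numpy as np

f = np.sinc                       # S(x), bandlimited with K = pi

def f_prime(x):                   # exact derivative of sin(pi x)/(pi x)
    return (np.pi * x * np.cos(np.pi * x)
            - np.sin(np.pi * x)) / (np.pi * x**2)

a, x0 = 0.7, 0.3                  # any spacing a <= pi/K = 1 works
m = np.arange(1, 5001)            # truncate the infinite sum
series = 2 * np.sum((-1.0)**(m + 1)
                    * (f(x0 + m * a) - f(x0 - m * a)) / (2 * m * a))

print(series, f_prime(x0))        # agree up to the truncated tail
\end{verbatim}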
Ultimately we can compute any derivative of $f_\text{B}$ anywhere from our sample data as,
\begin{align}
\partial_x^r\,f_\text{B}(x_n+\epsilon\,a)
=\frac{1}{a^r}\,\bm{w}_n^\intercal \, D_\text{B}^r T_\text{B}^\epsilon \, \, \bm{f}.
\end{align}
Thus, we can recover any value or derivative of $f_\text{B}$ from its values on any sufficiently dense uniform sample lattice. We can translate between any two uniform sample lattices with the same spacing by using the bandlimited translation operator, $T_\text{B}^\epsilon$. Translating between uniform sample lattices with different spacings is more difficult, but it can be done (as long as both are sufficiently dense with spacings $a\leq\pi/K$). Ultimately, each of these redescriptions can be accomplished by a simple change of basis.
\subsection{Non-Uniform Sample Lattices}\label{Sec2DSampling}
The previous subsection showed how any value or derivative of $f_\text{B}$ can be recovered from its values on any sufficiently dense uniform sample lattice. Moreover, it showed how changing between representing $f_\text{B}$ with different uniform sample lattices is ultimately just a change of basis.
However, just as I discussed in the case of entire functions, we can be more creative with how we try to represent $f_\text{B}$ than this. In particular, we can begin designing a customized representation by picking out any wish-list of values or derivatives of $f_\text{B}$ to appear ``primitively'' in our representation. If this wish-list does not give us a complete spanning of the space of bandlimited functions, then we can just begin adding in samples off of a uniform lattice until it does. What results is a non-uniform sample lattice (albeit one with a uniform sub-lattice).
Alternatively we might consider beginning from an overly dense sampling and trimming down from there. For example, Fig.~\ref{Fig1DSamples}b shows $f_\text{B}$ sampled at twice the necessary frequency. This is a representation of $f_\text{B}$ in an overcomplete basis. Imagine oversampling by a factor of ten with a spacing of $a=a^*/10$. Intuitively, this sample lattice has ten times the information needed to recover the function exactly. If we were to delete all but every tenth data point we would still be able to recover the function. But what if we deleted just half of the sample points, and did so randomly? This would result in a non-uniform sample lattice. See for instance Fig.~\ref{Fig1DSamples}e. Hopefully, the reader has some intuition that at least some non-uniform sample lattices are sufficient to exactly reconstruct $f_\text{B}$.
The answers to such questions are given by the various non-uniform sampling theorems~\cite{GARCIA200263,SamplingTutorial}. The details of these theorems are not important here; they can all be summarized as saying that reconstruction is possible when our non-uniform sample points are ``sufficiently dense'' in some technical sense. The sampling shown in Fig.~\ref{Fig1DSamples}e is sufficiently dense. The reconstruction in the non-uniform case is significantly more complicated than it is in the uniform case, recall Eq.~\eqref{SincRecon}. In the non-uniform case it is generally of the form,
\begin{align}
f_\text{B}(x)=\sum_{m=-\infty}^\infty G_m(x;\{z_n\}_{n\in\mathbb{Z}}) \, f_\text{B}(z_m)
\end{align}
for some reconstruction functions, $G_m$, which depend in a complicated way on the locations of all of the other sample points, $\{z_n\}_{n\in\mathbb{Z}}$.
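Here is one crude way to see this numerically (everything below, from the jittered lattice to the solver, is an illustrative choice of mine and not the reconstruction used in the cited theorems): sample a bandlimited function at randomly perturbed integer points, solve a linear system for coefficients in a sinc basis, and evaluate the result off the lattice:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def f_B(x):                              # bandwidth K = pi
    return np.sinc(x / 2)**2

M = 120                                  # finite window (illustrative)
m = np.arange(-M, M + 1)                 # sinc-basis centers at integers
z = m + rng.uniform(-0.2, 0.2, m.size)   # jittered, non-uniform samples

# Match f_B(z_j) = sum_m c_m S(z_j - m), solving for the coefficients
A = np.sinc(z[:, None] - m[None, :])
c = np.linalg.solve(A, f_B(z))

x = 0.37                                 # an arbitrary off-lattice point
print(np.sum(c * np.sinc(x - m)), f_B(x))
# Agreement is limited by the finite window, not the non-uniformity.
\end{verbatim}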
\subsection{Higher Dimensional Sampling}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\textwidth]{PaperFigures/Fig2DSamples.pdf}
\caption{Several different (but completely equivalent) graphical representations of the bandlimited function given by Eq.~\eqref{J1}. This function has a bandwidth of $\sqrt{k_x^2+k_y^2}<K=\pi$ and so has a critical spacing of $a^*=\pi/K=1$ in every direction. The scale of each subfigure is 5x5. In each subfigure, the colored regions are the Voronoi cells around the sample points (black). Subfigure a) shows the function values for all $x$. b) shows $f_\text{B}$ sampled on a square lattice with $z_{n,m}=(n/2,m/2)$. Since $1/2<a^*=1$ this is an instance of oversampling. c) shows $f_\text{B}$ sampled on a square lattice with $z_{n,m}=(n,m)$. This is an instance of critical sampling since $a=a^*=1$. d) shows $f_\text{B}$ sampled on a square lattice with $z_{n,m}=(n+m,n-m)/\sqrt{2}$. e) shows $f_\text{B}$ sampled on a hexagonal lattice with \mbox{$z_{n,m}=(n+m/2,\sqrt{3}m/2)\in\mathbb{R}^2$}. f) shows $f_\text{B}$ sampled on an irregular lattice.}
\label{Fig2DSamples}
\end{figure*}
The same story about bandlimited functions is largely true in higher dimensions as well. A two-dimensional function $f_\text{B}(x,y)$ is bandlimited if its Fourier transform $\mathcal{F}[f_\text{B}(x,y)](k_x,k_y)$ is compactly supported in the \mbox{($k_x$, $k_y$)-plane}. Specifying the value of the bandwidth is less straightforward in the higher-dimensional case as the Fourier transform's support may have different extents in different directions. However, any compact region can be bounded in a square. We can thus always imagine $f_\text{B}(x,y)$ as being bandlimited with \mbox{$(k_x,k_y)\in[-K,K]\times[-K,K]$} for some $K>0$. As such, we can represent $f_\text{B}(x,y)$ with a (sufficiently dense) uniform sample lattice in both the $x$ and $y$ directions. That is, we can represent $f_\text{B}(x,y)$ in terms of its sample values on a sufficiently dense square lattice.
Once we have such a uniform sampling, the reasoning carried out above applies unchanged. We can include any values or derivatives of $f_\text{B}(x,y)$ as part of our representation, as long as this is part of, or supplemented by, a sufficiently dense (in some technical sense) sample lattice.
For a concrete example consider the bandlimited function shown in Fig.~\ref{Fig2DSamples}a, namely,
\begin{align}\label{J1}
f_\text{B}(x,y)=J_1(\pi\,r)/(\pi\,r)
\end{align}
where $J_1$ is the first Bessel function and $r=\sqrt{x^2+y^2}$. This function is bandlimited with $\sqrt{k_x^2+k_y^2}<K=\pi$ and hence critical spacing $a^*=\pi/K=1$. Moreover, this function is rotationally symmetric.
Given this function's bandwidth of $K=\pi$, we can represent it via its sample values taken on a square lattice with spacing $a=1/2\leq a^*=1$, see Fig.~\ref{Fig2DSamples}b. We can also use a coarser square lattice with a spacing of $a=a^*=1$, see Fig.~\ref{Fig2DSamples}c. We could also use a rotated square lattice, see Fig.~\ref{Fig2DSamples}d. Sampling the function on a hexagonal lattice also works, see Fig.~\ref{Fig2DSamples}e. Finally, we can use a non-uniform lattice of sample points, see Fig.~\ref{Fig2DSamples}f. From each of these discrete representations, we could recover the original bandlimited function everywhere exactly via some generalization of Eq.~\eqref{SincRecon}.
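For a concrete check that nothing goes wrong, here is a small Python sketch (the window size and evaluation point are illustrative choices) recovering this rotationally symmetric function at an off-lattice point from its samples on the integer square lattice alone:

\begin{verbatim}
import numpy as np
from scipy.special import j1

def f_B(x, y):                        # Eq. (J1); note f_B(0,0) = 1/2
    r = np.hypot(x, y)
    rr = np.where(r == 0, 1.0, r)     # avoid 0/0 at the origin
    return np.where(r == 0, 0.5, j1(np.pi * rr) / (np.pi * rr))

N = 200
n = np.arange(-N, N + 1)              # integer (critical) square lattice
F = f_B(n[:, None], n[None, :])       # sample values f_B(n, m)

x0, y0 = 0.3, 0.7                     # an off-lattice point
recon = np.sinc(x0 - n) @ F @ np.sinc(y0 - n)   # tensor sinc sum

print(recon, f_B(x0, y0))
# Agreement is limited by the finite window: the square-lattice
# samples fix this rotationally symmetric function everywhere.
\end{verbatim}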
Thus, there is no conceptual barrier to representing a rotationally invariant bandlimited function on a square lattice. Indeed, there is no issue with representing such a function on any sufficiently dense lattice. In light of the analogy proposed in this paper, we can see this as analogous to the unsurprising fact that there is no conceptual barrier to representing rotationally invariant functions in Cartesian coordinates. There is no requirement that our representation (be it a choice of coordinates or a choice of sample points) latches onto the symmetries of what is being represented.
Thus we have a non-uniform sampling theory for higher dimensions. But what about a sampling theory on curved spaces? While such things are not relevant for the aims of this paper, recently notable progress has been made on developing a sampling theory for curved manifolds~\cite{CurvedSampling,Martin2008}.
\section{A Third Attempt at Interpreting some Discrete Spacetime Theories}\label{SecHeat3}
In Sec.~\ref{SecHeat2} it was revealed that H1-H7 have hidden continuous symmetry transformations which intuitively correspond to spatial translations and rotations. In our first attempt at interpreting H1-H7 the possibility of such symmetries was outright denied, see Sec.~\ref{SecHeat1}. In our second attempt, these hidden symmetries were exposed, but they were classified (unintuitively) as internal symmetries, see Sec.~\ref{SecHeat2}. This is due to an ``internalization'' move made in our second interpretation. This move also undercut our ability to use the lattice sites to reason about locality.
In this section (using the tools introduced in the previous section) I will show how we can externalize these symmetries by 1) inventing a continuous manifold for them to live on and 2) embedding our states/dynamics onto this manifold as bandlimited functions.
A perspective similar to this third interpretation has been put forward in the physics literature by Achim Kempf~\cite{UnsharpKempf,Kempf2000b,Kempf2003,Kempf2004a,Kempf2004b,Kempf2006,Martin2008,Kempf_2010,Kempf2013,Pye2015,Kempf2018} among others~\cite{PyeThesis,Pye2022,BEH_2020}. For an overview see~\cite{Kempf2018}.
\subsection{Choice of Manifold, Embedding and Sample Points}\label{SecHeat3A}
If we are going to externalize these symmetries then we need to have a big enough manifold on which to do the job. Clearly neither the manifold in our first interpretation, $Q$, nor in our second, $\mathbb{R}$, is up to the task. What manifold $\mathcal{M}$ might be up to the task?
The first thing we must do is pick which of our theory's symmetries we would like to externalize (there may be some symmetries we want to keep internal). For H1-H7 we want to externalize the following symmetries: continuous translations, continuous rotations, mirror reflections, and constant time shifts. In any case, we collect these symmetries together in a group $G_\text{ext}$. Clearly, our choice of manifold $\mathcal{M}$ needs to be big enough to have $G_\text{ext}$ as a subgroup of $\text{Diff}(\mathcal{M})$. Of course, this doesn't uniquely specify the manifold we ought to use. If $\mathcal{M}$ works, then so does any $\mathcal{M}'$ with $\mathcal{M}$ as a sub-manifold. For standard Occamistic reasons, it is natural to go with the smallest manifold which gets the job done. The larger the gap between $G_\text{ext}$ and $\text{Diff}(\mathcal{M})$ the more fixed spacetime structures will need to be introduced later on. One might proceed by trying to formalize the ``size'' of this gap and prove something about its minimum. However, here I prefer to just get building.
Let's begin by picking out all of our theory's continuous translation symmetries (in either space or time). The number of such symmetries gives us a lower bound on the number of dimensions our manifold requires. Another guide to the necessary number of dimensions is the dimensionality of the lattice structure revealed in our first interpretation Sec.~\ref{SecHeat1}: e.g., a uniform grid, a square lattice and a hexagonal lattice. Both of these indicators suggest that for H1-H3 we have $\mathcal{M}_{1-3}\cong\mathbb{R}^2$ and for H4-H7 we have $\mathcal{M}_{4-7}\cong\mathbb{R}^3$. When the subscript on $\mathcal{M}$ is not relevant it will be dropped.
Once we have a manifold selected, we need to somehow embed $\phi_\ell(t)$ (or equivalently $\bm{\phi}(t)$) into it. While other embeddings are possible, given the tools developed in Sec.~\ref{SecSamplingTheory} and how well they appear to suit our purposes, there are substantial reasons to go for a bandlimited embedding. That is, we are going to think of each $\phi_\ell(t)$ as a sample value which is drawn from some bandlimited field $\phi_\text{B}:\mathcal{M}\to\mathbb{R}$ at some sample point $z_\ell(t)\in\mathcal{M}$. That is,
\begin{align}\label{H1H7Embed}
\text{H1-H7}:\quad&\phi_\ell(t)=\phi_\text{B}(z_\ell(t))
\end{align}
But what points should we take for $z_\ell(t)$? In principle we here have complete freedom\footnote{One may feel some tension here with the point stressed in Sec.~\ref{SecSamplingTheory} that a choice of sampling lattice must always be sufficiently dense in some technical sense. This is true when we already have in mind a fixed bandlimited function and manifold. To describe this function we need a sufficiently dense sampling, depending on its bandwidth. However, here we have no such function and manifold in mind. We are building a manifold and then associating certain values with certain points on the manifold. From these we will then construct a bandlimited function. The bandwidth of the resulting function will be compatible with our choice of sample points. By construction these sample values capture all of the information about the function.
Suppose we then want to switch our choice of sample points; there are two ways of doing this. One can begin the above process again from scratch, embedding our old sample values onto new sample points. We are completely free in how to do this. Alternatively, we could get new sample values by sampling our old function at the new sample points. Unlike before, here our new sample points are restricted to be sufficiently dense according to the bandwidth of the old function.} in selecting these sample points. However, perhaps surprisingly, if we make natural choices about how the symmetries we have already identified fit onto $\mathcal{M}$ then our way forward here is more-or-less fixed.
Let us consider H4-H7 and suppose we make the following choices lining up our symmetry transformations $G_\text{ext}$ with certain diffeomorphisms on $\mathcal{M}$. Specifically, I take:
\begin{enumerate}
\item our continuous translation symmetry $T_\text{n}^\epsilon$ to act on the manifold as $(t,x,y)\mapsto(t,x+\epsilon\,a,y)$ for some lattice spacing $a>0$,
\item our continuous translation symmetry $T_\text{m}^\epsilon$ to act on the manifold as $(t,x,y)\mapsto(t,x,y+\epsilon\,a)$,
\item our constant time shifts $t\to t+\tau$ act on the manifold as $(t,x,y)\mapsto(t+\tau,x,y)$,
\item and finally $\phi_{0,0}(0)$ to be a sample of $\phi_\text{B}$ at \mbox{$z_{0,0}(0)=(0,0,0)$}.
\end{enumerate}
Given these choices, everything else is fixed:
\begin{align}
\text{H1-H3:}\quad& z_{n}(t)= (t,n\,a)\\
\nonumber
\text{H4-H7:}\quad& z_{n,m}(t)= (t,n\,a,m\,a)
\end{align}
with similar logic applying for H1-H3. Fig.~\ref{FigHeatSpaceTime} shows for H1-H3 these sample points (vertical black lines) as they lie on the spacetime manifold $\mathcal{M}_{1-3}\cong\mathbb{R}^2$. One can imagine an analogous figure for H4-H7 with the sample points forming a square lattice extended through time.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{PaperFigures/FigHeatSpaceTime.pdf}
\caption{One of the exponentially decaying planewave solutions to H1-H3 is shown here. The vertical black lines mark the locations of sample points which we might use to represent the dynamics. Alternatively, the red lines show another set of sample points we might use to represent the dynamics.}\label{FigHeatSpaceTime}
\end{figure}
One may worry that we are here taking a square lattice for each of H4-H7 whereas for H5 and H7 we naturally ought to embed on a hexagonal lattice. This point will resolve itself naturally later.
Having chosen a manifold, embedding, and sample points, I will next reconstruct the bandlimited function $\phi_\text{B}$ from these sample values.
\subsection{Bandlimited Dynamics}\label{SecHeat3B}
The previous subsection motivated us to think of the discrete variables from H1-H7 as samples of some bandlimited function, $\phi_\text{B}$, as in Eq.~\eqref{H1H7Embed}. Given these sample values we can use the tools discussed in Sec.~\ref{SecSamplingTheory} to reconstruct $\phi_\text{B}$ exactly. In particular, making use of Eq.~\eqref{SincRecon} we have,
\begin{align}\label{PhiSincRecon}
\text{H1-H3:}\quad&\phi_\text{B}(t,x)
=\sum_{n=-\infty}^\infty S_{n}(x/a) \ \phi_{n}(t)\\
\nonumber
\text{H4-H7:}\quad&\phi_\text{B}(t,x,y)
=\sum_{n,m=-\infty}^\infty S_{n}(x/a) \, S_{m}(y/a) \, \phi_{n,m}(t)
\end{align}
Note that by construction $\phi_\text{B}(t,x)$ and $\phi_\text{B}(t,x,y)$ are both bandlimited with bandwidth of $K=\pi/a$ for each time $t$.
Fig.~\ref{FigHeatSpaceTime} shows for H1-H3 what this bandlimited function $\phi_\text{B}$ might look like. In particular, this figure shows one of the planewave solutions decaying exponentially in time. (Note that at a fixed wavenumber, H1-H3 only differ by a time rescaling such that this figure represents them all equally well.) One can imagine an analogous figure for H4-H7.
In addition to translating the state-of-the-world at each time into the bandlimited setting, we can also translate over the dynamics. This translation is aided by the fact that the derivative is the generator of translations, i.e., $h(x+a)=\text{exp}(a\, \partial_x) h(x)$. For H1 we have,
\begin{align}\label{DH1bandlimited}
\frac{\partial}{\partial t}\phi_\text{B}(t,x)
\!&=\sum_{n=-\infty}^\infty S_{n}(x/a) \ \frac{\d}{\d t}\phi_{n}(t)\\
\nonumber
&=\alpha\!\!\sum_{n=-\infty}^\infty\!\! S_{n}(x/a) \big[\phi_{n+1}(t)-2\phi_{n}(t)+\phi_{n-1}(t)\big]\\
\nonumber
&=\alpha\,
[\phi_\text{B}(t,x - a) - 2\phi_\text{B}(t,x) + \phi_\text{B}(t,x + a)]\\
\nonumber
&=\alpha \, [\exp(-a\,\partial_x)-2+\exp(a\,\partial_x)] \phi_\text{B}(t,x)\\
\nonumber
&=\alpha \, [2\text{cosh}(a\,\partial_x)-2] \ \phi_\text{B}(t,x).
\end{align}
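The key operator step here can be checked symbolically. A minimal sympy sketch (purely a sanity check): acting with $\exp(\pm a\,\partial_x)$ shifts the argument of a planewave $e^{-\mathrm{i}kx}$, turning the stencil into the multiplier $2\cos(ka)-2$, and hence giving the H1 planewave decay rate $\Gamma(k)=\alpha\,(2-2\cos(ka))$ used below:

\begin{verbatim}
import sympy as sp

x, k, a = sp.symbols('x k a', real=True)
phi = sp.exp(-sp.I * k * x)               # a spatial planewave

# exp(a d/dx) acts as a shift: exp(a d/dx) phi(x) = phi(x + a)
stencil = phi.subs(x, x - a) - 2 * phi + phi.subs(x, x + a)

multiplier = sp.simplify(stencil / phi)   # -> 2*cos(a*k) - 2
print(sp.simplify(multiplier - (2 * sp.cos(k * a) - 2)))   # prints 0
\end{verbatim}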
Similarly for the other theories we have:
\begin{align}\label{DH2bandlimited}
\text{H2}:&\ \partial_t\phi_\text{B}
=\frac{\alpha}{6} [-\text{cosh}(2a\partial_x)\!+\!16\text{cosh}(a\partial_x)\!-\!15]\phi_\text{B}\\
\label{DH3bandlimited}
\text{H3}:&\ \partial_t\phi_\text{B}
=\alpha \, a^2\,\partial_x^2 \, \phi_\text{B}\\
\label{DH4bandlimited}
\text{H4}:&\ \partial_t\phi_\text{B}
=\alpha\,[\text{cosh}(a\,\partial_x)+\text{cosh}(a\,\partial_y)\!-\!2]\phi_\text{B}\\
\label{DH5bandlimited}
\text{H5}:&\ \partial_t\phi_\text{B}
=\alpha\,[\text{cosh}(a\,\partial_x)+\text{cosh}(a\,\partial_y)\\
\nonumber
&\qquad\qquad \ \ \ +\text{cosh}(a\,(\partial_y-\partial_x))\!-\!3]
\phi_\text{B},\\
\label{DH6bandlimited}
\text{H6}:&\
\partial_t\phi_\text{B}
=\frac{\alpha\,a^2}{2} \, (\partial_x^2+\partial_y^2) \, \phi_\text{B}\\
\label{DH7bandlimited}
\text{H7}:&\
\partial_t\phi_\text{B}
=\frac{\alpha\,a^2}{3}\left(\partial_x^2+\partial_y^2+(\partial_x-\partial_y)^2\right) \, \phi_\text{B}
\end{align}
We can easily solve each of these dynamical equations. Just as in Sec.~\ref{SecSevenHeat}, these dynamics admit a complete basis of planewave solutions. Here we have:
\begin{align}\label{PlaneWaveCont}
\text{H1-H3:}\quad&\phi(t,x;k)
=e^{-\mathrm{i} k x} \,e^{-\Gamma(k)\,t}\\
\nonumber
\text{H4-H7:}\quad&\phi(t,x,y;k)
=e^{-\mathrm{i} k_1 x-\mathrm{i} k_2 y} \,e^{-\Gamma(k_1,k_2)\,t}.
\end{align}
Each of these planewaves decays at the same rates given in Sec.~\ref{SecSevenHeat}. There is, however, one substantial difference here. Before, the wavenumber was restricted to $k\in[-\pi,\pi]$, with solutions with $k$ outside of this range being identical to solutions in this range. Here, the wavenumber is unrestricted. However, $\phi_\text{B}$ simply has no support on solutions outside of $k\in[-K,K]$.
\subsection{Bandlimited Locality}\label{BandlimitedLocality}
Let's next develop a sense of comparative locality for H1-H7 according to this interpretation. Recall that in Sec. \ref{SecIntuitiveLocality} we found an intuitive notion of locality such that \mbox{$\text{H1}>\text{H2}>\text{H3}$} and \mbox{$\text{H4},\text{H5}>\text{H6},\text{H7}$} with higher rated theories being more local. Viewed from a bandlimited perspective however, a different notion of locality becomes natural. Indeed, as I will now discuss, all of these locality comparisons are here reversed.
In general, differential equations are considered local when they only involve derivatives up to a finite order. Each of these derivatives is a local operation and there is no way to build from a finite set of them something non-local. However, when one is allowed an infinite number of derivatives one can create non-local dynamics. Recall that $h(x+a)=\text{exp}(a\, \partial_x) h(x)$. Indeed, this is exactly what is going on in the dynamical equations of H1, H2, H4 and H5. From a bandlimited perspective, these are highly non-local theories despite previously being the most local. The bandlimited function $\phi_\text{B}$ is instantaneously coupled to the value it takes a distance of $a$ or even $2a$ away. By contrast, H3, H6 and H7 are perfectly local from the bandlimited perspective. On this new notion of locality we have $\text{H3}>\text{H1},\text{H2}$ and $\text{H6},\text{H7}>\text{H4},\text{H5}$. These are essentially the reverse judgements of what we had before.
But which of these two notions of locality should we care about? This depends on which view we take of the spacetime manifold underlying these theories. Indeed, it's not surprising that changes in the underlying manifold as drastic as $Q\to\mathcal{M}_{1-7}$ will have drastic consequences for our intuitive notions of locality. As I have discussed previously, taking $Q$ (unlike $\mathcal{M}_{1-7}$) to be the manifold underlying H1-H7 systematically underpredicts these theories' symmetries. Thus we find substantial reason to prefer the bandlimited notion of locality associated with $\mathcal{M}_{1-7}$.
One may still be puzzled, however. Suppose we stick with viewing $\mathcal{M}$ as the underlying manifold. One may reason (poorly) as follows: When we view the dynamics of H3 (or H6 or H7) in terms of $\phi_\text{B}$ it is local. However, when we view the dynamics in terms of its sample points (which are after all extremely local: samples at a point) we find dynamics like Eq.~\eqref{DH3} which couples the sample points to each other at infinite range via $D$. What gives?
The oversight in the above line-of-thought is thinking that the sample points correspond to localized degrees of freedom of $\phi_\text{B}$. They do not. Yes, the sample value of $f_\text{B}$ at $x_0$ can be understood as
\begin{align}\label{DeltaSample}
f_\text{B}(x_0) = \int \d x \, f_\text{B}(x) \, \delta(x-x_0).
\end{align}
However, it is also true for every bandlimited function $f_\text{B}$ and for every $x_0$ that,
\begin{align}\label{WeightedAvg}
f_\text{B}(x_0) = \frac{1}{a}\int \d x \, f_\text{B}(x) \, S\left(\frac{x-x_0}{a}\right).
\end{align}
for $a<\pi/K$. This is because $\delta(x)$ and $\frac{1}{a}S(x/a)$ have identical Fourier transforms for $k\in[-K,K]$. For bandlimited functions, these two kernels are identical.
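A quick numerical check of Eq.~\eqref{WeightedAvg} (the test function, spacing, grid, and window below are illustrative choices):

\begin{verbatim}
import numpy as np

def f_B(x):                         # bandwidth K = pi
    return np.sinc(x / 2)**2

a, x0 = 0.8, 0.4                    # any a < pi/K = 1 and any x0
h = 0.25
x = np.arange(-3000, 3000, h)       # dense grid over a wide window

integral = h * np.sum(f_B(x) * np.sinc((x - x0) / a) / a)
print(integral, f_B(x0))
# The whole-line weighted average returns the point value, up to
# the truncation of the slowly decaying tails.
\end{verbatim}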
We thus have two mathematical representations for what it means to evaluate a bandlimited function at a point. As I will now discuss, the second representation is more in line with the nature of bandlimited functions than the first. Mathematically, this is because its kernel is bandlimited whereas the other's (the Dirac delta) is not. In Eq.~\eqref{DeltaSample}, we project the bandlimited function onto a kernel outside of the bandlimited universe, whereas in Eq.~\eqref{WeightedAvg} we stay within the bandlimited universe.
Standardly, functions are thought of as something which maps some ``point'' in an input space to some ``point'' in an output space. Thus, standardly, at the core of being a function is the notion of ``evaluating a function at a point''. This is often taken as a primitive unanalyzable operation: it's just what functions do. However, pretend for a moment that we meet an alien species who have never thought of functions in this way. Rather, they take as a primitive unanalyzable operation: integrating two functions against each other. (More generally, suppose they hold this attitude towards integrating a function against a distribution.) When they ask us what we mean by ``evaluating a function at a point'', we would likely answer them by pointing to an equation like Eq.~\eqref{DeltaSample}.
To this they may ask, how do we know that such distributions as the Dirac delta exist when and where we need them? In a bandlimited context they do not. Thus, bizarrely, for bandlimited functions the supposedly basic notion of ``evaluating a function at a point'' breaks down. This likely has consequences for how we think of the spacetime manifold (if points aren't the sort of things we can evaluate functions at, what are they?) but this is a question for another paper.
There is a physical story which runs parallel to this mathematics regarding the localization of degrees of freedom and counterfactuals. My claim is that if we restrict our attention to bandlimited functions, then bandlimited functions have no localized degrees of freedom. I am here understanding degrees of freedom as things which can vary independently from each other. This is a context sensitive notion in that it depends on both what the other candidate degrees of freedom are and how we are allowed to vary them. One cannot (while keeping $f_\text{B}$ bandlimited) change the value of $f_\text{B}$ only at one point or even only in a compact region. Suppose you could. The difference between the function before and after the change would itself have to be bandlimited (the set of bandlimited functions is closed under subtraction). But this is impossible since no bandlimited function can be compactly supported. Every compactly supported function has non-zero support over all wavenumbers.
To be clear: whether or not the sample value $f_\text{B}(x_n)$ is a local degree of freedom of $f_\text{B}$ depends on context. Suppose we fix $f_\text{B}$ by giving its values at some (potentially non-uniform) sample lattice $x_n$. In one sense, all of these sample values are degrees of freedom because we can vary them all independently. Changing each of these would change $f_\text{B}$ almost everywhere, but its values at all the other sample points would remain the same. One cannot, however, vary one of these sample values while only changing the function locally.
Thus, for both physical and mathematical reasons it is improper to associate the sample values of a bandlimited function with the sample point (besides as a mere label). Nor can one associate the sample values with the weighted average of the function over some compact region\footnote{In this sense Fig.~\ref{Fig2DSamples} and Figs.~\ref{FigEvolutionH4}, \ref{FigEvolutionH5}, and \ref{FigEvolutionH6} are misleading (and Fig.~\ref{Fig1DSamples} too although less so) in the following sense. These figures associate each sample value with the sample point it was taken at. As discussed above, this is slightly misleading but ultimately understandable. However, Fig.~\ref{Fig2DSamples} and Figs.~\ref{FigEvolutionH4}, \ref{FigEvolutionH5} and \ref{FigEvolutionH6} also casually associate each sample value with the Voronoi cells surrounding its sample point. One must resist any temptation to associate the sample value with any sort of weighted average taken within these cells.}. If the sample value is to be associated with some weighted average it must be over the whole domain, e.g., as in Eq.~\eqref{WeightedAvg}.
In light of this we may want to clarify what exactly is meant by Eq.~\eqref{H1H7Embed}. We ought not think of pinning the bandlimited function down at these points $z_\ell(t)$ on the manifold. Bandlimited functions don't know what points are. Rather, this must be understood in a softer way as fixing a certain weighted average of the function, along the lines of Eq.~\eqref{WeightedAvg}.
Ultimately, for H3, H6, and H7, the apparent tension between the locality of the dynamics for $\phi_\text{B}(t,x)$ and the non-locality of the dynamics for its sample values, $\phi_n(t)=\phi_\text{B}(t,x_n)$, is resolved as follows. The sample values themselves are to be understood as non-local objects. Hence, it is unsurprising if these non-local things obey non-local dynamics.
We thus have good reason to favor the bandlimited notion of locality introduced here over the intuitive one introduced in Sec.~\ref{SecIntuitiveLocality}. Another such reason is given in the next section: unlike the bandlimited notion of locality, the intuitive notion of locality discussed above is fragile and not preserved under resampling.
\subsection{Bandlimited Nyquist-Shannon Resampling}
Let's next see how changing between different lattice representations affects the dynamics.
\subsubsection*{Equivalence of H6 and H7 via Resampling}
First, let's see what this third interpretation has to say about the one-to-one correspondence between the solutions to H6 and H7 noted following Eq.~\eqref{SkewH6H7}. In our first interpretation, H6 and H7 were seen as different theories with different symmetries despite this correspondence. In our second interpretation, however, H6 and H7 were seen as equivalent via a change of basis on the value space.
As I will now discuss, here H6 and H7 are still seen as equivalent, but now via a change of coordinates and a change of sample points. Namely, we can make sense of the skew transformation Eq.~\eqref{SkewH6H7} in terms of a coordinate transformation as:
\begin{align}\label{CoorH6H7}
x&\mapsto x + \frac{1}{2} y\\
\nonumber
y&\mapsto \frac{\sqrt{3}}{2}y.
\end{align}
None of our previous interpretations were able to make sense of Eq.~\eqref{SkewH6H7} in terms of a coordinate transformation because they had no continuous manifold on which to define it.
Applying this transformation to H7's dynamics, namely Eq.~\eqref{DH7bandlimited}, we find
\begin{align}
\text{H7}:&\
\partial_t\phi_\text{B}
=\frac{\alpha\,a^2}{2} \, (\partial_x^2+\partial_y^2) \, \phi_\text{B}.
\end{align}
Thus, in their bandlimited formulation, H6 and H7 are just a change of coordinates away from each other. This is a much stronger notion of equivalence than just having a one-to-one correspondence between solutions.
Beginning from this unified bandlimited description of H6 and H7, how should we understand the two (seemingly different) dynamical equations we started from, namely Eq.~\eqref{DH6} and Eq.~\eqref{DH7}? As I will now discuss, these discrete dynamical equations result from describing the single $\phi_\text{B}$ with different sample points.
Note that in Sec.~\ref{SecHeat3A} we embedded both H6 and H7 onto the manifold via a square lattice, \mbox{$z_{n,m}(t)= (t,n\,a,m\,a)$}. However, applying the coordinate transformation which maps H7 onto H6, namely Eq.~\eqref{CoorH6H7}, transforms a square lattice onto a hexagonal one. See Fig.~\ref{FigSkew}. Thus, taking into account this coordinate change, we have effectively embedded H7 onto our manifold using a hexagonal lattice.
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{PaperFigures/FigSkew.pdf}
\caption{As this figure shows, a linear transformation of coordinates (namely Eq.~\eqref{CoorH6H7}) maps a square lattice to a hexagonal one.}\label{FigSkew}
\end{figure}
Indeed, after applying Eq.~\eqref{CoorH6H7} to H7, the only remaining difference between it and H6 is that H6's sample points form a square lattice and H7's form a hexagonal lattice. Thus, in our third interpretation H6 and H7 are seen as describing the same bandlimited function just using different sample points. We have thus not only shown in what ways these theories are identical (as the second interpretation also did) but we have also shed light on what is going on behind the scenes in our first interpretation.
Our second big lesson holds true in this interpretation just as it did in the second one: discrete theories which are presented to us with very different lattice structures (i.e., a square lattice versus a hexagonal lattice), may nonetheless turn out to be completely equivalent theories. In this interpretation, the process for switching between lattice structures is simply reformulating as a bandlimited function, and then resampling.
\subsubsection*{Boosted Resampling of H1-H3}
In order to better see how this process of resampling works in general, let's work through another example. In particular, I will first recover the discrete dynamics for H1, namely Eq.~\eqref{H1Long}, from its bandlimited dynamics, namely Eq.~\eqref{DH1bandlimited}. Then I will discuss how one might resample H1-H3 using boosted sample points.
In Sec.~\ref{SecHeat3A} we embedded H1-H3 onto a manifold via the sample points $z_n(t)=(t,n\,a)$. Following this, in Sec.~\ref{SecHeat3B} we reconstructed the bandlimited field $\phi_\text{B}(t,x)$ and solved its dynamics. In particular, we found exponentially decaying planewave solutions Eq.~\eqref{PlaneWaveCont}. One of these plane wave solutions is shown in Fig.~\ref{FigHeatSpaceTime} along with its original sample points (vertical black lines). Note that at a fixed wavenumber, H1-H3 only differ by a time rescaling such that this figure represents them all equally well.
Before considering the boosted sample points (red lines in Fig.~\ref{FigHeatSpaceTime}), let's first cast the bandlimited dynamics for H1 down onto these stationary sample points.
Using the identity Eq.~\eqref{WeightedAvg} for bandlimited functions we have
\begin{align}\label{ReSample1}
&\frac{\d}{\d t}\phi_n(t)
=\frac{1}{a}\int \d x\, S_{n}(x/a) \ \partial_t\phi_\text{B}(t,x)
\end{align}
From here we would like to get $(\d/\d t)\phi_n(t)$ in terms of the other sample values, $\phi_m(t)$. To do this we can rewrite $\partial_t\phi_\text{B}(t,x)$ as follows:
\begin{align}\label{ReSample2}
&\partial_t\phi_\text{B}(t,x)\\
\nonumber
&=\alpha[2\text{cosh}(a\,\partial_x)-2] \ \phi_\text{B}(t,x)\\
\nonumber
&=\alpha [\exp(-a\,\partial_x)-2+\exp(a\,\partial_x)] \phi_\text{B}(t,x)\\
\nonumber
&=\alpha [\phi_\text{B}(t,x - a) - 2\phi_\text{B}(t,x) + \phi_\text{B}(t,x + a)]\\
\nonumber
&=\alpha\sum_m \,[S_{m-1}(x/a)-2 S_{m}(x/a)+S_{m+1}(x/a)] \phi_m(t)
\end{align}
where in the last step we have used Eq.~\eqref{PhiSincRecon}. Plugging this into Eq.~\eqref{ReSample1} and using the fact that $\{S_n(x)\}_{n\in\mathbb{Z}}$ form an orthonormal basis in the $L^2$ norm we have,
\begin{align}
\frac{\d}{\d t}\phi_n(t)
&=\alpha\sum_m \,[\delta_{n,m-1}-2 \delta_{n,m}+\delta_{n,m+1}] \phi_m(t)\\
\nonumber
&=\alpha\,[\phi_{n+1}(t)-2\phi_{n}(t)+\phi_{n-1}(t)].
\end{align}
Thus we have recovered the discrete dynamics of H1, Eq.~\eqref{H1Long}, from its bandlimited dynamics, Eq.~\eqref{DH1bandlimited}. Using a similar process we can recover the discrete dynamics for H2-H7 from their bandlimited dynamics.
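This resampling calculation is easy to check numerically. The following Python sketch (an illustrative aside of mine, not part of the argument; it assumes a \emph{periodic} lattice so that the discrete Fourier transform diagonalizes lattice shifts) confirms that the bandlimited generator $\alpha[2\text{cosh}(a\,\partial_x)-2]$, i.e., the Fourier multiplier $\alpha(2\cos(ka)-2)$, acts on the sample values exactly as the nearest-neighbor Laplacian of H1:
\begin{verbatim}
import numpy as np

n, a, alpha = 64, 1.0, 1.0
rng = np.random.default_rng(0)
phi = rng.standard_normal(n)     # sample values phi_n on the lattice

# discrete H1 Laplacian: alpha*(phi_{n+1} - 2 phi_n + phi_{n-1})
lap_discrete = alpha * (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1))

# bandlimited generator applied spectrally via its Fourier multiplier
k = 2 * np.pi * np.fft.fftfreq(n, d=a)
multiplier = alpha * (2 * np.cos(k * a) - 2)
lap_spectral = np.fft.ifft(np.fft.fft(phi) * multiplier).real

print(np.max(np.abs(lap_discrete - lap_spectral)))  # ~ 1e-15
\end{verbatim}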
We can do more than this, however. We can not only recover the original discrete dynamics from the bandlimited dynamics, but new discrete dynamics as well. We can do this by describing the bandlimited dynamics on a new set of sample points.
For instance, let's consider H1 sampled on the boosted sample points (slanted red lines) shown in Fig.~\ref{FigHeatSpaceTime} with
\begin{align}
\text{Boosted:} \quad &z_n^\text{Boost}(t)=(t,n\,a+v\,t)
\end{align}
for some speed $v$. Let $\varphi_n(t)$ be the sample values at these new sample points, that is
\begin{align}
\varphi_n(t)=\phi_\text{B}(z_n^\text{Boost}(t)).
\end{align}
What are the dynamics which these new sample values obey?
Using the identity Eq.~\eqref{WeightedAvg} for bandlimited functions we have,
\begin{align}
&\frac{\d}{\d t}\varphi_n(t)
=\frac{1}{a}\int \d x\, S_{n+v\,t/a}(x/a) \ \frac{\d}{\d t}\phi_\text{B}(t,x)\\
\nonumber
&=\frac{1}{a}\int \d x\, S_{n}(x/a) \ \frac{\d}{\d t}\phi_\text{B}(t,x-v\,t)\\
\nonumber
&=\frac{1}{a}\int \d x\, S_{n}(x/a) \ \big[\partial_t\phi_\text{B}-v\,\partial_x\phi_\text{B}\big]_{(t,x-v\,t)}
\end{align}
Repeating our previous derivation Eq.~\eqref{ReSample2} we can simplify the first term. This leads us to
\begin{align}
\frac{\d}{\d t}\varphi_n(t)
&=\alpha\,[\varphi_{n+1}(t)-2\varphi_{n}(t)+\varphi_{n-1}(t)]\\
\nonumber
&-\frac{v}{a}\int \d x\, S_{n}(x/a) \ \partial_x\phi_\text{B}(t,x-v\,t)
\end{align}
Making use of the derivative approximation (which is exact for bandlimited functions) given by Eq.~\eqref{ExactDerivative} and collecting these sample values into a vector \mbox{$\bm{\varphi}(t)=(\dots,\varphi_{-1}(t),\varphi_0(t),\varphi_1(t),\dots)$} we have
\begin{align}
\text{H1}:\quad
\frac{\d }{\d t}\bm{\varphi}(t)&=\alpha\,\Delta_{(1)}^2 \bm{\varphi}(t)-\frac{v}{a}\,D\,\bm{\varphi}(t).
\end{align}
Repeating this process for H2 and H3 we would find,
\begin{align}
\text{H2}:\quad
\frac{\d }{\d t}\bm{\varphi}(t)&=\alpha\,\Delta_{(2)}^2 \bm{\varphi}(t)-\frac{v}{a}\,D\,\bm{\varphi}(t)\\
\text{H3}:\quad
\frac{\d }{\d t}\bm{\varphi}(t)&=\alpha\,D^2 \bm{\varphi}(t)-\frac{v}{a}\,D\,\bm{\varphi}(t).
\end{align}
Note that the appearance of this new term in the dynamics means that none of these theories are Galilean boost invariant.
Also note how the infinite range discrete derivative operator $D$ appears in each of these equations, even when we start off with only finite range derivative approximations. Moreover, note that while before this resampling H1-H3 were local in the intuitive sense of Sec.~\ref{SecIntuitiveLocality} (i.e., nearest-neighbor couplings only), they no longer are once we have resampled them. Thus, this intuitive notion of locality is uncomfortably representation dependent and hence unphysical. Another example of this loss of intuitive locality under resampling is described in the next section.
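To make the long range of $D$ concrete, here is a small Python sketch (illustrative only; it assumes the standard sinc-differentiation coefficients $D_{nm}=(-1)^{n-m}/(n-m)$ for $n\neq m$, which is what the exact bandlimited derivative reduces to on a unit lattice). The coefficients fall off only like $1/|n-m|$, so no finite-range truncation of $D$ is exact; correspondingly, the sum below converges to the exact derivative only slowly:
\begin{verbatim}
import numpy as np

a, omega = 1.0, 2.0   # sample spacing, test frequency (omega < pi/a)
M = 100_000           # truncation window for the infinite-range sum

# for f(x) = sin(omega x), the terms of f'(0) = sum_m D_{0m} f(m a)/a
# pair up symmetrically, leaving the alternating series below
m = np.arange(1, M + 1)
coeff = (-1.0) ** (m + 1) / (m * a)    # tail ~ 1/m: infinite range
deriv = 2 * np.sum(coeff * np.sin(omega * m * a))

print(deriv, omega)                    # slowly approaches f'(0) = omega
print(np.abs(coeff[[0, 9, 99, 999]]))  # 1, 0.1, 0.01, 0.001
\end{verbatim}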
\subsubsection*{Resampling H5 on a Square Lattice}
Before going on to discuss the symmetry of these theories under this third interpretation, one final resampling should be discussed regarding H5.
Note that applying the coordinate transformation Eq.~\eqref{CoorH6H7} to H5 changes its dynamics from Eq.~\eqref{DH5bandlimited} to:
\begin{align}\label{DH5bandlimitedSkew}
\text{H5:}\ \partial_t \phi_\text{B}
=\frac{\alpha}{3}\,\big[ &2\text{cosh}(a\,\partial_x)-2\\
\nonumber
+&2\text{cosh}(a(\sqrt{3} \partial_y+\partial_x)/2)-2\\
\nonumber
+&2\text{cosh}(a(\sqrt{3} \partial_y-\partial_x)/2)-2\big]\phi_\text{B}.
\end{align}
Note that this dynamics manifestly has a one-sixth rotation symmetry.
Like with H7, it is this version of H5's dynamics which we can think of as being sampled on a hexagonal lattice to give back Eq.~\eqref{H5Long}. However, we do not have to sample this theory on a hexagonal lattice. Sampling it on a square lattice has the effect of taking $\partial_x\to D_\text{n}$ and $\partial_y\to D_\text{m}$ resulting in the discrete dynamics,
\begin{align}\label{DH5Skew}
\text{H5:}\ \partial_t \phi_\text{B}
=\frac{\alpha}{3}\,\big[ &2\text{cosh}(D_\text{n})-2\\
\nonumber
+&2\text{cosh}((\sqrt{3} D_\text{m}+D_\text{n})/2)-2\\
\nonumber
+&2\text{cosh}((\sqrt{3} D_\text{m}-D_\text{n})/2)-2\big]\phi_\text{B}.
\end{align}
This is equivalent to Eq.~\eqref{DH5} under a change of sample points from hexagonal to square.
Note that while before this resampling H5 was local in the intuitive sense of Sec.~\ref{SecIntuitiveLocality} (i.e., nearest-neighbor couplings only), it no longer is. Thus, this intuitive notion of locality is uncomfortably representation dependent and hence unphysical.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow1.pdf}
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow2.pdf}
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow3.pdf}
\caption{The dynamics of H4 is here shown being carried out in a variety of lattice representations. In the leftmost column the initial condition is shown in its bandlimited representation, given by Eq.~\eqref{InCond}. In the rightmost column the final evolved state is shown in its bandlimited representation. Here the evolution time is $t=0.8$ and the diffusion rate is $\alpha=1$. This state can be found in four different ways. Firstly by applying the dynamics Eq.~\eqref{DH4bandlimited} to Eq.~\eqref{InCond}. The other three ways are shown in the three rows of this figure. The first row shows the initial condition being sampled onto a square lattice. This is then evolved forward in time via Eq.~\eqref{H4Long}. The bandlimited representation of the final state is then recovered through the methods discussed in Sec.~\ref{SecSamplingTheory}. The second and third rows show the same process carried out on a hexagonal lattice and an irregular lattice. Notice that the final state has a 4-fold symmetry regardless of how the dynamics is represented. Notice that the final state is the same regardless of how the dynamics is represented.}\label{FigEvolutionH4}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow4.pdf}
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow5.pdf}
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow6.pdf}
\caption{The dynamics of H5 is here shown being carried out in a variety of lattice representations. In the leftmost column the initial condition is shown in its bandlimited representation, given by Eq.~\eqref{InCond}. In the rightmost column the final evolved state is shown in its bandlimited representation. Here the evolution time is $t=8.3$ and the diffusion rate is $\alpha=1$. This state can be found in four different ways. Firstly by applying the dynamics Eq.~\eqref{DH5bandlimitedSkew} to Eq.~\eqref{InCond}. The other three ways are shown in the three rows of this figure. The second row shows the initial condition being sampled onto a hexagonal lattice. This is then evolved forward in time via Eq.~\eqref{H5Long}. The bandlimited representation of the final state is then recovered through the methods discussed in Sec.~\ref{SecSamplingTheory}. The first and third rows show the same process carried out on a square lattice and an irregular lattice. Notice that the final state has a 6-fold symmetry regardless of how the dynamics is represented. Notice that the final state is the same regardless of how the dynamics is represented.}\label{FigEvolutionH5}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow7.pdf}
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow8.pdf}
\centering
\includegraphics[width=0.9\textwidth]{PaperFigures/FigRow9.pdf}
\caption{The dynamics of H6 is here shown being carried out in a variety of lattice representations. In the leftmost column the initial condition is shown in its bandlimited representation, given by Eq.~\eqref{InCond}. In the rightmost column the final evolved state is shown in its bandlimited representation. Here the evolution time is $t=1$ and the diffusion rate is $\alpha=1$. This state can be found in four different ways. Firstly by applying the dynamics Eq.~\eqref{DH6bandlimited} to Eq.~\eqref{InCond}. The other three ways are shown in the three rows of this figure. The first row shows the initial condition being sampled onto a square lattice. This is then evolved forward in time via Eq.~\eqref{DH6}. The bandlimited representation of the final state is then recovered through the methods discussed in Sec.~\ref{SecSamplingTheory}. The second and third rows show the same process carried out on a hexagonal lattice and an irregular lattice. Notice that the final state is rotation invariant regardless of how the dynamics is represented. Notice that the final state is the same regardless of how the dynamics is represented.}\label{FigEvolutionH6}
\end{figure*}
\subsection{Bandlimited Symmetries}
Now that we have translated the dynamics of our seven heat equations into a bandlimited setting, we can discuss their dynamical symmetries. While no new symmetries have been revealed in moving from our second to our third interpretation, the symmetries are represented and classified differently. In particular, all of the symmetries (except for the linear-affine rescaling) are now represented as external symmetries. Indeed, the continuous translation and rotation symmetries identified earlier are now honest-to-goodness manifold symmetries, represented by diffeomorphisms $d\in\text{Diff}(\mathcal{M})$.
Our first big lesson holds true here as well: discrete spacetime theories can have (external) continuous translation and rotation symmetries. The fact that our discrete theories at first appeared on some lattice with some lattice structure does nothing to forbid this.
I could end this section here, but I think it is helpful to see the independence of dynamical symmetries on the lattice and lattice structure explicitly. The symmetries of a dynamics have nothing to do with the symmetries of the lattice it is represented on. Just as we can represent any bandlimited state on any lattice, so too can we represent any bandlimited dynamics on any lattice. To see this consider Figs. \ref{FigEvolutionH4}, \ref{FigEvolutionH5} and \ref{FigEvolutionH6}.
In each of these figures we begin from some initial heat distribution with a bandlimited representation,
\begin{align}\label{InCond}
\phi_\text{B}(0,x,y)=\frac{J_1(\pi r)}{\pi r}
+\frac{J_0(\pi r)-J_2(\pi r)}{2}
\end{align}
where $J_n(r)$ is the $n^\text{th}$ Bessel function and $r=\sqrt{x^2+y^2}$. This function is bandlimited with bandwidth $K=\pi$ and is rotationally invariant. It is shown in the first columns of Figs. \ref{FigEvolutionH4}, \ref{FigEvolutionH5} and \ref{FigEvolutionH6}.
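For concreteness, here is a short Python sketch evaluating Eq.~\eqref{InCond} with SciPy (illustrative only; the guard for the removable singularity at $r=0$, where both terms tend to $1/2$, is my own implementation detail). Note that unit spacing is exactly the Nyquist rate for bandwidth $K=\pi$:
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1, jv

def phi0(x, y):
    r = np.hypot(x, y)
    jinc = np.where(r > 1e-12,
                    j1(np.pi * r) / (np.pi * np.maximum(r, 1e-12)),
                    0.5)                     # limit value at r = 0
    return jinc + (j0(np.pi * r) - jv(2, np.pi * r)) / 2

print(phi0(3.0, 4.0), phi0(5.0, 0.0))  # equal: depends on r only
xs = np.arange(-5, 6)                  # samples on the unit square lattice
samples = phi0(*np.meshgrid(xs, xs))
print(samples.shape, samples[5, 5])    # (11, 11), center value 1.0
\end{verbatim}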
We can therefore represent this function by sampling on a square lattice with $a=1$. We could equivalently represent this function on a hexagonal lattice or even an irregular lattice. For each of H4, H5 and H6=H7, such representations are shown in the second columns of Figs. \ref{FigEvolutionH4}, \ref{FigEvolutionH5} and \ref{FigEvolutionH6}.
For each of these theories, we then have a choice of which representation to carry out the dynamics in. I here consider four options: as a bandlimited function, as samples on a square lattice, as samples on a hexagonal lattice or as samples on an irregular lattice. The various options for H4, H5 and H6=H7 are shown in Figs. \ref{FigEvolutionH4}, \ref{FigEvolutionH5} and \ref{FigEvolutionH6} respectively.
Let's begin with the dynamics of H5 represented on a hexagonal lattice. This is shown in the middle row of Fig.~\ref{FigEvolutionH5}. The bandlimited representation of the initial heat distribution is shown in Fig.~\ref{FigEvolutionH5}b1. The initial sample points on the hexagonal lattice are shown in Fig.~\ref{FigEvolutionH5}b2. These can be evolved forward in time using Eq.~\eqref{H5Long}. The resulting time-evolved sample points are shown in Fig.~\ref{FigEvolutionH5}b3. From these we can reconstruct a bandlimited representation for the state using the techniques discussed in Sec.~\ref{SecSamplingTheory}. The resulting reconstruction is shown in Fig.~\ref{FigEvolutionH5}b4.
Alternatively, we could have carried out this evolution with no lattice representation at all. That is, we could have skipped from Fig.~\ref{FigEvolutionH5}b1 directly to Fig.~\ref{FigEvolutionH5}b4. We could do this by applying the dynamics Eq.~\eqref{DH5bandlimitedSkew} directly to the bandlimited initial condition Eq.~\eqref{InCond}. It is in this sense that the bandlimited and discrete representations of our dynamics are equivalent.
The first and third rows of Fig.~\ref{FigEvolutionH5} show the exact same evolution via H5 represented on different lattices, namely a square lattice and an irregular lattice. In the first row the evolution is carried out by a resampled version of Eq.~\eqref{H5Long}, namely Eq.~\eqref{DH5Skew}. In the third row the evolution is carried out by whatever resampling of Eq.~\eqref{H5Long} corresponds to this irregular lattice.
Notice that the final state has a 6-fold symmetry regardless of how the dynamics is represented. Moreover, notice that the final state is the same regardless of how the dynamics is represented. Just as we can represent any bandlimited state on any lattice, so too can we represent any bandlimited dynamics on any lattice.
Fig.~\ref{FigEvolutionH4} makes the same demonstration for H4. Notice that the final state has a 4-fold symmetry regardless of how the dynamics is represented. Notice that the final state is the same regardless of how the dynamics is represented.
Likewise, Fig.~\ref{FigEvolutionH6} makes the same demonstration for H6. Notice that the final state is rotation invariant regardless of how the dynamics is represented. Notice that the final state is the same regardless of how the dynamics is represented.
These figures demonstrate clear as can be that a theory's lattice structure has nothing to do with its dynamical symmetries. We can represent any bandlimited dynamics on any lattice.
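The lattice-independence on display in these figures is easy to reproduce in miniature. The following Python sketch (a one-dimensional periodic toy model of mine, not the two-dimensional setup of the figures) represents one and the same bandlimited state on two different lattices; resampling from one lattice to the other is a phase in Fourier space and is exact:
\begin{verbatim}
import numpy as np

n, a = 32, 1.0
x = np.arange(n) * a
k = 2 * np.pi * np.fft.fftfreq(n, d=a)
f = lambda x: (np.sin(2 * np.pi * x / (n * a))
               + 0.3 * np.cos(6 * np.pi * x / (n * a)))

shift = 0.37 * a                 # second lattice: x_n' = x_n + shift
samples_A = f(x)                 # representation on lattice A
samples_B = f(x + shift)         # representation on lattice B

# resample A -> B: a shift of sample points is a Fourier phase
resampled = np.fft.ifft(np.fft.fft(samples_A)
                        * np.exp(1j * k * shift)).real
print(np.max(np.abs(resampled - samples_B)))   # ~ 1e-15
\end{verbatim}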
\subsection{Bandlimited General Covariance}\label{SecFullGenCov}
As the above discussion has shown, giving our discrete theory a bandlimited representation has had many of the same benefits one expects from a generally covariant formulation. Namely, we have exposed certain parts of our theory as merely representational artifacts and in the process we have come to a better understanding of our theory's symmetries and background structures. This is the work of the titular discrete analog of general covariance. This analogy will be spelled out in detail in the following section.
Now, however, I show how to combine this discrete analog with our usual continuum notion of general covariance. To do this, one simply takes the dynamical equations of H1-H7 (i.e., Eq.~\eqref{DH1bandlimited}-Eq.~\eqref{DH7bandlimited}) and recasts them in the coordinate-free language of differential geometry. For simplicity, however, I will just consider H4 and H6=H7 here.
Beginning with H6=H7 we should first note that its dynamics Eq.~\eqref{DH6bandlimited} are nearly identical to those of H0, the continuum heat equation, given by Eq.~\eqref{HeatEq0}. There are only two differences. Firstly, the dynamics for H0 has a diffusion constant of $\alpha_0$ whereas H6=H7 has $\alpha\,a^2$. This difference can be removed by setting $\alpha_0=\alpha a^2$. The second difference is more substantial. The field in H6=H7, namely $\phi_\text{B}$, is bandlimited whereas the field in H0, namely $\psi$, is not. As we will soon see, this is indeed their only real difference.
We can rewrite H6=H7 in the coordinate-free language of differential geometry as follows.
\begin{align}\label{DH6GenCov}
\text{H6=H7:}\quad\text{KPMs:}\quad&\langle \mathcal{M}, t_\text{ab}, h^\text{ab}, \nabla_\text{a},T^\text{a},\phi_\text{B}\rangle\\
\nonumber
\text{DPMs:}\quad&
T^\text{a}\,\nabla_\text{a}\phi_\text{B}
=\frac{\alpha_0}{2} \, h^\text{bc} \nabla_\text{b}\nabla_\text{c}\phi_\text{B}.
\end{align}
The geometric objects used in this formulation are all just as defined following Eq.~\eqref{H0GenCov} except that $\phi_\text{B}$ here is bandlimited. In order for this reformulation to really be coordinate-free, we need some geometric way of understanding that $\phi_\text{B}$ is bandlimited. But how can this be expressed geometrically, i.e., in terms of $h^\text{ab}$ and $\nabla_\text{a}$?
Consider any space-like hypersurface in this spacetime. That is, consider any surface $H\subset\mathcal{M}$ such that all of its tangent vectors $x^\text{a}$ have $t_\text{ab}\,x^\text{a}=0$. Consider the eigen-problem for functions $f:H\to\mathbb{R}$ defined on this surface, $h^\text{bc} \nabla_\text{b}\nabla_\text{c} f= -\lambda f$. Because the spacetime is flat, we know that $H$ is flat and therefore the eigensolutions are all planewaves, with frequency $k=\sqrt{\lambda}$. We can now say what it means for $\phi_\text{B}$ to be bandlimited.
$\phi_\text{B}$ is bandlimited if and only if for any space-like hypersurface $H$, if we restrict $\phi_\text{B}$ to $H$ and then expand it in the above discussed eigenbasis, then only eigensolutions with eigenvalues in some fixed finite range are needed. The extent of this range is the bandwidth of $\phi_\text{B}$.
Note that spelling out what it means for $\phi_\text{B}$ to be bandlimited did not require talking about $T^\text{a}$. Thus, this geometric definition of being bandlimited can be applied in Galilean spacetimes as well. We did however make use of the flatness of the spacetime. As such, another geometric definition of being bandlimited will need to be developed for curved spacetimes~\cite{CurvedSampling}.
Before further analyzing H6=H7, let's next consider H4. Rewritten in the coordinate-free language of differential geometry, H4 becomes:
\begin{align}\label{DH4GenCov}
\text{H4:}\quad\text{KPMs:}\quad&\langle \mathcal{M}, t_\text{ab}, h^\text{ab}, \nabla_\text{a},T^\text{a},X^\text{a},Y^\text{a},\phi_\text{B}\rangle\\
\nonumber
\text{DPMs:}\quad&
T^\text{a}\,\nabla_\text{a}\phi_\text{B}
=\frac{\alpha}{2} \, F(X^\text{b}\nabla_\text{b},Y^\text{c}\nabla_\text{c})\,\phi_\text{B}.
\end{align}
where
\begin{align}
F(x,y)
=2\text{cosh}(a\,x)+2\text{cosh}(a\,y)\!-\!4.
\end{align}
Here $X^\text{a}$ and $Y^\text{a}$ are a pair of fixed constant space-like unit vectors which are orthogonal to each other. That is,
\begin{align}
\nabla_\text{a}X^\text{b}&=0 &
\nabla_\text{a}Y^\text{b}&=0\\
\nonumber
t_\text{ab} X^\text{a}&=0 &
t_\text{ab} Y^\text{a}&=0\\
\nonumber
h_\text{ab} X^\text{a}X^\text{b}&=1 &
h_\text{ab} Y^\text{a}Y^\text{b}&=1\\
\nonumber
h_\text{ab} X^\text{a}Y^\text{b}&=0
\end{align}
Note that the inverse space metric $h_\text{ab}$ is only well defined for spacelike vectors, see \cite{ReadThesis}. Roughly, $X^\text{a}$ and $Y^\text{a}$ here serve to pick out the directions for the rotational anomalies appearing in Fig.~\ref{FigEvolutionH4}.
Now that we have applied both our discrete and continuous notion of general covariance to H4 and H6=H7, we should be in a position to identify their background structures and symmetries.
The fixed fields $X^\text{a}$ and $Y^\text{a}$ appearing in H4 count as additional background structures and limit the spacetime symmetries of H4. Namely, they forbid rotation invariance.
Turning our attention towards H6=H7 we see that, perhaps surprisingly, its background structures and symmetries are exactly the same as H0's. The only difference between H6=H7 and H0 is that here $\phi_\text{B}$ is bandlimited whereas there $\psi$ is not. This is a restriction at the level of KPMs as to which dynamical fields are allowed. In fact, in either theory the dynamics guarantees that if the temperature field starts off bandlimited it will stay bandlimited. Thus this restriction of the allowed dynamical fields is really just a restriction on the allowed initial conditions. Ultimately, then, the only difference between H6=H7 and H0 is a restriction on the initial conditions.
As innocent as this restriction on initial conditions may seem, it has serious implications for counterfactual reasoning. As discussed following Eq.~\eqref{WeightedAvg}, when restricted to bandlimited functions we can no longer ask ``What would have happened, if things had been different only in this compact region?'' Any bandlimited counter-instance must be globally different.
However, other than this restriction there is no substantial difference between H6=H7 and H0. Any lattice structure suggested by our original formulation of H6 and H7, Eq.~\eqref{DH6}-\eqref{DH7}, has been revealed to be nothing more than a coordinate-like representational artifact. Our third big lesson is visible here in full force: given a discrete spacetime theory with some lattice structure we can always reformulate it in such a way that it has no lattice structure whatsoever. In fact, there is no longer even a lattice here. In this interpretation, this is done by reformulating it as a bandlimited function on some manifold.
\section{Two Discrete Analogs of General Covariance}\label{SecDisGenCov}
Three lessons have been repeated throughout this paper. Each of these lessons is visible in both our second and third attempts at interpreting H1-H7. Combined, these lessons give us a rich analogy between lattice structures and coordinate systems: Lattice structure is rather less like a fixed background structure and rather more like a coordinate system, i.e., merely a representational artifact.
These three lessons run counter to the three first intuitions one is likely to have regarding lattice structure discussed in Sec.~\ref{SecIntro}. Namely, that lattices and lattice structure: restrict our symmetries, distinguish our theories, and are fundamentally ``baked-into'' the theory. As we have seen, they do not restrict our symmetries, they do not distinguish our theories and they are representational not fundamental. In particular, we have learned the following three lessons.
Our first lesson was that taking the lattice and/or lattice structure seriously as a fixed background structure or as a fundamental part of the underlying manifold systematically under predicts the symmetries that discrete theories can and do have. Indeed, discrete theories can have significantly more symmetries than our first intuitions might allow for. As Sec.~\ref{SecHeat2} and Sec.~\ref{SecHeat3} have shown, each of H1-H7 has a continuous translation symmetry despite being introduced with discrete lattice structures. Moreover, H6 and H7 even have a continuous rotation symmetry. The fact that a lattice structure was used in the initial statement of these theories' dynamics does not in any way restrict their symmetries. There is no conceptual barrier to having a theory with continuous symmetries formulated on a discrete lattice.
In light of the proposed analogy between lattice structure and coordinate systems this first lesson is not mysterious. Coordinate systems are neither background structure nor a fundamental part of the manifold. The use of a certain coordinate system does not in any way restrict a theory's symmetries. Indeed, it is a familiar fact that there is no conceptual barrier to having a rotationally invariant theory formulated on a Cartesian coordinate system.
Our second lesson was that discrete theories which are presented to us with very different lattice structures may nonetheless turn out to be completely equivalent theories. Indeed, as we have seen, two of our discrete theories (H6 and H7) have a one-to-one correspondence between their solutions. This despite the fact that these theories were initially presented to us with different lattice structures (i.e., a square lattice and a hexagonal lattice respectively).
However, when in Sec.~\ref{SecHeat1} we took these lattice structures seriously as a fixed background structure, we found that despite having a one-to-one correspondence H6 and H7 were inequivalent; they were here judged to have different symmetries. Only in Sec.~\ref{SecHeat2} and Sec.~\ref{SecHeat3}, when we stopped taking the lattice structure so seriously, did we ultimately see H6 and H7 as having the same symmetries. Indeed, in these later two interpretations H6 and H7 were seen to be identical, simply re-descriptions of a single theory. In Sec.~\ref{SecHeat2} this re-description is a change of basis in the theory's value space, whereas in Sec.~\ref{SecHeat3} this re-description is a change of the sample points we are using to describe the bandlimited field state.
Moreover, as I have discussed in Sec.~\ref{SecHeat3}, our ability to switch between two different lattice structures for H6=H7 holds more generally. For any discrete theory, we can always re-describe it using any\footnote{There is some subtlety here which was discussed in Sec.~\ref{SecHeat3}.} lattice structure we wish.
In light of the proposed analogy between lattice structure and coordinate systems this second lesson is not mysterious. Unsurprisingly, continuum theories presented to us in different coordinate systems may turn out to be equivalent. Moreover, we can always re-describe any continuum theory in any coordinates we wish.
Our third lesson was that, in addition to being able to switch between lattice structures, we can also reformulate any discrete theory in such a way that it has no lattice structure whatsoever and indeed no lattice whatsoever. I have shown two ways of doing this. In Sec.~\ref{SecHeat2} this was done by internalizing the lattice structure into the theory's value space. In Sec.~\ref{SecHeat3} this was done by embedding the discrete theory onto a continuous manifold using bandlimited functions. Adopting a lattice structure and switching between them was then handled using Nyquist-Shannon sampling theory, see Sec.~\ref{SecSamplingTheory}.
In light of the proposed analogy between lattice structure and coordinate systems this third lesson is not mysterious. This is analogous to the familiar fact, discussed in Sec.~\ref{SecGenCov}, that any continuum theory can be written in a generally covariant (i.e., coordinate-free) way. Thus, the two above-discussed ways of reformulating a discrete theory to be lattice-free are each analogous to reformulating a continuum theory to be coordinate-free (i.e., a generally covariant reformulation). Thus we have not one but two discrete analogs of general covariance. See Fig.~\ref{FigTwoAnalogies}.
\begin{figure}[t]
\begin{flushleft}
\text{{\bf Internal Discrete General Covariance:}}
\end{flushleft}
$\begin{array}{rcl}
\text{Coordinate Systems} & \!\!\leftrightarrow\!\! & \text{Lattice Structure}\\
\text{Changing Coordinates} & \!\!\leftrightarrow\!\! & \text{Change of Basis in Value Space}\\
\text{Gen. Cov. Formulation} & \!\!\leftrightarrow\!\! & \text{Internalized Formulation}\\
\text{(i.e., coordinate-free)} & \ & \text{(i.e., lattice-free)}\\
\end{array}$\\
\begin{flushleft}
\text{{\bf External Discrete General Covariance:}}
\end{flushleft}
$\begin{array}{rcl}
\text{Coordinate Systems} & \!\!\leftrightarrow\!\! & \text{Lattice Structure}\\
\text{Changing Coordinates} & \!\!\leftrightarrow\!\! & \text{Nyquist-Shannon Resampling}\\
\text{Gen. Cov. Formulation} & \!\!\leftrightarrow\!\! & \text{Bandlimited Formulation}\\
\text{(i.e., coordinate-free)} & \ & \text{(i.e., lattice-free)}\\
\end{array}$
\caption{A schematic of the two notions of discrete general covariance introduced in this paper. These are compared in Sec.~\ref{SecDisGenCov}. The internal strategy is applied to H1-H7 in Sec.~\ref{SecHeat2} whereas the external strategy is applied to H1-H7 in Sec.~\ref{SecHeat3}.}
\label{FigTwoAnalogies}
\end{figure}
Before contrasting these two analogies, let's recap what they agree on. In either case, as one would hope, our discrete analog helps us to disentangle a discrete theory's substantive content from its merely representational artifacts. In particular, in both cases, lattice structure is revealed to be non-substantive and merely representational, as is the lattice itself. Lattice structure is no more attached or baked into our discrete spacetime theories than coordinate systems are to our continuum theories. In either case, getting clear about this has helped us to expose our discrete theory's hidden continuous symmetries.
What distinguishes these two notions of discrete general covariance is how they treat the lattice and lattice structure after it has been revealed as being coordinate-like and so merely representational. The approach in Sec.~\ref{SecHeat2} was to internalize the lattice structure into the theory's value space. By contrast, the approach in Sec.~\ref{SecHeat3} was to keep the lattice structure external, but to flesh it out into a continuous manifold such that it is no longer fundamental. Let us therefore call these two notions of discrete general covariance internal and external respectively.
While for H1-H7 these internal and external approaches have agreed on what symmetries there are, they have disagreed about how they are to be classified. Moreover, these two approaches pick out very different underlying manifolds for our discrete theories. As a consequence, they license very different conclusions about locality.
In each of these differences I find reason to favor the external approach. To briefly overview my feelings: It is more natural for the continuous translation and rotation symmetries of H1-H7 to be classified as external. Moreover, keeping the lattice structure external as a part of the manifold allows us to draw intuitions about locality from it. However, neither of these reasons is decisive and I think either approach is likely to be fruitful for further investigation/use.
\section{Conclusion}\label{SecConclusion}
This paper has introduced two discrete analogs of general covariance (see Fig.~\ref{FigTwoAnalogies}) and demonstrated their usefulness. In either case, as hoped, when applied to a discrete spacetime theory (i.e., a lattice theory) this discrete analog helps us disentangle the theory's substantive content from its representational artifacts. Indeed, my analysis has shown that lattice structure is rather less like a fixed background structure or part of an underlying manifold and rather more like a coordinate system, i.e., merely a representational artifact. Ultimately, as I have shown, the lattice structure supposedly underlying any discrete ``lattice'' theory has the same level of physical import as coordinates do, i.e., none at all. Namely, lattice structure is no more attached or baked into our discrete spacetime theories than coordinate systems are to our continuum theories.
Three lessons learned throughout this paper support this strong analogy between lattice structures and coordinate systems. These lessons serve to undermine the three first intuitions about lattice structure laid out in Sec.~\ref{SecIntro}.
Firstly, as I have shown, taking lattice structure seriously as a fixed background structure (or as a fundamental part of the underlying manifold) systematically under predicts the symmetries that discrete theories can and do have. Indeed, as I have shown, lattice structure does not in any way restrict a discrete theory's possible symmetries. Discrete theories can and do have significantly more symmetries than our first intuitions might allow for. There is no conceptual barrier to having a theory with continuous symmetries formulated on a discrete lattice. As I have discussed, this is analogous to the familiar fact that there is no conceptual barrier to having a continuum theory with rotational symmetry formulated on a Cartesian coordinate system.
Secondly, as I have shown, discrete theories which are presented to us with very different lattice structures (e.g., a square lattice versus a hexagonal lattice), may nonetheless turn out to be completely equivalent theories. Moreover, given any discrete theory with some lattice structure, we can always\footnote{There is some subtlety here which was discussed in Sec.~\ref{SecHeat3}.} re-describe it using a different lattice structure. As I have discussed, this is analogous to the familiar fact that our continuum theories can be described in different coordinates, and moreover we can switch between these coordinate systems freely.
Thirdly, as I have discussed, in addition to being able to switch between lattice structures, we can also reformulate any discrete theory in such a way that it has no lattice structure (and indeed no lattice) whatsoever. As I have discussed, this is analogous to the familiar fact that any continuum theory can be written in a generally covariant (i.e., coordinate-free) way.
While the details of switching between lattice structures and of lattice-free reformulation differ between the two notions of discrete general covariance mentioned above (see Fig.~\ref{FigTwoAnalogies}) the above three lessons are clear in either case. Lattice structure is very much coordinate-like and consequently ought to be viewed as a merely representational artifact.
This result is significant for two reasons. Firstly, it has consequences for other issues in the philosophy of space and time; more on this in Sec.~\ref{SecOutlook}. Secondly, it stands on its own as a shocking conclusion.
One might have an intuition that the world could be fundamentally set on a lattice. This lattice might be square or hexagonal and we might discover which by probing the world at the smallest possible scales looking for violations of rotational symmetry, or other lattice artifacts. Many serious efforts at quantum gravity assume that the world is set on a lattice of some sort at the smallest scales. However, as this paper clearly demonstrates, this just cannot be the case. The world cannot be ``fundamentally set on a square lattice'' (or any other lattice) any more than it could be ``fundamentally set in a certain coordinate system''. Lattice structures are just not the sort of thing that can be fundamental; they are thoroughly representational. Spacetime cannot be discrete (even when it might be representable as such)\footnote{As I discussed in an earlier footnote, I am unhappy with calling the sort of spacetime theories discussed in this paper ``lattice theories''. Given the results of this paper (that lattice structure is thoroughly representational) this name is misleading. However, given the claim I just made, calling them ``discrete spacetime theories'' is also misleading. The spacetimes considered here are at most \textit{representable as} discrete (at the significant cost of systematically suppressing the possibility for continuous symmetries). In light of this I currently think ``discretely representable spacetime theories'' is the most apt term for them. The defining feature of such theories in my mind is them having a finite density of degrees of freedom to borrow a phrase from Achim Kempf~\cite{UnsharpKempf,Kempf2003,Kempf2004a,Kempf2004b,Kempf2006}.}.
\section{Outlook}\label{SecOutlook}
As conclusive as the above discussion is, it opens a number of questions which require further investigation. Firstly, the above work can be extended in a number of directions: to Lorentzian theories, to theories with non-linear dynamics, to gauge theories, to gravitational theories, to first quantized theories, to second quantized theories, etc. Some interesting work has already been done in the physics literature about bandlimited quantum field theory~\cite{Pye2015,PyeThesis,BEH_2020}.
Moreover, the above work raises some interesting questions about the nature of locality in a bandlimited world. As discussed in Sec.~\ref{BandlimitedLocality}, there are no local degrees of freedom in a bandlimited world. Indeed, to change a bandlimited field somewhere requires that we change it everywhere. As discussed in Sec.~\ref{SecFullGenCov}, this non-locality only shows up at the level of KPMs, that is, as a restriction on what worlds are possible before dynamics are considered, off-shell. The dynamics of bandlimited theories, however, can be totally local; see H3 and H6=H7. What are the philosophical consequences of this new sort of counterfactual non-dynamical non-locality?
Another set of interesting questions arises in connection with the status of the manifold in the above discussion. Consider the following in light of the dynamical vs geometrical spacetime debate. Roughly: which of dynamical and spacetime symmetries is explanatorily prior? Are spacetime structures merely codifications of the dynamical behavior of matter? Or do they have an independent existence and act to govern the dynamical behavior of matter (by, for instance, restricting its possible symmetries)? Moreover, consider Norton's complaint~\cite{Norton2008} that proponents of the dynamical approach must assume some prior spacetime structure (namely the spacetime manifold itself) to even begin talking about the dynamics of matter, let alone its dynamical symmetries.
As I have shown in this paper, we can make sense of dynamics and dynamical symmetries without (much of) a manifold. In particular, in my second interpretation given in Sec.~\ref{SecHeat2}, I moved (much of) the spacetime manifold into the theory's value space, leaving only time behind. In principle time could be internalized as well. I was then able to analyze the theory's symmetries and decide which of these to externalize. In particular, in Sec.~\ref{SecHeat3} I was able to design a manifold specifically to fit with the theory's already-studied dynamics. Thus the spacetimes discussed in Sec.~\ref{SecHeat3} and the spacetime structures placed on top of them, are very much in line with the dynamical approach.
This is (potentially) a very different situation to the manifold which underlies our continuum heat equation (H0 in Sec.~\ref{SecGenCov}) if it is understood along the lines of the geometric approach. What underpins this difference? As the above discussion has shown, H0 and H6=H7 only differ as to whether the dynamical fields are bandlimited. In the first case, the spacetime manifold seems to be an ineliminable part of the theory, along the lines of Norton's complaint. However, in the second case the manifold seems far from necessary. Indeed, the manifold underlying H6=H7 was invented in Sec.~\ref{SecHeat3} as a means of better codifying the dynamics of our theories (in full accordance with the dynamical approach). Indeed, there had been much discussion of the dynamical symmetries of H6=H7 long prior to finding a suitable manifold for these theories.
This is suggestive of the possibility that the manifold underlying H0 is not so necessary as it first appears. Indeed, if the manifold underlying H0 is necessary to even begin describing the dynamics, its necessity is contingent in the following sense: if the fields under consideration had been bandlimited they could have been described without a manifold. That is, the descriptive necessity of the manifold is contingent upon the existence of arbitrarily high frequencies in the physical fields.
However, more needs to be done to develop this point. Principally, the above work ought to be extended to Lorentzian theories as these serve as the main stage for the dynamical vs geometric spacetime debate.
\begin{acknowledgments}
The author thanks James Read, Jason Pye, and Nick Menicucci for their helpful feedback.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
Consider a family $\mathcal{K}$ of positive homothetic copies of a fixed convex body $K \subset \mathbb{R}^d$ with homothety coefficients $\tau_1, \ldots, \tau_n > 0$. Following Hadwiger~\cite{hadwiger1947nonseparable}, we call $\mathcal{K}$ \emph{non-separable} if any hyperplane $H$ intersecting $\conv \bigcup \mathcal{K}$ intersects a member of $\mathcal{K}$. Answering a question by Erd\H{o}s, A.~W.~Goodman and R.~E.~Goodman~\cite{goodman1945circle} proved the following assertion:
\begin{theorem}[A.~W.~Goodman, R.~E.~Goodman, 1945]
\label{thm:goodman}
Given a non-separable family $\mathcal{K}$ of Euclidean balls of radii $r_1, \ldots, r_n$ in $\mathbb{R}^d$, it is always possible to cover them by a ball of radius $R = \sum r_i$.
\end{theorem}
Let us outline here the idea of their proof since we are going to reuse it in different settings.
First, A.~W.~Goodman and R.~E.~Goodman prove the following lemma, resembling the $1$-dimensional case of the general theorem:
\begin{lemma}
\label{lem:segm}
Let $I_1, \ldots, I_n \subset \mathbb{R}$ be segments of lengths $\ell_1, \ldots, \ell_n$ with midpoints $c_1, \ldots, c_n$. Assume the union $\bigcup I_i$ is a segment (i.e. the family of segments is non-separable). Then the segment $I$ of length $\sum \ell_i$ with midpoint at the center of mass $c = \frac{\sum \ell_i c_i}{\sum \ell_i}$ covers $\bigcup I_i$.
\end{lemma}
Next, for a family $\mathcal{K} = \{o_i + r_i B\}$ ($B$ denotes the unit ball centered at the origin of $\mathbb{R}^d$), A.~W.~Goodman and R.~E.~Goodman consider the point $o = \frac{\sum r_i o_i}{\sum r_i}$ (i.e., the center of mass of~$\mathcal{K}$ if the weights of the balls are chosen to be proportional to the radii). They project the whole family onto $d$ orthogonal directions (chosen arbitrarily) and apply Lemma~\ref{lem:segm} to show that the ball of radius $R = \sum r_i$ centered at $o$ indeed covers $\mathcal{K}$.
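This construction is easy to test numerically. In the Python sketch below (an illustrative example of mine, not from~\cite{goodman1945circle}) the family is a chain of pairwise overlapping disks: its union is connected, and a connected family is automatically non-separable, since a hyperplane disjoint from a connected set leaves it, together with its convex hull, in one open half-space:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(0.5, 1.5, size=10)      # radii
o = np.zeros((10, 2))                   # centers
for i in range(1, 10):                  # each disk overlaps the previous one
    step = rng.uniform(0, 0.9 * (r[i - 1] + r[i]))
    angle = rng.uniform(0, 2 * np.pi)
    o[i] = o[i - 1] + step * np.array([np.cos(angle), np.sin(angle)])

R = r.sum()
center = (r[:, None] * o).sum(axis=0) / R   # o = sum r_i o_i / sum r_i

# disk i lies in the big disk iff |o_i - center| + r_i <= R
print(np.all(np.linalg.norm(o - center, axis=1) + r <= R))   # True
\end{verbatim}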
In~\cite{bezdek2016non}, K.~Bezdek and Z.~L\'angi show that Theorem~\ref{thm:goodman} actually holds not only for balls but also for any centrally-symmetric bodies:
\begin{theorem}[K.~Bezdek and Z.~L\'angi, 2016]
\label{thm:symm}
Given a non-separable family of homothets of a centrally-symmetric convex body $K \subset \mathbb{R}^d$ with homothety coefficients $\tau_1, \ldots, \tau_n > 0$, it is always possible to cover them by a translate of $\left(\sum \tau_i\right)K$.
\end{theorem}
The idea of their proof is to use Lemma~\ref{lem:segm} to deduce the statement for the case when $K$ is a hypercube, and then deduce the result for sections of the hypercube (which can approximate arbitrary centrally-symmetric bodies).
It is worth noticing that Theorem~\ref{thm:symm} follows from Lemma~\ref{lem:segm} by a more direct argument (overlooked, however, by A.~W.~Goodman and R.~E.~Goodman). In 2001 F.~Petrov proposed a particular case of the problem (when $K$ is a Euclidean ball) to the Open Mathematical Contest of Saint Petersburg Lyceum~\textnumero239 \cite{piter-olimpiadi2000-2002}. The solution he had in mind (which works for any symmetric $K$ as well) is the following: For a family $\mathcal{K} = \{o_i + \tau_i K\}$, consider a homothet $\left(\sum \tau_i\right) K + o$ with center $o = \frac{\sum \tau_i o_i}{\sum \tau_i}$. If $\left(\sum \tau_i\right) K + o$ does not cover~$\mathcal{K}$, then there exists a hyperplane $H$ separating a point $p \in \conv \bigcup \mathcal{K} \setminus \left(\left(\sum \tau_i\right) K + o\right)$ from $\left(\left(\sum \tau_i\right) K + o\right)$. Projection onto the direction orthogonal to $H$ reveals a contradiction with Lemma~\ref{lem:segm}.
Another interesting approach to Goodmans' theorem was introduced by K.~Bezdek and A.~Litvak~\cite{bezdek2015packing}. They put the problem in the context of studying the packing analogue of Bang's problem through LP-duality, which gives yet another proof of Goodmans' theorem for the case when $K$ is a Euclidean disk in the plane. One can adapt their argument for the original Bang's problem to get a ``dual'' counterpart of Goodmans' theorem. We discuss this counterpart and give our proof of a slightly more general statement in Section~\ref{sect:anti}.
The paper is organized as follows.
In Section~\ref{sect:nonsymm} we prove a strengthening (with factor $\frac{d+1}{2}$ instead of $d$) of the following result of K.~Bezdek and Z.~Langi:
\begin{theorem}[K.~Bezdek and Z.~L\'angi, 2016]
\label{thm:nonsymmweak}
Given a non-separable family of positive homothetic copies of a (not necessarily centrally-symmetric) convex body $K \subset \mathbb{R}^d$ with homothety coefficients $\tau_1, \ldots, \tau_n > 0$, it is always possible to cover them by a translate of $d\left(\sum \tau_i\right)K$.
\end{theorem}
In Section~\ref{sect:simplex} we show that if we weaken the condition of non-separability, considering only $d+1$ directions of separating hyperplanes, then the factor $\frac{d+1}{2}$ cannot be improved.
In Section~\ref{sect:anti} we prove a counterpart of Goodmans' theorem related to a notion that is, in a sense, opposite to non-separability: Given a positive integer $k$ and a family of Euclidean balls of radii $r_1, \ldots, r_n$ in $\mathbb{R}^d$, it is always possible to inscribe a ball of radius $r = \left(\sum r_i\right)/k$ within their convex hull, provided every hyperplane intersects at most $k$ interiors of the balls.
\section{A Goodmans-type result for non-symmetric bodies}
\label{sect:nonsymm}
Let $K \subset \mathbb{R}^d$ be a (not necessarily centrally-symmetric) convex body containing the origin and let $K^\circ = \{p : \langle p,q \rangle \le 1 \; \forall q \in K\}$ (where $\langle\cdot,\cdot\rangle$ stands for the standard inner product) be its polar body. We define the following \emph{parameter of asymmetry}:
\[
\sigma = \min\limits_{q\in \inte K} \min \{\mu>0: (K-q) \subset -\mu (K-q)\}
\]
It is an easy exercise in convexity to establish that $\min \{\mu>0: (K-q) \subset -\mu (K-q)\} = \min \{\mu>0: (K-q)^\circ \subset -\mu (K-q)^\circ\}$. So an equivalent definition (which is more convenient for our purposes) is
\[
\sigma = \min\limits_{q\in \inte K} \min \{\mu>0: (K-q)^\circ \subset -\mu (K-q)^\circ\}.
\]
The value $\frac{1}{\sigma}$ is often referred to as \emph{Minkowski's measure of symmetry} of body~$K$ (see, e.g.,~\cite{grunbaum1963measures}).
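For a polytope, the containment $(K-q) \subset -\mu (K-q)$ only needs to be checked at the vertices of $K$, which makes $\sigma$ easy to estimate numerically. The Python sketch below (illustrative; the grid search and the ray-shooting helper are my own) estimates $\sigma$ for a triangle and returns a value close to $2$, in line with the Minkowski--Radon bound $\sigma \le d$ discussed below:
\begin{verbatim}
import numpy as np

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.9]])  # triangle vertices
g = V.mean(axis=0)

# facet description K = {x : N x <= C} with outward unit normals
N, C = [], []
for i in range(3):
    e = V[(i + 1) % 3] - V[i]
    nrm = np.array([e[1], -e[0]])
    nrm /= np.linalg.norm(nrm)
    if nrm @ g > nrm @ V[i]:        # orient outward
        nrm = -nrm
    N.append(nrm); C.append(nrm @ V[i])
N, C = np.array(N), np.array(C)

def mu(q):                  # least mu with (K - q) inside -mu (K - q)
    out = 0.0
    for v in V:             # checking the vertices of K suffices
        d = (q - v) / np.linalg.norm(q - v)
        ok = N @ d > 1e-12
        t = np.min((C[ok] - N[ok] @ q) / (N[ok] @ d))  # dist to boundary
        out = max(out, np.linalg.norm(v - q) / t)
    return out

grid = np.linspace(0.05, 0.90, 40)  # grid in barycentric coordinates
sigma = min(mu(l1 * V[0] + l2 * V[1] + (1 - l1 - l2) * V[2])
            for l1 in grid for l2 in grid if l1 + l2 < 0.95)
print(sigma)                # approx. 2 = d, attained near the centroid
\end{verbatim}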
\begin{theorem}
\label{thm:nonsymm}
Given a non-separable family of positive homothetic copies of a (not necessarily centrally-symmetric) convex body $K \subset \mathbb{R}^d$ with homothety coefficients $\tau_1, \ldots, \tau_n > 0$, it is always possible to cover them by a translate of $\frac{\sigma+1}{2}\left(\sum \tau_i\right)K$. (Here $\sigma$ denotes the parameter of asymmetry of $K$, defined above.)
\end{theorem}
\begin{proof}
We start by shifting the origin so that $K^\circ \subset -\sigma K^\circ$.
For a family $\mathcal{K} = \{o_i + \tau_i K\}$, consider the homothet $\frac{\sigma+1}{2} \left(\sum \tau_i\right) K + o$ with center $o = \frac{\sum \tau_i o_i}{\sum \tau_i}$. Assume that $\frac{\sigma+1}{2}\left(\sum \tau_i\right) K + o$ does not cover $\mathcal{K}$, hence there exists a hyperplane $H$ (strictly) separating a point $p \in \conv \bigcup \mathcal{K} \setminus \left(\frac{\sigma+1}{2}\left(\sum \tau_i\right) K + o\right)$ from $\left(\frac{\sigma+1}{2}\left(\sum \tau_i\right) K + o\right)$. Consider the orthogonal projection $\pi$ along $H$ onto the direction orthogonal to $H$. Suppose the segment $\pi(K)$ is divided by the projection of the origin in the ratio $1:s$. Since $K^\circ \subset -\sigma K^\circ$, we may assume that $s \in [1, \sigma]$. Identify the image of $\pi$ with the coordinate line $\mathbb{R}$ and denote $I_i = [a_i, b_i] = \pi\left(o_i + \tau_i K\right)$, $c_i = \pi(o_i)$, $\ell_i = b_i - a_i$, $L = \sum \ell_i$ (see Figure~\ref{pic:nonsymm}). Note that the $\ell_i$ are proportional to the $\tau_i$, and that $s(c_i - a_i) = b_i - c_i$. Denote $c = \pi(o) = \frac{\sum \ell_i c_i}{L}$ and $I = [a,b] = \pi\left(\frac{\sigma+1}{2}\left(\sum \tau_i\right) K + o\right)$ the segment of length $\frac{\sigma+1}{2}L$ divided by $c$ in the ratio $1:s$.
\begin{figure}
\centering
\includegraphics{goodman-figures-3.mps}
\caption{Illustration of the proof of Theorem~\ref{thm:nonsymm}}
\label{pic:nonsymm}
\end{figure}
Also consider the midpoints $c_i' = \frac{a_i+b_i}{2}$. By Lemma~\ref{lem:segm}, the segment $I' = [a',b']$ of length $L$ with midpoint at $c' = \frac{\sum \ell_i c_i'}{L}$ covers the union $\bigcup I_i =\pi(\mathcal{K})$. Let us check that $I' \subset I$, which would be a contradiction, since $\pi(p) \in I'$, $\pi(p) \notin I$.
First, notice that $\displaystyle c_i' = \frac{a_i + b_i}{2} \ge \frac{s a_i + b_i}{1+s} = c_i$, hence
\[
a' = c' - \frac12 L \ge c - \frac12 L \ge c - \frac{1}{1+s} \frac{\sigma+1}{2} L = a.
\]
Second, $\displaystyle c_i' - c_i = \frac{a_i+b_i}{2} - \frac{sa_i + b_i}{1+s} = \frac{s-1}{s+1} \frac{\ell_i}{2}$, hence
\begin{multline*}
b' = c' + \frac12 L = c + \left(c'-c\right) + \frac12 L = c + \frac{s-1}{2\left(s+1\right)} \frac{\sum \ell_i^2}{L} + \frac12 L \le \\
\le c + \frac{s-1}{2\left(s+1\right)} L + \frac12 L \le c + \frac{s}{1+s} \frac{\sigma+1}{2} L = b.
\end{multline*}
\end{proof}
\begin{lemma}[H.~Minkowski, J.~Radon]
\label{lem:asymm}
Let $K$ be a convex body in $\mathbb{R}^d$. Then $\sigma \le d$, where $\sigma$ denotes the parameter of asymmetry of $K$, defined above.
\end{lemma}
For the sake of completeness we provide a proof here.
\begin{proof}
Suppose the origin coincides with the center of mass $g = \int\limits_K x \; dx / \int\limits_K dx$. We show that $K^\circ \subset -d K^\circ$.
Consider two parallel support hyperplanes orthogonal to one of the coordinate axes $Ox_1$. We use the notation $H_t = \{x = (x_1,\ldots,x_d): x_1 = t\}$ for hyperplanes orthogonal to this axis. Without loss of generality, these support hyperplanes are $H_{-1}$ and $H_s$ for some $s \ge 1$. We need to prove $s \le d$.
Assume that $s > d$. Consider a cone $C$ defined as follows: its vertex is chosen arbitrarily from $K \cap H_s$; its section $C \cap H_0 = K \cap H_0$; the cone is truncated by $H_{-1}$.
Since $C$ is a $d$-dimensional cone, the $x_1$-coordinate of its center of mass divides the segment $[-1, s]$ in ratio $1:d$. Therefore, the center of mass has positive $x_1$-coordinate.
It follows from convexity of $K$ that $C \setminus K$ lies (non-strictly) between $H_{-1}$ and $H_0$, hence the center of mass of $C \setminus K$ has non-positive $x_1$-coordinate. Similarly, $K \setminus C$ lies (non-strictly) between $H_0$ and~$H_s$, hence its center of mass has non-negative $x_1$-coordinate. Thus, the center of mass of $K = (C \setminus (C \setminus K)) \cup (K \setminus C)$ (see Figure~\ref{pic:cone}) must have positive $x_1$-coordinate, which is a contradiction.
\begin{figure}[h]
\centering
\includegraphics{goodman-figures-5.mps}
\caption{Illustration of the proof of Lemma~\ref{lem:asymm}}
\label{pic:cone}
\end{figure}
\end{proof}
\begin{corollary}
\label{cor:nonsymmstrong}
The factor $d$ in Theorem~\ref{thm:nonsymmweak} can be improved to $\frac{d+1}{2}$.
\end{corollary}
\begin{proof}
The result follows from Theorem~\ref{thm:nonsymm} and Lemma~\ref{lem:asymm}.
An alternative proof of this corollary that avoids Lemma~\ref{lem:asymm} is as follows. We use the notation of Theorem~\ref{thm:nonsymmweak}. Consider the smallest homothet $\tau K$, $\tau > 0$, that can cover $\mathcal{K}$ (after a translation to $\tau K + t$, $t \in \mathbb{R}^d$). Since it is the smallest, its boundary touches $\partial \conv \bigcup \mathcal{K}$ at some points $q_0$, $\ldots$, $q_m$ ($m \le d$) such that the corresponding support hyperplanes $H_0$, $\ldots$, $H_m$ bound a \emph{nearly bounded} set $S$, i.e., a set that can be placed between two parallel hyperplanes.
Circumscribe all the bodies from the family $\mathcal{K}$ by the smallest homothets of $S$ and apply Theorem~\ref{thm:nonsymm} for them (note that if $m<d$ then $S$ is unbounded, but that does not ruin our argument). Since $S$ is a cylinder based on an $m$-dimensional simplex, its parameter of asymmetry equals $m \le d$, and we are done.
\end{proof}
\begin{remark}
\label{rem:nonsymmbest}
At the time of writing, the best possible factor in the non-symmetric case is unknown. Bezdek and L\'angi~\cite{bezdek2016non} give a sequence of examples in $\mathbb{R}^d$ showing that it is impossible to obtain a factor less than $\frac23 + \frac{2}{3\sqrt{3}}$ $(> 1)$ for any $d \ge 2$.
\end{remark}
\section{A sharp Goodmans-type result for simplices}
\label{sect:simplex}
Consider the case when $K \subset \mathbb{R}^d$ is a simplex.
In this section we are only interested in separating hyperplanes parallel to a facet of $K$.
\begin{theorem}
\label{thm:simplex}
Let $\mathcal{K}$ be a family of positive homothetic copies of a simplex $K \subset \mathbb{R}^d$ with homothety coefficients $\tau_1$, $\ldots$, $\tau_n > 0$. Suppose any hyperplane $H$ (parallel to a facet of $K$) intersecting $\conv \bigcup \mathcal{K}$ intersects a member of $\mathcal{K}$.
Then it is possible to cover $\bigcup \mathcal{K}$ by a translate of $\frac{d+1}{2}\left(\sum \tau_i\right)K$.
Moreover, the factor $\frac{d+1}{2}$ cannot be improved.
\end{theorem}
\begin{proof}
The proof of the covering part follows the same lines as (and is even simpler than) the proof of Theorem~\ref{thm:nonsymm}. Let $K$ have its center of mass at the origin. For a family $\mathcal{K} = \{o_i + \tau_i K\}$, consider a homothet $\frac{d+1}{2} \left(\sum \tau_i\right) K + o$ with center $o = \frac{\sum \tau_i o_i}{\sum \tau_i}$.
Assuming $\frac{d+1}{2}\left(\sum \tau_i\right) K + o$ does not cover $\mathcal{K}$, we find a hyperplane $H$ (strictly) separating a point $p \in \conv \bigcup \mathcal{K} \setminus \left(\frac{d+1}{2}\left(\sum \tau_i\right) K + o\right)$ from $\left(\frac{d+1}{2}\left(\sum \tau_i\right) K + o\right)$.
Note that $H$ can be chosen among the hyperplanes spanned by the facets of $\left(\frac{d+1}{2}\left(\sum \tau_i\right) K + o\right)$, so $H$ is parallel to one of them.
After projecting everything along $H$ onto the direction orthogonal to $H$, we repeat the same argument as before and show that (in the notation from Theorem~\ref{thm:nonsymm})
\[
a' = c' - \frac12 L \ge c - \frac12 L = a,
\]
which contradicts our assumption.
Next, we construct an example showing that the factor $\frac{d+1}{2}$ cannot be improved.
Consider a simplex \[K = \{x = (x_1, \ldots, x_d) \in \mathbb{R}^d: x_i \ge 0,\,\,\, \sum\limits_{i=1}^d x_i \le \frac{d(d+1)}{2} N + 1\},\] where $N$ is an arbitrary large integer. Section it with all hyperplanes of the form $\{x_i = t\}$ or of the form $\sum\limits_{i=1}^d x_i = t$ (for $t \in \mathbb{Z}$). Consider all the smallest simplices generated by these cuts and positively homothetic to $K$. We use coordinates
\[
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}, \quad 0 \le b_i \in \mathbb{Z}, \quad \sum\limits_{i=1}^d b_i \le \frac{d(d+1)}{2} N,
\]
to denote the simplex lying in the hypercube $\{b_i \le x_i \le b_i+1, i = 1, \ldots, d\}$.
For $d=2$ (see Figure~\ref{pic:triangle}) we compose $\mathcal{K}$ of the simplices with the following coordinates:
\[
\begin{pmatrix}
0 \\
N
\end{pmatrix},
\begin{pmatrix}
1 \\
N+1
\end{pmatrix}, \ldots,
\begin{pmatrix}
N \\
2N
\end{pmatrix},
\begin{pmatrix}
N+1 \\
0
\end{pmatrix}, \ldots,
\begin{pmatrix}
2N \\
N-1
\end{pmatrix}.
\]
\begin{figure}
\centering
\includegraphics{goodman-figures-2.mps}
\caption{Example for $d=2$ and $N=5$}
\label{pic:triangle}
\end{figure}
For $d=3$:
\[
\begin{pmatrix}
0 \\
N \\
2N
\end{pmatrix},
\begin{pmatrix}
1 \\
N+1 \\
2N+1
\end{pmatrix}, \ldots,
\begin{pmatrix}
N \\
2N \\
3N
\end{pmatrix},
\begin{pmatrix}
N+1 \\
2N+1 \\
0
\end{pmatrix}, \ldots,
\begin{pmatrix}
2N \\
3N \\
N-1
\end{pmatrix},
\begin{pmatrix}
2N+1 \\
0 \\
N
\end{pmatrix}, \ldots,
\begin{pmatrix}
3N \\
N-1 \\
2N-1
\end{pmatrix}.
\]
For general $d$:
\[
\begin{pmatrix}
0 \\
N \\
2N \\
\vdots \\
(d-1)N
\end{pmatrix},
\begin{pmatrix}
1 \\
N+1 \\
2N+1 \\
\vdots \\
(d-1)N+1
\end{pmatrix}, \ldots,
\begin{pmatrix}
i \pmod{dN+1} \\
N + i \pmod{dN+1}\\
2N + i \pmod{dN+1}\\
\vdots \\
(d-1)N + i \pmod{dN+1}
\end{pmatrix}, \ldots,
\begin{pmatrix}
dN \\
N-1 \\
2N-1 \\
\vdots \\
(d-1)N-1
\end{pmatrix}.
\]
It is rather straightforward to check that each $b_i$ ranges over the set $\{0, 1, \ldots, dN\}$, and their sum is not greater than $\frac{d(d+1)}{2} N$. Therefore, the chosen family $\mathcal{K}$ is indeed non-separable by hyperplanes parallel to the facets of $K$. Moreover, the chosen simplices touch all the facets of $K$, so $K$ is the smallest simplex covering $\mathcal{K}$. Finally, we note that any one-dimensional parameter of $K$ (say, its diameter) is $\frac{d(d+1)N}{2(dN+1)}$ times greater than the sum of the corresponding parameters of the elements of $\mathcal{K}$, and this ratio tends to $\frac{d+1}{2}$ as $N \to \infty$.
\end{proof}
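As an aside (not part of the original argument), the combinatorial claims in the proof above lend themselves to a quick numerical check. The following minimal Python sketch verifies, for a few values of $d$ and $N$, that the coordinates $b_j$ of the chosen simplices each range over $\{0,1,\ldots,dN\}$ and that their sums never exceed $\frac{d(d+1)}{2}N$; the helper \texttt{check} is hypothetical and introduced here only for illustration.
\begin{verbatim}
# Minimal sketch (illustrative only): the i-th chosen simplex has
# coordinates b_j(i) = (j*N + i) mod (d*N + 1), j = 0, ..., d-1.
def check(d, N):
    M = d * N + 1
    coords = [[(j * N + i) % M for j in range(d)] for i in range(M)]
    for j in range(d):
        # each coordinate takes every value in {0, ..., d*N} exactly once
        assert sorted(c[j] for c in coords) == list(range(M))
    # coordinate sums stay below d*(d+1)*N/2
    assert max(sum(c) for c in coords) <= d * (d + 1) * N // 2
    return True

assert all(check(d, N) for d in (2, 3, 4) for N in (1, 5, 10))
\end{verbatim}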
\section{A ``dual'' version of Goodmans' theorem}
\label{sect:anti}
\begin{lemma}
\label{lem:antisegm}
Let $I_1, \ldots, I_n \subset \mathbb{R}$ be segments of lengths $\ell_1, \ldots, \ell_n$ with midpoints $c_1, \ldots, c_n$. Assume every point on the line belongs to at most $k$ of the interiors of the $I_i$. Then the segment $I$ of length $\frac{1}{k}\sum \ell_i$ with midpoint at the center of mass $c = \frac{\sum \ell_i c_i}{\sum \ell_i}$ lies in $\conv \bigcup I_i$.
\end{lemma}
\begin{proof}
Mark all the segment endpoints and subdivide all the segments by the marked points. Next, put the origin at the leftmost marked point and numerate the segments between the marked points from left to right. We say that the $i$-th segment is of multiplicity $0 \le k_i \le k$ if it is covered $k_i$ times. We keep the notation $I_i$ for the new segments with multiplicities, $c_i$ for their midpoints, and~$\ell_i$ for their lengths. Note that the value $\frac{\sum \ell_i c_i}{\sum \ell_i}$ is preserved after this change of notation: it is the coordinate of the center of mass of the segments regarded as solid one-dimensional bodies of uniform density.
Note that $c_i = \ell_1 + \ldots + \ell_{i-1} + \frac12 \ell_i$. We prove that $c = \frac {\sum k_i\ell_i c_i}{\sum k_i\ell_i} \ge \frac{\sum k_i \ell_i}{2k}$ (this would mean that the left endpoint of $I$ is contained in $\conv \bigcup I_i$; for the right endpoint everything is similar).
The inequality in question
\[
2c \sum_i k_i \ell_i = k_1 \ell_1 \cdot \ell_1 + k_2 \ell_2 \cdot \left(2\ell_1 + \ell_2\right) + k_3 \ell_3 \cdot \left(2\ell_1 + 2\ell_2 + \ell_3\right) + \ldots \stackrel{?} \ge \frac{1}{k} \left( \sum_i k_i\ell_i \right)^2
\]
is equivalent to
\[
k \left( \sum_i k_i\ell_i^2 + 2 \sum_{i<j} k_j\ell_i\ell_j \right)\stackrel{?} \ge \left( \sum_i k_i\ell_i \right)^2,
\]
which is true, since $k \ge k_i$.
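To spell out this last step (a detail the reader may wish to verify), expand the right-hand side:
\[
\Bigl( \sum_i k_i\ell_i \Bigr)^2 = \sum_i k_i^2\ell_i^2 + 2 \sum_{i<j} k_ik_j\ell_i\ell_j,
\]
and compare termwise with the left-hand side: $k\,k_i\ell_i^2 \ge k_i^2\ell_i^2$ and $k\,k_j\ell_i\ell_j \ge k_ik_j\ell_i\ell_j$ for $i<j$, both because $k \ge k_i$ for every $i$.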
\end{proof}
\begin{theorem}
\label{thm:anti}
Let $k$ be a positive integer, and $\mathcal{K}$ be a family of positive homothetic copies (with homothety coefficients $\tau_1, \ldots, \tau_n > 0$) of a centrally-symmetric convex body $K \subset \mathbb{R}^d$. Suppose any hyperplane intersects at most $k$ interiors of the homothets. Then it is possible to put a translate of $\frac{1}{k}\left(\sum \tau_i\right) K$ into their convex hull.
\end{theorem}
\begin{proof}
As usual, for a family $\mathcal{K} = \{o_i + \tau_i K\}$, consider a homothet \mbox{$\frac{1}{k}\left(\sum \tau_i\right) K + o$} with center $o = \frac{\sum \tau_i o_i}{\sum \tau_i}$. Assume $\frac{1}{k} \left(\sum \tau_i\right) K + o$ does not fit into $\conv \bigcup \mathcal{K}$, then there exists a hyperplane $H$ separating a point $p \in \frac{1}{k} \left(\sum \tau_i\right) K + o$ from $\conv \bigcup \mathcal{K}$. After projecting onto the direction orthogonal to $H$, we use Lemma~\ref{lem:antisegm} to obtain a contradiction.
\end{proof}
\begin{remark}
The estimate in Theorem~\ref{thm:anti} is sharp for any $k$, as can be seen from the example of $k$ translates of $K$ lying along the line so that consecutive translates touch.
\end{remark}
\subsection*{Acknowledgements.}
The authors are grateful to Rom Pinchasi and Alexandr Polyanskii for fruitful discussions. Also the authors thank Roman Karasev, Kevin Kaczorowski, and the anonymous referees for careful reading and suggested revisions.
The research of the first author is supported by People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n$^\circ$[291734].
The research of the second author is supported by the Russian Foundation for Basic Research grant 15-01-99563 A and grant 15-31-20403 (mol\_a\_ved).
In this paper, we discuss the problem of formulating the causality
principle in quantum field theory on noncommutative spacetime. A
noncommutative spacetime of $d$ dimensions is defined by
replacing the coordinates $x^\mu$ of ${\mathbb R}^d$ by Hermitian
operators $\hat x^\mu$ satisfying the commutation relations
\begin{equation}
[\hat x^\mu, \hat x^\nu]=i\theta^{\mu\nu},
\label{1.1*}
\end{equation}
where $\theta^{\mu\nu}$ is a real antisymmetric $d\times d$
matrix, which will henceforth be assumed constant as in most
papers on this subject. The Weyl-Wigner correspondence between
algebras of operators and algebras of functions enables one to
consider quantum field theories on noncommutative spacetime as a
form of nonlocal QFT described by an action, in which the ordinary
product of fields is replaced by the Moyal-Weyl-Groenewold star
product
\begin{multline}
(f\star_\theta
g)(x)=f(x)\exp\left(\frac{i}{2}\,\overleftarrow{\partial_\mu}\,
\theta^{\mu\nu}\,\overrightarrow{\partial_\nu}\right)g(x)\\
=f(x)g(x)+\sum_{n=1}^\infty\left(\frac{i}{2}\right)^n\frac{1}{n!}\,
\theta^{\mu_1\nu_1}\dots
\theta^{\mu_n\nu_n}\partial_{\mu_1}\dots
\partial_{\mu_n}f(x)\partial_{\nu_1}\dots\partial_{\nu_n}g(x)
\label{1.2*}
\end{multline}
(see, e.g.,~\cite{Sz} for more details). Recent interest in
noncommutative QFT was caused mainly by the fact that it occupies
an intermediate position between the usual quantum field theory
and string theory~\cite{SeiW}. At present, considerable study is
being given not only to actual models, but also to the conceptual
framework of this theory. In particular, in~\cite{Alv, Ch1,FP,FW}
efforts were made to derive a corresponding generalization of the
axiomatic approach~\cite{SW,J,BLOT}. Much attention is being given
to the nonlocal effects inherent in noncommutative QFT. A
comparison of theories in which the time coordinate is
involved in the noncommutativity with theories where
$\theta^{0\nu}=0$ shows that the latter are preferable because
they obey unitarity. In~\cite{Alv,LS,CFI}, it was argued, however,
that in the case of space-space noncommutativity the usual causal
structure with the light cone is replaced by a structure with a
light wedge respecting the unbroken $SO(1,1)\times SO(2)$
symmetry. Since quantum fields are singular by their very nature,
a comprehensive study of the question of causality must include
finding an adequate space of test functions. In the standard
formalism~\cite{SW,J,BLOT}, quantum fields are taken to be
tempered operator-valued distributions, which are defined on the
Schwartz space ${\mathscr{S}}$ consisting of all infinitely differentiable
functions of fast decrease. As noted in~\cite{Alv}, the assumption
of temperedness is open to question in noncommutative QFT
because of UV/IR mixing. Moreover, the correlation functions of
some gauge-invariant operators admit an exponential growth at
energies much larger than the noncommutativity scale~\cite{GHI},
and this is an argument in favour of analytic test functions. The
very structure of the star product~\eqref{1.2*}, which is defined
by an infinite-order differential operator, suggests that analytic
test functions may be used in noncommutative QFT along with or
instead of Schwartz's ${\mathscr{S}}$.
In~\cite{S07}, we argued that the appropriate test function spaces
must be algebras under the Moyal
$\star$-product and showed that the spaces $S^\beta_\alpha$ of
Gelfand and Shilov~\cite{GS2} satisfy this condition if and only
if $\alpha\ge\beta$. The space $S^\beta_\alpha$ consists of the
smooth functions that decrease at infinity faster than
exponentially with order $1/\alpha$ and a finite type, and whose
Fourier transforms behave analogously but with order $1/\beta$.
Clearly, all these spaces are contained in the space ${\mathscr{S}}$, which
can be thought of as $S^\infty_\infty$. As shown in~\cite{S07},
the series~\eqref{1.2*} is absolutely convergent for any $f,g\in
S^\beta_\alpha$ if and only if $\beta<1/2$. However, the star
multiplication has a unique continuous extension to any space
$S^\beta_\alpha$ with $\alpha\ge\beta$. It is natural to use the
spaces with $\beta<1/2$ as an initial functional domain of
quantum fields on noncommutative spacetime, but this does not rule
out a possible extension to a larger test function space depending
on the model under consideration. Recently, the use of spaces
$S^\beta=S^\beta_\infty$, $\beta<1/2$, was also advocated by
M.~Chaichian {\it et al}~\cite{Ch2}.
If $\beta<1$, then the test functions are entire analytic, and the
notion of support loses its meaning for the generalized functions
that are defined on $S^\beta_\alpha$ or $S^\beta$ and constitute
their dual spaces $S^{\prime\beta}_\alpha$ and $S^{\prime\beta}$.
Nevertheless, some basic theorems of the theory of distributions
can be extended to these generalized functions because they retain
the property of angular localizability~\cite{FS92,S93}. This
property leads naturally to the condition of asymptotic
commutativity, which was used in nonlocal QFT instead of local
commutativity and was shown to ensure the existence of
CPT-symmetry and the standard spin-statistics relation for
nonlocal fields~\cite{S99}. We already discussed in~\cite{S06}
how some of these proofs with test functions in $S^0$ can be
adapted to noncommutative QFT. Here we intend to argue that
quantum fields living on noncommutative spacetime indeed satisfy
the asymptotic commutativity condition and to explain the
interrelation between this condition and the fundamental length
scale which is determined by the noncommutativity
parameter~$\theta$. To avoid notational clutter, we will use the
one-index spaces of type $S$, although the two-index spaces
provide a wider distributional framework.
In section~2, we introduce the test function space
$\mathscr{S}^{1/2}$ which most closely corresponds to the Moyal
star product. All spaces $S^\beta$ with $\beta<1/2$ are contained
in this space, but it is smaller than $S^{1/2}$ and may be defined
as a maximal space with the property that the series~\eqref{1.2*}
is absolutely convergent for any pair of its elements. We also
prove that $\mathscr{S}^{1/2}$ is a topological algebra under the
star product. In section~3, two classes of spaces related to
$S^{\beta}$, $\mathscr{S}^{\beta}$ and associated with cones in
${\mathbb R}^d$ are defined, and it is shown that these spaces are
algebras under the $\star$-product for $\beta<1/2$ in the former
case and for $\beta\le 1/2$ in the latter case. In section~4, the
exact formulation of the asymptotic commutativity condition is
given and its physical consequences are briefly outlined. In the
same section we introduce the notion of $\theta$-locality. In
section~5, we take, as a case in point, the normal ordered
$\star$-square $:\phi\star\phi:$ of the free scalar field $\phi$
and show that it obeys the conditions of asymptotic commutativity
and $\theta$-locality. Section~6 contains concluding remarks.
\section{\large Test function spaces adequate to the Moyal star product}
An advantage of the spaces $S^\beta$ over ${\mathscr{S}}$ is their
invariance under the action of infinite-order differential
operators, the set of which increases with decreasing $\beta$. In
what follows, we consider functions defined on ${\mathbb R}^d$ and use the
usual multi-index notation:
$$
\partial^\kappa=\displaystyle{\frac{\partial^{|\kappa|}}{\partial
x_1^{\kappa_1}\dots\partial x_d^{\kappa_d}}},\qquad
|\kappa|=\kappa_1+\dots+\kappa_d,\qquad
\kappa^\kappa=\kappa^{\kappa_1}_1\cdot\dots\cdot
\kappa_d^{\kappa_d},
$$
where $\kappa\in {\mathbb Z}_+^d$. Let $\beta\ge0$, $B>0$, and $N$ be an
integer. We denote by $S_N^{\beta,B}({\mathbb R}^d)$ the Banach space of
infinitely differentiable functions with the norm
\begin{equation}
\|f \|_{B,N}=\sup_{x,\kappa}\,(1+|x|)^N\frac{|\partial^\kappa
f(x)|}{B^{|\kappa|}\kappa^{\beta\kappa}}.
\label{2.1*}
\end{equation}
We also
write $S_N^{\beta,B}$ for this space when this cannot lead to
confusion. Let us consider the operator
\begin{equation}
\sum_{\lambda\in {\mathbb Z}_+^d} c_\lambda \partial^\lambda
\label{2.2*}
\end{equation}
assuming that $\sum_\lambda c_\lambda z^\lambda$ is an entire
function of exponential growth of order $\le 1/\beta$ and type $b$. In
treatise~\cite{GS2}, it was shown that under the condition
$b<\beta/(e^2B^{1/\beta})$ the operator~\eqref{2.2*} maps the
space $S_N^{\beta,B}$ to $S_N^{\beta,B'}$, where $B'=e^\beta B$.
This result can be improved by using the inequality
$(k+l)^{k+l}\le 2^{k+l}k^kl^l$. The assumption of order of growth,
together with the Cauchy inequality, implies that $|c_\lambda|\le
C \prod_{j=1}^d r_j^{-\lambda_j}e^{b\,r_j^{1/\beta}}$ for any
$r_j>0$. Locating the minimum with respect to $r_j$, we obtain
\begin{equation}
|c_\lambda|\le
C\left(\frac{be}{\beta}\right)^{\beta|\lambda|}\frac{1}
{\lambda^{\beta\lambda}}.
\label{2.3*}
\end{equation}
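(For the reader's convenience we spell out the elementary minimization: the minimum of $r_j^{-\lambda_j}e^{b\,r_j^{1/\beta}}$ is attained at $r_j=(\beta\lambda_j/b)^{\beta}$, where
\[
r_j^{-\lambda_j}e^{b\,r_j^{1/\beta}}
=\Bigl(\frac{b}{\beta\lambda_j}\Bigr)^{\beta\lambda_j}e^{\beta\lambda_j}
=\Bigl(\frac{be}{\beta}\Bigr)^{\beta\lambda_j}\lambda_j^{-\beta\lambda_j},
\]
and taking the product over $j=1,\dots,d$ gives~\eqref{2.3*}.)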
If $f\in S_N^{\beta,B}$, then we have
\begin{multline}
(1+|x|)^N\left|\partial^\kappa\sum_\lambda
c_\lambda\partial^\lambda f(x)\right|\le \|f \|_{B,N}\sum_\lambda
|c_\lambda| B^{|\kappa+\lambda|}(\kappa+\lambda)^{\beta(\kappa+\lambda)} \\
\leq \|f
\|_{B,N}2^{\beta|\kappa|}B^{|\kappa|}\kappa^{\beta\kappa}\sum_\lambda
|c_\lambda| 2^{\beta|\lambda|}B^{|\lambda|}\lambda^{\beta\lambda}.
\label{2.4*}
\end{multline}
Suppose that
\begin{equation}
b<\frac{\beta}{2eB^{1/\beta}}.
\label{2.5*}
\end{equation}
Then the last series in~\eqref{2.4*} converges by virtue of the
inequality~\eqref{2.3*}. Taking $B'\ge 2^\beta B$, we obtain
$\|\sum_\lambda c_\lambda\partial^\lambda f\|_{B',N}\le C'\|f
\|_{B,N}$ and conclude that the operator~\eqref{2.2*} maps
$S_N^{\beta,B}$ to $S_N^{\beta,B'}$ continuously.
Now we apply this consideration to the operator
\begin{equation}
\exp\left(\frac{i}{2}\,\theta^{\mu\nu}\frac{\partial}{\partial
x_1^\mu} \frac{\partial}{\partial
x_2^\nu}\right)=\sum_{n=0}^\infty\left(\frac{i}{2}\right)^n\frac{1}{n!}\,
\theta^{\mu_1\nu_1}\dots\theta^{\mu_n\nu_n}\frac{\partial}{\partial
x_1^{\mu_1}}\dots\frac{\partial}{\partial
x_1^{\mu_n}}\frac{\partial}{\partial
x_2^{\nu_1}}\dots\frac{\partial}{\partial x_2^{\nu_n}}.
\label{2.6*}
\end{equation}
Clearly, the order of the entire function
$\exp((i/2)\theta^{\mu\nu}z_{1\mu} z_{2\nu})$ is equal to $2$ and
the type is less than or equal to $|\theta|/4$, where
$$
|\theta|= \sum_{\mu<\nu}|\theta^{\mu\nu}|.
$$
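To make the claim about the type explicit (a detail not spelled out above), take, for instance, $|z|$ to be the $\ell^1$ norm on ${\mathbb C}^{2d}$; since all norms are equivalent, this choice affects only the value of the type. Then $|z_{1\mu}z_{2\nu}|+|z_{1\nu}z_{2\mu}|\le \frac12|z|^2$ by the AM--GM inequality, and hence
\[
\bigl|e^{(i/2)\theta^{\mu\nu}z_{1\mu}z_{2\nu}}\bigr|
\le \exp\Bigl(\frac12\sum_{\mu<\nu}|\theta^{\mu\nu}|\bigl(|z_{1\mu}z_{2\nu}|+|z_{1\nu}z_{2\mu}|\bigr)\Bigr)
\le e^{(|\theta|/4)\,|z|^2}.
\]
Substituting $\beta=1/2$ and $b=|\theta|/4$ into~\eqref{2.5*} yields precisely the bound $B<1/\sqrt{e|\theta|}$ appearing below.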
Hence we have the following theorem.
\medskip
{\bf Theorem 1.}
{\it Let $B<1/\sqrt{e|\theta|}$. Then the operator~\eqref{2.6*}
maps the space $S_N^{1/2, B}({\mathbb R}^{2d})$ continuously into the
space $S_N^{1/2,B'}({\mathbb R}^{2d})$, where $B'= B\sqrt{2}$. The series
obtained by applying this operator to a function $f\in S_N^{1/2,
B}({\mathbb R}^{2d})$ is absolutely convergent in the norm
$\|\cdot\|_{B',N}$.}
\medskip
We define the countably-normed spaces ${\mathscr{S}}^\beta$ by
\begin{equation}
\mathscr{S}^\beta=\bigcap_{N,B}S^{\beta,B}_N.
\label{2.7*}
\end{equation}
A sequence $f_n$ converges to $f\in\mathscr{S}^\beta$ if
$\|f_n-f\|_{B,N}\to 0$ for every $B>0$ and for every~$N$. The
foregoing leads directly to the following result.
\medskip
{\bf Theorem 2.} {\it The operator~\eqref{2.6*} maps
the space $\mathscr{S}^{1/2}({\mathbb R}^{2d})$ to itself continuously.
Hence it is well defined and continuous on its dual space
$\mathscr{S}^{\prime 1/2}({\mathbb R}^{2d})$. The series obtained by
applying this operator to $f\in \mathscr{S}^{1/2}({\mathbb R}^{2d})$ is
absolutely convergent in each of the norms of
$\mathscr{S}^{1/2}({\mathbb R}^{2d})$.}
\medskip
Below is given a description in terms of the Fourier transform,
which shows that the operator~\eqref{2.6*} is bijective on
$\mathscr{S}^{1/2}$ and so it is a linear topological isomorphism
of $\mathscr{S}^{1/2}$ as well as of $\mathscr{S}^{\prime 1/2}$.
Analogous statements certainly hold for any $\mathscr{S}^\beta$
with $\beta\le 1/2$, but $\mathscr{S}^{1/2}$ is the largest of
these spaces and most closely corresponds to the
operator~\eqref{2.6*} and hence to the Moyal product~\eqref{1.2*}.
In what follows, we use the notation
$$
\partial_{x_1}\theta\,\partial_{x_2}= \theta^{\mu\nu}
\frac{\partial}{\partial x_1^\mu} \frac{\partial}{\partial
x_2^\nu}.
$$
The map $\mathscr{S}^\beta({\mathbb R}^d)\times
\mathscr{S}^\beta({\mathbb R}^d)\to\mathscr{S}^\beta({\mathbb R}^d)$ that takes
each pair $(f,g)$ to the function $f\star g$ can be considered as
the composite map
\begin{equation}
\mathscr{S}^\beta({\mathbb R}^d)\times
\mathscr{S}^\beta({\mathbb R}^d)\stackrel{\otimes}{\longrightarrow}\mathscr{S}^\beta({\mathbb R}^{2d})
\stackrel{e^{(i/2)\partial_{x_1}\theta\,\partial_{x_2}}}
{\longrightarrow}\mathscr{S}^\beta({\mathbb R}^{2d})\stackrel{\widehat
{\mathsf m}}{\longrightarrow}\mathscr{S}^\beta({\mathbb R}^d),
\label{2.8*}
\end{equation}
where the first arrow takes $(f,g)$ to the function $(f\otimes
g)(x_1,x_2)=f(x_1)g(x_2)$, the second arrow is the action of
operator~\eqref{2.6*}, and the third arrow is the restriction of
elements of $\mathscr{S}^\beta({\mathbb R}^{2d})$ to the diagonal
$x_1=x_2$. The first map is obviously continuous, and we now argue
that the third map is also continuous. Although the spaces
$\mathscr{S}^\beta$ are not invariant under the Fourier
transformation, they closely resemble the Schwartz space ${\mathscr{S}}$ in
their other properties. These spaces are complete and metrizable,
i.e., belong to the class of Fr\'echet spaces. Furthermore, they
are Montel spaces (or perfect, in the nomenclature of~\cite{GS2}) and
nuclear. An analogue of Schwartz's kernel theorem states that
$\mathscr{S}^\beta({\mathbb R}^{2d})$ coincides with the completion of the
tensor product $\mathscr{S}^\beta({\mathbb R}^d)\mathbin{\otimes_\pi}
\mathscr{S}^\beta({\mathbb R}^d)$ equipped with the projective topology.
Therefore, the set of continuous bilinear maps
$\mathscr{S}^\beta({\mathbb R}^d)\times \mathscr{S}^\beta({\mathbb R}^d)\to
\mathscr{S}^\beta({\mathbb R}^d)$ can be identified with the set of
continuous linear maps $\mathscr{S}^\beta({\mathbb R}^{2d})\to
\mathscr{S}^\beta({\mathbb R}^d)$. In particular, the linear map $\widehat
{\mathsf m}$ corresponds to the ordinary pointwise multiplication
$\mathsf m\colon (f,g)\to f\cdot g$, and its continuity follows
from (and amounts to) the fact that $\mathscr{S}^\beta({\mathbb R}^d)$ is
a topological algebra under the ordinary multiplication. We thus
get the following theorem.
\medskip
{\bf Theorem 3.} {\it The spaces $\mathscr{S}^{\beta}({\mathbb R}^d)$
with $\beta\le1/2$ are topological algebras under the Moyal
$\star$-product. If $f,g \in \mathscr{S}^\beta({\mathbb R}^d)$, where
$\beta\le1/2$, then the series~\eqref{1.2*} is absolutely
convergent in this space.}
\medskip
Another way of proving this is to estimate the expression
$(1+|x|)^N |\partial^\kappa(f\star g)(x)|$ with the use of
Leibniz's formula. Such a computation is almost identical to the
proof of theorem~4 in paper~\cite{S07} dealing with the spaces
$S^\beta_\alpha$.
The Gelfand-Shilov spaces $S^\beta$ are constructible from the
spaces $S_N^{\beta, B}$ in the following way:
\begin{equation}
S^\beta=\bigcup_{B>0} S^{\beta, B}, \qquad S^{\beta,
B}=\bigcap_{B'>B,N\in{\mathbb Z}_+} S_N^{\beta, B'}.
\label{2.9*}
\end{equation}
A sequence $f_n$ is said to be convergent to an element $f\in
S^\beta$ if there is a $B>0$ such that all $f_n$ and $f$ are
contained in the space $S^{\beta, B}$ and $f_n\to f$ in each of
its norms.
Gelfand and Shilov~\cite{GS2} have shown that the spaces $S^\beta$
are algebras under the pointwise multiplication and that this
operation is separately continuous in their topology.
Mityagin~\cite{M} has proved that these spaces are nuclear.
Another proof is given in~\cite{Izv}, where in addition their
completeness is established and the corresponding kernel theorem
is proved. From this theorem, it follows that the set of
separately continuous bilinear maps $S^\beta({\mathbb R}^d)\times
S^\beta({\mathbb R}^d)\to S^\beta({\mathbb R}^d)$ is identified with the set of
continuous linear maps $S^\beta({\mathbb R}^{2d})\to S^\beta({\mathbb R}^d)$. We
combine these facts in a manner analogous to that used in the case
of ${\mathscr{S}}^\beta$ and suppose that $\beta$ satisfies the strict
inequality $\beta<1/2$. Then $e^{b|z|^{2}}\le C_\epsilon
e^{\epsilon|z|^{1/\beta}}$, where $\epsilon>0$ can be taken
arbitrarily small, and we obtain the following result.
\medskip
{\bf Theorem 4.} {\it The operator~\eqref{2.6*} maps every
space $S^\beta({\mathbb R}^{2d})$ with $\beta<1/2$ to itself
continuously. The spaces $S^\beta({\mathbb R}^d)$, where $\beta<1/2$, are
algebras under the Moyal $\star$-product, and the star
multiplication is separately continuous under their topology. If
$f,g \in S^\beta({\mathbb R}^d)$, then the series~\eqref{1.2*} converges
absolutely in this space.}
\medskip
The Fourier transformation $\mathcal{F}\colon f(x)\to\hat
f(p)=\int f(x)e^{ip\cdot x}dx$ converts $S^\beta$ to the space
$S_\beta$ which consists of all smooth functions satisfying the
inequalities
$$
|\partial^\kappa h(p)|\le C_\kappa e^{-|p/B|^{1/\beta}}\qquad
\text{for some $B(h)>0$ and for every $\kappa$},
$$
whereas $\mathscr{S}_{\beta}=\mathcal{F}[\mathscr{S}^{\beta}]$
consists of the functions satisfying
$$
|\partial^\kappa h(p)|\le C_{\kappa, B}e^{-|p/B|^{1/\beta}}\quad
\text{for each $B>0$ and for every $\kappa$}.
$$
The operator~\eqref{2.6*} turns into the multiplication of
the Fourier transforms by the function
\begin{equation}
e^{-(i/2)p_1\theta p_2},\qquad \text{where}\quad p_1\theta
p_2\stackrel{\mathrm{def}}{=} p_{1\mu}\theta^{\mu\nu}p_{2\nu}. \label{2.10*}
\end{equation}
Clearly, this function is a multiplier of
$\mathscr{S}_{\beta}({\mathbb R}^{2d})$ and of $S_\beta({\mathbb R}^{2d})$ for any
$\beta$. Hence the operator~\eqref{2.6*} admits continuous
extension to all these spaces, and this extension is unique
because $S_0=C^\infty_0$ is dense in each of them. It follows
that the $\star$-product also has a unique continuous extension to
the spaces with $\beta>1/2$. This extension is defined by
\begin{equation}
(f\times g)(x)=\frac{1}{(2\pi)^{2d}}\int\int \hat f(q)\hat
g(p)\,e^{-iqx-ipx-(i/2)q\theta p} dq dp \notag
\end{equation}
and is often called the ``twisted product''~\cite{G-BV}. The
uniqueness of the extension entitles us to identify the operation
$(f,g)\to f\times g$ with the $\star$-multiplication and to say
that $\mathscr{S}^{\beta}$ and $S^\beta$ are star product algebras
for any $\beta$. However, proposition~2 in~\cite{S07} shows that
every space $\mathscr{S}^{\beta}$ with $\beta>1/2$ and every
$S^\beta$ with $\beta\ge 1/2$ contain functions for which the
series~\eqref{1.2*} is not convergent in the topology of these
spaces.
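For orientation, we note the Fourier-side form of this operation, which follows directly from the above definition and the antisymmetry of $\theta$ (so that $q\theta q=0$):
\[
\widehat{f\times g}(p)=\frac{1}{(2\pi)^{d}}\int \hat f(q)\,\hat g(p-q)\,e^{-(i/2)\,q\theta p}\,dq,
\]
i.e., a ``twisted'' convolution in which the function~\eqref{2.10*} is the only ingredient beyond the ordinary convolution.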
\section{\large Test function algebras over cones in ${\mathbb R}^d$}
The operator~\eqref{2.6*} is nonlocal, but when acting on the
functionals defined on ${\mathscr{S}}^\beta$ or on $S^\beta$, it preserves
the property of a rapid decrease along a given direction of
${\mathbb R}^{2d}$ if a functional has such a property. In order for this
statement to be given a precise mathematical meaning, we use
spaces which are related to ${\mathscr{S}}^\beta({\mathbb R}^d)$ and
$S^\beta({\mathbb R}^d)$, but associated with cones in ${\mathbb R}^d$. Such
sheaves of spaces arise naturally in nonlocal quantum field
theory, see~\cite{FS92,S93}.
Let $U$ be an open connected cone in ${\mathbb R}^d$. We denote by
$S^{\beta, B}_N(U)$ the space of all infinitely differentiable
functions on $U$ with the finite norm
\begin{equation}
\|f \|_{U,B,N}=\sup_{x\in
U;\kappa}\,(1+|x|)^N\frac{|\partial^\kappa
f(x)|}{B^{|\kappa|}\kappa^{\beta\kappa}}.
\label{3.1*}
\end{equation}
The spaces ${\mathscr{S}}^\beta(U)$, $S^{\beta, B}(U)$ and $S^\beta(U)$ are
constructed from $S^{\beta, B}_N(U)$ by formulae analogous
to~\eqref{2.7*} and \eqref{2.9*}. If $\beta\le 1$, then all
elements of these spaces can be continued analytically to the
whole of ${\mathbb C}^d$ and this definition can be rewritten in terms of
complex variables. Using the Taylor and Cauchy formulae, it is
easy to verify (see, e.g., \cite{FS92} for details) that the space
$S^\beta(U)$ with $\beta<1$ coincides with the space of all entire
analytic functions satisfying the inequalities
\begin{equation} |f(z)|\le C_N (1+|x|)^{-N}\,e^{d(Bx,U)^{1/(1-\beta)}
+|By|^{1/(1-\beta)}},\qquad N=0,1,\dots,
\label{3.2*}
\end{equation}
where $z=x+iy$,
$d(x,U)=\inf_{\xi\in U}|x-\xi|$ is the distance from $x$ to $U$
and the constants $C_N, B$ depend on $f$. This space is independent of the
choice of the norm $|\cdot|$ on ${\mathbb R}^d$, because all these norms
are equivalent. We also note that $d(Bx,U)=Bd(x,U)$ since $U$ is
a cone. The analytic continuations of the elements of
${\mathscr{S}}^\beta(U)$ satisfy analogous inequalities for every $B$ and
for every $N$ with constants $C_{B,N}$ instead of $C_N$. This
representation makes it clear that the spaces $S^\beta(U)$ and
${\mathscr{S}}^\beta(U)$, where $\beta<1$, are algebras under the ordinary
multiplication.
The arguments used in the proofs of theorems 1 and 2 are
completely applicable to the spaces over cones and furnish the
following result.
\medskip
{\bf Theorem 5.} {\it Let $U$ be an open cone in ${\mathbb R}^{2d}$.
If $B<1/\sqrt{e|\theta|}$, then the operator~\eqref{2.6*}
maps the normed space $S_N^{1/2, B}(U)$ to $S_N^{1/2,B'}(U)$,
where $B'= B\sqrt{2}$, and is bounded. Each of the spaces
$\mathscr{S}^\beta(U)$ with $\beta\le1/2$ and $S^\beta(U)$ with
$\beta<1/2$ is continuously mapped by this operator into itself.
Consequently, it is defined and continuous on their dual spaces
$\mathscr{S}^{\prime\beta}(U)$, $S^{\prime\beta}(U)$. The series
obtained by applying the operator~\eqref{2.6*} to the elements of
these spaces are absolutely convergent.}
\medskip
As is shown in~\cite{Izv}, the spaces $S^{\beta, B}(U)$ are
nuclear. It immediately follows that ${\mathscr{S}}^\beta(U)$ and
$S^\beta(U)$ also have this property. Theorem~6 of~\cite{Izv}
states that the space $S^\beta(U\times U)$ coincides with the
completion of the tensor product $S^\beta
(U)\mathbin{\otimes_\iota} S^\beta (U)$ endowed with the inductive
topology. Let $f,g\in S^\beta(U)$. In complete analogy to the
reasoning of section~2, we can decompose the map $(f,g)\to f\star
g$ as follows:
\begin{equation}
S^\beta(U)\times
S^\beta(U)\stackrel{\otimes}{\longrightarrow}S^\beta(U\times U)
\stackrel{e^{(i/2)\partial_{x_1}\theta\partial_{x_2}}}
{\longrightarrow}S^\beta(U\times U)\stackrel{\widehat {\mathsf
m}}{\longrightarrow}S^\beta(U).
\label{3.3*}
\end{equation}
The first map in~\eqref{3.3*} is separately continuous and the
other two are continuous. (As before, we denote by $\widehat
{\mathsf m}$ the linear map that canonically corresponds to the
ordinary product.) A similar representation certainly holds for
the spaces ${\mathscr{S}}^\beta(U)$, and in that case we have even a
simpler situation because these are Fr\'echet spaces and we need
not distinguish between separately continuous and continuous
linear maps. We thus get the following theorem.
\medskip
{\bf Theorem 6.} {\it Let $U$ be an open cone in ${\mathbb R}^d$. Every
space $\mathscr{S}^\beta(U)$ with $\beta\le1/2$ is a topological
algebra under the Moyal $\star$-product. If $\beta< 1/2$, then
$S^\beta(U)$ also is an algebra with respect to the Moyal product
and the $\star$-multiplication is separately continuous under its
topology. The series~\eqref{1.2*} converges absolutely in these
spaces for each pair of their elements.}
\medskip
The special convenience of $S^\beta$ (and $S^\beta_\alpha$) is
that the generalized functions defined on these spaces of analytic
test functions have been shown to possess the property of angular
localizability, which is specified in the following manner. We say
that a functional $v\in S^{\prime\beta}({\mathbb R}^d)$ is carried by a
closed cone $K\subset {\mathbb R}^d$ if $v$ admits a continuous extension
to every space $S^\beta(U)$, where $U\supset K\setminus\{0\}$.
This property is equivalent to the existence of a continuous
extension to the space
\begin{equation}
S^\beta(K)=\bigcup_{U\supset K\setminus\{0\}}S^\beta(U)
\label{3.4*}
\end{equation}
endowed with the topology induced by the family of injections
$S^\beta(U)\to S^\beta(K)$. When such an extension exists, it is
unique because $S^\beta$ is dense in $S^\beta(U)$ and in
$S^\beta(K)$ by theorem~5 of~\cite{S97}. The
representation~\eqref{3.2*} makes it clear that outside $K$
elements of $S^\beta(K)$ can have an exponential growth of order
$1/(1-\beta)$ and a finite type. Hence we are entitled to
interpret the existence of a nontrivial carrier cone of $v\in
S^{\prime\beta}({\mathbb R}^d)$ as a falloff property of this functional in the
complementary cone or more specifically as a decrease faster than
exponentially with order $1/(1-\beta)$ and maximum type.
Moreover, the relation
\begin{equation}
S^{\prime\, \beta}(K_1\cap K_2)= S^{\prime\, \beta}(K_1)\cap S^{\prime\,
\beta}(K_2)
\label{3.5*}
\end{equation}
holds, which implies that every element of $S^{\prime\,
\beta}({\mathbb R}^d)$ has a unique minimal closed carrier cone in
${\mathbb R}^d$. This fact has been established in~\cite{FS92,S93} for
$0<\beta<1$, and a detailed proof of the relation~\eqref{3.5*} for
the most complicated case $\beta=0$ is available in~\cite{Izv}.
Clearly, the spaces $S^\beta(K)$ over closed cones also are
algebras under the Moyal $\star$-product if $\beta<1/2$.
\section{\large Asymptotic commutativity and $\theta$-locality}
It is believed that a mathematically rigorous theory of quantum
fields on noncommutative spacetime should adopt the basic
assumption of the axiomatic approach~\cite{SW,J,BLOT} that quantum
fields are operator-valued generalized functions with a common,
dense and invariant domain $D$ in the Hilbert space of states. The
optimal test function spaces may be model-dependent, but the above
consideration shows that in any case the space ${\mathscr{S}}^{1/2}$ and the
spaces $S^\beta$ with $\beta<1/2$, as well as their related spaces
over cones, are attractive for use in noncommutative QFT.
Analytic test functions have been used in nonlocal field theory for
many years, and it is reasonable to draw on this experience. The
axiomatic formulation of nonlocal QFT developed in~\cite{FS92,
S93,S99} is based on the idea of changing local commutativity
to an asymptotic commutativity condition, which means that the
commutator or anticommutator of any two fields of the theory is
carried by the cone
\begin{equation}
\overline{{\mathbb V}}\times {\mathbb R}^d =\{(x,x')\in {\mathbb R}^{2d}\colon (x-x')^2\ge
0\}.
\label{4.1*}
\end{equation}
In more exact terms, if the fields $\phi$, $\psi$ are defined
on the test function space $S^\beta({\mathbb R}^d)$, $\beta<1$, then
either
\begin{equation}
\langle\Phi\, [\phi(x),\psi(x')]_-\Psi\rangle
\label{4.2*}
\end{equation}
or
\begin{equation}
\langle\Phi\, [\phi(x),\psi(x')]_+\Psi\rangle
\label{4.3*}
\end{equation}
is carried by the cone~\eqref{4.1*} for all $\Phi, \Psi\in D$. The
matrix elements~\eqref{4.2*}, \eqref{4.3*} can be regarded as
generalized functions on ${\mathbb R}^{2d}$ because $S^\beta$ is nuclear and
the relation $S^\beta ({\mathbb R}^d)\mathbin{\hat{\otimes}_\iota} S^\beta
({\mathbb R}^d) =S^\beta ({\mathbb R}^{2d})$ holds. The asymptotic commutativity
condition becomes weaker with decreasing $\beta$. For $\beta=0$,
it means that the commutator of observable fields averaged with
test functions in $S^0$ decreases at spacelike separation no worse
than exponentially with order 1 and maximum type. Together with
other Wightman axioms, this condition ensures the existence of the
CPT-symmetry operator and the normal spin-statistics relation for
the nonlocal quantum fields. The proofs~\cite{S99} of these
theorems use the notion of the analytic wave front set of
distributions in an essential way. This generalization of the
local commutativity axiom also preserves the cluster decomposition
property of the vacuum expectation values. As shown
in~\cite{S82}, it preserves even the strong exponential version of
this property if the theory has a mass gap. This makes possible
interpreting the nonlocal QFT subject to the asymptotic
commutativity condition in terms of the particle scattering
because the cluster decomposition property plays a key role in
constructing the asymptotic states and the S-matrix.
In~\cite{S06}, we discussed some peculiarities of using the
analytic test functions in quantum field theory on noncommutative
spacetime for the case of a charged scalar field and space-space
noncommutativity. We have shown that this theory has CPT-symmetry
if it satisfies a suitably modified condition of asymptotic
commutativity. This modification uses the
generalization~\cite{Sm, Izv} of the notion of carrier cone to the
bilinear forms.
The test function spaces $S^\beta$, $\beta<1/2$, are convenient
for use in quantum field theory on noncommutative spacetime
because they are algebras under the $\star$-product and the
generalized functions defined on them have the property of angular
localizability, which enables one to apply analogues of some basic
theorems of Schwartz's theory of distributions. Moreover,
$S^\beta({\mathbb R}^d)$ are invariant under the affine transformations of
coordinates and the spaces of this kind over the light cone are
invariant under the Poincar\'e group. The asymptotic commutativity
provides a way of formulating causality in noncommutative QFT, but
it is insensitive to the magnitude of the noncommutativity
parameter which determines the fundamental length scale. The above
analysis suggests that a more accurate formulation can be obtained
by using spaces $S^{1/2,B}$. The nonlocal effects in quantum field
theory on noncommutative spacetime are determined by the
structure of the Moyal $\star$-product, and one might expect that
in this theory each of the matrix elements~\eqref{4.2*}
(or~\eqref{4.3*} for unobservable fields) admits a continuous
extension to the space
\begin{equation}
S^{1/2,B}(\overline{{\mathbb V}}\times {\mathbb R}^d),\qquad \text{where}\quad
B\sim \frac{1}{\sqrt{|\theta|}}\,.
\label{4.4*}
\end{equation}
(In general, $B$ may depend on the fields $\phi$, $\psi$ and the
states $\Phi$, $\Psi$.) This condition will be called
$\theta$-{\it locality}. Clearly, it is stronger than the
asymptotic commutativity condition stated for $\beta<1/2$, but it
is also consistent with the Poincar\'e covariance. Conceivably,
the $\theta$-locality expresses the absence of acausal effects on
scales much larger than the fundamental scale $\Lambda\sim
\sqrt{|\theta|}$. If such is the case, this assumption might be
called macrocausality. It should be emphasized that we do not
assume here that the fields are defined only on the analytic test
functions. It is quite possible that their matrix elements are
usual tempered distributions. In other words, we use $S^{1/2,B}$
as a tool for formulating causality rather than as the functional
domain of definition of fields. In the next section, we reconsider
from this standpoint a typical example which was used
in~\cite{Ch,G} for showing the violation of microcausality in
quantum field theory on noncommutative spacetime.
\section{\large An example}
Let $\phi$ be a free neutral scalar field of mass $m$ in a
spacetime of $d$ dimensions and let
\begin{multline}
\mathcal O(x)\equiv:\phi\star\phi:(x) =\lim_{x_1,x_2\to
x}:\phi(x_1)\phi(x_2):\\+\sum_{n=1}^\infty\left(\frac{i}{2}
\right)^n\frac{1}{n!}\,\theta^{\mu_1\nu_1}\dots
\theta^{\mu_n\nu_n}\lim_{x_1,x_2\to x}:\partial_{\mu_1}\dots
\partial_{\mu_n}\phi(x_1)\,\partial_{\nu_1}\dots\partial_{\nu_n}\phi(x_2):.
\label{5.1*}
\end{multline}
Every term in~\eqref{5.1*} is well defined as a Wick binomial.
M.~Chaichian {\it et al}~\cite{Ch} and O.~Greenberg~\cite{G}
studied the question of microcausality in noncommutative QFT for
the choice $\mathcal O$ as a sample observable. Specifically, they
considered the matrix element
$$
\langle 0|\,[\mathcal O(x), \mathcal O(y)]_-|p_1,p_2\rangle
$$
at $x^0=y^0$. In the case of space-space noncommutativity, with
$\theta^{12}=-\theta^{21}\ne 0$ and the other elements of the
$\theta$-matrix equal to zero, the commutator $[\mathcal O(x),
\mathcal O(y)]_-$ vanishes in the light wedge
$(x^0-y^0)^2<(x^3-y^3)^2$, but Greenberg found that $[\mathcal
O(x), \partial_\nu\mathcal O(y)]_-$ fails to vanish outside this
wedge and so violates microcausality. We shall show that
nevertheless the $\theta$-locality condition is fulfilled for
this observable and this result holds irrespectively of the type
of noncommutativity.
First, we calculate the vacuum expectation value
\begin{equation}
{\mathscr{W}}(x,y;z_1,z_2)=\langle 0|\,\mathcal O(x) \mathcal O(y)
:\phi(z_1)\phi(z_2):|0\rangle.
\label{5.3*}
\end{equation}
We use the Wick theorem and express
$$
\langle
0|:\phi(x_1)\phi(x_2)\!:\,\,:\phi(y_1)\phi(y_2)\!:\,\,:\phi(z_1)\phi(z_2)\!:|0\rangle
$$
in terms of the two-point function
\begin{equation}
w(x-y)= \langle
0|\phi(x)\phi(y)|0\rangle=\frac{1}{(2\pi)^{d-1}}\int e^{-ik\cdot
(x-y)}\vartheta(k^0)\delta(k^2-m^2)\,dk.
\notag
\end{equation}
Applying then the relation
\begin{equation}
\lim_{x_1, x_2 \rightarrow
x}\exp\left(\frac{i}{2}\partial_{x_1}\theta\,\partial_{x_2}\right)
e^{ik\cdot x_1}e^{ip\cdot x_2}
\equiv e^{ik\cdot x}\star e^{ip\cdot x}=e^{-(i/2)k\theta p}e^{i(k+p)\cdot
x},
\notag
\end{equation}
we obtain
\begin{multline}
{\mathscr{W}}(x,y;z_1,z_2)= 4\!\int\!\!
\frac{dkdp_1dp_2}{(2\pi)^{3(d-1)}}\,\vartheta(k^0)\delta(k^2-m^2)
\prod_{i=1}^2\vartheta(p_i^0)\delta(p_i^2-m^2)
\cos\left(\frac{1}{2}k\theta p_i\right)\\\times e^{-ik\cdot
(x-y)-ip_1\cdot (x-z_1)-ip_2\cdot (y-z_2)} + (z_1\leftrightarrow
z_2).
\label{5.4*}
\end{multline}
This formal derivation should be accompanied by a comment. The
function
$$\cos\left(\frac{1}{2}k\theta
p_1\right)\cos\left(\frac{1}{2}k\theta p_2\right)
$$
is a multiplier
for the Schwartz space ${\mathscr{S}}$, and hence the right-hand side
of~\eqref{5.4*} is well defined as a tempered distribution. This
distribution is obtained by applying the operator
\begin{equation}
\cos\left(\frac{1}{2}\partial_x\theta
\partial_{z_1}\right)\cos\left(\frac{1}{2}\partial_y\theta
\partial_{z_2}\right)
\label{5.6*}
\end{equation}
to the distribution
$$
4\!\int\!\!
\frac{dkdp_1dp_2}{(2\pi)^{3(d-1)}}\,\vartheta(k^0)\delta(k^2-m^2)
\prod_{i=1}^2 \vartheta(p_i^0)\delta(p_i^2-m^2)
e^{-ik\cdot (x-y)-ip_1\cdot (x-z_1)-ip_2\cdot (y-z_2)}\\ +
(z_1\leftrightarrow z_2).
$$
By theorem~2, the operator~\eqref{5.6*} is defined and continuous
on ${\mathscr{S}}^{1/2}({\mathbb R}^{4d})$ (and on any space $S^\beta({\mathbb R}^{4d})$
with $\beta<1/2$) and the power series expansion of~\eqref{5.4*}
in $\theta$ is weakly convergent to ${\mathscr{W}}$ in the dual space
${\mathscr{S}}^{\prime 1/2}$. This implies the strong convergence because
${\mathscr{S}}^{1/2}$ is a Montel space. However, that is not to say that
this expansion converges to ${\mathscr{W}}$ in the topology of the space
${\mathscr{S}}'$ of tempered distributions.
Using~\eqref{5.4*}, we obtain
\begin{multline}
\langle 0|\,[\mathcal O(x),\mathcal
O(y)]_-:\phi(z_1)\phi(z_2)\!:|0\rangle\\= 4\!\int\!\!
\frac{dkdp_1dp_2}{(2\pi)^{3(d-1)}}\,\epsilon (k^0)\delta(k^2-m^2)
\prod_{i=1}^2\vartheta(p_i^0)\delta(p_i^2-m^2)
\cos\left(\frac{1}{2}k\theta p_i\right)\\\times e^{-ik\cdot
(x-y)-ip_1\cdot (x-z_1)-ip_2\cdot (y-z_2)} + (z_1\leftrightarrow
z_2),
\label{5.7*}
\end{multline}
which agrees with formula (7) of~\cite{G}.
\medskip
{\bf Theorem 7.} {\it The restriction of the
distribution~\eqref{5.7*} to $S^{1/2}$ has a continuous extension
to the space $S^{1/2, B}({\mathbb V}\times {\mathbb R}^{3d})$, where
$B<1/\sqrt{e|\theta|}$ and
\begin{equation}
{\mathbb V}\times {\mathbb R}^{3d}=\{(x,y,z_1,z_2)\in {\mathbb R}^{4d}\colon (x-y)^2>0\}.
\label{5.8*}
\end{equation}
A fortiori, the restriction of this distribution to any space
$S^\beta({\mathbb R}^{4d})$ with $\beta<1/2$ is strongly carried by the
closed cone $\overline{{\mathbb V}}\times {\mathbb R}^{3d}$.}
\medskip
{\it Proof.} Let $B'= B\sqrt{2}$. The restriction of~\eqref{5.7*} to
$S^{1/2,B'}({\mathbb R}^{4d})$ is obtained by applying the
operator~\eqref{5.6*} to the restriction of
\begin{multline}
D(x,y;z_1,z_2)\equiv \langle
0|\,[:\phi^2\!:(x),:\phi^2\!:(y)]_-:\phi(z_1)\phi(z_2):|0\rangle\\=
4i\Delta(x-y)w(x-z_1)w(y-z_2)+ (z_1\leftrightarrow z_2).
\label{5.9*}
\end{multline}
Clearly, $D(x,y;z_1,z_2)$ vanishes for $(x-y)^2<0$ and the
restriction $D| S^{1/2,B'}({\mathbb R}^{4d})$ has a continuous extension
$\widetilde{D}$ to the space $S^{1/2,B'}({\mathbb V}\times {\mathbb R}^{3d})$.
This extension can be defined by $(\widetilde{D},f)=(D,\chi f)$,
where $\chi$ is a multiplier of the Schwartz space, which is equal
to 1 on an $\epsilon$-neighborhood of $\bar{\mathbb V}\times {\mathbb R}^{3d}$
and to zero outside the
$2\epsilon$-neighborhood. Such a multiplier satisfies the uniform
estimate
$|\partial^\kappa\chi|\le C_\kappa$, and the multiplication by $\chi$
maps
$S^{1/2,B'}({\mathbb V}\times {\mathbb R}^{3d})$ into ${\mathscr{S}}({\mathbb R}^{4d})$
continuously. By theorem~5, applying~\eqref{5.6*} to
$\widetilde{D}$, we obtain a continuous extension of the
functional~\eqref{5.7*} to the space $S^{1/2, B}({\mathbb V}\times
{\mathbb R}^{3d})$. This proves theorem~7. We point out once again that
this theorem holds for any matrix $\theta$ and in particular for
both space-space and time-space noncommutativity.
\section{\large Concluding remarks}
Our analysis shows that the $\theta$-locality condition or the
weaker condition of asymptotic commutativity for the restrictions
of fields to the test function spaces $S^\beta$, $\beta<1/2$, can
serve as a substitute of microcausality in quantum field theory
on noncommutative spacetime even though the fields are tempered.
The character of singularity is certainly dependent on the model,
but multiplication by the exponential~\eqref{2.10*} alone cannot
spoil temperedness. As stressed in~\cite{FW,S}, any attempt to
replace microcausality by a weaker requirement must take the
theorem on the global nature of local commutativity into
consideration. The Borchers and Pohlmeyer version~\cite{BP} of
this theorem states that local commutativity follows from an
apparently weaker assumption that $[\phi(x),\psi(x')]_\pm$
decreases at large spacelike separation faster than exponentially
of order 1. The example $:\phi\star\phi:$ discussed above
demonstrates that this theorem is inapplicable to the asymptotic
commutativity condition and that this condition does not imply
local commutativity. The point is that the fast decrease at
spacelike separation is understood here differently than
in~\cite{BP}, as a property of the field (anti)commutators
averaged with appropriate test functions. We have restricted our
consideration to the specific matrix element of the commutator,
but the technique developed in~\cite{SS01} enables one to
construct the operator realization of $:\phi\star\phi:$ in the
state space of $\phi$ and to prove that in this instance the
$\theta$-locality condition is completely fulfilled. In
combination with the usual relativistic transformation law of
states and fields, the asymptotic commutativity ensures the
existence of CPT-symmetry and the normal spin-statistics relation
for nonlocal fields~\cite{S99}. One might expect that in
noncommutative QFT similar conclusions can be deduced from a
suitable combination of the $\theta$-locality and the twisted
Poincar\'e covariance~\cite{FW,CK}, which is currently receiving
much attention.
Most, if not all, of the results established above for $S^\beta$
can readily be extended to the spaces $S^\beta_\alpha$ whose
topological structure is even simpler. In particular, a theorem
similar to theorem~1 holds with $S^{1/2, B}_{\alpha, A}$ in place
of $S_N^{1/2, B}$. Analogues of theorems~2 and~3 hold for
$\mathscr{S}^{\beta}_\alpha=\bigcap_{A,B}S^{\beta,B}_{\alpha,A}$,
where $\beta\le1/2$ and $\alpha>1-\beta$. An analogue of theorem~4
is valid for
$S^{\beta}_\alpha=\bigcup_{A,B}S^{\beta,B}_{\alpha,A}$ with
$\beta<1/2$ and $\alpha\ge 1- \beta$. Of course, analogues of
theorems~5 and 6 hold with the same replacements.
\medskip
\section* {\large Acknowledgments}
This paper was supported in part by the Russian
Foundation for Basic Research (Grant No.~05-01-01049) and the
Program for Supporting Leading Scientific Schools (Grant
No.~LSS-4401.2006.2).
\baselineskip=15pt
\section{Introduction}
Although some x-ray facilities and experiments make use of Compton scattering to probe, for instance, the electronic and magnetic structure of materials~\cite{Sak98, Tsc98}, the limited flux and brilliance (brightness) currently available at the required high energies ($\gtrsim$20~keV) seems to have precluded the popularization of these techniques.
With the advent of the 4$^{\scriptsize\textnormal{th}}$ generation of synchrotron light sources, such as ESRF-EBS~\cite{ESRF}, the projected APS-U~\cite{APS}, Petra~IV~\cite{PETRA}, and SPring-8-II~\cite{SPring8}, as well as the proposal of novel facilities based on x-ray free-electron lasers~\cite{Hua13}, which increase the brightness and coherent flux for hard x-rays by at least two orders of magnitude beyond today's capability, a unique opportunity arises to use Compton scattering in ways that were not conceived before.
An example of these new possibilities is scanning Compton x-ray microscopy (SCXM)~\cite{Vil18}. This technique has the potential to obtain images of biological or radiosensitive samples at tens-of-nanometer resolution, without sectioning or labelling. Thus, it bridges the capabilities of optical and electron microscopes.
Exploiting Compton interactions for biological imaging is possible because, in spite of its inelastic nature, the SCXM technique makes optimal use of the number of scattered photons per unit dose, i.e., the energy deposited per unit mass.
Generally speaking, an efficient use of Compton scattering requires, first and foremost, nearly 4$\pi$ coverage (Fig.~\ref{fig:Angular_distribution}), at an optimal energy around 64~keV if aiming, for instance, at resolving DNA structures \cite{Vil18}. This poses a formidable challenge for current detection technologies, which are costly and have detection areas much below the required size. Conversely, at lower x-ray energies ($\lesssim$10~keV), imaging based on coherent scattering has benefited from the development of ultra-fast pixelated silicon detectors, capable of performing photon-counting up to $10^7$ counts/s/pixel. A typical detection area nowadays is $40\times40$ cm$^2$, sufficient for covering the coherent forward cone at a distance of about 1~m, at near 100\% quantum efficiency \cite{HPC_review}. At higher energies, silicon must be replaced by a semi-conductor with a higher stopping power to x-rays, e.g., CdTe. However, targeting a geometrical acceptance around $70\%$ at 64~keV, while providing enough space to incorporate a compact setup (namely the sample holder, step motor, pipes, shielding and associated mechanics), would imply an imposing active area for this type of detector, well above 1000~cm$^2$. For comparison, PILATUS3 X CdTe 2M, one of the latest high-energy x-ray detectors used at synchrotron sources, has an active area of 25$\times$28~cm$^2$ \cite{Pilatus}.
Clearly, a 4$\pi$/high-energy x-ray detector would soon become an important asset at any next generation facility, if it can be implemented in a practical way.
\begin{figure}[h!!!]
\centering
\includegraphics[width=82mm]{fig_1.pdf}
\caption{Differential cross section for Compton-scattered photons on DNA (in barn per stereoradian), for a linearly polarized x-ray beam of 64~keV as obtained with Monte Carlo simulations (using Geant4~\cite{Geant4}) and tabulated values~\cite{Hub75} (dashed lines), for different azimuthal regions: $\phi=[0-10]^\circ$(green), $\phi=[85-95]^\circ$(blue) and integrated over $\phi$ (red). $\phi$ indicates the angle relative to the direction of the polarization vector.}
\label{fig:Angular_distribution}
\end{figure}
In this work we have implemented a novel approach for the detection of 4$\pi$ Compton-scattered photons based on a technology borrowed from particle physics:
the electroluminescent Time Projection Chamber (EL-TPC), and we discuss its performance as an SCX-microscope. TPCs, introduced by D. Nygren in 1974 \cite{Nyg74, Nyg18}, are nowadays ubiquitous in particle and nuclear physics, chiefly used for reconstructing particle interactions at high track multiplicities \cite{ALICE}, and/or when very accurate event reconstruction is needed \cite{DDM-Lomba, DUNE, Gon18}. The main characteristics of the particular TPC-flavour proposed here can be summarized as: i) efficient for high-energy x-rays thanks to the use of xenon as the active medium, ii) continuous readout mode with a time sampling around ${\Delta}T_s = 0.5~{\mu}$s, iii) typical temporal extent of an x-ray signal (at mid-chamber): ${\Delta}T_{x-ray} = 1.35~{\mu}$s, iv) about 2000 readout pixels/pads, v) single-photon counting capability, and vi) an energy resolution potentially down to 2\% FWHM for 60~keV x-rays, thanks to the electroluminescence mode~\cite{Fano}, only limited by the Fano factor $F$.\footnote{A non-zero value of $F$ stems from the intrinsic spread of primary ionization, as the partition of energy between excitations and ionizations changes event by event.}
Our design is inspired by the proposal in \cite{Dave_prop}, that has been successfully adopted by the NEXT collaboration in order to measure neutrino-less double-beta decay \cite{Francesc}, but we include three main simplifications: i) operation at atmospheric pressure, to facilitate the integration and operation at present x-ray sources, ii) removal of the photomultiplier-based energy-plane, and iii) introduction of a compact all-in-one electroluminescence structure, purposely designed for photon-counting experiments.
In this paper we discuss, starting from section~\ref{sectDesign}, the main concepts and working principles leading to our conceptual detector design. Next, in section \ref{TPCresponse}, we study the photon counting capabilities of a realistic detector implementation. We present the expected performance when applied to the SCXM technique in section~\ref{results}. Finally, we assess the limits and scope of the proposed technology in section~\ref{discussion}.
\section{TPC design} \label{sectDesign}
\subsection{Dose and intrinsic resolving power}
In a scanning, dark-field configuration, the ability to resolve a feature of a given size embedded in a medium can be studied through the schematic representation shown in Fig. \ref{fig:Dose}-top, which corresponds to an arbitrary step within a 2d-scan, in a manner similar to that presented in~\cite{Vil18}.
\begin{figure}[t]
\centering
\includegraphics[width=82mm]{fig_2a.pdf}
\includegraphics[width=82mm]{fig_2b.pdf}
\caption{Top: study case. A cubic DNA feature (size $d$) is embedded in a cubic water cell ($l= 5~{\mu}$m), surrounded by air/helium ($a= 5$ mm). The photon beam scans regions containing only water (case 0), or water and DNA (case f). These two cases are used to evaluate the resolving power of SCXM at a given dose.
Bottom: dose needed to resolve a DNA feature as a function of its size assuming 100\% detection efficiency, for x-ray energies of 30~keV and 64~keV, obtained respectively with Geant4~\cite{Geant4} (solid lines) and using NIST values \cite{NIST} (dotted line), and the formulas in text. The black line represents the maximum tolerable dose estimated from coherent scattering experiments ~\cite{ChapmanDose}.}
\label{fig:Dose}
\end{figure}
Three main assumptions lead to this simplified picture: i) the dose fractionation theorem \cite{DoseFrac}, based on which one can expect 3d reconstruction capabilities at the same resolution (and for the same dose) as in a single 2d-scan, ii) the ability to obtain a focal spot, $d'$, down to a size comparable to (or below) that of the feature to be resolved, $d$, and iii) a depth of focus exceeding the dimensions of the sample under study, $l$.
We adopt the situation in Fig. \ref{fig:Dose}-top as our benchmark case, and we use the Rose criterion \cite{Rose} as the condition needed to discern case $f$ (feature embedded within the scanned volume) from case $0$ (no feature), which reads in the Poisson limit as:
\begin{equation}
\frac{|N_{f} - N_{0}|}{\sqrt{\sigma_{N_{f}}^2 + \sigma_{N_{0}}^2}} =\frac{|N_{f} - N_{0}|}{\sqrt{N_{f} + N_{0}}} \geq 5 \label{Rose}
\end{equation}
with $N$ being the number of scattered photons. Substitution of physical variables in eq. \ref{Rose} leads directly to a required fluence of:
\begin{equation}
\phi \geq \phi_{min} = 25\frac{(2l-d)\!\cdot\!\lambda^{-1}_w+d\!\cdot\!\lambda^{-1}_f+4\!\cdot\!a\!\cdot\!\lambda^{-1}_a}{d'^2\!\cdot\!d^2\!\cdot\!(\lambda^{-1}_f-\lambda^{-1}_w)^2} \label{PhiMin}
\end{equation}
and we will assume $d'\simeq d$. Here $\lambda_w$, $\lambda_f$, $\lambda_a$ are the Compton-scattering mean free paths of x-rays in water, DNA, and air (or helium), respectively (table~\ref{tab:material_parameters}), and dimensions are defined in Fig. \ref{fig:Dose}-top. Finally, we evaluate the dose that will be imparted at the feature in these conditions as:
\begin{equation}
\mathcal{D} \! = \! \phi_{min}\!\cdot\!\varepsilon\!\cdot\!\frac{N_A}{M_f}\!\!\cdot\!\!\!\left[\sigma_{ph}\! +\!\! \int\!\!\frac{d\sigma_{_C}}{d\Omega}\!\!\cdot\!\!(1\!-\!\frac{1}{1\!\!+\!\frac{\varepsilon}{m_ec^2}(1\!-\!\cos\theta)}\!) d\Omega \right] \label{surDos}
\end{equation}
where $\sigma_{ph}$ is the photoelectric cross section and $d\sigma_{_C}/d\Omega$ is the differential cross section for Compton scattering, both evaluated at the feature. $M_f$ is the feature molar mass, $N_A$ the Avogadro number, $\varepsilon$ the photon energy and $\theta$ its scattering angle. The dose inherits the approximate $l/d^4$ behaviour displayed in equation (\ref{PhiMin}).
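For completeness, we sketch how eq.~(\ref{PhiMin}) follows from eq.~(\ref{Rose}); the thin-sample (single-scattering) approximation used here is introduced only for illustration. In this limit the expected numbers of scattered photons are
\[
N_0=\phi\, d'^2\Bigl(\frac{l}{\lambda_w}+\frac{2a}{\lambda_a}\Bigr),\qquad
N_f=\phi\, d'^2\Bigl(\frac{l-d}{\lambda_w}+\frac{d}{\lambda_f}+\frac{2a}{\lambda_a}\Bigr),
\]
so that $N_f-N_0=\phi\, d'^2 d\,(\lambda_f^{-1}-\lambda_w^{-1})$ and $N_f+N_0=\phi\, d'^2\bigl[(2l-d)\lambda_w^{-1}+d\lambda_f^{-1}+4a\lambda_a^{-1}\bigr]$; squaring the Rose condition, $(N_f-N_0)^2\ge 25\,(N_f+N_0)$, and solving for $\phi$ gives eq.~(\ref{PhiMin}).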
\begin{table}[h]
\caption{Mean free path for different materials at the studied energies 30 and 64~keV, according to NIST.}
\begin{tabular}{p{0.16\textwidth}p{0.08\textwidth}p{0.08\textwidth}p{0.08\textwidth}}
\hline
Mean free path & 30 keV & 64 keV & Material \\
\hline
$\lambda_w$~[cm] & 5.47 & 5.69 & water\\
$\lambda_f$~[cm] & 3.48 & 3.54 & DNA \\
$\lambda_a$~[cm] & 4950.49 & 4945.60 & air \\
\hline
\end{tabular}
\label{tab:material_parameters}
\end{table}
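As an illustration (not part of the original analysis), the following minimal Python sketch evaluates the minimum fluence of eq.~(\ref{PhiMin}) with the 64~keV mean free paths of table~\ref{tab:material_parameters}; all lengths are in cm, $d'=d$ is assumed, and the numerical result should be taken as indicative only.
\begin{verbatim}
# Minimal sketch (illustrative only): minimum fluence at 64 keV, using
# the mean free paths of the table above. Lengths in cm; d' = d assumed.
lam_w, lam_f, lam_a = 5.69, 3.54, 4945.60   # water, DNA, air

def phi_min(d, l=5e-4, a=0.5):
    num = (2*l - d)/lam_w + d/lam_f + 4*a/lam_a
    den = d**4 * (1.0/lam_f - 1.0/lam_w)**2
    return 25.0 * num / den                  # photons / cm^2

print("phi_min(d = 33 nm): %.2e photons/cm^2" % phi_min(33e-7))
\end{verbatim}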
Working with eq. \ref{surDos} is convenient because it has been used earlier, in the context of coherent scattering, as a metric for assessing the maximum radiation that can be delivered prior to inducing structural damage~\cite{ChapmanDose}. By resorting to that estimate (black line in Fig. \ref{fig:Dose}-bottom), the doses required for resolving a feature of a given size can be put into perspective. These doses, obtained using Geant4 for a DNA feature embedded in a $5~{\mu}$m water-equivalent cell, are shown as continuous lines. Results based on NIST values~\cite{NIST} and the Hubbell parameterization for $d\sigma_{_C}/d\Omega$~\cite{Hub75} are displayed as dashed lines, highlighting the mutual consistency in this simplified case. Clearly, SCXM can potentially resolve 33~nm-size DNA features inside $5~{\mu}$m cells, and down to 26~nm if a stable He atmosphere around the target can be provided.
Whether eq. \ref{surDos} provides a valid metric for the inter-comparison between SCXM and coherent scattering is at the moment an open question and will require experimental verification. In particular, the formula implicitly assumes that the energy is released locally. However, a 10~keV photoelectron has a range of up to $2~{\mu}$m in water, while a 64~keV one can reach $50~{\mu}$m. An approximate argument can be sketched based on the fact that the average energy of a Compton electron for 64~keV x-rays (in the range 0-14~keV) is similar to that of a 10~keV photoelectron stemming from 10~keV x-rays, a typical case in coherent diffraction imaging (CDI). Given that at 64~keV most (around 70\%) of the energy is released in Compton scatters, the situation in terms of locality will largely resemble that of coherent scattering: compared to CDI, only about 30\% of the energy will be carried away from the interaction region by the energetic 64~keV photoelectrons. On the other hand, at 30~keV (the other energy considered in this study) the photoelectric effect contributes 90\% of the dose, so one can expect a higher dose tolerance for SCXM than the one estimated here.
Naturally, the shielding pipes, the structural materials of the detector, the detector efficiency, the instrumental effects during the reconstruction, and the accuracy of the counting algorithms can limit the achievable resolution, resulting in dose values larger than the ones in Fig. \ref{fig:Dose}. These effects are discussed in the next sections.
\begin{figure}[h!!]
\centering
\includegraphics[width=70mm]{fig_3a.pdf}
\includegraphics[width=71mm]{fig_3b.pdf}
\caption{Top(a): ionization distributions in xenon gas, stemming from x-rays interacting in an infinite volume. They are obtained after aligning each x-ray ionization cloud by its barycenter, and projecting it over an arbitrary axis. Calculations from Geant4 are compared with the microscopic code DEGRAD developed by S. Biagi \cite{Bia13}. Top(b): probability of characteristic x-ray emission in xenon for an incident photon energy of 30~keV (red) and 64~keV (blue), in Geant4. The K-shell (green) and L-shell (orange) lines, as tabulated in~\cite{Booklet}, are shown for comparison. Bottom(a): transverse size of a point-like ionization cluster after drifting along 50~cm, obtained from Magboltz. Bottom(b): longitudinal size of a point-like ionization cluster (in time units), in the same conditions. Results for pure xenon and a fast `counting' mixture based on Xe/CH$_4$ are shown for comparison.}
\label{fig:ClusterSize}
\end{figure}
\subsection{Technical description of the TPC working principle}
When x-rays with energies of the order of tens of keV interact in xenon gas at atmospheric pressure, the released photoelectron creates a cloud of secondary ionization (containing thousands of electrons) with a typical ($1\sigma$) size of 0.25-1~mm (Fig. \ref{fig:ClusterSize}-top). If the x-ray energy is above that of the xenon K-shell, characteristic emission around 30-34~keV will ensue in about 70\% of the cases. At these energies, x-ray interactions in xenon take place primarily through the photoelectric effect, with just a small ($\lesssim 1\%$) probability of Compton scattering.
The ionization clouds (hereafter `clusters') drift, due to the electric field $E_{drift}$ of the TPC, towards the electroluminescence/anode plane, as shown in Fig.~\ref{fig:sketch}-top, following a diffusion law as a function of the drift distance $z$:
\begin{equation}
\sigma_{z (x,y)} = D_{L (T)}^* \sqrt{z} \label{diffusion_law}
\end{equation}
where $D_L^*$ and $D_T^*$ are the longitudinal and transverse diffusion coefficients, respectively. In fact, diffusion is impractically large in pure noble gases, given that the cooling of ionization electrons is inefficient under elastic collisions only. Addition of molecular additives, enabling vibrational degrees of freedom at typical electron energies, is a well established procedure known to improve the situation drastically, and can be accurately simulated with the electron transport codes Magboltz/Pyboltz \cite{Magboltz, Pyboltz}. In particular, a small (0.4\%) addition of CH$_4$ is sufficient to reduce the cluster size well below that in pure xenon (Fig. \ref{fig:ClusterSize}-bottom), as required for photon-counting. An essential ingredient to the use of Xe-CH$_4$ admixtures is the recent demonstration that the electroluminescence signal is still copious in these conditions \cite{Henriques}.\footnote{This unanticipated result, that might not look significant at first glance, results from a very subtle balance between the quenching of the xenon triplet state and the cooling of drifting electrons through inelastic collisions \cite{DiegoMicro}.} Hence, for a drift field $E_{drift} = 110$~V/cm, the cluster's longitudinal size can be kept at the $\sigma_z = 4$~mm level even for a 50~cm-long drift, corresponding to a temporal spread of $\sigma_t = 0.75~{\mu}$s, while the transverse size approaches $\sigma_{x,y} = $10~mm. The electron drift velocity is $v_d=\sigma_z/\sigma_t=$5 mm/$\mu$s.
The proposed detection concept is depicted in Fig. \ref{fig:sketch}-top, with Fig. \ref{fig:sketch}-bottom displaying a close-up of the pixelated readout region, that relies on the recent developments on large-hole acrylic multipliers \cite{FATGEM}.
Provided sufficient field focusing can be achieved at the structure, as shown in Fig. \ref{fig:sketch}-bottom, the ionization clusters will enter a handful of holes, creating a luminous signal in the corresponding silicon photomultiplier (SiPM) situated right underneath, thus functioning, in effect, as a pixelated readout.
In summary: i) x-rays that Compton-scatter at the sample interact with the xenon gas and give rise to clusters of characteristic size in the range 1-10 mm-$\sigma$, depending on the distance to the electroluminescence plane; ii) given the relatively large x-ray mean free path of around 20~cm in xenon at 1~bar, one anticipates a sparse distribution of clusters, which can be conveniently recorded with 10~mm-size pixels/pads, on a readout area of around 2000 cm$^2$ ($N_{pix}=2000$).
\begin{figure}[h!]
\centering
\includegraphics[width=70mm]{fig_4a.pdf}
\includegraphics[width=80mm]{fig_4b.pdf}
\caption{Top: schematic representation of the working principle of the EL-TPC. Photons scattered at the sample reach the xenon gas, creating ionization clusters that drift, while diffusing, towards the anode plane, where they induce electroluminescence. Bottom: close-up of the electroluminescence region, based on the recently introduced acrylic-based electroluminescence multipliers, developed in collaboration between IGFAE and the CERN-RD51 workshops \cite{FATGEM}.}
\label{fig:sketch}
\end{figure}
From the FWHM per x-ray cluster at about mid-chamber: $\Delta_{x,y}|_{x-ray} = 2.35/\sqrt{2} \cdot \sigma_{x,y} = 16$~mm, an average multiplicity $M$ of around $4$ per cluster may be assumed if resorting to $10~\textnormal{mm} \times 10~\textnormal{mm}$ pixels/pads. The temporal spread, on the other hand, can be approximated by: ${\Delta}T_{x-ray} = 2.35/\sqrt{2} \cdot \sigma_z/v_d = 1.35~{\mu}$s. Taking as a reference an interaction probability of $P_{int} = 2.9\times10^{-4}$ ($5~{\mu}$m water-equivalent cell, 10~mm of air), a 70\% detection efficiency $\epsilon$, and an $m=20$\% pixel occupancy, this configuration yields a plausible estimate of the achievable counting rate as:
\begin{equation}
r_{max} = \frac{1}{\epsilon P_{int}} \frac{m \cdot N_{pix}}{M} \frac{1}{\Delta{T}_{x-ray}} = 3.6 \times 10^{11}~\textnormal{(ph/s)}
\end{equation}
compatible a priori with the beam rates for hard x-rays foreseen at the new generation of light sources \cite{ESRF}. However, in order to have a realistic estimate of the actual counting performance it is imperative to understand which level of occupancy/pile-up can be \emph{really} tolerated by the detector, before the photon-counting performance deteriorates above the Poisson-limit or proportionality of response is irreparably lost. We address this problem specifically in section \ref{TPCresponse}.
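This estimate is simple arithmetic and can be reproduced at a glance; a minimal sketch with the values quoted above:
\begin{verbatim}
# Plausible maximum counting rate, with the values quoted in the text.
eps, P_int = 0.70, 2.9e-4    # detection efficiency, interaction prob.
m, N_pix, M = 0.20, 2000, 4  # occupancy, pixels, pixel multiplicity
dT = 1.35e-6                 # temporal spread per x-ray cluster, in s

r_max = (1/(eps*P_int))*(m*N_pix/M)/dT
print(f"r_max = {r_max:.1e} ph/s")  # ~3.6e11 ph/s
\end{verbatim}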
\subsection{Geometry optimization with Geant4}
The suitability of the TPC technology for SCXM depends primarily on the ability to detect $\sim60$ keV photons within a realistic gas volume, in the absence of pressurization. Given that the mean free path of 60~keV x-rays in xenon is 20~cm, the most natural $4\pi$-geometry adapted to this case is a hollow cylinder with a characteristic scale of around half a meter. On the other hand, the geometrical acceptance is a function of $\arctan(2R_{i}/L)$, with $L$ being the length and $R_{i}$ the inner radius of the cylinder. In order to place the sample holder, step motor, pipes and associated mechanics, we leave an $R_i= 5$~cm inner bore. Finally, the xenon thickness ($R_o$-$R_i$), that is, the difference between the outer and inner TPC radii, becomes the main factor for the detector efficiency, as shown in Fig.~\ref{fig:efficiency}. We discuss two photon energies: 30 and 64~keV. The latter represents the theoretical optimum for SCXM in terms of dose, while the former, sitting just below the $K$-shell energy of xenon, is a priori more convenient for counting due to the absence of characteristic (K-shell) x-ray re-emission inside the chamber. The mean free path is similar for the two energies; therefore, no obvious advantage (or disadvantage) can be appreciated in terms of detector efficiency, at this level of realism.
\begin{figure}[h!]
\centering
\includegraphics[width=80mm]{fig_5.pdf}
\caption{Efficiency as a function of the thickness of the xenon cylinder ($R_o$-$R_i$) for different lengths, at energies of 30 and 64~keV. The dotted line indicates the benchmark geometry considered in text, for a length $L=50$~cm.}
\label{fig:efficiency}
\end{figure}
We consider now a realistic geometry, opting for an inner cylinder shell made out of 0.5~mm-thick aluminum walls, with 2~mm HDPE (high density polyethylene), $50~{\mu}$m kapton and $15~{\mu}$m copper, sufficient for making the field cage of the chamber, which is needed to minimize fringe fields (inset in Fig. \ref{fig:TPC3D}). The HDPE cylinder can be custom-made and the kapton-copper laminates are commercially available and can be adhered to it by thermal bonding or epoxied, for instance. The external cylinder shell may well have a different design, but it has been kept symmetric for simplicity. We consider in the following a configuration that enables a good compromise in terms of size and flexibility: $L=50~$cm and $R_o= 25~$cm. The geometrical acceptance nears in this case 80\%. An additional 10~cm would typically be needed, axially, for instrumenting the readout plane and taking the signal cables out of the chamber, and another 10~cm on the cathode side, for providing sufficient isolation with respect to the vessel, given that the voltage difference will near 10~kV. Although those regions are not discussed here in detail, and have been replaced by simple covers, the reader is referred to \cite{Francesc} for possible arrangements. With these choices, the vessel geometry considered in simulations is shown in Fig. \ref{fig:TPC3D}, having a weight below 10~kg.
The necessary structural material of the walls and the presence of air in the hall reduce the overall efficiency from 62.8\% to 58.5\% (64~keV) and from 64.5\% to 40.0\% (30~keV). The beam enters the experimental setup from the vacuum pipes (not included in the figure) into two shielding cones (made of stainless steel and covered with lead shields) and from there into the sample region. Our case study is that of a 33~nm DNA feature inside a $5~{\mu}$m cell, with 5~mm of air to and from the shielding cones. The conical geometry is conceived not to crop the angular acceptance of the x-rays scattered on-sample, providing enough space to the focusing beam, and enabling sufficient absorption of stray x-rays from beam-air interactions along the pipes. In a $4\pi$ geometry such as the one proposed here, the cell holder and step motor should ideally be placed along the polarization axis, where the photon flux is negligible.
\begin{figure}[h!]
\centering
\includegraphics[width=85mm]{fig_6.pdf}
\caption{A) TPC geometry in Geant4, aimed at providing nearly $4\pi$-coverage for SCXM. B) detail of the region faced by x-rays when entering the detector, that includes the vessel and field cage. C) detail of the sample region and the shielding cones.}
\label{fig:TPC3D}
\end{figure}
\subsection{Image formation in the TPC}
The parameters used for computing the TPC response rely largely on the experience accumulated during the NEXT R\&D program. We consider a voltage of -8.5~kV at the cathode and 3~kV across the electroluminescence structure, with the anode sitting at ground, a situation that corresponds to fields around $E_{drift}=110$~V/cm and $E_{el}=6$~kV/cm in the drift and electroluminescence regions, respectively. The gas consists of Xe/CH$_4$ admixed at 0.4\% in volume in order to achieve a 40-fold reduction in cluster size compared to operation in pure xenon (Fig. \ref{fig:ClusterSize}-bottom). The electroluminescence plane will be optically coupled to a SiPM matrix, at the same pitch, forming a pixelated readout. The optical coupling may be typically done with the help of a layer of ITO (indium-tin oxide) and TPB (tetraphenyl butadiene) deposited on an acrylic plate, following \cite{Francesc}. This ensures wavelength shifting to the visible band, where SiPMs are usually more sensitive. The number of SiPM-photoelectrons per incoming ionization electron, $n_{phe}$, that is the single most important figure of merit for an EL-TPC, can be computed from the layout in Fig. \ref{fig:sketch}-bottom, after considering: an optical yield $Y = 250$ ph/e/cm at $E_{el}=6$~kV/cm \cite{FATGEM}, a TPB wavelength-shifting efficiency $WLSE_{TPB}=0.4$ \cite{Gehman}, a solid angle coverage at the SiPM plane of $\Omega_{SiPM}=0.3$ and a SiPM quantum efficiency $QE_{SiPM}=0.4$. Finally, according to measurements in \cite{Henriques}, the presence of 0.4\% CH$_4$ reduces the scintillation probability by $P_{scin}=0.5$, giving, for a $h=5$~mm-thick structure:
\begin{equation}
n_{phe} = Y \cdot h \cdot WLSE_{TPB} \cdot \Omega_{SiPM} \cdot QE_{SiPM} \cdot P_{scin} = 3
\end{equation}
Since the energy needed to create an electron-ion pair in xenon is $W_I=22$~eV, each 30-64~keV x-ray interaction will give rise to a luminous signal worth 4000-9000 photoelectrons (phe), spanning over 4-8 pixels, hence well above the SiPM noise. The energy resolution (FWHM) is obtained from \cite{Henriques} as:
\begin{equation}
\mathcal{R}(\varepsilon\!=\!64~\textnormal{keV}) \simeq 2.355 \sqrt{F + \frac{1}{n_{phe}}\left(1+\frac{\sigma_G^2}{G^2}\right)}\sqrt{\frac{W_I}{\varepsilon}} = 3.1\%
\end{equation}
with $\sigma_G/G$ being the width of the single-photon distribution (around 0.1 for a typical SiPM) and $F\simeq0.17$ the Fano factor of xenon. For comparison, a value compatible with $\mathcal{R}(\varepsilon\!=\!64~\textnormal{keV})=5.5\%$ was measured for acrylic-hole multipliers in \cite{FATGEM}. In the present simulations, the contribution of the energy resolution has been included as a Gaussian smearing in the TPC response.
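Both figures of merit are short arithmetic and can be checked directly; a minimal sketch with the parameters quoted above:
\begin{verbatim}
import numpy as np

# Photoelectrons per primary ionization electron in the EL chain.
Y, h = 250.0, 0.5                         # ph/e/cm; EL thickness in cm
WLSE, Omega, QE, P_scin = 0.4, 0.3, 0.4, 0.5
n_phe = Y*h*WLSE*Omega*QE*P_scin          # = 3.0

# Energy resolution (FWHM) at 64 keV.
F, sG_G, W_I, E = 0.17, 0.1, 22.0, 64e3   # Fano, SiPM spread, eV, eV
R = 2.355*np.sqrt(F + (1 + sG_G**2)/n_phe)*np.sqrt(W_I/E)
print(n_phe, f"{100*R:.1f} %")            # 3.0, ~3.1 %
\end{verbatim}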
Finally, the time response function of the SiPM is included as a Gaussian with a 7~ns width, convoluted with the transit time of the electrons through the electroluminescence structure, $\Delta{T}_{EL} = 0.36~\mu{s}$, both being much smaller in any case than the typical temporal spread of the clusters (dominated by diffusion). The sampling time is taken to be ${\Delta}T_s = 0.5~\mu$s as in \cite{Francesc}, and a matrix of 1800 10~mm-pitch SiPMs is assumed for the readout. Images are formed after applying a 10~phe threshold to all SiPMs.
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{fig_7.pdf}
\caption{A typical TPC image reconstructed from the SiPM signals (in phe), as recorded in one time-slice (${\Delta}T_s=~0.5~ {\mu}$s), for a beam rate of $r=3.7\times 10^{10} ~s^{-1}$. The crosses show the clusters' centroids, obtained from `MC-truth' information.}
\label{fig:ClusterCounting}
\end{figure}
A fully processed TPC image for one time slice (${\Delta}T_s=~0.5~ {\mu}$s), obtained at a beam rate of $r=3.7\times 10^{10}$~ph/s for a photon energy $\varepsilon=64$~keV, is shown in Fig. \ref{fig:ClusterCounting}. The main clusters have been marked with crosses, by resorting to `Monte Carlo truth', i.e., they represent the barycenter of each primary ionization cluster in Geant4. The beam has been assumed to be continuous, polarized along the $x$-axis, impinging on a 5~${\mu}$m water cube surrounded by air, with a 33~nm DNA cubic feature placed at its center. The Geant4 simulations are performed at fixed time, and the x-ray interaction times are subsequently distributed uniformly within the dwell time corresponding to each position of the scan. It must be noted that interactions taking place at about the same time may be recorded at different times depending on the $z$-position of each interaction (and vice versa, clusters originating at different interaction times may eventually be reconstructed in the same time slice). This scrambling (unusual under typical TPC operation) renders every time slice equivalent for the purpose of counting. In principle, the absolute time and $z$ position can be disambiguated from the size of the cluster, using the diffusion relation in eq. \ref{diffusion_law}, thus allowing photon-by-photon reconstruction in time, space, and energy. A demonstration of the strong correlation between $z$-position and cluster width, for 30~keV x-ray interactions, can be found in \cite{DiegoAccurate} for instance.
The design parameters used in this subsection are compiled in the four tables of Appendix~\ref{appendixB}.
\section{Photon counting capabilities}\label{TPCresponse}
\subsection{Ideal counting limit}
The attenuation in the structural materials, re-scatters, characteristic emission, as well as the detector inefficiency, are unavoidable limiting factors for counting. These intrinsic limitations can be conveniently evaluated from the signal-to-noise ratio, defined from the relative spread in the number of ionization clusters per scan step (see Fig. \ref{fig:Dose}), as obtained in Monte Carlo ($n_{MC}$):
\begin{equation}
S/N = n_{MC}/\sigma_{n_{MC}} \label{S/N1}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=85mm]{fig_8.pdf}
\caption{Intrinsic counting performance (using Monte Carlo truth information) for 64~keV x-ray photons, characterized by the signal to noise ratio (relative to case 0). Photon counting (green) and calorimetric mode (red) are displayed as a function of the realism of the simulations.}
\label{fig:simu_realism}
\end{figure}
Figure \ref{fig:simu_realism} shows the deterioration of the $S/N$ for 64~keV photons, as the realism of the detector increases. It has been normalized to the relative spread in the number of photons scattered on-sample per scan step, $\sqrt{N_0}$, so that it equals 1 for a perfect detector (see appendix~\ref{appendixA}):
\begin{equation}
S/N^* \equiv \frac{1}{\sqrt{N_0}} \cdot S/N \label{S/N2}
\end{equation}
The figure also shows the $S/N^*$ in `calorimetric mode', with the counting performed by simply integrating the total collected light per scan step ($\varepsilon_{tot}$), instead of photon-by-photon. $S/N^*$ is defined in that case, equivalently, as: $S/N^* = (\varepsilon_{tot}/\sigma_{\varepsilon_{tot}}) / \sqrt{N_0}$. The values obtained are just slightly below the ones expected considering detector inefficiency alone (see appendix~\ref{appendixA}):
\begin{equation}
S/N^* \simeq \sqrt{\epsilon}
\end{equation}
therefore suggesting a small contribution from re-scatters in the materials or other secondary processes.
\subsection{Real counting}
Given the nature of the detector data (Fig. \ref{fig:ClusterCounting}), consisting of voxelized ionization clusters grouped forming ellipsoidal shapes, generally separable, and of similar size, we select the K-means clustering method~\cite{Kmeans} to perform cluster counting. The counting algorithm has been implemented as follows:
i) the `countable' clusters are first identified time-slice by time-slice using Monte Carlo truth information, as those producing a signal above a certain energy threshold ($\varepsilon_{th}$) in that slice. The energy threshold is chosen to be much lower than the typical cluster energies. In this manner, only small clusters are left out of the counting process when most of their energy is collected in adjacent time-slices from which charge has spread out due to diffusion, and where they will be properly counted once the algorithm is applied there; ii) a weighted inertia ($I$) distribution is formed, as conventionally done in K-means, and a threshold ($\delta I_{th}$) is set on the variation of the inertia with the number of clusters counted by the algorithm ($n$) (Fig. \ref{fig:kmeans}). The threshold is optimized for each beam rate condition. We concentrate on beam rates for which the average efficiency and purity of the cluster identification in 2d slices is larger than 80\%, as those illustrated in Fig. \ref{fig:counting_beam_rate1}. The counting efficiency and purity can be defined, as customary, as:
\begin{eqnarray}
\epsilon_{counting} & = & \frac{n_{matched}}{n_{MC}} \label{eff_count}\\
p_{counting} & = & \frac{n_{matched}}{n}
\end{eqnarray}
where $n_{matched}$ is the number of counted clusters correctly assigned to MC clusters and $n_{MC}$ is the number of MC clusters. The K-means optimization parameters have been chosen to maximize the counting efficiency while simultaneously achieving $n \simeq n_{MC}$, and therefore $\epsilon_{counting} \simeq p_{counting}$.
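A minimal sketch of this counting step, for a single time slice, is given below; it relies on scikit-learn's K-means (with the voxel charges as sample weights) and on SciPy's Savitzky--Golay filter for the smoothing of the inertia curve. The voxel arrays, the maximum number of trial clusters and the threshold $\delta I_{th}$ are placeholders, to be optimized for each beam rate condition as described above.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from scipy.signal import savgol_filter

def count_clusters(xy, w, n_max=40, dI_th=0.05):
    # xy: (N, 2) voxel positions above threshold; w: (N,) charges in phe.
    inertias, fits = [], []
    for k in range(1, n_max + 1):
        km = KMeans(n_clusters=k, n_init=10).fit(xy, sample_weight=w)
        inertias.append(km.inertia_)
        fits.append(km)
    # Smoothed variation of the weighted inertia with n, normalized.
    dI = -np.diff(savgol_filter(inertias, 7, 2))
    dI /= dI[0]
    n = int(np.argmax(dI < dI_th)) + 1  # first n where the gain saturates
    return n, fits[n - 1]
\end{verbatim}
In practice, $\delta I_{th}$ is re-optimized for every beam rate, which is at the origin of the systematic over/undercounting effects discussed next.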
\begin{figure}[h]
\centering
\includegraphics[width=85mm]{fig_9.pdf}
\caption{The K-means cluster-counting algorithm evaluates the partition of $N$ observations (voxelized ionization clusters in our case) in $n$ clusters, so as to minimize the inertia $I$, defined as the sum of the squared distances of the observations to their closest cluster center. In the plot: convergence of K-means for a beam rate of 10$^{11}$ ph/s. A Savitzky–Golay filter is applied for the purpose of smoothing the variation of the inertia $\delta I$.}
\label{fig:kmeans}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=85mm]{fig_10.pdf}
\caption{Cluster counting performance for typical ${\Delta}T_s=~0.5~{\mu}$s time-slices, for different energies ($\varepsilon$) and beam rates ($r$). Crosses indicate the cluster centroids from MC and circles are the clusters found by K-means. The average counting-efficiency and purity along the detector are given below in brackets. Top left: $\varepsilon$ = 64~keV and $r = 3.7\times10^{10}$~ph/s ($\epsilon_{counting}$ = 88.2\%, $p_{counting}$ = 86.9\%). Top right: $\varepsilon$ = 64~keV and $r = 7.5\times10^{10}$~ph/s ($\epsilon_{counting}$ = 84.2\%, $p_{counting}$ = 83.2\%). Bottom left: $\varepsilon$ = 30~keV and $r = 6.5\times10^{10}$~ph/s ($\epsilon_{counting}$ = 87.9\%, $p_{counting}$ = 87.5\%). Bottom right: $\varepsilon$ = 30~keV and $r = 1.3\times10^{11}$~ph/s ($\epsilon_{counting}$ = 83.9\%, $p_{counting}$ = 83.1\%). For $\varepsilon$ = 30~keV only about half of the clusters are produced, which enables measuring at higher beam rates than $\varepsilon$ = 64~keV, at comparable efficiency and purity.}
\label{fig:counting_beam_rate1}
\end{figure}
Fig. \ref{fig:counting_beam_rate2} (top) shows the performance of the counting algorithm, presenting the average number of clusters counted per 2d slice as a function of beam rate, with $\varepsilon_{th}$ and $\delta I_{th}$ optimized for each case as described above (green line). Red lines indicate the predictions outside the optimized case, which illustrate the consistent loss of linearity as the beam rate increases. Fig. \ref{fig:counting_beam_rate2} (bottom) shows the relative spread in the number of counted clusters $\sigma_n/n$, and a comparison with Monte Carlo truth. These results can be qualitatively understood by recalling that, by construction, the threshold inertia is strongly correlated with the average number of clusters and their size. Therefore, a simple K-means algorithm will inevitably bias the number of counted clusters to match its expectation on $I$, if no further considerations are made. Hence, once $\delta I_{th}$ has been adjusted to a certain beam rate, there will be systematic overcounting for lower beam rates, and undercounting for higher ones, as reflected by Fig. \ref{fig:counting_beam_rate2} (top). In the present conditions, a 2$^{\scriptsize{\textnormal{nd}}}$ order polynomial is sufficient to capture this departure from proportionality introduced by the algorithm. A similar (although subtler) effect takes place for the cluster distributions obtained slice-by-slice, where this systematic overcounting-undercounting makes the cluster distribution marginally (although systematically) narrower, as seen in Fig. \ref{fig:counting_beam_rate2} (bottom). As a consequence, the directly related magnitude $S/N^*$ (eqs. \ref{S/N1}, \ref{S/N2}) is not deteriorated by the counting algorithm. On the other hand, proportionality is lost, and its impact needs to be addressed, depending on the application. The particular case of SCXM is scrutinized in the next section.
Finally, the photon-counting efficiency (eq. \ref{eff_count}) can be assessed through Fig.~\ref{fig:counting_efficiency}-top, where it is displayed as a function of the beam rate on target. It can be seen how, for the case of 30 and 64~keV photons, its value exceeds 85\% for rates up to 10$^{11}$~ph/s and $0.5\cdot10^{11}$~ph/s, respectively. At these high beam rates, counting capability suffers from event pile-up while, at low beam rates, it is limited by the presence of low-energy deposits (corresponding to x-ray interactions for which most of the energy is collected in adjacent slices). It must be recalled, at this point, that a complete reconstruction requires combining 2d time-slices as the ones studied here, in order to unambiguously identify clusters in 3d. Given that each cluster extends over 4-6 slices due to diffusion, and clusters are highly uncorrelated, a 3d counting efficiency well above 90\% can be anticipated in the above conditions.
\section{Projections for SCXM}\label{results}
We propose the characterization of the EL-TPC technology in light of its performance as a cellular microscope, through the study of the smallest resolvable DNA-feature (size $d$) as a function of the scan time ($\Delta{T}_{scan}$). Justification of the following derivations can be found in appendix~\ref{appendixA}, starting with:
\begin{equation}
d = \left(R^2 2 l^2\frac{(l\lambda_w^{-1} + 2a\lambda_a^{-1})}{(\lambda_f^{-1} - \lambda_w^{-1})^2} \frac{1}{C_l(r)^2 \cdot S/N^{*,2}\cdot r \cdot\Delta{T}_{scan}}\right)^{1/4} \label{eq:Clin}
\end{equation}
Here $R$ equals 5 under the Rose criterion and the rate-dependent coefficient $C_l<1$ depends on the deviation of the counting algorithm from the proportional response, its expression being given in appendix~\ref{appendixA}. Other magnitudes have been already defined. Since the smallest resolvable feature size ($d^{\dagger}$) is ultimately determined by the dose imparted at it when structural damage arises (eq. \ref{surDos}, Fig. \ref{fig:Dose}), the necessary scan time to achieve such performance ($\Delta{T}_{scan}^{\dagger}$) can be readily obtained:
\begin{equation}
\Delta{T}_{scan}^{\dagger} = R^2 2 l^2\frac{(l\lambda_w^{-1} + 2a\lambda_a^{-1})}{(\lambda_f^{-1} - \lambda_w^{-1})^2} \frac{1}{C_l(r)^2 \cdot S/N^{*,2}\cdot r \cdot (d^{\dagger})^4} \label{eq:DeltaTdagger}
\end{equation}
For a detector with finite efficiency, the value of $d^{\dagger}$ can be recalculated by simply accounting for the necessary increase in fluence (and hence in dose), as:
\begin{eqnarray}
\phi \rightarrow \phi' & = & \phi/\epsilon \\
\mathcal{D} \rightarrow \mathcal{D}' & = & \mathcal{D}/\epsilon
\end{eqnarray}
which results in slightly deteriorated values compared to Fig. \ref{fig:Dose}: $d^{\dagger}=36$~nm instead of $d^{\dagger}=33$~nm for $\varepsilon$=64~keV, and $d^{\dagger}=44$~nm instead of $d^{\dagger}=37$~nm for $\varepsilon$=30~keV.
The limiting scan time (i.e., above which structural damage will appear) can hence be assessed from the behaviour of eq. \ref{eq:DeltaTdagger} with beam rate, as shown in Fig.~\ref{fig:counting_efficiency}-bottom. For 64~keV, the loss of linearity of the counting algorithm at high rates results in a turning point at $9.3 \times 10^{10}$ ph/s, above which an increase in rate stops improving the ability to resolve an image. For 30~keV, due to the absence of characteristic emission, only about half of the clusters are produced and the optimum rate is found at a higher value, $r = 1.6 \times 10^{11}$~ph/s. The counting efficiency and purity in these conditions are in the range 82-84\%.
\begin{figure}[h]
\centering
\includegraphics[width=85mm]{fig_11.pdf}
\caption{Top: counting performance characterized through the average number of clusters counted per 2d time-slice as a function of the beam rate for $\varepsilon$~=~64~keV. Bottom: relative spread of the number of clusters per 2d time-slice from Monte Carlo truth and counted with K-means. The $1/\sqrt{r}$ expectation (dashed) is shown for comparison.}
\label{fig:counting_beam_rate2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=85mm]{fig_12_alternative.pdf}
\caption{Top: efficiency of the cluster counting process as a function of the beam rate for x-rays of 30 and 64~keV. Bottom: time to reach the dose-limited resolution as a function of the beam rate. A minimum is reached when the product of $C_l^2\cdot r$ reaches a maximum, i.e. the time decreases with beam rate until the effect of the non-proportional counting (resulting from event pile-up) becomes dominant. The optimum beam rate and corresponding counting efficiency are marked with a dotted line for both energies.}
\label{fig:counting_efficiency}
\end{figure}
It is now possible to evaluate eq. \ref{eq:Clin} under different scenarios: i) a relatively simple calorimetric mode (total energy is integrated), for which we assume a hard x-ray beam rate typical of the new generation of synchrotron light sources as $r = 10^{12}$~ph/s, and ii) a rate-limited photon-by-photon counting scenario, for the optimum rates $r = 9.3\times10^{10}$ ph/s (64 keV) and $r=1.6\times10^{11}$ ph/s (30 keV), obtained above. Values for $C_l(r)$ are extracted from 2$^{\scriptsize\textnormal{nd}}$-order fits as discussed in the appendix. The remaining parameters are common to both modes: $S/N^*=0.71$, efficiency $\epsilon=58.5\%$ (64~keV), $S/N^*=0.63$, $\epsilon=40.0\%$ (30~keV); finally we assume $l=5~{\mu}$m, $a=5$ mm, $R=5$, with the mean free paths ($\lambda$) taken from table \ref{tab:material_parameters}. Results are summarized in Fig. \ref{fig:ScanTime}. At 64~keV, the dose-limited resolution $d^{\dagger}=36$~nm can be achieved in approximately 24~h while, at $30$~keV, $d^{\dagger}=44$~nm is reached in just 8~h. In the absence of systematic effects, operation in calorimetric mode would bring the scan time down to $\leq 1$~h in both cases, although abandoning any photon-by-photon counting capabilities.
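For reference, eq. \ref{eq:Clin} is straightforward to evaluate numerically; the sketch below reproduces the order of magnitude of the 64~keV counting-mode curve in Fig. \ref{fig:ScanTime}, with an assumed, purely illustrative value for $C_l$ (in the text, $C_l(r)$ is extracted from the fits described in appendix~\ref{appendixA}):
\begin{verbatim}
# Resolvable feature size vs scan time (64 keV, counting mode).
lam_w, lam_f, lam_a = 5.69, 3.54, 4945.60  # mean free paths in cm
l, a, R = 5e-4, 0.5, 5.0                   # cell (cm), air (cm), Rose
SN, r = 0.71, 9.3e10                       # S/N*, optimum rate (ph/s)
C_l = 0.7                                  # assumed value, illustration only

def d_nm(T_scan):  # T_scan in seconds, returns d in nm
    pref = R**2*2*l**2*(l/lam_w + 2*a/lam_a)/(1/lam_f - 1/lam_w)**2
    return 1e7*(pref/(C_l**2*SN**2*r*T_scan))**0.25

print(d_nm(24*3600))  # ~36 nm after a 24 h scan, for this C_l
\end{verbatim}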
\begin{figure}[h]
\centering
\includegraphics[width=85mm]{fig_13.pdf}
\caption{Resolution achievable with a 64~keV photon beam (left) and a 30~keV photon beam (right) as a function of the scan time for a cell of $5$~$\mu$m (green line). The red line shows the limit in which a calorimetric measurement is performed and photon-by-photon counting is abandoned. The horizontal line shows the dose-limited resolution in each case, prior to inducing structural damage.}
\label{fig:ScanTime}
\end{figure}
\section{Discussion}\label{discussion}
The results presented here illustrate the potential of the proposed technology for high energy x-ray detection (up to $\simeq 60$-$70$ keV) at high-brightness synchrotron light sources, in particular for cellular imaging. In deriving them, we have adopted some simplifications, which should be superseded in future work and are analyzed here:
\begin{enumerate}
\item \emph{Availability of photon-by-photon information}: cluster reconstruction with high efficiency and purity enables $x, y, t + t_{drift}$ and $\varepsilon$ determination, and arguably the interaction time $t$ and $z$ position can be obtained from the study of the cluster size, as has been demonstrated before for 30~keV x-rays at near-atmospheric pressure \cite{DiegoAccurate}. This can help remove backgrounds that are not accounted for, as well as any undesired systematic effect (beam- or detector-related). Since this technique provides a parallax-free measurement, the concept may be extended to other applications, e.g., x-ray crystallography. The presence of characteristic emission from xenon will unavoidably create confusion, so if an unambiguous correspondence between the ionization cluster and the parent x-ray is needed, one must consider operation at $\lesssim$ 30~keV.
\item \emph{Data processing and realism}: photon-by-photon counting at a rate nearing $5\cdot10^7$~ph/s over the detector ($\equiv$ 10$^{11}$~ph/s over the sample), as proposed here, is a computationally intensive task. Achieving this with sufficient speed and accuracy will require the optimization of the counting algorithm, something that will need to be accomplished, ultimately, with real data. To this aim, both the availability of parallel processing and the possibility of simultaneous operation in calorimetric mode are desirable features. This will be studied in the near future through a dedicated experiment.
\item \emph{Simplicity and compactness}: the detector geometry proposed here has been conceived as a multi-purpose permanent station. A portable device focused purely on SCXM, on the other hand, could simply consist of a cubic $25~\textnormal{cm} \times 25~\textnormal{cm} \times 25~\textnormal{cm}$ vessel that may be positioned, e.g., on top of the sample (at a distance of $\sim 5$~cm). The geometry would thus have an overall efficiency around 30\% for 64~keV photons. For SCXM, and given that $S/N^* \simeq \sqrt{\epsilon}$ as shown in this work, a loss of efficiency can be almost fully compensated by means of a corresponding increase in beam rate, at the price of a deteriorated value for the dose-limited resolution $d^{\dagger}$. In this case, a value corresponding to $d^{\dagger}=41$~nm could be achieved in 12~h, for our test study.
\item \emph{Feasibility}: the technology proposed comes from the realm of high energy physics, with an inherent operational complexity that might not be affordable at light source facilities. A further possibility could be considered, by resorting to ultra-fast (1.6~ns resolution) hit-based TimePix cameras (e.g., \cite{TimePix, Nom19}) with suitable VUV-optics, allowing $256 \times 256$ pixel readout at 80~MHit/s, and thus completely abandoning the SiPM readout. The vessel would just house, in such a case, the acrylic hole multiplier and cathode mesh, together with the power leads; it would be filled with the xenon mixture at atmospheric pressure and interfaced to the outside with a VUV-grade viewport. This would partly compromise the ability to disentangle clusters by using time information, as well as energy information, since only the time over threshold would be stored and not the temporal shape of each cluster, or its energy. On the other hand, it would enhance the spatial information by a factor of $30$ relative to the SiPM matrix proposed here (the hole pitch of the acrylic hole multiplier should be reduced accordingly). Indeed, TimePix cameras are regularly used nowadays for photon and ion counting applications \cite{TimePix1, TimePix2}, but have not been applied to x-ray counting yet, to the best of our knowledge. The counting and signal processing algorithms could be in this way directly ported, given the similarity with the images taken in those applications. The readiness of such an approach, aiming at immediate implementation, represents an attractive and compelling avenue.
\end{enumerate}
The imaging criterion and study case chosen in this work are inspired by \cite{Vil18}, where a dose-limited resolution of 34~nm was obtained for SCXM, compared to around 75~nm for CDI. A typical bio-molecule feature was chosen, embedded in a $5~{\mu}$m cell placed in vacuum. The present study shows that a 36~nm DNA feature can be resolved in similar conditions even after accounting for the presence of beam-shielding, air, photon transport through a realistic detector, including the detector response in detail, and finally implementing photon-counting through a K-means algorithm.
\section{Conclusions and outlook}\label{conclus}
We introduce a new $4\pi$-technology (EL-TPC) designed for detecting $\sim\!60$~keV x-ray photons at rates up to $5\cdot10^7$ ph/s over the detector ($10^{11}$ ph/s over the sample), with an overall detection efficiency (including geometrical acceptance) around 60\%. At these rates, photon-by-photon counting can be achieved at an efficiency and purity above 80\%, and plausibly well above 90\% after straightforward improvements on the counting algorithm employed in this work. The technology has been re-purposed from its original goal in particle physics (the experimental measurement of $\beta\beta0\nu$ decay) and, with a number of minor simplifications, it has been optimally adapted to the task of Compton x-ray microscopy in upcoming light sources. The proposed detector can be implemented either as a permanent station or a portable device. Concentrating on $5~{\mu}$m cells as our test case, we estimate that, under a Rose imaging criterion, and assuming the dose fractionation theorem, 36~nm DNA features may be resolved in 24~h by using a permanent station and 41~nm in 12~h with a portable device.
Alternatively, the scan time could be brought down to less than 1~h by resorting to the calorimetric mode, although the photon-by-photon counting capability would need to be abandoned.
Our analysis includes detailed Geant4 transport, a realistic detector response and a simplified 2d-counting algorithm based on K-means. Thus, the obtained rate capability (and scan time) should be understood as a lower (upper) limit to the actual capabilities when using more refined 3d-algorithms, including constraints on energy and cluster size.
Although substantially below the nominal photon-counting capabilities of solid-state pixelated detectors, we believe that a number of applications could benefit from the proposed development, targeting specifically the newly available 4$^{\scriptsize\textnormal{th}}$ generation synchrotron light sources, capable of providing high-brightness hard x-rays. Indeed, previous conceptual studies point to about a factor $\times 2$ increase in resolving power for SCXM compared to CDI, in similar conditions to ours. The present simulation work supports the fact that a complete 3d scan would be realizable in about 24~h, under realistic assumptions on the experimental setup, detector response and counting algorithms.
\section*{Funding Information}
A. Sa\'a Hern\'andez is funded through the project ED431F 2017/10 (Xunta de Galicia) and D. Gonz\'alez-D\'iaz through the Ramon y Cajal program, contract RYC-2015-18820. C.D.R. Azevedo is supported by Portuguese national funds (OE), through FCT - Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia, I.P., in the scope of the Law 57/2017, of July 19.
\section*{Acknowledgments}
We thank Ben Jones and David Nygren (University of Texas at Arlington), as well as our RD51 colleagues, for stimulating discussions and encouragement, and especially David Jos\'e Fern\'andez, Pablo Amedo, and Pablo Ameijeiras for discussions on the K-means method, and Dami\'an Garc\'ia Castro for performing the Magboltz simulations.
\begin{appendices}
\section{Relation between resolution and scan time}~\label{appendixA}
\subsection{Proportional (ideal) case}
We start from the imaging criterion, applied to an arbitrary position of the step motor within a cell-scan:
\begin{equation}
\frac{|N_{f} - N_{0}|}{\sqrt{\sigma_{N_{f}}^2 + \sigma_{N_{0}}^2}} = R \label{Rose_App}
\end{equation}
where $R=5$ corresponds to the Rose condition. N$_f$ is the number of scattered photons from a water medium with a `to-be-resolved' feature inside it, while N$_0$ corresponds to the case with only water (see Fig.~\ref{fig:Dose}-top). This equation can be re-expressed as:
\begin{equation}
\frac{|N_{f} - N_{0}|}{\sqrt{N_f^2\left(\frac{\sigma_{N_{f}}}{N_{f}}\right)^2 + N_0^2\left(\frac{\sigma_{N_{0}}}{N_{0}}\right)^2}} = R \label{Rose_App2}
\end{equation}
which, under the assumption $N_{f} \gtrsim N_{0}$, and defining the signal-to-noise ratio as $S/N \equiv N_f/\sigma_{N_f} \simeq N_0/\sigma_{N_0}$, can be rewritten, in general, as:
\begin{equation}
\frac{1}{\sqrt{2}} \frac{N_{f} - N_{0}}{N_{0}} \times S/N = R \label{Rose_App3}
\end{equation}
When considering photon counting, it is understood that a relation can be established between the distribution of ionization clusters that are counted in the detector (mean $n$, standard deviation $\sigma_n$) and the distribution of scattered photons (mean $N_f\simeq N_0$, standard deviation $\sigma_{N_f} \simeq \sigma_{N_0}$). If resorting to an unbiased counting algorithm, this relation will be proportional. In that case, the pre-factors on the left-hand-side of eq. \ref{Rose_App3} remain, and any detector-related effect is contained in the quantity:
\begin{equation}
S/N = \frac{N_f}{\sigma_{N_f}} \simeq \frac{N_0}{\sigma_{N_0}} \rightarrow \frac{n}{\sigma_n}
\end{equation}
At fixed number of scattered photons ($\simeq N_0$) the relative fluctuations in the number of counted clusters will increase due to efficiency losses, characteristic emission, and re-scatters on the cell itself, air or structural materials, thereby resulting in a loss of signal to noise. It is convenient to normalize this definition to the Poisson limit for a perfect detector:
\begin{equation}
S/N^* = \frac{1}{\sqrt{N_0}}\cdot S/N
\end{equation}
and so the new quantity $S/N^*$ is now defined between $0$ and $1$, with $S/N = n/\sigma_n$ obtained, in the main document, from detailed simulations of the photon propagation through the experimental setup. Substitution of $N_f$ and $N_0$ by physical quantities in eq. \ref{Rose_App3} yields:
\begin{equation}
\frac{1}{\sqrt{2}} \frac{d(\lambda_f^{-1} - \lambda_w^{-1})}{l\lambda_w^{-1} + 2a\lambda_a^{-1}} \times S/N^* \times \sqrt{N_0} = R \label{Rose_App4}
\end{equation}
with $d$ being the feature size, $l$ the cell dimension, and $\lambda_{f,w,a}$ the mean free paths in the feature, water and air, respectively, as defined in text.
Now, we make use of the fact that $N_0=r \cdot \Delta{T_{step}} \cdot (l\lambda_w^{-1} + 2a\lambda_a^{-1})$, with $r$ being the beam rate, $\Delta{T_{step}}$ a time step within the scan, and $\Delta{T_{scan}}$ the total time for a 2d scan: $\Delta{T_{scan}} = \left(\frac{l}{d}\right)^2 \cdot \Delta{T_{step}}$. By replacing $N_0$ in the previous equation we obtain:
\begin{equation}
\frac{1}{\sqrt{2}} \frac{d^2(\lambda_f^{-1} - \lambda_w^{-1})}{l(l\lambda_w^{-1} + 2a\lambda_a^{-1})^{1/2}} \times S/N^* \times \sqrt{r \cdot \Delta{T_{scan}}} = R \label{Rose_App5}
\end{equation}
from which the time needed for a complete 2d scan can be expressed as:
\begin{equation}
\Delta T_{scan} = R^2 \frac{2 l^2}{d^4}\frac{(l\lambda_w^{-1} + 2a\lambda_a^{-1})}{(\lambda_f^{-1} - \lambda_w^{-1})^2} \frac{1}{S/N^{*,2} \cdot r} \label{T_App}
\end{equation}
and, solving for $d$:
\begin{equation}
d = \left(R^2 2 l^2\frac{(l\lambda_w^{-1} + 2a\lambda_a^{-1})}{(\lambda_f^{-1} - \lambda_w^{-1})^2} \frac{1}{S/N^{*,2} \cdot r \cdot \Delta{T}_{scan}}\right)^{1/4} \label{d_App}
\end{equation}
Expression \ref{d_App} can be simplified under the assumption that $S/N^*$ is mainly limited by Poisson statistics and by the efficiency of the detector (modelled through a simple binomial distribution), disregarding the production of secondary particles or re-scatters across structural materials; hence:
\begin{equation}
S/N^* \!=\! \frac{1}{\sqrt{N_0}} \frac{n}{\sigma_{n}} \! \simeq \! \frac{1}{\sqrt{N_0}} \frac{N_0 \epsilon}{\sqrt{\epsilon^2 N_0 + \epsilon\cdot(1-\epsilon)\cdot N_0}} \! =\! \sqrt{\epsilon} \label{StoNapp}
\end{equation}
From this it can be seen that the detector efficiency and the beam rate enter as a product in the denominator of formulas \ref{T_App} and \ref{d_App}. Consequently, detector inefficiency increases the scan time linearly, as intuitively expected.
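The approximation \ref{StoNapp} can also be verified with a short Monte Carlo, modelling detection as a binomial thinning of a Poisson number of scattered photons (the values below are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N0, eps = 200.0, 0.585
N = rng.poisson(N0, size=1_000_000)  # scattered photons per scan step
n = rng.binomial(N, eps)             # detected clusters (efficiency only)
print((n.mean()/n.std())/np.sqrt(N0), np.sqrt(eps))  # both ~0.76
\end{verbatim}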
\subsection{Non proportional case}
We consider now the more realistic case where there is a non-proportional response of the counting algorithm. This is characterized, for the K-means algorithm implemented in the text, as a second order polynomial (Fig.~\ref{fig:counting_beam_rate2}):
\begin{equation}
n = a + b r + c r^2
\end{equation}
By analogy, if the K-means parameters are optimized for a certain beam rate, r, the response to cell regions causing a different number of scattered photons $N$, relative to the water-only case, will be:
\begin{equation}
n = a + b \frac{N}{N_0} + c \left(\frac{N}{N_0}\right)^2
\end{equation}
and $a(r)$, $b(r)$, $c(r)$ are now rate-dependent.
Eq. \ref{Rose_App3} should be rewritten, accordingly, as:
\begin{equation}
\frac{1}{\sqrt{2}} \frac{n_{f} - n_{0}}{n_{0}} \times S/N = R
\end{equation}
and the relative variation in $n$ becomes:
\begin{equation}
\frac{n_{f} - n_{0}}{n_{0}} = \frac{1}{a+b+c}\left( b\frac{N_f - N_0}{N_0} + c\frac{N_f^2 - N_0^2}{N_0^2} \right)
\end{equation}
that, for $N_f \simeq N_0$, can be re-expressed as:
\begin{equation}
\frac{n_{f} - n_{0}}{n_{0}} = C_l(r) \frac{N_{f} - N_{0}}{N_{0}}
\end{equation}
with $C_l(r) = \frac{b + 2c}{a + b + c}$. Hence, a loss of linearity during the counting process enters linearly in eq. \ref{Rose_App3}. The general expression for the resolvable feature size as a function of the beam rate is, finally, by analogy with eq. \ref{d_App}:
\begin{equation}
d = \left(R^2 2 l^2\frac{(l\lambda_w^{-1} + 2a\lambda_a^{-1})}{(\lambda_f^{-1} - \lambda_w^{-1})^2} \frac{1}{C_l(r)^2 \cdot S/N^{*,2}\cdot r \cdot\Delta{T}_{scan}}\right)^{1/4} \label{d_App_Clin}
\end{equation}
that is the expression used in the main document, for the achievable resolution as a function of the scan time, under a given imaging criterion $R$.
The detector response enters this final expression in three ways:
\begin{enumerate}
\item Through the increased fluctuation in the number of detected clusters, relative to the ideal (Poisson) counting limit, characterized through the signal to noise ratio, $S/N^*$.
\item The non-linearity of the counting algorithm, $C_l$.
\item The assumed maximum operating rate, $r$, chosen such that the product $C_l^2\cdot r$ reaches a maximum: for larger rates, the ability to resolve an image stops improving.
\end{enumerate}
\section{EL-TPC parameters}~\label{appendixB}
Here we compile the main parameters used for the simulation of the TPC response, together with additional references when needed.
\begin{table}[h]
\centering
\caption{Parameters of the TPC vessel}
\begin{tabular}{p{0.06\textwidth}p{0.04\textwidth}p{0.09\textwidth}p{0.21\textwidth}}
\hline
$R_{i}$ & 5 & cm & inner radius \\
$R_{o}$ & 25 & cm & outer radius \\
$L$ & 50 & cm & length \\
\hline
\end{tabular}
\label{tab:vessel_parameters}
\end{table}
\begin{table}[h]
\centering
\caption{Main gas parameters (xenon + 0.4\% CH$_4$)}
\begin{tabular}{p{0.06\textwidth}p{0.04\textwidth}p{0.09\textwidth}p{0.21\textwidth}}
\hline
\multicolumn{4}{l} {in the drift/collection region} \\
\hline
$E_c$ & 110 & V/cm & collection field \\
$V_{cat}$ & -8.5 & kV & cathode voltage \\
$F$ & 0.15 & & Fano factor~\cite{Dave_prop} \\
$W_I$ & 22 & eV & energy to create an e$^-$-ion pair \cite{Dave_prop}\\
$D_T^*$ & 1.52 & mm/$\sqrt{\textnormal{cm}}$ & transverse diffusion coefficient \cite{Pyboltz}\\
$D_L^*$ & 0.548 & mm/$\sqrt{\textnormal{cm}}$ & longitudinal diffusion coefficient \cite{Pyboltz}\\
$v_d$ & 5.12 & mm/$\mu{s}$ & drift velocity \cite{Pyboltz} \\
\hline
\multicolumn{4}{l} {in the electroluminescence (EL) region} \\
\hline
$E_{EL}$ & 6 & kV/cm & EL field \\
$V_{gate}$ & -3 & kV & voltage at FAT-GEM entrance (`gate') \\
$v_{d,EL}$ & 13.7 & mm/$\mu$s & drift velocity~\cite{Pyboltz} \\
\hline
\end{tabular}
\label{tab:TPC_parameters}
\end{table}
\begin{table}[h!]
\centering
\caption{Parameters of the electroluminescent structure}
\begin{tabular}{p{0.06\textwidth}p{0.04\textwidth}p{0.09\textwidth}p{0.21\textwidth}}
\hline
$r_h$ & 3 & mm & hole radius \\
$t$ & 5 & mm & thickness \\
$p_h$ & 10 & mm & hole-to-hole pitch \\
$m_{opt}$ & 250 & ph/e/cm & optical gain~\cite{FATGEM} \\
$P_{scin}$ & 0.5 & & scintillation probability~\cite{Henriques} \\
\hline
\end{tabular}
\label{tab:FATGEM_parameters}
\end{table}
\begin{table}[h!]
\centering
\caption{Parameters of the readout}
\begin{tabular}{p{0.06\textwidth}p{0.04\textwidth}p{0.09\textwidth}p{0.21\textwidth}}
\hline
p$_{si}$ & 10 & mm & pitch of SiPM matrix \\
$\Delta{T}_{\textnormal{s}}$ & 0.5 & ${\mu}s$ & time sampling / time per slice \\
$\sigma_{t}$ & 7 & ns & temporal width of SiPM signal \cite{Hamamatsu} \\
$\sigma_G/G$ & 0.1 & & relative spread of single phe charge in SiPM \cite{Hamamatsu} \\
$\Omega_{TPB}$ & 0.3 & & geometrical acceptance of SiPM after wavelength shifter \\
$QE_{wls}$ & 0.4 & & quantum efficiency of wavelength shifter~\cite{Gehman}\\
$QE_{si}$ & 0.4 & & quantum efficiency of SiPM~\cite{Hamamatsu} \\
\hline
\end{tabular}
\label{tab:readout_parameters}
\end{table}
\end{appendices}
\newpage
\section{Introduction}
\label{sec:intro}
The dynamics of quantum fields in an expanding spacetime is a subject of paramount importance for cosmology and inflation. In this context, the usual approach is a semi-classical treatment, where spacetime is treated classically and interacts with a matter content which is of quantum nature and can possibly backreact on the geometry \cite{Birrell:1982ix}. De Sitter spacetime is of particular interest both physically, as it is a good approximation of the inflationary phase, and mathematically, because of its high degree of symmetry.
The computation of quantum corrections in the presence of interactions is considerably more complicated in a curved background, since the usual perturbative tools are not always available. One particular setup in which nontrivial effects arise is the case of light scalar fields in the expanding Poincar\'e patch of de Sitter spacetime, particularly relevant for inflationary cosmology. The scalar field mode functions are significantly modified by the curvature with, in particular, a strong amplification of the infrared modes, which can be viewed as intense particle production from the gravitational field \cite{Mottola:1984ar,Tsamis:2005hd,Krotov:2010ma}. This effect is at the origin of infrared and secular divergences in loop computations, which limit the use of perturbation theory \cite{Weinberg:2005qc,Starobinsky:1994bd}.
A variety of nonperturbative treatments exist to address the question of the nonlinear effects ({\it e.g.} self-interactions), see Refs.~\cite{Starobinsky:1994bd,Tsamis:2005hd,vanderMeulen:2007ah,Burgess:2009bs,Rajaraman:2010xd,Beneke:2012kn,Serreau:2011fu,Akhmedov:2011pj,Garbrecht:2011gu,Boyanovsky:2012qs,Parentani:2012tx,Kaya:2013bga,Serreau:2013eoa,Gautier:2013aoa,Youssef:2013by,Boyanovsky:2015tba,Guilleux:2015pma,Moss:2016uix,Prokopec:2017vxx} for various examples. The most prominent one is certainly the stochastic approach, developed in Ref.~\cite{Starobinsky:1994bd}. It gives an effective description of the dynamics of the infrared, long wavelength, modes in terms of an effective Langevin equation. The infrared modes of the scalar fields behave classically as a result of the aforementioned gravitational amplification and experience a random noise which encodes the effect of the ultraviolet modes crossing the horizon during expansion. The Langevin dynamics can be treated through the equivalent Fokker-Planck equation. This gives access, for example, to the late-time, equilibrium probability distribution for the fields, from which one can compute various equal-time correlators, often analytically for simple enough potentials. Unequal-time correlators or genuine nonequilibrium properties, which contain important information about the long time/distance properties of the theory (dynamical time scales, spectral indices, etc.), are more difficult to access analytically, and even numerically in some situations. For instance, for a simple quartic potential, the cases of vanishing or of negative square mass are intrinsically nonperturbative.
The stochastic Langevin equation is a particular case of the so-called model A in the Halperin {\it et al.} classification of nonequilibrium dynamical systems \cite{Hohenberg:1977ym}. In the present article, we shall use tools developed in this context to compute various unequal-time correlators at large time separation, which gives access to different autocorrelation and relaxation time scales. In the stationary state, the problem can be formulated as a supersymmetric one-dimensional field theory \cite{Janssen:1976,Canet:2011wf,Prokopec:2017vxx}, which is free of ultraviolet divergences and a lot easier to manage than the original $D$-dimensional quantum field theory (QFT).
This one-dimensional field theory gives analytic access to properties of the stationary state reached by the scalar fields in the late-time limit. Diagram resummations, previously performed in the complete four-dimensional field theory \cite{Gautier:2013aoa,Gautier:2015pca}, can be done here in a simpler way which reproduces the leading infrared behavior. We compute various correlators in two approximation schemes. First, in a perturbative expansion in the self-interaction coupling constant, which is, however, limited to not too light fields. The second approximation scheme is the $1/N$ expansion, where $N$ is the number of scalar fields. The latter allows us to consider the interesting case of massless fields and of a symmetry-breaking potential \cite{Gautier:2015pca,LopezNacir:2019ord}.
Along with the path integral formulation of the model A comes some interpretation of the different correlators, together with specific relations which are usually formulated in a statistical physics language. Using this analogy allows us to reformulate these results in terms of our particular model and to discuss some consequences for the scalar field correlator.
After briefly reviewing the effective stochastic approach, we present the functional formulation of the Langevin equation and discuss the supersymmetry of the resulting field theory in Sec.~\ref{sec:setup}. Various properties of the field correlators, independent of any approximation scheme are discussed in Sec.~\ref{sec:generalfeatures}. Our calculations in the perturbative and the $1/N$ expansions are presented in Sec.~\ref{sec:resummations}. We conclude in Sec.~\ref{sec:Concl}. Additional calculations and technical details are presented in the various appendices.
\section{General setup}
\label{sec:setup}
We briefly recall the effective stochastic theory for the superhorizon modes of light scalar fields in de Sitter spacetime and review the functional formulation of the resulting one-dimensional model A as a supersymmetric field theory. We consider an $O(N)$ symmetric scalar field theory on the expanding Poincar\'e patch of a $D$-dimensional de Sitter spacetime with $d$ spatial dimensions ($D=d+1$). The metric reads $\dd s^2 = -\dd t^2 + a^2(t) \dd \vec x^2$, with $a(t)=e^{Ht}$, where $t$ is the cosmological time and we set the Hubble rate $H=1$. The classical action reads
\begin{equation}
{\cal S}=-\int_x\left\{\frac{1}{2}\partial_\mu\hat\varphi_a\partial^\mu\hat\varphi_a+\hat V\left(\hat\varphi^2\right)\right\},
\end{equation}
where $\hat\varphi^2=\hat\varphi_a\hat\varphi_a$ and $\int_x$ denotes the appropriate, invariant integration measure.
\subsection{Effective stochastic approach}
For light fields in units of $H$, the (quantum) fluctuations of long wavelength, superhorizon modes are well described by the effective Langevin equation \cite{Starobinsky:1994bd}
\begin{equation}
\dot{ \hat \varphi}_a+ \frac1 {d} \hat V_{,\hat a} = \hat\xi_a,
\label{eq:stochastic1}
\end{equation}
where the dot denotes a time derivative and we used the notation $\hat V_{, \hat a}=\partial\hat V/\partial\hat \varphi_a$. Here, the infrared fields $\hat\varphi_a$, spatially smeared over a Hubble patch, effectively behave as classical stochastic fields whose fluctuations mimic those of the long wavelength modes of the original quantum fields. Those stochastic fluctuations are driven by the random kicks from the (quantum) subhorizon modes which cross the horizon at a constant rate due to the gravitational redshift. This is represented by the noise term $\hat\xi_a$, whose stochastic properties reflect the quantum state of the system. For the Bunch-Davies (BD) vacuum, and treating the ultraviolet modes in the linear approximation, one finds \cite{Starobinsky:1994bd}
\begin{equation}
\ev{\hat\xi_a(t,\vec x) \hat\xi_b(t',\vec x')} = \frac2{d\Omega_{D+1}} \delta_{ab} \delta(t-t') {\cal F}(\abs{\vec x - \vec x'}),
\label{eq:noisestochastic1}
\end{equation}
with $\Omega_{n} = 2\pi^{n/2}/\Gamma\qty(n/2)$ and where the function ${\cal F}$ reflects the spatial smearing: it can always be normalized as ${\cal F}(0)=1$ and it vanishes rapidly for spatial separations $\abs{\vec x - \vec x'}\gtrsim1$. Its precise form depends on the smearing procedure. Within a single Hubble patch, ${\cal F}\approx1$ and the time evolution of the infrared fields is described by an effective one-dimensional Langevin equation with a Gaussian white noise. At sufficiently late times, the system is driven towards a stationary regime where, {\em e.g.}, the equilibrium distribution of field values is given by
\begin{equation}
P(\hat \varphi_a) \propto e^{-\Omega_{D+1} \hat V(\hat \varphi^2)}.
\label{eq:equilibrium}
\end{equation}
The latter describes the equal-time statistical properties of the stochastic process and reflects the quantum fluctuations of the infrared modes of the original quantum fields in the BD vacuum. It can be seen as the Boltzmann distribution for a thermal system. Introducing the Hamiltonian for the superhorizon field in the Hubble patch under consideration as $\hat {\cal H}=\int d^dx \hat V={\cal V}_d \hat V$, where ${\cal V}_d=\Omega_d/d$ is the volume of the $d$-dimensional spherical Hubble patch (of radius $H^{-1}=1$), the distribution \eqref{eq:equilibrium} reads ${\cal P}\propto e^{-\beta \hat{\cal H}}$,
with $\beta=\Omega_{D+1}/{\cal V}_d=2\pi$ the inverse Gibbons-Hawking temperature \cite{Gibbons:1977mu}.
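As a simple illustration, for a free field with $\hat V=\hat m^2\hat\varphi^2/2$, the distribution \eqref{eq:equilibrium} is Gaussian in each field component, with variance
\begin{equation}
\ev{\hat\varphi_a^2} = \frac{1}{\Omega_{D+1}\hat m^2}
\end{equation}
(no summation over $a$). For $D=4$, where $\Omega_{5}=8\pi^2/3$, and restoring the Hubble rate, this reproduces the well-known result $\ev{\hat\varphi_a^2}=3H^4/(8\pi^2\hat m^2)$ \cite{Starobinsky:1994bd}.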
It is useful to rescale the variables so as to absorb the various volume factors. Defining
\begin{align}\label{eq:rescaling}
\hat\varphi_a = \sqrt{\frac2{d\Omega_{D+1}}}\varphi_a \quad{\rm and}\quad\hat V(\hat\varphi^2) = \frac2{\Omega_{D+1}}V(\varphi^2),
\end{align}
we get the rescaled Langevin equation
\begin{equation}
\dot\varphi_a(t) + V_{,a}(t) = \xi_a(t),
\label{eq:staro}
\end{equation}
with the white-noise correlator
\begin{equation}
\ev{\xi_a(t) \xi_b(t')} = \delta_{ab} \delta(t-t').
\label{eq:whitenoise}
\end{equation}
This is a particular, $(0+1)$-dimensional case of the model A in the classification of Halperin {\it et al.} \cite{Hohenberg:1977ym}, which has been widely studied in the context of out-of-equilibrium statistical physics. It can be given an elegant functional formulation by means of the Janssen-de Dominicis (JdD) procedure \cite{Janssen:1976,Canet:2011wf}, which provides an efficient starting point for implementing various field techniques \cite{Canet:2011wf}. Recent examples in the present context include diagrammatic methods \cite{Garbrecht:2013coa,Garbrecht:2014dca} or renormalization group techniques \cite{Prokopec:2017vxx}. We now briefly review the JdD procedure.
\subsection{Path integral formulation}
The expectation value of an operator $\mathcal{O}(\varphi)$ can be formally expressed as
\begin{equation}
\ev{\mathcal{O}(\varphi)} = \int \mathcal{D} \xi P[\xi] \mathcal{O}(\varphi_{\xi})
\label{eq:JDD}
\end{equation}
where $\varphi_{\xi}$ is a solution of Eq.~\eqref{eq:staro} with given initial conditions and
\begin{equation}
P[\xi] = \frac1{\sqrt{2 \pi}} e^{- \int_t \frac12\xi_a^2}
\end{equation}
is the normalized probability distribution of the noise, with $\int_t=\int_{-\infty}^{+\infty} \dd{t}$.
In general, one should also average over initial conditions in Eq.~\eqref{eq:JDD}. However, the latter become irrelevant if we restrict our considerations to the stationary regime. Assuming the uniqueness of the solution of Eq.~\eqref{eq:staro} for a given realization of the noise (and given initial conditions), one writes
\begin{equation}
\mathcal{O}(\varphi_{\xi}) = \int \mathcal{D}\varphi\; \delta[\dot\varphi_a + V_{,a} - \xi_a] \mathcal{J}[\varphi] \mathcal{O}(\varphi)
\label{eq:avO}
\end{equation}
where $\mathcal{J}[\varphi] = \abs{{\rm Det}\qty[ \delta_{ab} \partial_t + V_{,ab} ]}$ is the appropriate functional Jacobian. Under the above uniqueness assumption, one can forget the absolute value on the determinant and exponentiate the latter in terms of Grassmann fields
\begin{equation}
\mathcal{J}[\varphi] \to \int \mathcal{D}[\psi,\bar \psi] \,e^{i\int_t \bar \psi_a(\delta_{ab} \partial_t + V_{,ab})\psi_b }.
\label{}
\end{equation}
Similarly, one exponentiates the functional delta as
\begin{equation}
\delta[\dot \varphi_a + V_{,a} - \xi_a] = \int \mathcal{D}[i\tilde \varphi] e^{-\int_t \tilde \varphi_a (\dot\varphi_a + V_{,a} - \xi_a )} ,
\label{}
\end{equation}
where the so-called response fields $\tilde \varphi_a$ are purely imaginary.
Integration over the Gaussian noise $\xi_a$ finally gives, up to an irrelevant constant factor ${\cal N}$,
\begin{equation}
\ev{\mathcal{O}(\varphi)} = {\cal N}\int \mathcal{D}[\varphi,i\tilde \varphi,\psi,\bar\psi] \; e^{-S_{\rm JdD}[\varphi,\tilde \varphi,\psi,\bar \psi]} \mathcal{O}(\varphi) ,
\label{eq:JDDpathinteg}
\end{equation}
with the following action
\begin{equation}
S_{\rm JdD} = \int_t\left\{ \tilde \varphi_a \qty( \dot \varphi_a + V_{,a} ) - \frac12 \tilde \varphi^2 -i\bar \psi_a\qty(\delta_{ab} \partial_t + V_{,ab})\psi_b\right\}.
\label{eq:action}
\end{equation}
This one-dimensional statistical field theory with $4N$ fields describes the leading infrared behavior of the underlying QFT in de Sitter spacetime. Alternatively, we can use a more symmetric form of the action by changing the variable $\tilde{\varphi}_a\to F_a = i (\dot \varphi_a - \tilde \varphi_a)$. The action then reads
\begin{align}
\label{eq:actionF}
S_{\rm JdD} = & \int_t \left\{\frac12 \dot \varphi^2 + \frac12 F^2 - i \bar \psi_a \dot\psi_a+ iF_aV_{,a} - i \bar\psi_a V_{,ab} \psi_b \right\},
\end{align}
where we neglect the boundary term $\int_t \dot\varphi_aV_{,a}=\int_t\dot V$ in the stationary state. This form of the action makes another connection explicit: after the Wick rotation $t\to i\tau$, it relates to a supersymmetric quantum mechanics \cite{Synatschke:2008pv}.
\subsection{Supersymmetry}
The action \eqref{eq:action} or, equivalently, \eqref{eq:actionF}, possesses various symmetries, such as the time-translation and the time-reversal symmetries of the stationary regime, which can be conveniently encoded in a supersymmetry that mixes the bosonic and fermionic degrees of freedom \cite{Canet:2011wf,Synatschke:2008pv}. To exhibit the latter, it is convenient to recast the various fields into the superfield
\begin{equation}\label{eq:superfield}
\Phi_a(t,\theta,\bar\theta) = \varphi_a(t) + \bar \theta \psi_a(t) + \bar \psi_a(t) \theta + \bar \theta \theta F_a(t),
\end{equation}
living on the superspace $(t,\theta,\bar\theta)$, with Grassmann directions $\theta$ and $\bar \theta$. The generators of the supersymmetry can be written as $Q = i\partial_{\bar \theta} + \theta\partial_t$ and $\bar Q = i\partial_\theta + \bar \theta \partial_t$, and the covariant derivatives $D=i \partial_{\bar \theta} - \theta \partial_t$, $\bar D = i\partial_\theta - \bar \theta \partial_t$ allow one to write the action in the following form
\begin{equation}
S_{\rm JdD} = \int \dd{z} \left\{\frac12 \Phi_a K \Phi_a + i V(\Phi_a)\right\}
\label{}
\end{equation}
with\footnote{Our convention for the Grassmann integration is $\int \dd\theta \dd\bar\theta \bar \theta\theta = 1$.} $z = (t,\bar\theta,\theta)$, $\dd{z} = \dd{t}\dd\theta\dd\bar\theta$ and $K=\frac12 \qty(\bar D D - D\bar D)$.
\section{General properties of the correlator}
\label{sec:generalfeatures}
The general form of the superfield correlators is constrained by various considerations, most prominently the symmetries and causality. In this Section, we detail the case of the connected\footnote{Unless explicitly stated, we only consider connected correlators in what follows. For simplicity, we do not introduce a special notation.} two-point correlator, with the notation
\begin{equation}
G^{ab}_{12}(t_1,t_2)=\ev{\Phi_a(t_1,\theta_1,\bar\theta_1)\Phi_b(t_2,\theta_2,\bar\theta_2)}.
\end{equation}
For simplicity, we consider a single field ($N=1$). The generalization to arbitrary $N$ is trivial.
\subsection{Supersymmetry constraints}
The dependence of the inverse propagator $\Gamma^{(2)}$ and of the propagator $G$ on the Grassmann variables is strongly constrained by the supersymmetry of the action. First, the anticommutator $\acomm{Q}{\bar Q}=2i\partial_t$ generates the time-translation invariance, so that it proves more convenient to work in frequency space
\begin{equation}
G_{12}(t_1,t_2)=\int\frac{\dd{\omega}}{2\pi} e^{-i\omega (t_1-t_2)}G_{12}(\omega),
\end{equation}
and similarly for the two-point vertex $\Gamma^{(2)}_{12}(\omega)$. The general dependence of the latter on the Grassmann variables involves {\it a priori} six independent functions:
\begin{align}
\Gamma^{(2)}_{12}(\omega) &= A(\omega) + \bar \theta_1 \theta_1 B(\omega) + \bar \theta_2 \theta_2 C(\omega) + \bar\theta_1 \theta_1 \bar \theta_2 \theta_2 D(\omega)\nonumber \\
& + \bar \theta_1 \theta_2 E(\omega) + \bar \theta_2 \theta_1 F(\omega) .
\label{}
\end{align}
Supersymmetry implies the Ward identities
\begin{align}
\qty(Q_1 + Q_2)\Gamma^{(2)}_{12}(\omega) &= 0,\\
\qty(\bar Q_1 + \bar Q_2)\Gamma^{(2)}_{12}(\omega) &= 0,
\end{align}
where the numerical index indicates the Grassmann variable each operator $Q$ or $\bar Q$ is acting on. These yield four independent constraints which are solved as
\begin{align}
C(\omega)&=B(\omega)\\
D(\omega)&=\omega^2A(\omega)\\
E(\omega)&=-B(\omega)-\omega A(\omega)\\
F(\omega)&=-B(\omega)+\omega A(\omega).
\end{align}
Renaming $A(\omega)=\eta(\omega)$ and $B(\omega)=i\gamma(\omega)$, the general structure of the two-point vertex is \cite{Canet:2011wf,Synatschke:2008pv}
\begin{equation}
\Gamma^{(2)}_{12}(\omega) = i\gamma(\omega)\delta_{12} + \eta(\omega) K_{\omega}\delta_{12},
\label{eq:gamma2general}
\end{equation}
where the two Grassmann structures
\begin{align}
\delta_{12} &= (\bar \theta_1 - \bar \theta_2)(\theta_1 - \theta_2),\\
K_\omega\delta_{12} &= 1 + \omega (\bar \theta_2 \theta_1 - \bar \theta_1 \theta_2) + \omega^2\bar \theta_1 \theta_1 \bar \theta_2 \theta_2
\end{align}
denote, respectively, the Dirac function in Grassmann coordinates and the supersymmetric d'Alembertian operator $K_1\delta(z_1-z_2)$ in frequency space, with $\delta(z_1-z_2)=\delta(t_1-t_2)\delta_{12}$.
The superfield propagator is obtained by inversion, $\int_2 \Gamma^{(2)}_{12}(\omega)G_{23}(\omega)=\delta_{13}$, with $\int_2=\int\dd{\theta_2}\dd{\bar\theta_2}$, and reads
\begin{align}
G_{12}(\omega)= \frac{-i\gamma(\omega) \delta_{12} + \eta (\omega)K_\omega\delta_{12}}{\omega^2 \eta^2(\omega) + \gamma^2(\omega)}.
\label{eq:propaggeneral}
\end{align}
Using the decomposition \eqref{eq:superfield} of the superfield, we obtain the various correlators\footnote{Our convention is $\ev{A(t)B(t')}=\int\frac{\dd{\omega}}{2\pi}e^{-i\omega(t-t')}G_{AB}(\omega)$.}
\begin{align}
\label{eq:Gphiphi}
G_{\varphi\varphi}(\omega) &= \frac{\eta(\omega)}{\omega^2 \eta^2(\omega) + \gamma^2(\omega)} ,\\
G_{\varphi F}(\omega) &= \frac{-i\gamma(\omega)}{\omega^2 \eta^2(\omega) + \gamma^2(\omega)},
\end{align}
as well as $G_{FF}(\omega)=\omega^2G_{\varphi\varphi}(\omega)$, $G_{\psi\bar\psi}(\omega)=-G_{\varphi F}(\omega) - \omega G_{\varphi\varphi}(\omega)$, and $G_{\bar\psi\psi}(\omega)=G_{\varphi F}(\omega) - \omega G_{\varphi\varphi}(\omega)$.
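As a simple check of these general formulas, the free theory with $V=m^2\varphi^2/2$ corresponds to $\gamma(\omega)=m^2$ and $\eta(\omega)=1$, for which $G_{\varphi\varphi}(\omega)=1/(\omega^2+m^4)$ and
\begin{equation}
G_{\varphi\tilde\varphi}(\omega)=\frac{i}{\omega+im^2}
\quad\Leftrightarrow\quad
G_{\varphi\tilde\varphi}(t)=\theta(t)\,e^{-m^2 t},
\end{equation}
in agreement with the causality and fluctuation-dissipation properties discussed below.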
Now, from the path integral representation \eqref{eq:JDDpathinteg}, we see that both $\ev{\varphi(t)\varphi(t')}$ and $\ev{\varphi(t)\tilde\varphi(t')}$ are real (despite $\tilde\varphi$ being imaginary) and thus $G_{\varphi\varphi}(t)\in \mathbb{R}$ and $G_{\varphi F}(t)\in i\mathbb{R}$. Using also the permutation identity of the superfield correlator, $G_{12}(t)=G_{21}(-t)$, we conclude, in frequency space, that both the functions $\gamma(\omega)$ and $\eta(\omega)$ are real and even.
\subsection{Fluctuation-dissipation relation}
The stationary, equilibrium state of the system is characterized by a fluctuation-dissipation relation which directly follows from the above constraints. This relates the statistical correlator $G_{\varphi\varphi}(\omega)$ (fluctuation) to the response function\footnote{The relation of the stochastic response and spectral functions with the retarded and spectral functions of the underlying QFT are discussed in the Appendix~\ref{sec:quantumvsstochastic}.} $G_{\varphi\tilde\varphi}(\omega)$ (dissipation) or, more precisely, to the stochastic spectral function $\rho$, which we now introduce. The response function is given by
\begin{align}
G_{\varphi\tilde\varphi}(\omega)= i\qty[G_{\varphi F}(\omega) + \omega G_{\varphi\varphi}(\omega)]= \frac{i}{\omega \eta(\omega) + i\gamma(\omega)}
\label{eq:Gretarded}
\end{align}
and we define the stochastic spectral function as
\begin{equation}
\rho(\omega) \equiv 2 i \Im G_{\varphi\tilde\varphi}(\omega)=2i\omega G_{\varphi\varphi}(\omega),
\label{eq:spectral}
\end{equation}
where the second equality follows from Eqs.~\eqref{eq:Gphiphi}--\eqref{eq:Gretarded}. In real time, this reads
\begin{equation}
\label{eq:fdrealtime}
\rho(t)=-2 \partial_t G_{\varphi\varphi}(t).
\end{equation}
This is the announced fluctuation-dissipation relation characteristic of a thermal state in the high temperature (classical field) regime as discussed in the Appendix~\ref{sec:quantumvsstochastic}.
An interesting consequence of the above relation is the exact identity
\begin{equation}\label{eq:rhoone}
\rho(t=0^+)= 1,
\end{equation}
which can be proven as follows. In the limit\footnote{The correlator $\ev{\xi(0^+)\varphi(0)}=0$ by causality. Considering, instead, $t\to0^-$, one would have to take into account the nonzero correlator $\ev{\xi(0^-)\varphi(0)}$. The final result is $\partial_t G_{\varphi\varphi}(t)|_{t\to0^-}=-\partial_t G_{\varphi\varphi}(t)|_{t\to0^+}=1/2$.} $t\to0^+$, we have, using the relation \eqref{eq:fdrealtime} and Eq.~\eqref{eq:staro}
\begin{align}
\rho(t=0^+)=-2\partial_t G_{\varphi\varphi}(t)|_{t\to0^+}=-2\ev{\dot\varphi\varphi}=\ev{2\varphi \partial_\varphi V}.
\end{align}
The equal-time average in the last equality can be computed with the one-point equilibrium distribution \eqref{eq:equilibrium} with the proper rescaling \eqref{eq:rescaling}. The result \eqref{eq:rhoone} follows from the identity
\begin{align}
\int_{-\infty}^{+\infty}\dd{\varphi}2\varphi\qty(\partial_\varphi V)e^{-2V}=\int_{-\infty}^{+\infty}\dd{\varphi}e^{-2V},
\end{align}
obtained after integration by parts.
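Explicitly, writing $2\varphi\qty(\partial_\varphi V)e^{-2V}=-\varphi\,\partial_\varphi e^{-2V}$, an integration by parts gives
\begin{equation}
-\int_{-\infty}^{+\infty}\dd{\varphi}\,\varphi\,\partial_\varphi e^{-2V}=\int_{-\infty}^{+\infty}\dd{\varphi}\,e^{-2V},
\end{equation}
the boundary term vanishing for a potential growing at large field values, as required for the distribution \eqref{eq:equilibrium} to be normalizable.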
\subsection{Causality}
Further interesting information can be obtained from causality. The latter implies, in particular, that the response function vanishes identically for negative times, $G_{\varphi\tilde\varphi}(t)\propto \theta(t)$ \cite{Canet:2011wf}. From the definition \eqref{eq:spectral} of the spectral function and the fact that $G_{\varphi\tilde\varphi}(t)\in\mathds{R}$, we easily deduce that $\rho(t)=G_{\varphi\tilde\varphi}(t)-G_{\varphi\tilde\varphi}(-t)$ and thus that\footnote{This is equivalent to $G_{\varphi F}(t) = i \,{\rm sign}(t)\, \partial_t G_{\varphi\varphi}(t)$ \cite{Zinn-Justin:1996} }
\begin{equation}
G_{\varphi\tilde\varphi}(t)=\theta(t)\rho(t),
\end{equation}
or, equivalently, in frequency space,
\begin{equation}
G_{\varphi\tilde\varphi}(\omega)=\int\frac{\dd{\omega'}}{2\pi}\frac{i\rho(\omega')}{\omega-\omega'+i0^+},
\end{equation}
which implies that $G_{\varphi\tilde\varphi}$ is analytic in the upper half complex frequency plane.
Also, using the fluctuation-dissipation relation \eqref{eq:spectral}, we deduce
\begin{equation}
G_{\varphi\tilde\varphi}(\omega=0)= 2 \int \frac{\dd{\omega}}{2\pi}G_{\varphi\varphi}(\omega)=2G_{\varphi\varphi}(t=0).
\label{eq:ggrel}
\end{equation}
This yields an exact expression for the so-called\footnote{Note though that this is actually a static (equal-time) quantity.} dynamical mass $m_{\rm dyn}$, which measures the amplitude of the equal-time fluctuations of the stochastic field within a Hubble patch as
\begin{equation}\label{eq:mdyn}
G_{\varphi\varphi}(t=0)=\ev{\varphi^2} \equiv \frac{1}{2m_{\rm dyn}^2}.
\end{equation}
Using Eqs.~\eqref{eq:Gretarded} and \eqref{eq:ggrel}, we deduce
\begin{equation}
m_{\rm dyn}^2 = \gamma(0).
\label{eq:mdyngamma}
\end{equation}
Such a relation is reminiscent of the concepts of screening mass, or susceptibility, in thermal (quantum/statistical) field theory, which are related to the value of the (inverse) propagator at vanishing momentum and frequency and typically measure the overall response of the system to a static perturbation. These are to be distinguished from the so-called pole masses, or correlation lengths, which are associated with the poles of the response function and describe correlations between different spacetime points. The latter have their analogs in the present stochastic model, which we now discuss.
\subsection{Mass hierarchy}
Using the Fokker-Planck formulation of the Langevin equation \eqref{eq:staro}, one shows that the unequal-time (connected) correlator for a given local function ${\cal A}(\varphi)$ of the field can be written as \cite{Starobinsky:1994bd,Markkanen:2019kpv}
\begin{equation}
G_{{\cal A}{\cal A}}(t-t') = \ev{{\cal A}(t){\cal A}(t')}=\sum_{n\ge0} \sum_{\ell=0}^nC^{\cal A}_{n,\ell} e^{-\Lambda_{n,\ell} |t-t'|},
\label{eq:AAdecomp}
\end{equation}
where the $\Lambda_{n,\ell}$'s are the eigenvalues of the (properly rescaled) Fokker-Planck operator and the $C^{\cal A}_{n,\ell}$'s are appropriate coefficients.
Because of the O($N$) symmetry, the latter can be labelled in terms of the eigenvalues $\ell\in\mathds{N}$ of the $N$-dimensional angular momentum and another possible index $n$. In the case of a quadratic potential, the latter is a single nonnegative integer and the possible values of $\ell$ are constrained such that $n-\ell$ is even and nonnegative. We expect this to remain true for $\lambda\neq0$.
The eigenvalues are nonnegative real numbers\footnote{Supersymmetry guarantees that the lowest eigenvalue $\Lambda_{0,0}=0$.}. Of course, some $C_{n,\ell}^{\cal A}$ may vanish, {\it e.g.}, due to symmetry selection rules \cite{Markkanen:2019kpv}. For instance, the case ${\cal A}=\varphi$ only involves the vector channel $\ell=1$, so that the only nonvanishing coefficients in the decomposition \eqref{eq:AAdecomp} are $C^\varphi_{2n+1,1}$. Similarly, for the composite field $\chi=\varphi^2/(2N)$ in the scalar ($\ell=0$) channel, the only nonvanishing terms are $C_{2n,0}^\chi$. The correlations of various quantities of interest at large time separations are thus governed by the lowest eigenvalues contributing to the decomposition \eqref{eq:AAdecomp}.
Below, we shall compute the $\ev{\varphi\varphi}$ and $\ev{\chi\chi}$ correlators in various approximation schemes, from which we can extract the eigenvalues $\Lambda_{2n+1,1}$ and $\Lambda_{2n,0}$, respectively, at each approximation order. Introducing the redefinitions $C^\varphi_{2n+1,1}=c^\varphi_{2n+1}/(2\Lambda_{2n+1,1})$, $C^\chi_{2n,0}=c^\chi_{2n}/(2\Lambda_{2n,0})$, and the following notation for the tree-level correlator of a field of mass $m$
\begin{equation}\label{eq:freeprop}
G_{m^2}(t)=\frac{e^{-m^2|t|}}{2m^2}\quad\Leftrightarrow\quad G_{m^2}(\omega)=\frac{1}{\omega^2+m^4},
\end{equation}
we have
\begin{align}
\label{eq:propagphi}
G_{\varphi\varphi}(t) &=\sum_{n\ge0} c^\varphi_{2n+1}G_{\Lambda_{2n+1,1}}(t),\\
\label{eq:propagchi}
G_{\chi\chi}(t) &= \sum_{n\ge0} c^\chi_{2n}G_{\Lambda_{2n,0}}(t).
\end{align}
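At large time separations, these sums are dominated by their lowest eigenvalue, {\it e.g.},
\begin{equation}
G_{\varphi\varphi}(t)\approx \frac{c^\varphi_{1}}{2\Lambda_{1,1}}\,e^{-\Lambda_{1,1}\abs{t}}\quad{\rm for}\quad \abs{t}\gg\Lambda_{3,1}^{-1},
\end{equation}
which makes explicit that the correlation time of the field in the vector channel is $\Lambda_{1,1}^{-1}$.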
The eigenvalues $\Lambda_{n,\ell}$ and the coefficients $c^{\varphi,\chi}_{n}$ are directly obtained as the poles and residues of the relevant response function, {\it e.g.},
\begin{equation}\label{eq:responsefraction}
G_{\varphi\tilde\varphi}(\omega)=\sum_{n\ge0} \frac{ic^\varphi_{2n+1}}{\omega+i\Lambda_{2n+1,1}}.
\end{equation}
An obvious relation is
\begin{equation}
\sum_{n\ge0} \frac{c^\varphi_{2n+1}}{\Lambda_{2n+1,1}}=\frac1{m_{\rm dyn}^2},
\label{eq:mdyn2}
\end{equation}
which directly follows from the definition \eqref{eq:mdyn}. Another constraint on the coefficients $c^\varphi_{2n+1}$ is the
following sum rule
\begin{equation}\label{eq:sumrule}
\sum_{n\ge0}c^\varphi_{2n+1}=-2\partial_t G_{\varphi\varphi}(t)|_{t\to0^+}= 1,
\end{equation}
which directly follows from Eqs.~\eqref{eq:fdrealtime} and \eqref{eq:rhoone}.
\subsection{Effective noise correlator}
\label{sec:noisecorrelator}
Finally, we mention that the $\eta$ component of the self-energy \eqref{eq:gamma2general} can be interpreted as the effective noise correlator dressed by the nonlinear effect of the infrared modes themselves. Indeed, as recalled in the Appendix~\ref{sec:quantumvsstochastic}, the general expression of the correlator of a Langevin process with a colored noise
\begin{align}
\ev{\xi(t)\xi(t')} =\int\frac{\dd\omega}{2\pi}e^{-i\omega(t-t')}{\cal N}(\omega)
\end{align}
is, in frequency space,
\begin{equation}
G_{\varphi\varphi}(\omega)={\cal N}(\omega)|G_{\varphi\tilde\varphi}(\omega)|^2.
\end{equation}
Using the exact relations \eqref{eq:Gphiphi} and \eqref{eq:Gretarded}, we deduce that
\begin{equation}\label{eq:noiseandsigma}
{\cal N}(\omega)=\eta(\omega)
\end{equation}
can be interpreted as an effective colored noise kernel as announced. The tree-level expression $\eta_{\rm free}(\omega)=1$ corresponds to the white noise contribution \eqref{eq:whitenoise} from the ultraviolet modes in the present effective stochastic theory. As we shall see below, nonlocal loop corrections bring a nontrivial frequency dependence which corresponds to the effective dressing of the noise kernel from the nonlinear infrared dynamics.
\section{Explicit calculations}
\label{sec:resummations}
\begin{figure}[t]
\centering
\includegraphics{onebubble}
\caption{One-loop diagram giving the expression of $C_{12}^{m^2}$ in a free theory. The lines denote the tree-level propagator \eqref{eq:treeprop}.}
\label{fig:onebubble}
\end{figure}
We now turn to explicit computations of the $\ev{\varphi\varphi}$ and $\ev{\chi\chi}$ correlators in two approximation schemes previously studied in the $D$-dimensional QFT \cite{Gautier:2013aoa,Gautier:2015pca}, namely, the perturbative expansion and the $1/N$ expansion.
We consider an $O(N)$-symmetric scalar theory with quartic self interaction, whose superpotential is given by
\begin{equation}
V(\Phi) = \frac{m^2}2 \Phi_a^2 + \frac\lambda{4!N} \qty(\Phi_a^2)^2 .
\label{eq:potential}
\end{equation}
There is no possibility of spontaneously broken symmetry in the present low-dimensional system \cite{Mermin:1966fe,Coleman:1974jh,Serreau:2013eoa}. We thus have $\ev{\Phi_a}=0$ and $G_{12}^{ab}(\omega)=G_{12}(\omega) \delta^{ab}$, including in the case $m^2<0$.
In the following, we define the superfield self-energy $\Sigma$ as
\begin{equation}
\Gamma^{(2)}_{12}(\omega) =im^2 \delta_{12} + K_\omega\delta_{12} + \Sigma_{12}(\omega)
\label{eq:}
\end{equation}
where the first two terms on the right-hand side correspond to the free field case. We denote the tree-level superpropagator for a field with mass $m$ as
\begin{equation}\label{eq:treeprop}
G_{12}^{m^2}(\omega)=\frac{-im^2\delta_{12}+K_\omega\delta_{12}}{\omega^2+m^4}.
\end{equation}
We also introduce the supercorrelator of the composite field $X=\Phi^2/(2N)$,
\begin{equation}
C_{12}(t) = \ev{X(t,\bar \theta_1,\theta_1)X(0,\bar\theta_2,\theta_2)},
\label{eq:phi2phi2}
\end{equation}
which, in the free theory, is simply given by the one-loop diagram of Fig.~\ref{fig:onebubble}. This is easily computed as [see Eq.~\eqref{eq:relation2}]
\begin{align}
C^{m^2}_{12}(\omega) &= \frac1{2N} \int\frac{\dd{\omega'}}{2\pi}G^{m^2}_{12}(\omega-\omega')G^{m^2}_{12}(\omega')=\frac{ G^{2m^2}_{12}(\omega)}{2Nm^2}.
\label{eq:Cm2}
\end{align}
The component at $\theta_{1,2}=\bar\theta_{1,2}=0$ is
\begin{equation}
G^{m^2}_{\chi\chi}(\omega) = \frac1{2N m^2} \frac1{\omega^2+4m^4}.
\label{eq:Gchifree}
\end{equation}
From the decompositions \eqref{eq:propagphi} and \eqref{eq:propagchi} and the free-field expressions \eqref{eq:freeprop} and \eqref{eq:Gchifree}, we read $\Lambda^{\rm free}_{1,1}=m^2$, $c^{\varphi,{\rm free}}_{2n+1}=\delta_{n,0}$, $\Lambda^{\rm free}_{2,0}=2m^2$, and $c^{\chi,{\rm free}}_{2n}=\delta_{n,1}/(2Nm^2)$. This agrees with the known spectrum of the free case, which is just that of an O($N$)-symmetric harmonic oscillator \cite{Starobinsky:1994bd,Markkanen:2019kpv}
\begin{equation}
{\Lambda_{n,\ell}^{\rm free}} = n m^2.
\label{eq:lambdafree}
\end{equation}
\subsection{The perturbative expansion}
\label{sec:perturbative}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{perturbative}
\caption{Perturbative contributions to the self-energy $\Sigma$ at one- and two-loop orders. The interaction vertex is represented with a dot and contributes a factor $-i\lambda/(4!N)$ while the propagator lines are given by the tree-level propagator \eqref{eq:treeprop}.}
\label{fig:perturbative}
\end{figure}
We first compute the self-energy at two-loop order in a perturbative expansion (the three-loop order is computed in Appendix \ref{sec:perturbative4}). The relevant diagrams are shown in Fig.~\ref{fig:perturbative}. Their explicit evaluation is straightforward and we shall only give the resulting expressions here. The details can be found in Appendix \ref{sec:perturbative3}. The one-loop contribution, diagram (b), yields
\begin{equation}
\Sigma^{(b)}_{12}(\omega) = i \frac{N+2}{3N} \frac\lambda{4m^2} \delta_{12},
\label{eq:sigma1looppert}
\end{equation}
which simply corresponds to a constant shift of $\gamma(\omega)$, that is, a mere mass renormalization.
The same is true for the two-loop local\footnote{Here, local means that both external legs are attached to the same vertex.} contribution given by diagram $(c)$ in Fig.~\ref{fig:perturbative}, which reads
\begin{equation}
\Sigma^{(c)}_{12}(\omega) = -i \qty(\frac{N+2}{3N})^2 \frac{\lambda^2}{16 m^6} \delta_{12}.
\label{eq:sigma2looppert}
\end{equation}
A nontrivial frequency dependence appears with the nonlocal contribution, diagram $(d)$, which can be written as
\begin{equation}
\Sigma^{(d)}_{12}(\omega) = \frac{N+2}{3{ N^2}} \frac{\lambda^2}{8m^4} G^{3m^2}_{12}(\omega).
\label{eq:selfenergy}
\end{equation}
Altogether, we obtain, for the functions $\gamma$ and $\eta$ in Eq.~\eqref{eq:gamma2general},
\begin{align}
\label{eq:gammatwoloop}
\gamma(\omega) &= M^2 - \frac{6\bar\lambda^2}{N+2}\frac{3 m^6}{\omega^2 + 9m^4} + {\cal O}({\bar\lambda^3}) ,\\
\label{eq:etatwoloop}
\eta(\omega) &= 1 +\frac{6\bar\lambda^2}{N+2}\frac{m^4}{\omega^2 + 9m^4} + {\cal O}({\bar\lambda^3}) ,
\end{align}
where we have introduced the dimensionless coupling
\begin{equation}\label{eq:barlambda}
\bar\lambda=\frac{N+2}{3N}\frac\lambda{4m^4}
\end{equation}
and the renormalized mass
\begin{equation}
M^2 = m^2 \left(1+ \bar\lambda - \bar\lambda^2\right).
\label{eq:renormmass}
\end{equation}
We immediately obtain the expression of the dynamical mass as
\begin{equation}
m_{\rm dyn}^2=\gamma(0)=m^2 \left[ 1 +\bar\lambda- \frac{N+4}{N+2}\bar\lambda^2 +{\cal O}({\bar\lambda^3})\right]
\end{equation}
As explained in Sec.~\ref{sec:generalfeatures}, the relevant mass hierarchy can be directly read off the response function. Using the expressions \eqref{eq:gammatwoloop} and \eqref{eq:etatwoloop}, the latter can be written as
\begin{equation}
G_{\varphi\tilde\varphi}(\omega) = \frac{ic_1}{\omega +i \Lambda_{1,1}} + \frac{ic_3}{\omega+i\Lambda_{3,1}}+ \order{\bar\lambda^3},
\label{eq:propagFouriertwoloop}
\end{equation}
with the poles given by
\begin{align}
{\Lambda_{1,1}} &= m^2 \left[ 1 +\bar\lambda- \frac{N+5}{N+2}\bar\lambda^2 +{\cal O}({\bar\lambda^3})\right],\\
{\Lambda_{3,1}} &= 3m^2 \left[1+ {\cal O}(\bar \lambda)\right],
\end{align}
and the residues
\begin{align}
c^\varphi_1 &= 1 - \frac{3\bar\lambda^2}{2(N+2)}+{\cal O}(\bar\lambda^3),\\
c^\varphi_3 &=\frac{3\bar\lambda^2}{2(N+2)}+{\cal O}(\bar\lambda^3).
\end{align}
In particular, we verify the sum rule \eqref{eq:sumrule} at this order.
The two-pole structure \eqref{eq:propagFouriertwoloop} at the present order of approximation precisely coincides with the splitting of the propagator obtained in the QFT calculation of Ref.~\cite{Gautier:2013aoa}, which reads
\begin{equation}
G_{\varphi\varphi}(t) = c_+ G_{m_+^2}(t) + c_- G_{m_-^2}(t),
\label{eq:propagphiphi}
\end{equation}
with $G_{m^2}$ given in Eq.~\eqref{eq:freeprop}.
The expressions of the various masses and coefficients exactly agree, with the identifications $c_+=c^\varphi_1$, $c_-=c^\varphi_3$, $m_+^2= \Lambda_{1,1}$, $m_-^2= \Lambda_{3,1}$, and with the rescaling \eqref{eq:rescaling}, that is,\footnote{In particular, the quantity named $\bar\lambda$ in Ref.~\cite{Gautier:2013aoa} is the same as here.}
\begin{equation}
m^2 = \frac{\hat m^2}d \qc \lambda = \frac2{d^2\Omega_{D+1} }\hat\lambda.
\label{}
\end{equation}
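For instance, for $D=4$, where $\Omega_5=8\pi^2/3$, these rescalings give $m^2=\hat m^2/3$ and $\lambda=\hat\lambda/(12\pi^2)$ (in units $H=1$), so that the dimensionless coupling \eqref{eq:barlambda} reads, in terms of the original parameters,
\begin{equation}
\bar\lambda=\frac{N+2}{3N}\,\frac{3\hat\lambda}{16\pi^2\hat m^4}=\frac{(N+2)\hat\lambda}{16\pi^2 N\hat m^4}.
\end{equation}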
We now come to the two-loop correction to the $\ev{\chi\chi}$ correlator, given by the two diagrams in Fig.~\ref{fig:twobubbles}. The diagram (e) simply corresponds to the effect of the one-loop mass renormalization of one propagator line (the same is true for the diagram (c) of Fig.~\ref{fig:perturbative}) and can be easily computed. Equivalently, we can treat this diagram with the following trick \cite{Gautier:2013aoa,Gautier:2015pca}. We implicitly include it in the one-loop diagram (a) of Fig.~\ref{fig:onebubble} by using effective propagator lines with an effective mass $M$. We then replace the latter by its expression \eqref{eq:renormmass} and systematically expand at the relevant order of approximation.
\begin{figure}[t]
\centering
\includegraphics{twobubbles}
\caption{Two-loop contributions to the $\ev{\chi\chi}$ correlator. The diagram (e) is just an effect of the mass renormalization.}
\label{fig:twobubbles}
\end{figure}
Each loop in the diagram (a) of Fig.~\ref{fig:onebubble} and in the diagram (f) of Fig.~\ref{fig:twobubbles} is given by Eq.~\eqref{eq:Cm2}, with $m^2\to M^2$, and the sum reads
\begin{align}
C^{(a+f)}_{12}(\omega) &=C^{M^2}_{12}\!(\omega)-i\lambda \frac{N+2}{3} \int_3C^{M^2}_{13}\!(\omega) \,C^{M^2}_{32} \!(\omega),
\label{}
\end{align}
with $\int_{3}=\int\dd{\theta_3}\dd{\bar\theta_3}$. Using the identity
\begin{equation}
\int_3G^{m^2}_{13}(\omega) G^{m^2}_{32} (\omega)=\frac{(\omega^2-m^4)\delta_{12}-2im^2K_\omega\delta_{12}}{(\omega^2+m^4)^2}
\end{equation}
and extracting the component at vanishing Grassmann variables, we obtain, in terms of the renormalized mass $M^2$,
\begin{equation}
\begin{aligned}
G_{\chi\chi}(\omega) &= \frac1{2N M^2} \frac1{\omega^2+4M^4} \qty[ 1 - \frac{8\bar\lambda M^4}{\omega^2+4M^4} + \order{\bar\lambda^2}]\\
&= \frac1{2N M^2} \frac1{\omega^2 + 4M^4\qty(1+\bar\lambda)^2} + \order{\bar\lambda^2}.
\end{aligned}
\label{}
\end{equation}
In the last equation, we have used the knowledge of the general structure \eqref{eq:AAdecomp} of the correlator to resum the two-loop correction to the propagator in the appropriate form ({\it i.e.}, a correction to the corresponding self-energy). We can directly read off the expressions
\begin{align}
\Lambda_{2,0}&=2M^2\qty[1+\bar\lambda+{\cal O}(\bar\lambda^2)]=2m^2\qty[1+2\bar\lambda+{\cal O}(\bar\lambda^2)],\\
c^\chi_2&=\frac{1}{2NM^2}\qty[1+{\cal O}(\bar\lambda^2)]=\frac{1}{2Nm^2}\qty[1-\bar\lambda+{\cal O}(\bar\lambda^2)].
\end{align}
We note that the perturbative calculation of the propagator at order $\bar\lambda^2$ only gives access to the leading-order expression of the infrared subleading eigenvalue $\Lambda_{3,1}$ because the corresponding coefficient $c^\varphi_3$ is, itself, of order $\bar\lambda^2$. It is interesting to push our perturbative calculation to three-loop order so as to obtain the first correction to $\Lambda_{3,1}$ and compare to the perturbative results of Ref.~\cite{Markkanen:2019kpv} obtained by directly solving the Fokker-Planck equation. We present this calculation in the Appendix \ref{sec:perturbative4}. The three-loop expressions of $m_{\rm dyn}^2$, $\Lambda_{1,1}$, $c^\varphi_1$ and $c^\varphi_3$ can be found there. Here, we simply gather the next-to-leading results for the lowest eigenvalues:
\begin{align}
\Lambda_{1,1} &= m^2\qty[1 +\bar\lambda+\order{\bar\lambda^2}],\\
\Lambda_{2,0} &=2m^2\qty[1+2\bar\lambda+\order{\bar\lambda^2}],\\
\Lambda_{3,1} &=3m^2\qty[1+\frac{5N+22}{3(N+2)}\bar\lambda+\order{\bar\lambda^2}],
\end{align}
which reproduce (and generalize to arbitrary $D$ and $N$) the perturbative results of Ref.~\cite{Markkanen:2019kpv} for $D=4$ and $N=1$ (in that case, $\Lambda_{n,\ell}=\Lambda_n$).
\begin{figure}[t]
\centering
\includegraphics{largeN}
\caption{Left: the topology of diagrams contributing to the self-energy at NLO in the $1/N$ expansion. Right: the single bubble $\Pi_{12}$.}
\label{fig:largeN}
\end{figure}
The present perturbative calculations are controlled by the dimensionless expansion parameter $\bar\lambda\propto\lambda/m^4$ and are thus invalid in the zero mass limit as well as in the negative square mass case. These cases require a nonperturbative treatment, such as the $1/N$ expansion, studied in the QFT context in Ref.~\cite{Gautier:2015pca}, which we now describe in the present stochastic framework.
\subsection{The $1/N$ expansion}
\label{sec:largeN}
We closely follow Ref.~\cite{Gautier:2015pca} for the diagrammatic formulation of the $1/N$ expansion, which we adapt to the present (supersymmetric) theory. In particular, we separate the local and nonlocal contributions\footnote{As mentioned earlier, local contributions consist of all diagrams where the two external legs are attached to the same vertex. These give the constant, frequency-independent contribution $\sigma$ to the function $\gamma(\omega)$ in Eq.~\eqref{eq:gamma2general}.} to the self-energy $\Sigma$ and absorb the former into an effective square mass $M^2$, which satisfies the following exact gap equation
\begin{equation}
M^2 = m^2 + \sigma,
\label{eq:gap}
\end{equation}
where $\sigma$ is given by the diagram $(b)$ of Fig.~\ref{fig:perturbative}, but computed with the full propagator, namely,
\begin{equation}
\sigma = \frac{N+2}{3N} \frac\lambda2 \int\frac{\dd\omega}{2\pi} G_{11}(\omega) = \frac{N+2}{3N} \frac\lambda{4\gamma(0)} .
\label{}
\end{equation}
Here, we have used $G_{11}(\omega)=G_{\varphi\varphi}(\omega)$ together with Eqs.~\eqref{eq:Gretarded} and \eqref{eq:ggrel}.
In the spirit of the $1/N$ expansion, we write $M^2=M_0^2+\order{1/N}$. At LO, there are no nonlocal contributions to the self-energy and the propagator is simply given by a tree-level propagator $G^{M_0^2}_{12}$ with the LO effective mass $M_0$. In particular, we have $\gamma(0)=M_0^2+\order{1/N}$. The LO gap equation \eqref{eq:gap} is thus solved as
\begin{equation}
M_0^2 = \frac{m^2}2 + \sqrt{\frac{m^4}{4} + \frac{\lambda}{12}}.
\end{equation}
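Indeed, using $(N+2)/(3N)\to1/3$ at LO in the $1/N$ expansion together with $\gamma(0)=M_0^2$, the gap equation \eqref{eq:gap} reduces to the quadratic equation
\begin{equation}
M_0^4-m^2M_0^2-\frac\lambda{12}=0,
\end{equation}
of which the above expression is the positive root.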
\begin{figure}[t]
\centering
\includegraphics{sumbubbles}
\caption{ Top: diagrammatic representation of the function $\mathbb{I}_{12}$ which sums the infinite series of bubble diagrams. Bottom: The nonlocal contribution to the self-energy at NLO in the $1/N$ expansion. }
\label{fig:sumbubbles}
\end{figure}
To compute the NLO propagator, we first compute the nonlocal contributions to the self-energy $\Sigma$ at NLO in terms of the LO propagator $G^{M_0^2}_{12}$ (this automatically resums all LO local insertions on internal lines) and then we solve the implicit equation \eqref{eq:gap} for the local contributions at NLO.
The NLO nonlocal contributions $\Sigma^{\rm nonloc}$ are given by the infinite series of bubble diagrams with the topology depicted in Fig.~\ref{fig:largeN}$(g)$. Each one-loop bubble, corresponding to the diagram $(h)$, gives a contribution
\begin{equation}
\Pi_{12}(\omega) = -\frac\lambda6 \int\frac{\dd{\omega'}}{2\pi} G^{M_0^2}_{12}(\omega') G^{M_0^2}_{12}(\omega-\omega')
\label{eq:Pi}
\end{equation}
and the infinite series of bubbles is resummed by solving the integral equation
\begin{equation}
\mathbb{I}_{12}(\omega)=\Pi_{12}(\omega) + i \int_{3} \Pi_{13}(\omega) \mathbb{I}_{32}(\omega) .
\label{eq:integeq}
\end{equation}
The function $\mathbb{I}$ resums the infinite chain of bubble diagrams, as depicted in Fig.~\ref{fig:sumbubbles}, where it is represented as a wiggly line. In terms of the latter, the nonlocal contribution to the NLO self-energy is obtained as the diagram (i) of Fig.~\ref{fig:sumbubbles}, which gives the one-loop expression
\begin{equation}
\Sigma^{\rm nonloc}_{12}(\omega) = -\frac\lambda{3N} \int \frac{\dd{\omega'}}{2\pi} G^{M_0^2}_{12}(\omega') \mathbb{I}_{12}(\omega-\omega').
\label{}
\end{equation}
Again, we skip the details of the calculation, which can be found in Appendix~\ref{sec:largeN2}. The calculation of the one-loop bubble follows the same lines as that of diagram (a) above. It can be written as
\begin{equation}
\Pi_{12}(\omega) = -2\tilde\lambda M_0^2G^{2M_0^2}_{12}(\omega)
\label{}
\end{equation}
and we get, for the infinite series of bubbles,
\begin{equation}\label{eq:IIsol}
\mathbb{I}_{12}(\omega) = -2\tilde\lambda M_0^2G^{2 M_0^2 (1+\tilde{\lambda})}_{12}(\omega),
\end{equation}
where we defined the effective dimensionless coupling
\begin{equation}\label{eq:lambdatilde}
\tilde{\lambda} = \frac\lambda{12M_0^4},
\end{equation}
which is the large-$N$ analog of $\bar\lambda$ defined in Eq.~\eqref{eq:barlambda}. The nonlocal self-energy at NLO reads
\begin{equation}
\Sigma^{\rm nonloc}_{12}(\omega) = \frac{2M_0^4}N \frac{\tilde\lambda^2(3+2\tilde\lambda)}{1+\tilde\lambda} \,G^{M_0^2(3+2\tilde\lambda)}_{12}(\omega)
\label{}
\end{equation}
and has a structure similar to that of the two-loop nonlocal self-energy in the previous perturbative calculation, Eq.~\eqref{eq:selfenergy}.
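As a consistency check, expanding at small $\tilde\lambda$ and large $N$, the above expression reduces to
\begin{equation}
\Sigma^{\rm nonloc}_{12}(\omega)\approx\frac{\lambda^2}{24NM_0^4}\,G^{3M_0^2}_{12}(\omega),
\end{equation}
which indeed agrees with the large-$N$ limit of the two-loop result \eqref{eq:selfenergy}, with $m^2\to M_0^2$.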
We finally get, for the functions $\gamma(\omega)$ and $\eta(\omega)$,
\begin{align}
\label{eq:gammaNLO}
\gamma(\omega) &= M^2 - \frac{2M_0^4}N \frac{\tilde\lambda^2(3+2\tilde\lambda)}{1+\tilde\lambda} \frac{M_0^2 (3+2\tilde{\lambda})}{\omega^2 + M_0^4 (3+2\tilde{\lambda})^2} \\
\label{eq:etaNLO}
\eta(\omega) &= 1 + \frac{2M_0^4}N \frac{\tilde\lambda^2(3+2\tilde\lambda)}{1+\tilde\lambda} \frac1{\omega^2 + M_0^4 (3+2\tilde{\lambda})^2}
\end{align}
As in the previous case, the response function and the field correlator can be decomposed as a sum of two poles, see Eq.~\eqref{eq:propagFouriertwoloop}. At the present order of approximation, we get
\begin{align}
\Lambda_{1,1} &= M^2 \qty[ 1 - \frac{1}{N} \frac{\tilde\lambda^2(3+2\tilde\lambda)}{(1+\tilde{\lambda})^2} + \order{\frac1{N^2}}] \\
\Lambda_{3,1} &= M^2 \qty[ 3 + 2\tilde{\lambda} + \order{\frac1{N}}]
\label{}
\end{align}
and
\begin{align}
c^\varphi_1&=1-\frac{\tilde\lambda^2(3+2\tilde\lambda)}{2N(1+\tilde{\lambda})^3}+ \order{\frac1{N^2}}\\
c^\varphi_3 &= \frac{\tilde\lambda^2(3+2\tilde\lambda)}{2N(1+\tilde{\lambda})^3} + \order{\frac1{N^2}}
\label{eq:cpmN}
\end{align}
As in the previous perturbative calculation, the coefficient $c^\varphi_3$ being of order $1/N$, we only obtain the LO expression for $\Lambda_{3,1}$.
Let us now consider the $\ev{\chi\chi}$ correlator which, at LO, is simply given by the infinite chain of bubbles. Indeed, one easily shows (see Appendix \ref{sec:largeN2}) that
\begin{equation}
C_{12}(\omega) = - \frac{3}{\lambda N}\mathbb{I}_{12}(\omega).
\label{eq:phi2phi2I}
\end{equation}
From this, we get the (connected) correlator of the composite field $\chi = \varphi^2/(2N)$
\begin{equation}
G_{\chi\chi}(\omega)= \frac{1}{2 N M_0^2 }G_{2M_0^2(1+\tilde\lambda)}(\omega)+\order{\frac1{N^2}}
\label{eq:rhorho}
\end{equation}
and we deduce the LO expressions
\begin{align}
\Lambda_{2,0} &= 2 M_0^2(1+\tilde{\lambda}),\\
c^\chi_2&=\frac{1}{2NM_0^2},
\end{align}
We finally need to solve Eq.~\eqref{eq:gap} for the local contribution $M^2$ at NLO. To this aim, we use
\begin{equation}
\gamma(0)=M^2\left[1 - \frac{2}N \frac{\tilde\lambda^2}{ 1+\tilde{\lambda}}+\order{\frac1{N^2}}\right],
\end{equation}
from which we obtain
\begin{equation}
M^2 = M_0^2 \qty[1+\frac2N \frac{\tilde{\lambda} (1 + \tilde{\lambda} + \tilde{\lambda}^2)}{(1+\tilde{\lambda})^2} + \order{\frac1{N^2}}].
\label{eq:selfconsistentmassN}
\end{equation}
Collecting the previous results, we have, for the dynamical mass,
\begin{equation}
m_{\rm dyn}^2=M_0^2\left[1 + \frac{2}N \frac{\tilde{\lambda}}{ (1+\tilde{\lambda})^2}+ \order{\frac1{N^2}}\right]
\end{equation}
and for the lowest eigenvalues
\begin{align}
\label{eq:lambda1NLO}
\Lambda_{1,1} &= M_0^2 \qty[ 1 + \frac{1}{N}\frac{\tilde\lambda(2-\tilde\lambda)}{(1+\tilde{\lambda})^2} + \order{\frac1{N^2}}], \\
\label{eq:lambda2NLO}
\Lambda_{2,0} &= M_0^2\qty[2+2\tilde{\lambda} + \order{\frac1N}], \\
\label{eq:lambda3NLO}
\Lambda_{3,1} &= M_0^2 \qty[ 3 + 2\tilde{\lambda} + \order{\frac1{N}}].
\end{align}
As for the previous perturbative expressions, the above results exactly agree with those of the direct QFT calculations in Ref.~\cite{Gautier:2015pca}. In fact, the agreement concerns all the intermediate quantities $\Pi$, $\mathbb I$ and $\Sigma$, using the rescalings \eqref{eq:rescaling} of the parameters and
\begin{equation}
\hat G = \frac{d\Omega_{D+1}}2 G \qc \hat{\mathbb{I}} = \frac{\Omega_{D+1}}2 \mathbb{I}, \qand \hat \Sigma = \frac{\Omega_{D+1}}{2d} \Sigma
\label{}
\end{equation}
for the different two-point functions. The very same results have also been recently obtained from a QFT calculation in Euclidean de Sitter in Ref.~\cite{LopezNacir:2019ord}. That such very different calculations agree is a nontrivial result. Such an agreement between the stochastic approach and direct QFT calculations on either Lorentzian or Euclidean de Sitter was already well-known for equal-time correlators, {\it e.g.}, $\ev{\varphi^n}$, which measure the local field fluctuations \cite{Tsamis:2005hd,Rajaraman:2010xd,Beneke:2012kn}. Although expected on the basis of general arguments \cite{Tsamis:2005hd,Garbrecht:2013coa,Garbrecht:2014dca}, the agreement mentioned here for unequal-time (nonlocal) correlators is far less trivial, in particular, for nonperturbative approximation schemes, and the present results, together with those of Refs.~\cite{Gautier:2015pca} and \cite{LopezNacir:2019ord} provide an explicit nontrivial check.
\subsection{Discussion}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{lambda}
\caption{Effective coupling $\tilde \lambda$ as a function of the bare squared mass $m^2$. The bare coupling is taken as $\lambda=1$. The coupling becomes strongly non perturbative for small and negative values of $m^2$.}
\label{fig:lambda}
\end{figure}
We now discuss the results we obtained for the eigenvalues and associated correlators in several regimes. First, we check that the expressions we have for the $\Lambda_{n,\ell}$ and $c^{\varphi,\chi}_n$ coincide in the limit where we take both $N$ large and the coupling $\bar \lambda$ small. In this regime, introducing $\bar\lambda_\infty=\lim_{N\to\infty}\bar\lambda=\lambda/(12m^4)$, we have $M_0^2=m^2[1+\bar\lambda_\infty-\bar\lambda_\infty^2+\order{\bar\lambda_\infty^3}]$ and $\tilde\lambda=\bar\lambda_\infty+\order{\bar\lambda_\infty^2}$, thus the two effective couplings coincide. For example, it is easy to check that Eq.~\eqref{eq:lambda1threeloop} is consistent with Eq.~\eqref{eq:lambda1NLO}, the two combining to give
\begin{align}
\frac{\Lambda_{1,1}}{m^2}&= 1 + \bar \lambda_\infty - \bar \lambda_\infty^2 + 2\bar \lambda_\infty^3+ \frac{2 \bar \lambda_\infty - 7 \bar \lambda_\infty^2 + 27 \bar \lambda_\infty^3}{N} \nonumber\\
&+ \order{\bar\lambda_\infty^4,\frac1{N^2}}.
\label{}
\end{align}
The $1/N$ expansion allows the study of the nonperturbative regime in $\bar\lambda$, which corresponds to either small or negative $m^2$ \cite{Gautier:2015pca}. This is illustrated in Fig.~\ref{fig:lambda}, where we show the effective coupling $\tilde\lambda$ as a function of $m^2$ for fixed coupling $\lambda$. The effective coupling is of order one in the small mass regime $m^2=0$ and becomes large for $m^2<0$. Let us discuss the leading-order eigenvalues \eqref{eq:lambda1NLO}--\eqref{eq:lambda3NLO} in these regimes. The latter rewrite, in terms of the parameters $m^2$ and $\lambda$,
\begin{align}
\Lambda_{1,1} &= \frac{m^2}2 + \sqrt{ \frac{m^4}4 + \frac\lambda{12} } , \\
\Lambda_{2,0} &= 4 \sqrt{ \frac{m^4}4 + \frac\lambda{12} } , \\
\Lambda_{3,1} &= \frac{m^2}2 + 5\sqrt{ \frac{m^4}4 + \frac\lambda{12} } ,
\label{}
\end{align}
and are plotted as functions of $m^2$ for $\lambda=1$ in Fig.~\ref{fig:eigenvalues}.
In the small mass regime, we have
\begin{align}
\Lambda_{1,1} = \sqrt{\lambda/12}\,,\quad\Lambda_{2,0} = 4\Lambda_{1,1}\,,\quad\Lambda_{3,1} = 5\Lambda_{1,1},
\end{align}
where we see that all eigenvalues are of the same order, so that all the correlators computed here have relatively large autocorrelation times, in particular, in the case of small coupling $\lambda\ll1$. This reflects the fact that the potential is very flat in that case.
Instead, in the case of a steep symmetry breaking tree-level potential, with $m^2<0$ and $\lambda/m^4\ll1$, we have
\begin{align}
\label{eq:lambda1m2neg}
\Lambda_{1,1} &= \lambda/(12|m^2|) \\
\label{eq:lambda23m2neg}
\Lambda_{2,0} &= \Lambda_{3,1}=2 |m^2|\gg\Lambda_{1,1} .
\end{align}
The presence of a small ($\Lambda_{1,1}$) and a large ($\Lambda_{3,1}$) eigenvalue in the correlator of the vector field $\varphi$ reflects the existence of a flat (Goldstone mode) and a steep (Higgs mode) direction in the tree-level potential.\footnote{The absence of a true Goldstone mode in the actual spectrum, $\Lambda_{1,1}\neq0$, is due to the effective symmetry restoration by the infrared modes \cite{Ford:1985qh,Ratra:1984yq,Serreau:2011fu,Serreau:2013eoa}.} The eigenvalue $\Lambda_{2,0}$ is, again, the longitudinal Higgs mode, the only one which contributes to the correlator of the scalar field $\chi$.
Interestingly, the present large-$N$ results share similarities with analytical results in the case $N=1$ in the limit of a steep double-well potential \cite{Starobinsky:1994bd}. When the two minima are far apart, the situation can be described as a superposition of two single-well spectra, with tunneling yielding exponentially small splittings of the energy levels. Because the ground (equilibrium) state has $\Lambda_0=0$, this results in an exponentially suppressed, instanton-like value of $\Lambda_1\propto\exp(-a/\lambda)$, with $a$ a constant. Higher eigenvalues are essentially those of the unperturbed separate Gaussian wells of square mass $2|m^2|$, that is, $\Lambda_{2n}\approx\Lambda_{2n+1}\approx 2 n |m^2|$. Not surprisingly, our result \eqref{eq:lambda23m2neg} for $\Lambda_{2,0}$ and $\Lambda_{3,1}$ agrees with the case $N=1$ in the limit of a steep symmetry breaking potential since these correspond to the heavy longitudinal mode with tree-level square mass $2|m^2|$. The main difference in the case of a continuous symmetry $N>1$ is the presence of flat directions (Goldstone modes) in the potential, which result in a milder, power-law suppression for $\Lambda_{1,1}$. In fact, we observe that the $1/N$ expansion becomes singular for the latter when $N\to1$. In the limit of a steep symmetry breaking potential $\tilde\lambda\gg1$, we have
\begin{equation}
\Lambda_{1,1} = \frac\lambda{12\abs{m^2}}\qty[1-\frac1N + \order{\frac1{N^2}}],
\label{}
\end{equation}
and similarly for the coefficient $c^\varphi_1=1-1/N + \order{1/N^2}$.
In terms of correlation functions, this implies that in the case $m^2<0$ the correlation time in the vector channel ($\varphi$) is considerably larger than in the scalar channel ($\chi$), which does not see the flat transverse direction. It is to be expected that such large correlation time also occur for composite fields in higher representations (tensor channels). These correlation times are related to other quantities of physical interest, such as the relaxation (or equilibration) times from an excited state to the BD vacuum, decoherence time scales \cite{Giraud:2009tn,Gautier:2012vh}, or, closest to standard phenomenological interest, to the spectral index of various observables \cite{Starobinsky:1994bd,Markkanen:2019kpv}. Exploiting de Sitter invariance, the spectral index of a given field ${\cal A}$ can be read off the decomposition \eqref{eq:AAdecomp} as $n_{\cal A}-1 = 2 \Lambda_{\cal A}$, with $\Lambda_{\cal A}$ the lowest eigenvalue contributing to the sum \eqref{eq:AAdecomp}. For instance, the spectral index of the field $\varphi$ is given by $n_\varphi-1=2\Lambda_{1,1}$. Similarly, $\Lambda_{2,0}$ is related to the spectral index of the field $\chi$ or of other typical O($N$) scalar quantity. For instance, as discussed in Ref.~\cite{Markkanen:2019kpv}, the density contrast $\delta= (V - \ev{V})/\!\ev{V}$ has a spectral index $n_\delta-1=2\Lambda_{2,0}$.
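As a simple illustration, for a massless field at LO in the $1/N$ expansion, the above results give, in units $H=1$ and in terms of the rescaled coupling \eqref{eq:rescaling},
\begin{equation}
n_\varphi-1=2\sqrt{\lambda/12}\qand n_\delta-1=8\sqrt{\lambda/12},
\end{equation}
so that all these spectra are close to scale invariance for $\lambda\ll1$.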
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{eigenvalues}
\caption{The eigenvalues $\Lambda_{1,1}$, $\Lambda_{2,0}$ and $\Lambda_{3,1}$ at leading order in the $1/N$ expansion as functions of the tree-level square mass $m^2$, for $\lambda=1$.}
\label{fig:eigenvalues}
\end{figure}
Finally, we mention an interesting role played by the eigenvalue $\Lambda_{3,1}$ based on the discussion in Sec.~\ref{sec:noisecorrelator}. According to Eqs.~\eqref{eq:etatwoloop} and \eqref{eq:etaNLO}, we have
\begin{equation}
\eta(\omega) = 1 + 2c \frac{\Lambda_{1,1}\Lambda_{3,1}}{\omega^2 + \Lambda_{3,1}^2}
\end{equation}
both in the perturbative and in the $1/N$ expansions, with
\begin{equation}
c=\frac{\bar\lambda^2}{N+2} +\order{\bar\lambda^3}=\frac{1}N \frac{\tilde\lambda^2}{1+\tilde\lambda} +{\cal O}\qty(N^{-2}).
\end{equation}
In real time, this gives
\begin{equation}
\eta(t) = \delta(t) + c\Lambda_{1,1}e^{-\Lambda_{3,1}|t|}.
\end{equation}
With Eq.~\eqref{eq:noiseandsigma}, we see that $\Lambda_{3,1}^{-1}$ is the autocorrelation time of the effective colored noise in the vector channel due to the infrared modes while $c\Lambda_{1,1}$ controls the amplitude of the colored contribution. In the perturbative regime $m^2>0$, the autocorrelation time is small $\sim 1/m^2$ with small amplitude $\sim\lambda^2/m^2$. However, the autocorrelation time can be either parametrically large $\sim1/\sqrt\lambda$ with a small amplitude $\sim\sqrt\lambda$ for $m^2=0$, or small $\sim 1/|m^2|$ with ``large'' amplitude $\sim1$ for $m^2<0$.
We close this Section by comparing the expressions of $\Lambda_{1,1}$ at leading and next-to-leading orders in the $1/N$ expansion as a function of the parameters of the theory for the extreme case $N=2$ in Fig.~\ref{fig:LOvsNLO}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{LOvsNLO}
\caption{The lowest nonzero eigenvalue $\Lambda_{1,1}$ at leading and next-to-leading orders in the $1/N$ expansion as a function of $m^2$ with $\lambda=1$, and for $N=2$. }
\label{fig:LOvsNLO}
\end{figure}
\section{Conclusions}
\label{sec:Concl}
To conclude, we used the JdD path integral formulation of the stochastic equation that describes the infrared dynamics of an $O(N)$ theory of test scalar fields to study the two-point unequal-time correlators of various operators. The resulting field theory is a one-dimensional supersymmetric theory with $N$ scalar superfields. This supersymmetry is a mere consequence of the symmetries of the original system in the stationary state, and was used to show that the correlators of the various fields are not independent. One of the obtained relations can be interpreted as a fluctuation-dissipation relation, showing the analogy of the system with a Brownian motion with a thermal noise at the de Sitter temperature \cite{Rigopoulos:2016oko}.
Having in mind the result of the Fokker-Planck formulation, which is usually solved as an eigenvalue problem \cite{Starobinsky:1994bd,Markkanen:2019kpv}, we then discussed the general structure of the unequal-time two-point correlator of composite operators of the scalar field. It can be expressed as a sum of free propagators with a hierarchy of mass scales, which corresponds to a subset of the tower of eigenvalues previously mentioned. We have computed explicitly the $\ev{\varphi\varphi}$ and $\ev{\chi\chi}$ correlators in two specific limits, the perturbative case up to three-loop order and the $1/N$ expansion at NLO, and we have obtained the values of the first three eigenvalues in both cases. We have checked that our results coincide with other computations, done either in the Lorentzian \cite{Gautier:2013aoa,Gautier:2015pca} or Euclidean \cite{LopezNacir:2019ord} field theory.
The result from the $1/N$ expansion is particularly interesting as it allows us to probe nonperturbative regimes. Such regimes correspond to the massless and symmetry breaking cases, the latter being difficult to probe numerically. In the limit of a deeply broken initial potential, we find that the lowest nonzero eigenvalue is strongly suppressed with respect to higher order ones. This has direct physical consequences, {\it e.g.}, in terms of equilibration times or power spectra of fields in the different representations of the O($N$) group. For instance, the vector channel $\ell=1$ has typically long range spacetime correlations whereas the scalar channel $\ell=0$ is typically (sometimes significantly) of shorter range.
There are several directions to extend the present analysis. First, we used the computation of correlators to access the mass scale hierarchy. When combined with our expansion schemes, this only gives the first eigenvalues due to the coefficients appearing in the sums of free propagators of the correlators. Alternative formulations, directly at the level of the Fokker-Planck equation, may be able to grasp the full hierarchy directly.
On a more speculative level, this work is limited to test scalar fields, and an important question would be to extend the present considerations to a more realistic inflationary setup.
\section*{Acknowledgements}
We are grateful to T. Prokopec, G. Rigopoulos, and V. Vennin for useful discussions at the various stages of this work.
\section{Introduction}
The angular size-redshift (theta-z) relation for a cosmological
population of standard rods is a powerful probe of the large-scale
geometry of the Universe. However, previous attempts to measure
the cosmological parameters from the angular size-redshift
relation of double radio-lobed radio sources have been marred by a
variety of selection effects, destroying the integrity of the data
sets, and by inconsistencies in the analysis, undermining the
results and leading to data consistent with a static Euclidean
Universe rather than with standard Friedmann models
(e.g. Wardle \& Miley 1974, Barthel \& Miley 1988, Nilsson et al. 1993).
Interpretation of this observation has caused disagreements among
various authors.
\begin{figure}[h]
\caption{The angular size - redshift relation for deprojected rods
of length 100 h$_{0}^{-1}$ kpc for different cosmologies. The
choice of the cosmological parameters for the three Friedmann
models are listed on the figure. The curve for a static Euclidean
universe is shown for comparison. In practice, the curves actually
define upper limits to the observed angular sizes, since
projection effects will scatter the observed sizes downward. Note
the presence of the minimum near z$\sim$1.5.}\label{FRWe}
\includegraphics[height=.35\textheight]{FRWexactmodel.ps}
\end{figure}
\section{The sample and selection criteria}
The VLA FIRST survey (Becker et al. 1995) has mapped $\sim$9000
deg$^{2}$ of the sky at 1.4 GHz to a sensitivity of $\sim$1 mJy
with 5\arcsecpoint4 FWHM Gaussian beam and has cataloged 111,115
sources with subarcsecond positional accuracy. We cross-correlated
this survey with a sample of 51,516 quasars compiled from the
optical Sloan Digital Sky Survey Data Release 3 (DR3) quasar list
and the list of quasars from the 2dF survey. By using a lower
detection limit of 3$\sigma$ for the Poisson probability
distribution of finding companions of the quasars within an area
of 5$\raise .9ex \hbox{\littleprime}$ radius from the quasar position, we defined an
initial sample of double-lobed radio sources with their cores
centered on the quasar positions. We then followed a series of
carefully selected criteria, and specifically we: a) included only
radio sources, FRIIs, with symmetric and collinear triple
structure (core + two lobes), thereby minimizing asymmetrical
effects that might distort the apparent angular size, such as
relative motion with respect to the IGM, b) further restricted the
sample to sources with redshift z$>$0.3 beyond which quasars begin
to dominate and carry more information about the cosmology. In this way we
bypass the problem of different mean orientations or power-size
correlations which produce non-cosmological effects in the
theta-z plane. Our final clean sample consists of 389 FRIIs, which
is the largest homogeneous population of double-lobed sources to
date.
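The chance-coincidence probability underlying the 3$\sigma$ cut above can be estimated, assuming a uniform surface density $\Sigma$ of cataloged radio sources, from Poisson statistics: the probability of finding at least one unrelated companion within a radius $r$ of a quasar is
\begin{equation}
P(\geq\!1;\,r) = 1 - e^{-\mu}, \qquad \mu=\pi r^{2}\Sigma,
\end{equation}
so that genuine physical associations are selected by requiring configurations that are highly improbable as chance alignments.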
\section{The angular size - redshift relationship}
Fig.~\ref{FRWe} illustrates the $\theta - z$ relationship for
deprojected rods ($\phi$ = 90$^\circ$) with an intrinsic size of
l=100 h$_{0}^{-1}$ kpc for three Friedmann cosmologies: a) an
Einstein-de Sitter universe, b) a flat universe, and c) a
nonclosed, matter-dominated universe. The particular values of
$\Omega_{0}$ and $\Omega_{\Lambda}$ are listed in the figure. The
curve for a static Euclidean universe, with $\theta \propto
z^{-1}$ is also shown for comparison. The location of the minimum
in the angular size (typically between z=1 and z=2) depends on
$\Omega_{0}$ and $\Omega_{\Lambda}$.
\noindent Fig.~\ref{thetaz} shows a scatter plot of the $\theta -
z$ data. The errors in the measured values of $\theta$ are
typically $\sim$1$''$, far less than the scatter in the angular
sizes at any redshift. With a 5.4$''$ FWHM beam, the FIRST
survey can detect extended structure down to 2$''$. However,
due to the survey resolution limit, uncertainties in the quasar
optical positions, and variations in the morphologies of double-lobed
sources, and after inspection of numerous FIRST radio maps, we also
introduce an effective cutoff in the data of 12$''$, shown in
Fig.~\ref{thetaz}, following Buchalter et al. (1998), below which
an accurate morphological classification could not be assigned
with certainty. Such a cutoff ensures that we do not include the
so-called core-jet, diffuse, cometary and other types of extended
radio sources that may be mistaken for double-lobed objects at low
resolution.
\begin{figure}[h]
\begin{minipage}[t]{7.4cm}
\includegraphics[width=1\textwidth]{histogram_0.2bin.ps}
\end{minipage}
\hfill
\begin{minipage}[t]{7.4cm}
\includegraphics[width=1\textwidth]{theta_z_log.ps}
\caption {(Left:) The distribution of redshift in our sample. The
sample includes sources out to a redshift of 2.94 significantly
higher than the redshifts at which the minima in the theta-z
curves typically occur for different Friedmann models, and in
contrast to previous work which included samples that contained
significant numbers of sources with z$<$1 where roughly Euclidean
behavior is expected. The median redshift of our sample is 1.074
while that of the FIRST survey is $\sim$1, which indicates that
the two populations have similar distributions; therefore our
sample selected using the optical SDSS survey is fairly
representative of double-lobed radio sources as a whole. (Right:)
Scatter plot of the peak-to-peak angular sizes vs redshift. The
dashed line represents the effective resolution limit at
12$''$, below which accurate morphological classifications
could not be determined. The errors in the measured values of the
angular size are typically $\sim$1$''$, far less than the
scatter in the angular sizes at any redshift.} \label{thetaz}
\end{minipage}
\hfill
\end{figure}
\section{Standard and Nonstandard cosmological models}
For graphical purposes, we bin the data in redshift, in equal
numbers per bin, and calculate the mean and median values of
$\theta$, together with the standard errors of the means and the median
absolute deviations for each bin. The results are shown in
Fig.~\ref{model12} and Fig.~\ref{model34}, for the median values
of $\theta$, along with the curves from Fig.~\ref{FRWe}, whose
amplitudes (corresponding to the median intrinsic sizes) have
been scaled to provide a rough visual fit. Apart from the
Friedmann models we have also experimented with the Steady State
and two nonstandard cosmology models, the Tired-Light and the
Gauge models, the latter shown in Fig.~\ref{model34}. The most
striking feature of the data is that, regardless of the binning
details, the observed data seem to be more consistent with
Friedmann models than with a Euclidean model. The Friedmann curves
shown are not the best-fit results but are merely intended for
qualitative reference.
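The binning just described can be sketched in a few lines (our own illustration; the synthetic arrays merely stand in for the actual catalog):
\begin{verbatim}
# Minimal sketch of the redshift binning: equal counts per bin,
# median theta and median absolute deviation (MAD) in each bin.
import numpy as np

def bin_theta_z(z, theta, n_bins):
    order = np.argsort(z)
    z, theta = z[order], theta[order]
    rows = []
    for cz, ct in zip(np.array_split(z, n_bins), np.array_split(theta, n_bins)):
        med = np.median(ct)
        rows.append((np.median(cz), med, np.median(np.abs(ct - med))))
    return rows

rng = np.random.default_rng(0)
z = rng.uniform(0.3, 2.94, 389)                       # 389 FRIIs
theta = 40.0/(1.0 + z)*rng.uniform(0.3, 1.0, 389)     # fake sizes, arcsec
for zb, med, mad in bin_theta_z(z, theta, int(round(np.sqrt(389)))):
    print(f"z ~ {zb:.2f}: median theta = {med:5.1f} arcsec, MAD = {mad:4.1f}")
\end{verbatim}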
\begin{figure}[h]
\begin{minipage}[t]{7.4cm}
\includegraphics[width=1\textwidth]{FRWm1.ps}
\end{minipage}
\hfill
\begin{minipage}[t]{7.4cm}
\includegraphics[width=1\textwidth]{FRWm2.ps}
\caption{Median angular sizes, binned in redshift (to compensate
for projection effects), with roughly equal numbers per bin
($\sim\sqrt{N}$, where $N$ is the number of sources in the sample). The
Friedmann curves, corresponding to models 1 and 2 from
Fig.~\ref{FRWe}, have been scaled to provide a rough visual fit.
The error bars represent the median absolute deviation in each
bin.} \label{model12}
\end{minipage}
\hfill
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{7.4cm}
\includegraphics[width=1\textwidth]{FRWm3.ps}
\end{minipage}
\hfill
\begin{minipage}[t]{7.4cm}
\includegraphics[width=1\textwidth]{gauge.ps}
\caption {(Left:) Median angular sizes, binned in redshift (to
compensate for projection effects), with roughly equal numbers per
bin ($\sim\sqrt{N}$, where $N$ is the number of sources in the sample).
The Friedmann curve, corresponding to model 3 from
Fig.~\ref{FRWe}, has been scaled to provide a rough visual fit.
The error bars represent the median absolute deviation in each
bin. (Right:) The Gauge curve has been scaled to provide a
qualitative reference. In the Gauge model, as $z$ tends to
infinity the angular size tends to a constant, as in the
Friedmann model with $q_{0}=0$, but more rapidly.} \label{model34}
\end{minipage}
\hfill
\end{figure}
\section{Discussion and Future work}
Before attempting to find the best-fit cosmological parameters,
and determine whether we can distinguish with high significance
between the different models with the present sample, we need to
explore the relationships between the intrinsic properties ($P$, the
intrinsic power; $l$, the projected linear size; and the redshift $z$) of the
sources in our sample using both parametric and non-parametric
analysis. Such correlations have important implications in
determining the parameters. Therefore our next steps are to: a)
Optimize the analysis by defining and exploring a parameter space
and use an iterative procedure that will lead to the best model
fit parameters, b) Apply a chi-square goodness-of-fit statistic to
determine the best fit values for the free parameters in each
model and, c) Examine the values of the Hubble constant implied by
the data.
\begin{theacknowledgments}
EX's work was performed under the auspices of the U.S. Department
of Energy, National Nuclear Security Administration by the
University of California, Lawrence Livermore National Laboratory
under contract No. W-7405-Eng-48 and she also acknowledges support
from the National Science Foundation (grant AST 00-98355).
\end{theacknowledgments}
|
1,108,101,564,404 | arxiv | \section{Introduction}
One of the most difficult challenges in theoretical physics is to find the ultimate quantum gravity theory describing the Universe at high energies. To date, many consistent aspects of a quantum cosmology theory have appeared in the literature, with the most promising being the Loop Quantum Cosmology theory \cite{LQC1,LQC3,LQC4,LQC5,LQC6,LQC7,LQC9,LQC10,LQC11,LQC12,LQC13,LQC14,LQC15}. However, even Loop Quantum Cosmology cannot be considered a complete quantum gravity theory describing the Universe at high energies, since many theoretical questions still need to be addressed.
The complete quantum gravity theory will describe gravitational and particle phenomena at high energies, and the high energy regime needs to modify in some way the background spacetime in which the interaction takes place. An interesting gravitational solution that describes a massless particle moving with high energy was found some time ago by Aichelburg and Sexl and later developed by Dray and 't Hooft \cite{gsw1,gsw2}. This high energy gravitational solution was called a gravitational shock wave, and this solution actually describes the spacetime around a particle whose energy is dominated by kinetic energy rather than rest mass, and hence it is effectively massless. The gravitational shock wave can be considered a promising way to reveal quantum gravitational phenomena, since this solution distorts the background spacetime of a massless particle with energy near or higher than the Planck scale. Various aspects of gravitational shock waves have been studied in the literature after the seminal work of Aichelburg and Sexl \cite{gsw1} and Dray and 't Hooft \cite{gsw2}; for an incomplete list see for example \cite{gsw3,gsw4,gsw5,gsw6,gsw7,gsw8,gsw9,gsw10,gsw11,gsw12,gsw13,gsw14,gsw15,gsw16,gsw17,gsw18}. In our opinion, the feature that a highly energetic particle actually deforms the background spacetime is very appealing, and can potentially have applications in heavy ion collisions, see for example Ref. \cite{gsw9,gsw10}, or at cosmological scales, see for example Ref. \cite{gsw11}.
With the gravitational shock wave being a very simple but potentially interesting doorway towards the quantum aspects of gravity, in this paper we shall calculate the gravitational shock wave solutions of various viable higher order gravities \cite{highord1,highord2,highord3,highord4,highord5}, focusing on cosmologically viable $F(R)$ gravities, but also on other realistic gravities (see \cite{reviews1,reviews1a,reviews2,reviews3,reviews4} for reviews on higher order gravities and modified gravity). We shall be interested mainly in theories in which the gravitational action contains functions of the form $F(R,\Psi,\Omega)$, with $R$ being the Ricci scalar, while $\Psi$ is the Ricci tensor squared scalar, that is $\Psi=R_{\mu \nu}R^{\mu \nu}$, and $\Omega$ is the Kretschmann scalar, that is, $\Omega=R_{\mu \nu \alpha \beta}R^{\mu \nu \alpha \beta}$. We will study the gravitational shock wave solutions for some classes of these higher order gravities, and as we will see, most of the solutions are similar to the general relativistic solution, and only in two classes of models do non-trivial solutions occur which differ from the general relativistic case. Similar work on a specific class of higher order gravity was performed some time ago, see for example \cite{gsw14,gsw15}. Also we shall discuss another higher order gravity class of models, containing the Gauss-Bonnet scalar $\mathcal{G}$ \cite{gaussb1,gaussb2,gaussb3,gaussb4}.
The motivation for studying gravitational shock wave solutions in the context of higher order gravities comes mainly from the recent observational confirmation of gravitational waves from the LIGO collaboration \cite{LIGO}. Particularly, it is known that the linearization of higher order gravities leads to extra polarization modes of gravitational waves, among which are massive spin-0 and spin-2 modes \cite{highord1}, which are ghost modes. The possibility of detecting some of these polarization modes of a stochastic gravitational wave renders the study of higher order gravities very timely in all contexts, since this would be a direct indication that the standard Einstein-Hilbert gravity should be extended \cite{ref11,ref12}. However, the gravitational shock wave is not a usual gravitational wave, since the shock wave accompanies a highly energetic moving particle; it is thus not generated by extreme gravitational processes, but pertains more to the quantum aspects of gravity. Apart from the above reasoning, the reason for studying higher order gravities comes from the fact that many paradigms coming from cosmology and quantum field theory suggest that the standard Einstein-Hilbert gravity has to be extended, for example the early and late-time acceleration eras of the expanding Universe and the physics of extreme gravitational phenomena. With our study we aim to bridge two different kinds of theories, which may give some insights into the quantum aspects of gravity; in particular we will study the gravitational shock waves generated by classical higher order gravities. Thus in some sense, even though we do not actually make use of any quantum gravity assumption, we study a quantum gravitational phenomenon in the context of classical gravitational theories, with the quantum character being justified by the assumption of a highly energetic particle.
The first class of higher order theories of gravity we shall study is $F(R)$ gravity, and we focus on various viable models of $F(R)$ gravity which satisfy both local astrophysical constraints and global constraints \cite{reviews1,reviews1a,reviews2,reviews3,reviews4}. Particularly, we focus on several well-known viable models, and specifically we study an exponential model of $F(R)$ gravity which unifies the late and early-time acceleration eras, first studied in \cite{frexponential}, and a variant form studied in \cite{oikofr}. In addition, we discuss the Hu-Sawicki model \cite{frhu}, the Appleby-Battye model \cite{frbattye}, and the Starobinsky model \cite{frstarob}. As we demonstrate, all these viable models have very similar gravitational shock wave solutions, which are rescaled forms of the Einstein-Hilbert solution.
After discussing the $F(R)$ cases, we proceed to discuss various higher order gravities of the form $F(R,\Psi,\Omega)$. As we show, most of the solutions are similar to the general relativistic case, and only two cases yield non-trivial results. The similarity of the resulting solutions is probably due to the specific form of the gravitational shock wave solution, and we discuss this issue in some detail. Finally, we briefly discuss the case of a specific Gauss-Bonnet gravity. A general comment is that due to the form of the resulting Einstein equations, the form of the $F(R,\Psi,\Omega)$ function is very much restricted, since it has to obey $F(0,0,0)=0$, as we show. Thus in some cases, unless a cosmological constant is included, the modified gravity is trivial.
This paper is organized as follows: In section II we present the essential features of the gravitational shock wave solution corresponding to the Einstein-Hilbert gravity. In section III we study many types of higher order gravity, starting from $F(R)$ gravity. We discuss the quantitative features of the gravitational shock wave solution corresponding to various viable $F(R)$ cosmological models. In addition, we study the higher order gravities of the form $F(R,\Psi)$ and we present the differences and similarities among the solutions we found. At the end of section III, we study in brief various realistic Gauss-Bonnet gravities of the form $R+F(\mathcal{G})$, and we compare the various gravitational shock waves we found. The conclusions follow at the end of the paper.
\section{Essential Features of Gravitational Shock Waves from Einstein-Hilbert Gravity}
In this section we describe the gravitational shock wave solution in the context of standard Einstein-Hilbert gravity. The gravitational shock wave solution was firstly discovered by Aichelburg and Sexl in \cite{gsw1}, where it was shown that the gravitational field of a massless highly energetic particle propagating in Minkowski space is a gravitational impulsive wave, which is also an asymmetric plane fronted gravitational wave. The spacetime metric is equal to,
\begin{equation}\label{specificmetric}
\mathrm{d}s^2=-\mathrm{d}u\mathrm{d}v+H(u,x,y)\mathrm{d}u^2+\mathrm{d}x^2+\mathrm{d}y^2\, ,
\end{equation}
with the coordinates $u$ and $v$ being equal to $u=t-z$ and $v=t+z$. For similar interesting metrics see \cite{hervik}. Also we shall assume that the function $H(u,x,y)$ is equal to $H(u,x,y)=G(x,y)\, \delta (u)$, so practically the gravitational shock wave is located at $u=0$ and the particle that generates this solution has momentum $p$ and moves along the $v$ direction, with the gravitational shock wave accompanying the particle. The geometric quantities corresponding to the metric (\ref{specificmetric}) are given in detail in the Appendix. The gravitational shock waves have the characteristic effect of a discontinuity $\Delta v$ at $u=0$, and a refraction effect also takes place, see for example \cite{gsw2} for details. The profile function $H(u,x,y)$ depends crucially on the source of the wave, which we will assume to be a massless particle with momentum $p$, and the only non-zero component of the energy momentum tensor is $T_{uu}=p\,\delta(x,y) \delta (u)$. The Einstein-Hilbert equations are of course $G_{\mu \nu}=8\pi G T_{\mu \nu}$, so by using the non-vanishing components of the Ricci tensor, and the fact that the Ricci scalar $R$ is zero for the metric (\ref{specificmetric}), the resulting Einstein equations are,
\begin{equation}\label{einteincaseequation}
\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )G_{EH}(x,y)=-16\pi G p\,\delta(x,y)\, ,
\end{equation}
with the solution being the well known result \cite{gsw1,gsw2},
\begin{equation}\label{einsteinsolution}
G_{EH}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)\, ,
\end{equation}
with $r_0$ being an integration constant having the same dimensions as $r$, and $r$ is the radial coordinate in the $(x,y)$ plane, that is $r=\sqrt{x^2+y^2}$. Note that we used the notation $G_{EH}(x,y)$ for the Einstein-Hilbert profile function solution to distinguish it from the other cases we study later. Notice that the profile function, and therefore the gravitational wave, is singular at the origin $(x,y)=(0,0)$, an issue which we shall discuss in the next sections, where we compare the resulting higher order profile functions to the Einstein-Hilbert profile (\ref{einsteinsolution}). By looking at the metric of the gravitational shock wave (\ref{specificmetric}) it can be seen that it is relatively simple, and it is intriguing to see how this metric is modified when the theory is a higher order effective theory of gravity. In this way we will grasp the quantum effects of a highly energetic moving massless particle in flat space, even though we use a classical effective gravitational theory. The qualitative features of such a theory can provide us with useful information about the quantum aspects of classical gravitational extensions of Einstein-Hilbert gravity. In the next section we shall investigate how the profile function $G(x,y)$ is modified in the case of higher order gravity theories, using the same assumption of a massless point-like spin-less source propagating in Minkowski spacetime.
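The normalization of the solution (\ref{einsteinsolution}) is fixed by Gauss's theorem in two dimensions and is easy to verify numerically; the sketch below (ours, not part of the original derivation) checks that the profile is harmonic away from the origin and that the flux of its gradient through a circle reproduces the $-16\pi G p$ source term:
\begin{verbatim}
# Checks on G(x,y) = -4 G p ln(r^2/r0^2): harmonic for r > 0, and the
# flux of grad(G) through any circle equals -16 pi G p.
import numpy as np

Gp = 1.0  # the combination G*p, set to 1 for the check

def profile(x, y, r0=1.0):
    return -4.0*Gp*np.log((x**2 + y**2)/r0**2)

# (i) five-point Laplacian away from the origin
h, x0, y0 = 1e-4, 0.7, -0.4
lap = (profile(x0+h, y0) + profile(x0-h, y0) + profile(x0, y0+h)
       + profile(x0, y0-h) - 4.0*profile(x0, y0))/h**2
print(f"Laplacian at (0.7,-0.4): {lap:.3e}  (should be ~0)")

# (ii) flux of grad(G) through a circle of radius R
R, n, eps = 2.5, 4000, 1e-6
phi = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
x, y = R*np.cos(phi), R*np.sin(phi)
d_r = (profile(x*(1+eps/R), y*(1+eps/R))
       - profile(x*(1-eps/R), y*(1-eps/R)))/(2.0*eps)
flux = d_r.sum()*(2.0*np.pi*R/n)
print(f"flux = {flux:.4f}, expected -16*pi*G*p = {-16.0*np.pi*Gp:.4f}")
\end{verbatim}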
\section{Gravitational Shock Waves from Higher Order Gravities}
In this section we study the gravitational shock wave solutions for various higher order gravity theories with gravitational action,
\begin{equation}\label{actionhigherorder}
\mathcal{S}=\int \mathrm{d}^4x\sqrt{-g}F(R,\Psi,\Omega)+\mathcal{S}_m\, ,
\end{equation}
where $R$ is the Ricci scalar, and $\Psi$ and $\Omega$ are the Ricci tensor squared scalar, that is $\Psi=R_{\mu \nu}R^{\mu \nu}$, and the Kretschmann scalar, that is, $\Omega=R_{\mu \nu \alpha \beta}R^{\mu \nu \alpha \beta}$, respectively, and also $\mathcal{S}_m$ is the matter action containing the sources of the gravitational field, which corresponds to an energy momentum tensor $T_{\mu \nu}$. In the context of the metric formalism, by varying the action (\ref{actionhigherorder}) with respect to the metric tensor $g_{\mu \nu}$, we obtain the following equations of motion \cite{highord1,highord2,highord3,highord4,highord5},
\begin{align}\label{generalequationsofmotion}
& F_RG_{\mu \nu}=\frac{1}{2}g_{\mu \nu}(F-R\,F_R)-(g_{\mu \nu}\square -\nabla_{\mu}\nabla_{\nu})F_R-2(F_{\Psi}R^{a}\,_{\mu}R_{a \nu}+F_{\Omega}R_{a b c \mu}R^{abc}\,_{\nu})\\ \notag &-g_{\mu \nu}\nabla_a\nabla_b(F_{\Psi} R^{ab})-\square (F_{\Psi}R_{\mu \nu})+
2\nabla_a\nabla_b \left(F_{\Psi}R^{a}\,_{(\mu}\delta^b_{\nu )}+2F_{\Omega}R^{a}\,_{(\mu \nu )}\,^{b}\right)+8\pi G T_{\mu \nu}\, ,
\end{align}
where $F_R=\frac{\partial F}{\partial R}$, $F_{\Psi}=\frac{\partial F}{\partial \Psi}$, $F_{\Omega}=\frac{\partial F}{\partial \Omega}$, and $\square = g^{ab}\nabla_a \nabla_b$ is the d'Alembert operator. In the following sections we shall consider various higher order gravities. As we will see, there are striking similarities between the gravitational shock wave solutions of the various higher order gravities. These similarities are due to the fact that the Ricci scalar $R$, the Ricci squared scalar $\Psi$ and the Kretschmann scalar $\Omega$ are equal to zero for the metric (\ref{specificmetric}), as can be seen in the Appendix.
\subsection{Solutions from Cosmologically Viable $F(R)$ Gravities}
In the literature there exist various cosmologically viable $F(R)$ gravity models, mentioned in the introduction, which satisfy very stringent constraints in order to be considered viable; see for example the reviews \cite{reviews1,reviews1a,reviews2,reviews3,reviews4}. These models satisfy the local astrophysical constraints and also the global constraints, and in addition they have quite appealing features, since they successfully describe the early and late-time acceleration eras, and also yield quite interesting astrophysical solutions. The reader is referred to the informative reviews \cite{reviews1,reviews1a,reviews2,reviews3,reviews4} for details.
In our analysis we shall seek gravitational shock wave solutions for several well-known viable models. Particularly, we consider the exponential model of Ref. \cite{frexponential}, in which case the $F(R)$ gravity is,
\begin{equation}\label{expon}
F(R)=R-2\Lambda (c-e^{-R/R_0}),\,\,\, R_0,\Lambda ,c>0\, ,
\end{equation}
and a variant form of this exponential model, studied in Ref. \cite{oikofr}, in which case, the $F(R)$ gravity function is,
\begin{equation}\label{oikoexp}
F(R)=R-\frac{1}{A+Be^{-R/D}}+\frac{C}{A+B}\, ,
\end{equation}
and as we will see shortly, these two models have quite similar gravitational shock wave solutions. To this end, we focus on the model (\ref{expon}) and we present only the result for the model (\ref{oikoexp}). In the $F(R)$ gravity case, the equations of motion become much simpler and read,
\begin{align}\label{frofmotion1}
& F_RG_{\mu \nu}=\frac{1}{2}g_{\mu \nu}(F-R\,F_R)-(g_{\mu \nu}\square -\nabla_{\mu}\nabla_{\nu})F_R\, ,
\end{align}
and by using the fact that the Ricci scalar is zero for the metric (\ref{specificmetric}), these result in the following equation,
\begin{equation}\label{expodiff1}
\left(1-\frac{2\Lambda}{R_0}\right)R_{\mu \nu}-(1-c)\Lambda\, g_{\mu \nu} =8\pi G\, T_{\mu \nu}\, .
\end{equation}
For the metric (\ref{specificmetric}) the only non-zero component is $R_{uu}=-\frac{1}{2}\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )H(u,x,y)$, and also the source of the gravitational field is a highly energetic particle with momentum $p$, located at $u=0$, so the energy momentum tensor has only one non-zero component, $T_{uu}=p\,\delta(x,y) \delta (u)$. Hence, the only non-trivial components of the Einstein equations are the $(u,u)$, $(x,x)$, $(u,v)$ and $(y,y)$ ones, for which the metric is non-zero. The latter components $(x,x)$, $(u,v)$ and $(y,y)$ yield the constraint,
\begin{equation}\label{revision}
(1-c)\Lambda\, g_{\mu \nu}=0\, ,
\end{equation}
which means that $c=1$. Effectively, the $(u,u)$ components of the Einstein equations are,
\begin{equation}\label{expofrdiff1}
\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )G_{F}(x,y)=-\frac{16\pi G p}{1-\frac{2\Lambda}{R_0}}\,\delta(x,y)\, ,
\end{equation}
where we used the fact that the profile function is of the form $H(u,x,y)=G_F(x,y)\delta (u)$. Hence, the profile solution is similar to the Einstein-Hilbert case, that is,
\begin{equation}\label{exposolution1}
G_{F}(x,y)=-\frac{4\,G\,p}{1-\frac{2\Lambda}{R_0}}\,\ln \left( \frac{r^2}{r_0^2}\right)\, ,
\end{equation}
where $r$ is the radial coordinate in the $(x,y)$ plane, that is $r=\sqrt{x^2+y^2}$. By doing a similar analysis, the gravitational shock wave solution corresponding to the $F(R)$ gravity (\ref{oikoexp}) is,
\begin{equation}\label{oikosol}
G_{F}(x,y)= -\frac{4\,G\,p}{1-\frac{B}{(A+B)^2 D}}\,\ln \left( \frac{r^2}{r_0^2}\right)\, .
\end{equation}
Notice that due to the $(x,x)$, $(u,v)$ and $(y,y)$ components of the Einstein equations, the $F(R)$ gravity is forced to satisfy the constraint $F(0)=0$, which means that in this case $C=1$.
Let us study some other viable models, starting with the Hu-Sawicki model \cite{frhu}, in which case the $F(R)$ gravity is,
\begin{equation}\label{hufr}
F(R)=R-\mu \lambda \frac{(R/\lambda)^{2n}}{(R/\lambda)^{2n}+1},\,\,\,n,\mu,\lambda>0\, ,
\end{equation}
in which case the gravitational shock wave solution reads,
\begin{equation}\label{husolution}
G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)\,,
\end{equation}
which is identical to the Einstein-Hilbert solution of Eq. (\ref{einsteinsolution}). Accordingly, for the Appleby-Battye model \cite{frbattye}, in which case the $F(R)$ gravity function is,
\begin{equation}\label{applebyfr}
F(R)=R-\mu \lambda \tanh (R/\lambda),\,\,\,\mu,\lambda>0,
\end{equation}
the gravitational shock wave solution is,
\begin{equation}\label{battyesolution}
G_{F}(x,y)=-\frac{4\,G\,p}{1-\mu}\,\ln \left( \frac{r^2}{r_0^2}\right)\,,
\end{equation}
which is a rescaled version of the Einstein-Hilbert solution (\ref{einsteinsolution}). Finally, the Starobinsky model \cite{frstarob}, with the $F(R)$ gravity being of the form,
\begin{equation}\label{starobfr}
F(R)=R-\mu \lambda \left [1-\frac{1}{(1+\frac{R^2}{\lambda^2})^n} \right],\,\,\, n,\mu, \lambda>0\, ,
\end{equation}
has the following gravitational shock wave solution,
\begin{equation}\label{starobsol}
G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)\, ,
\end{equation}
which is identical to the Einstein-Hilbert solution (\ref{einsteinsolution}). In Table \ref{table1} we gathered our results for the $F(R)$ gravities we mentioned above.
As can be seen from the solutions we obtained, the gravitational shock wave solutions are rescaled versions of the Einstein-Hilbert solution (\ref{einsteinsolution}). This result is not accidental, as we now discuss.
\begin{table*}
\small
\caption{\label{table1} The gravitational shock wave solutions $H(u,x,y)=G_F(x,y)\delta (u)$ and the constraints for various cosmologically viable $F(R)$ gravities.}
\begin{tabular}{@{}crrrrrrrrrrr@{}}
\tableline
\tableline
\tableline
Cosmological Viable F(R) Gravity Model& The Gravitational Shock Wave Profile $G_F(x,y)$
\\\tableline
$F(R)=R-2\Lambda (c-e^{-R/R_0})$ & Constraint $c=1$, and profile $G_{F}(x,y)=-\frac{4\,G\,p}{1-\frac{2\Lambda}{R_0}}\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R)=R-\frac{1}{A+Be^{-R/D}}+\frac{C}{A+B}$ & Constraint $C=1$, and profile $G_{F}(x,y)= -\frac{4\,G\,p}{1-\frac{B}{(A+B)^2 D}}\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R)=R-\mu \lambda \frac{(R/\lambda)^{2n}}{(R/\lambda)^{2n}+1}$ & $G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R)=R-\mu \lambda \tanh (R/\lambda)$ & $G_{F}(x,y)=-\frac{4\,G\,p}{1-\mu}\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R)=R-\mu \lambda \left [1-\frac{1}{(1+\frac{R^2}{\lambda^2})^n} \right]$ & $G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
\tableline
\end{tabular}
\end{table*}
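The pattern in Table \ref{table1} can be reproduced mechanically: the off-diagonal field equations enforce $F(0)=0$, and the profile is the Einstein-Hilbert one rescaled by $1/F'(0)$. The following sketch (our own illustration; the exponents in the Hu-Sawicki and Starobinsky models are fixed to $n=2$ for simplicity) evaluates both quantities symbolically:
\begin{verbatim}
# Symbolic check of F(0) and F'(0) for the viable F(R) models of Table 1.
import sympy as sp

R, Lam, R0, c, A, B, C, D, mu, lam = sp.symbols(
    'R Lambda R_0 c A B C D mu lambda', positive=True)

models = {
    'exponential':     R - 2*Lam*(c - sp.exp(-R/R0)),
    'exponential-II':  R - 1/(A + B*sp.exp(-R/D)) + C/(A + B),
    'Hu-Sawicki(n=2)': R - mu*lam*(R/lam)**4/((R/lam)**4 + 1),
    'Appleby-Battye':  R - mu*lam*sp.tanh(R/lam),
    'Starobinsky(n=2)': R - mu*lam*(1 - 1/(1 + R**2/lam**2)**2),
}

for name, F in models.items():
    F0 = sp.simplify(F.subs(R, 0))              # must vanish (fixing c, C)
    F1 = sp.simplify(sp.diff(F, R).subs(R, 0))  # rescales the EH profile
    print(f"{name:16s}: F(0) = {F0},  F'(0) = {F1}")
\end{verbatim}
In each case the shock wave profile is $-4Gp\,\ln(r^2/r_0^2)$ divided by the printed value of $F'(0)$.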
All the viable cosmological models we discussed share a common feature: the condition $F(0)=0$ is either imposed or holds true from the beginning. Hence, in all the above cases the conditions $F(0)=0$ and $F'(0)\neq 0$ hold true.
In principle the two classes of models above do not by any means cover all the possible $F(R)$ gravities, but we consider here only viable models. For example, for the $R+\alpha R^2$ model \cite{starobinsky}, the gravitational shock wave solution is a simple extension of the Einstein-Hilbert solution, so we did not address these types of $F(R)$ gravity. A vital feature that plays a crucial role in the classification is the fact that the Ricci scalar is zero for the metric (\ref{specificmetric}), and this issue plays some role for particular forms of $F(R)$ gravity, as we show shortly. It is worth providing the general form of the gravitational shock wave solution for the first class of $F(R)$ gravities. Suppose that $F'(0)= \mathcal{C}_1$; then the equations of motion for the profile function $G_F(x,y)$ become,
\begin{equation}\label{profilegeneral}
\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )G_{F}(x,y)=-\frac{16\pi G p}{\mathcal{C}_1}\,\delta(x,y)\, ,
\end{equation}
so the gravitational shock wave solution is,
\begin{equation}\label{gravitationalgeneral}
G_F(x,y)=-\frac{4\, G\, p}{\mathcal{C}_1}\,\ln \left( \frac{r^2}{r_0^2}\right)\, .
\end{equation}
Actually the solution (\ref{gravitationalgeneral}) is the most general non-trivial solution we can have in the case that the higher order gravity theory is an $F(R)$ gravity, except for the cases in which either the $F(R)$ gravity or its first derivative $F'(R)$ is singular at $R=0$. For example, the following $F(R)$ gravities are problematic when one seeks gravitational shock wave solutions,
\begin{equation}\label{frsolutionsnontrivial}
F(R)=R-\alpha R^{-n},\,\,\,F(R)=R+\alpha \ln R\, ,
\end{equation}
with $n>0$. These situations are obviously different from the ones we discussed here, so this analysis is deferred to future work, where singular situations like these will be studied.
Finally, let us discuss another non-trivial issue. Consider the $F(R)$ gravity,
\begin{equation}\label{frextracase}
F(R)=b\, e^{\Lambda R}\, ,
\end{equation}
and owing to the condition $F(0)=0$, this gravity is forced to obey $b=0$, so it yields a trivial result. In order for these gravities to yield a non-trivial result, a cosmological constant $\Lambda_1$ has to be added to their functional form, so eventually the $F(R)$ gravity would be,
\begin{equation}\label{frextracase2}
F(R)=b\, e^{\Lambda R}-\Lambda_1\, ,
\end{equation}
so the condition $F(0)=0$ imposes the constraint $b=\Lambda_1$. The gravitational shock wave profile is also a rescaled version of the Einstein-Hilbert solution.
\subsection{Solutions from Various Higher Order Gravities}
Now we proceed to other higher order gravities, in which case the higher curvature invariants $\Psi$ and $\Omega$ are included in the Lagrangian. Since for the specific metric (\ref{specificmetric}) we have $\Psi=\Omega=0$, in effect all the $F(R,\Psi,\Omega)$ gravities that contain terms $R^n$, $\Psi^m$, $\Omega^k$, with $n,m,k\geq 2$, will have the gravitational shock wave solution corresponding to general relativity, given in Eq. (\ref{einsteinsolution}). In Table \ref{table2} we list some interesting cases that all have the Einstein-Hilbert gravitational shock wave solution. So we focus on the non-trivial cases, and we mainly discuss $F(R,\Psi)$ gravities for simplicity, since similar results can be obtained for the $F(R,\Omega)$ case. The non-trivial case $F(R,\Psi,\Omega)=R+\alpha R^2+b\Psi+d\Omega$ was studied in detail in Ref. \cite{gsw14}, so we do not discuss this case here.
\begin{table*}
\small
\caption{\label{table2} Higher order $F(R,\Psi,\Omega)$ gravities for which the gravitational shock wave solution is the Einstein-Hilbert one $H(u,x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)\delta (u)$.}
\begin{tabular}{@{}crrrrrrrrrrr@{}}
\tableline
\tableline
\tableline
$F(R,\Psi,\Omega)$ Gravity
\\\tableline
$F(R,\Omega)=R+\gamma \Omega^{n},\,\,\,\,n\geq 2$
\\\tableline
$F(R,\Psi,\Omega)=R+\gamma \Omega^{n}\Psi^m,\,\,\,\,n,m\geq 2$
\\\tableline
$F(R,\Psi,\Omega)=R+\gamma \Omega^{n}+\delta \Psi^m,\,\,\,\,n,m\geq 2$
\\\tableline
$F(R,\Psi,\Omega)=f(R)+\gamma \Omega^{n}+\delta \Psi^m,\,\,\,\,n,m\geq 2,\,\,\,f(0)=0,\, f'(0)=1$
\\\tableline
$F(R,\Psi,\Omega)=f(R)+K(\Omega,\Psi),\,\,f(0)=0,\, f'(0)=1,\,\,\frac{\partial K(\Psi,\Omega)}{\partial \Psi}(0,0)=0,\,\,\,K(\Psi,\Omega)(0,0)=0$
\\\tableline
\tableline
\end{tabular}
\end{table*}
The non-triviality in the gravitational shock wave solutions for $F(R,\Psi)$ gravities can occur if one of the following conditions holds true,
\begin{equation}\label{conditionsfrpsi}
F_R(0,\Psi)\neq 0,\,\,\,F_{\Psi}(R,0)\neq 0.
\end{equation}
There are various forms of $F(R,\Psi)$ gravities which can satisfy all, or at least some, of these conditions, for example if one chooses a gravity of the form $F(R,\Psi)=ae^{b\,R}+ce^{d\,\Psi}$, or $F(R,\Psi)=F(R)\times f(\Psi)$, where the function $F(R)$ is chosen to be one of the viable cosmological models we studied in the previous section.
Let us explicitly demonstrate the solutions for a simple example satisfying one of the conditions of Eq. (\ref{conditionsfrpsi}), so we study the case $F(\Psi)=\beta e^{\Lambda \Psi}-\Lambda_1$, the cosmological implications of which were studied in Ref. \cite{highord3}. For a general $F(\Psi)$ function and metric, the gravitational equations read,
\begin{align}\label{fpsicase}
& \frac{1}{2}g_{\mu \nu}F(\Psi)-2F_{\Psi}R^{\alpha}_{\mu}R_{\alpha \nu}-g_{\mu \nu}\nabla_a\nabla_b(F_{\Psi} R^{ab})+2\nabla_a\nabla_{\nu}(F_{\Psi} R^{a}_{\mu})+2\nabla_a\nabla_{\mu}(F_{\Psi} R^{a}_{\nu})+8\pi G T_{\mu \nu}=0\, ,
\end{align}
so for the specific metric (\ref{specificmetric}) and for the function $F(\Psi)=\beta e^{\Lambda \Psi}-\Lambda_1$, the gravitational equations become,
\begin{equation}\label{nontrivialcase1}
\nabla_{x,y}^4 G_{F}(x,y)=-\frac{16\pi G p}{\Lambda \beta}\delta (x,y)\, ,
\end{equation}
with $\nabla_{x,y}=\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}$. Notice that the $(x,x)$, $(u,v)$ and $(y,y)$ components of the field equations impose the constraint $F(0)=0$, so this means that $\beta=\Lambda_1$. The differential equation (\ref{nontrivialcase1}) is known as the biharmonic equation \cite{biharmonic}, so by using the integral,
\begin{equation}\label{integral}
\int \mathrm{d}^2k\, \frac{e^{i\mathbf{k}\cdot \mathbf{r}}}{k^4}=\frac{\pi}{2}\,r^2\left(\ln r-1\right)\, ,
\end{equation}
and seeking radially symmetric solutions, the method of the Fourier-transformed Green's function yields the following solution,
\begin{equation}\label{solutionnontrivial}
G_{F}(x,y)=-\frac{2\, G\, p}{\Lambda \beta}\,r^2\left(\ln r-1\right)\, .
\end{equation}
The singularity structure of the above gravitational shock wave solution will be studied in a later section.
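The normalization of this profile can be double-checked with a short symbolic computation (our own sketch): applying the radial two-dimensional Laplacian to $r^2(\ln r-1)$ gives $4\ln r$, and since $\nabla^2 \ln r=2\pi\,\delta^{(2)}(x,y)$, the biharmonic operator carries the delta function with weight $8\pi$:
\begin{verbatim}
# Check that r^2 (ln r - 1) is a 2D biharmonic Green's function:
# one radial Laplacian gives 4 ln r, whose Laplacian is 8 pi delta^2.
import sympy as sp

r = sp.symbols('r', positive=True)

def radial_laplacian_2d(f):
    return sp.simplify(sp.diff(r*sp.diff(f, r), r)/r)

g = r**2*(sp.log(r) - 1)
step1 = radial_laplacian_2d(g)       # expect 4*log(r)
step2 = radial_laplacian_2d(step1)   # expect 0 for r > 0
print("nabla^2 g       =", step1)
print("nabla^4 g (r>0) =", step2)
# Hence nabla^4 [g/(8 pi)] = delta^2, so the source -16 pi G p/(Lambda beta)
# gives G_F = -(2 G p/(Lambda beta)) r^2 (ln r - 1), as in the text.
\end{verbatim}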
Another non-trivial gravitational shock wave solution results in the case that the $F(R,\Psi)$ function is of the form $F(R,\Psi)= R+\beta e^{\Lambda \Psi}-\Lambda_1$, or more generally in the case that $F(R,\Psi)=F_1(R)+F_2(\Psi)-\Lambda_1$, with $F_1(0)=0$, $F_1'(0)=1$, $F_{2}(0)\neq 0$ and $F_2'(0)\neq 0$, in which case the gravitational equations become,
\begin{align}\label{fpsicase2}
& G_{\mu \nu}=\frac{1}{2}g_{\mu \nu}(F(\Psi)-\Lambda_1)-2F_{\Psi}R^{\alpha}_{\mu}R_{\alpha \nu}-g_{\mu \nu}\nabla_a\nabla_b(F_{\Psi} R^{ab})+2\nabla_a\nabla_{\nu}(F_{\Psi} R^{a}_{\mu})+2\nabla_a\nabla_{\mu}(F_{\Psi} R^{a}_{\nu})+8\pi G T_{\mu \nu}\, ,
\end{align}
so for the metric (\ref{specificmetric}) and for $R+\beta e^{\Lambda \Psi}-\Lambda_1$, the gravitational equations become,
\begin{equation}\label{highlynontrivial}
\left ( \Lambda \beta \nabla_{x,y}^4+\nabla_{x,y}^2\right) G_F(x,y)=-16 \pi G p \delta (x,y)\, .
\end{equation}
Notice that in this case, the $(x,x)$, $(u,v)$ and $(y,y)$ components of the gravitational equations yield the constraint $\beta =\Lambda_1$. This differential equation has been solved in Ref. \cite{gsw15}, so the solution is,
\begin{equation}\label{newsolutionrev}
G_F(x,y)=-8\,G\,p\left( K_0(\frac{r}{\sqrt{-\beta \Lambda}})+\ln (\frac{r}{r_0})\right)\, .
\end{equation}
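It is straightforward to verify numerically that the operator annihilates this profile away from the origin; the sketch below (ours) uses mpmath and assumes $\beta\Lambda<0$, so that the argument of $K_0$ is real:
\begin{verbatim}
# Check that (Lambda*beta*nabla^4 + nabla^2) G_F = 0 for r > 0, where
# G_F = -8 G p ( K0(r/a) + ln(r/r0) ) and a = sqrt(-beta*Lambda).
import mpmath as mp
mp.mp.dps = 30

Gp, beta_Lambda, r0 = 1, mp.mpf('-0.5'), 1   # assume beta*Lambda < 0
a = mp.sqrt(-beta_Lambda)

def G(rr):
    return -8*Gp*(mp.besselk(0, rr/a) + mp.log(rr/r0))

def lap(f, rr):
    # radial part of the 2D Laplacian: f'' + f'/r
    return mp.diff(f, rr, 2) + mp.diff(f, rr, 1)/rr

rr = mp.mpf('1.3')
residual = beta_Lambda*lap(lambda s: lap(G, s), rr) + lap(G, rr)
print("residual at r = 1.3:", mp.nstr(residual, 5))   # ~0
\end{verbatim}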
In general, this class of solutions is obtained from $F(R,\Psi)$ gravities of the form,
\begin{equation}\label{peculiarcomplexcase}
F(R,\Psi)=F_1(R)+F_2(\Psi),\,\,\,F_1(0)=C_1,\,F_1'(0)=C_2,\,F_2(0)=C_3,\,F_2'(0)=C_4\, ,
\end{equation}
with $C_i$, $i=1,...4$, being constants. The resulting gravitational equations in this case are,
\begin{equation}\label{highlynontrivial2}
\left ( C_4 \nabla_{x,y}^4+C_2\nabla_{x,y}^2\right) G_F(x,y)=-16 \pi G p \delta (x,y)\, ,
\end{equation}
since the $(x,x)$, $(u,v)$ and $(y,y)$ components of the gravitational equations impose the constraint $C_1=-C_3$. A gravitational shock wave solution similar to the general relativistic one can be obtained in the case that $F(R,\Psi)=R\gamma e^{\Lambda \Psi}$, in which case the gravitational shock wave solution is a rescaled version of the solution (\ref{einsteinsolution}), namely,
\begin{equation}\label{avak}
G_F(x,y)=-\frac{8Gp}{\gamma} \ln \frac{r}{r_0}\, .
\end{equation}
Finally, we need to note that it is conceivable that any combination of the $F(R)$ gravities we studied in the previous section, with polynomials of $\Psi$ and $\Omega$, that is,
\begin{align}\label{finalformsfunctions}
& F(R,\Psi,\Omega)=f(R)+\gamma \Omega ^n+\beta \Psi^m,\\ \notag &
F(R,\Psi,\Omega)=f(R)+\gamma \Omega ^n\beta \Psi^m\, ,
\end{align}
with $n,m\geq 2$, will yield the solutions of the simple $f(R)$ gravity case which we studied in the previous section. So practically the higher polynomials of the Kretschmann scalar and the Ricci tensor squared have no effect on the gravitational shock wave solutions.
\subsubsection{The Gauss-Bonnet Gravity Case}
Having discussed the solutions of gravitational shock waves in the context of the higher order gravities, in this section we study the case of an $R+F(\mathcal{G})$ higher order gravity, with $\mathcal{G}=R^2-4R_{\mu \nu}R^{\mu \nu}+R_{\mu \nu \lambda k}R^{\mu \nu \lambda k}$ being the Gauss-Bonnet scalar. Gauss-Bonnet modified gravity theories have been studied both in cosmological and astrophysical contexts, see for example \cite{gaussb1,gaussb2,gaussb3,gaussb4,lobo,myrzafgfinite}. In this section we are interested in finding the gravitational shock wave solutions for several realistic examples of $R+F(\mathcal{G})$ gravities. Particularly, we shall discuss in some detail the following models,
\begin{equation}\label{cand1}
F(\mathcal{G})=\frac{a_1\mathcal{G}^n+b_1}{a_2\mathcal{G}^n+b_2}\, ,
\end{equation}
\begin{equation}\label{cand2}
F(\mathcal{G})=\frac{a_1\mathcal{G}^{n+N}+b_1}{a_2\mathcal{G}^n+b_2}\, ,
\end{equation}
\begin{equation}\label{cand3}
F(\mathcal{G})=a_3\mathcal{G}^n (b_3\mathcal{G}^m+1)\, ,
\end{equation}
\begin{equation}\label{cand4}
F(\mathcal{G})=\mathcal{G}^m\frac{a_1\mathcal{G}^n+b_1}{a_2\mathcal{G}^n+b_2}\, ,
\end{equation}
with the parameters $a_i$, and $b_i$, $i=1,2,3$, being arbitrary real constants, and $n>1$, $N>0$, $m>0$. Note that these types of $F(\mathcal{G})$ are known to exhibit cosmological finite time singularities \cite{Nojiri:2005sx,oiksing}, but here we shall reveal another astrophysical aspect of these modified gravity models.
The gravitational action of a general $R+F(\mathcal{G})$ theory is \cite{reviews1,reviews1a,gaussb1,gaussb2,gaussb3,gaussb4,lobo,myrzafgfinite},
\begin{equation}\label{actionfggeneral}
\mathcal{S}=\frac{1}{2\kappa^2}\int \mathrm{d}^4x\sqrt{-g}\left [ R+F(\mathcal{G})\right ]+S_m,
\end{equation}
with $\kappa^2=8\pi G$ denoting the gravitational constant and $S_m$ standing for the matter content of the theory at hand, with energy momentum tensor $T_{\mu \nu}$. Upon variation of the action with respect to the metric tensor, the gravitational equations of motion easily follow,
\begin{eqnarray}
\label{fgr1}
&& \!\!\!\!\!\!\!\!\!\!
G_{\mu \nu}-\frac{1}{2}g_{\mu \nu}F(\mathcal{G})+\left(2RR_{\mu \nu}-4R_{\mu
\rho}R_{\nu}^{\rho}+2R_{\mu}
^{\rho \sigma \tau}R_{\nu \rho \sigma \tau}-4g^{\alpha \rho}g^{\beta \sigma}R_{\mu \alpha
\nu \beta}
R_{\rho \sigma}\right)F'(\mathcal{G})\notag
\\
&& \ +4 \left[\nabla_{\rho}\nabla_{\nu}F'(\mathcal{G})\right ] R_{\mu}^{\rho}
-4g_{\mu \nu} \left [\nabla_{\rho}\nabla_{\sigma }F'(\mathcal{G})\right ]R^{\rho \sigma }+4 \left
[\nabla_{\rho}\nabla_{\sigma }F'(\mathcal{G})\right ]g^{\alpha \rho}g^{\beta \sigma }R_{\mu \alpha
\nu \beta }
\notag
\\
&& \
-2 \left [\nabla_{\mu}\nabla_{\nu}F'(\mathcal{G})\right ]R+2g_{\mu \nu}\left [\square F'(\mathcal{G})
\right]R
\notag
\\
&&\
-4 \left[\square F'(\mathcal{G}) \right ]R_{\mu \nu }+4
\left[\nabla_{\rho}\nabla_{\mu}F'(\mathcal{G})\right]R_{\nu}^{\rho }
=\kappa^2T_{\mu \nu }\, .
\end{eqnarray}
A crucial feature in our analysis, which will eventually determine the final form of the gravitational shock wave solution, is the fact that the Gauss-Bonnet scalar is zero for the metric (\ref{specificmetric}). This simplifies the final picture to a great extent, as we now demonstrate, and practically the $F(\mathcal{G})$ models (\ref{cand1}), (\ref{cand2}), (\ref{cand3}) and (\ref{cand4}) all have similar gravitational shock wave solutions.
Let us see this explicitly, so for the models (\ref{cand1}) and (\ref{cand2}), we have $F(0)= \frac{b_1}{b_2}$ and $F'(0)=0$, and therefore the gravitational equations of motion (\ref{fgr1}) become,
\begin{equation}\label{profilegeneralgaussbonnet}
\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )G_{F}(x,y)=-16\pi G p\,\delta(x,y)\, .
\end{equation}
Notice that for deriving the resulting gravitational equations (\ref{profilegeneralgaussbonnet}), we also took into account the $(x,x)$, $(u,v)$ and $(y,y)$ components of the gravitational equations, which result in the constraint $F(0)=0$, which for the models (\ref{cand1}) and (\ref{cand2}) implies that $b_1=0$. Also in this case we assumed that $T_{uu}=p\,\delta(x,y) \delta (u)$, so the shock wave is generated by a massless ultra-relativistic point source with momentum $p$. The gravitational shock wave solution in this case is,
\begin{equation}\label{gravitationalgeneralgaussbonnet}
G_F(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)\, ,
\end{equation}
with $r=\sqrt{x^2+y^2}$, and which is identical to the Einstein-Hilbert solution.
\begin{table*}
\small
\caption{\label{table3} The gravitational shock wave solutions $H(u,x,y)=G_F(x,y)\delta (u)$, for realistic $F(\mathcal{G})$ gravities, and the imposed constraints in their functional form.}
\begin{tabular}{@{}crrrrrrrrrrr@{}}
\tableline
\tableline
\tableline
Form of the $R+F(\mathcal{G})$ Gravity Model& The Gravitational Shock Wave Profile $G_F(x,y)$
\\\tableline
$F(R,\mathcal{G})=R+\frac{a_1\mathcal{G}^n+b_1}{a_2\mathcal{G}^n+b_2}$ & $b_1=0$ and solution $G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R,\mathcal{G})=R+\frac{a_1\mathcal{G}^{n+N}+b_1}{a_2\mathcal{G}^n+b_2}$ & $b_1=0$ and solution $G_{F}(x,y)= -4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R,\mathcal{G})=R+a_3\mathcal{G}^n (b_3\mathcal{G}^m+1)$ & $G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
$F(R,\mathcal{G})=R+\mathcal{G}^m\frac{a_1\mathcal{G}^n+b_1}{a_2\mathcal{G}^n+b_2}$ & $G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
\tableline
\end{tabular}
\end{table*}
The other two $F(\mathcal{G})$ models, namely (\ref{cand3}) and (\ref{cand4}), have the property that $F(0)=0$ and $F'(0)=0$, so the gravitational equations in this case become,
\begin{equation}\label{einteincaseequationgaussbonnet}
\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )G_{EH}(x,y)=-16\pi G p\,\delta(x,y)\, ,
\end{equation}
with the solution to this differential equation being identical to the Einstein-Hilbert gravitational shock wave solution of Eq. (\ref{einsteinsolution}). In conclusion, even in the case that realistic Gauss-Bonnet modified gravities are considered, the gravitational shock wave solutions are identical to the Einstein-Hilbert solution. The results for the Gauss-Bonnet models are gathered in Table \ref{table3}. It is conceivable that the list is not exhaustive, since there exist models that may contain inverse powers of the Gauss-Bonnet scalar, or even terms proportional to $\ln \mathcal{G}$, which could potentially generate problems; however, the focus in this section was on well-known and realistic $F(\mathcal{G})$ theories.
\subsection{Brief Comparison of the Gravitational Shock Wave Profile Singularity Structure}
The main result we obtained in the previous sections is that the profile function of the gravitational shock wave in the context of realistic higher order gravities has three different forms, which we list in Table \ref{table4}. As can be seen, profile I is the general relativistic profile, profile II appears only in specific forms of $F(R,\Psi)$ gravities, and profile III occurs only for $F(R_{\mu \nu}R^{\mu \nu})$ theories, and only in the case that $F_{\Psi}(0)\neq 0$.
\begin{table*}[h]
\small
\caption{\label{table4} Three Categories of the Most Frequently Occurring Gravitational Shock Wave Profile Functions $H(u,x,y)=G_F(x,y)\delta (u)$, for Realistic Higher Order Gravities.}
\begin{tabular}{@{}crrrrrrrrrrr@{}}
\tableline
\tableline
Profile I & $G_{F}(x,y)=-4\,G\,p\,\ln \left( \frac{r^2}{r_0^2}\right)$
\\\tableline
Profile II & $G_F(x,y)=-8\,G\,p\left( K_0(\frac{r}{\sqrt{-\beta \Lambda}})+\ln (\frac{r}{r_0})\right)$
\\\tableline
Profile III & $G_{F}(x,y)=-\frac{2\,G\,p}{\Lambda \beta}\,r^2\left(\ln r-1\right)$
\\\tableline
\tableline
\end{tabular}
\end{table*}
The most peculiar case corresponds to profile III, which can be seen in the bottom plot of Fig. \ref{plot1}. In this case, the $r=0$ singularity seems not to occur, in contrast to the profile I and profile II cases.
\begin{figure}[h] \centering
\includegraphics[width=15pc]{plot1.eps}
\includegraphics[width=15pc]{plot2.eps}
\includegraphics[width=15pc]{plot3.eps}
\caption{The gravitational shock wave profiles for higher order gravities. The left plot corresponds to profile I, the right plot to profile II and the bottom plot to profile III, listed in Table \ref{table4}.}
\label{plot1}
\end{figure}
The profiles I and II have similar properties, as can be seen from the left and right plots of Fig. \ref{plot1}, but it is worth having a more quantitative picture, so we expand the profile function II in the limit $r\to 0$, and we get,
\begin{equation}\label{asymptoticlimit}
-K_0(r)-\ln r\simeq (\gamma-\ln 2)+\frac{1}{4} \left(-1+\gamma-\ln 2+\ln r\right) r^2\, ,
\end{equation}
where $\gamma$ is the Euler-Mascheroni constant. The other two profiles are elementary, so no expansion is needed. By taking the limit $r\to 0$, it can be seen that the singularities of the profiles II and III are milder, and in fact the profile III yields $\lim _{r\to 0}r^2(\ln r-1)=0$.
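The expansion is easy to check numerically; the sketch below (our own) compares profile II with the approximation (\ref{asymptoticlimit}) using scipy's modified Bessel function:
\begin{verbatim}
# Compare -K0(r) - ln(r) with its small-r expansion
# (gamma - ln 2) + (1/4)(-1 + gamma - ln 2 + ln r) r^2.
import numpy as np
from scipy.special import k0

GAMMA = np.euler_gamma

def profile2(r):
    return -k0(r) - np.log(r)

def expansion(r):
    return (GAMMA - np.log(2)) + 0.25*(-1 + GAMMA - np.log(2) + np.log(r))*r**2

for r in [0.5, 0.2, 0.05, 0.01]:
    e, a = profile2(r), expansion(r)
    print(f"r = {r:5.2f}: exact = {e:+.6f}, expansion = {a:+.6f}, "
          f"diff = {e - a:+.2e}")
# The finite limit gamma - ln 2 as r -> 0 shows the logarithms of K0 and
# ln r cancel, unlike the bare logarithm of profile I.
\end{verbatim}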
In conclusion, the effects of higher order gravity on the singularity structure of the gravitational shock wave profile are important, since the profile function approaches the $r=0$ singularity in a less steep way for some higher order gravity theories, a feature which has also been pointed out in \cite{gsw14}. More importantly, for higher order gravity theories of the form $F(R_{\mu \nu}R^{\mu \nu})$ which satisfy $F_{\Psi}(0)\neq 0$, no singularity occurs at $r=0$, and we believe this is a significant difference between the Einstein-Hilbert solution and the higher order gravity solutions.
\section{Conclusions}
In this work we studied gravitational shock wave solutions in the context of higher order gravities. Particularly, we focused on higher order gravities of the form $F(R,R_{\mu \nu}R^{\mu \nu}, R_{\mu \nu k \lambda }R^{\mu \nu k \lambda})$ and also on Gauss-Bonnet theories of gravity of the form $R+F(\mathcal{G})$. In the case of $F(R)$ gravity, we investigated the gravitational shock wave solutions corresponding to various cosmologically viable theories and we found that they are similar to the Einstein-Hilbert solution. The same picture occurs also in the case of $R+F(\mathcal{G})$ gravity, when realistic gravities are taken into account. The same solutions also appear in the case of various combinations of $F(R,R_{\mu \nu}R^{\mu \nu}, R_{\mu \nu k \lambda }R^{\mu \nu k \lambda})$ gravities. Notably, polynomial functional forms of the Ricci tensor squared or the Kretschmann scalar give no contribution to the gravitational shock wave solution. A highly non-trivial gravitational shock wave solution results for an $F(R_{\mu \nu}R^{\mu \nu})$ gravity which satisfies $F_{\Psi}(0)\neq 0$, with $\Psi=R_{\mu \nu}R^{\mu \nu}$. This particular solution has the appealing property that it has no singularity at $r=0$, in contrast to the other two solutions we found.
The study we performed is not exhaustive, meaning that there are more possibilities for choosing the higher order gravities. However, our study was devoted to well-known viable and realistic gravities, and we found that there are three classes of solutions. An interesting task would be to study the gravitational shock wave solutions in the case of $F(R,T)$ gravities \cite{saridakis,capp1}, or even more complicated forms of $F(R,\mathcal{G})$ gravities \cite{cappofrg}. In principle, other non-trivial solutions might appear too.
Another interesting issue that could formally be addressed in future work is to find a rigorous interpretation for the cases in which the functional forms of the higher order gravities are singular at $R=0$, or equivalently at $\Psi=\Omega=0$, with $\Omega$ being the Kretschmann scalar. We discussed some cases briefly in the previous sections, and this study should be done in detail in a future work.
Also, the study we performed assumed that the ultra-relativistic particle that generates the gravitational shock wave background propagates in a Minkowski background, so it would be interesting to examine the higher order gravitational shock wave solutions for the case that the particle propagates in a curved background, such as one of the black hole backgrounds. In the latter case, it would be interesting to see if the Hawking radiation effect is affected by these ultra-relativistic propagating particles, always in the context of higher order gravities.
Finally, and in relation to propagation in curved backgrounds, it is worth studying the effects of higher order gravities in collisions of gravitational shock waves. Some studies in the past were devoted to this issue, see for example \cite{gsw9,gsw10}, so it would be interesting to extend these works in the context of higher order gravity. The cosmological implications of gravitational shock waves have also been studied in the literature \cite{gsw11}, so an interesting study would be to seek cosmological implications of gravitational shock waves in the case that the waves originate from a higher order gravity.
\section*{Acknowledgments}
This work is supported by Min. of Education and Science of Russia (V.K.O).
\section*{Appendix: Christoffel Symbols and Curvature Tensors of the Gravitational Shock Wave Metric}
Here we present in detail the Christoffel symbols and the components of the Riemann and Ricci tensors corresponding to the gravitational shock wave metric,
\begin{equation}\label{specificmetricappendix}
\mathrm{d}s^2=-\mathrm{d}u\mathrm{d}v+H(u,x,y)\mathrm{d}u^2+\mathrm{d}x^2+\mathrm{d}y^2\, .
\end{equation}
The Christoffel symbols are given below,
\begin{align}\label{christf}
& \Gamma^2_{1\,1}=-\partial_u\,H(u,x,y),\,\,\,\Gamma^2_{3\,1}=-\partial_x\,H(u,x,y),\,\,\,\Gamma^2_{4\,1}=-\partial_y\,H(u,x,y),\\ \notag &\,\,\,\Gamma^3_{1\,1}=-\frac{1}{2}\partial_x\,H(u,x,y),\,\,\,\Gamma^4_{1\,1}=-\frac{1}{2}\partial_y\,H(u,x,y)\, .
\end{align}
The only non-zero component of the Ricci tensor $R_{\mu \nu}$ is,
\begin{equation}\label{Ruu}
R_{uu}=-\frac{1}{2}\left (\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right )H(u,x,y)\, .
\end{equation}
The Ricci scalar $R$, the Ricci tensor squared $R_{\mu \nu }R^{\mu \nu}$, the Riemann tensor squared $R_{\mu \nu k \lambda}R^{\mu \nu k \lambda}$ and the Gauss-Bonnet scalar, calculated for the metric (\ref{specificmetricappendix}), are equal to zero, that is,
\begin{equation}\label{highercurvatures}
R=0,\,\,\,R_{\mu \nu }R^{\mu \nu}=0,\,\,\,R_{\mu \nu k \lambda}R^{\mu \nu k \lambda}=0,\,\,\,\mathcal{G}=R^2-4R_{\mu \nu}R^{\mu \nu}+R_{\mu \nu \lambda k}R^{\mu \nu \lambda k}=0\, .
\end{equation}
Finally, the non-zero components of the Riemann tensor are the following,
\begin{align}\label{riemanntensor}
& R^{2}_{3\,1\,3}=\partial_{x}^2H(u,x,y),\,\,\,R^{2}_{3\,1\,4}=\partial_{(x,y)}H(u,x,y),\,\,\,R^{2}_{3\,3\,1}=-\partial_{x}^2H(u,x,y),\,\,\,\\ \notag &
R^{2}_{3\,4\,1}=-\partial_{(x,y)}H(u,x,y),\,\,\,R^{2}_{4\,1\,3}=\partial_{(x,y)}H(u,x,y),\,\,\,R^{2}_{4\,1\,4}=\partial_{y}^2H(u,x,y),\,\,\,\\ \notag &
R^{2}_{4\,3\,1}=-\partial_{(x,y)}H(u,x,y),\,\,\,R^{2}_{4\,4\,1}=-\partial_{y}^2H(u,x,y),\,\,\,R^{3}_{1\,1\,3}=\frac{1}{2}\partial_{x}^2H(u,x,y),\,\,\,\\ \notag &
R^{3}_{1\,1\,4}=\frac{1}{2}\partial_{(x,y)}H(u,x,y),\,\,\,R^{3}_{1\,3\,1}=-\frac{1}{2}\partial_{x}^2H(u,x,y),\,\,\,R^{3}_{1\,4\,1}=-\frac{1}{2}\partial_{(x,y)}H(u,x,y),\,\,\,\\ \notag &
R^{4}_{1\,1\,3}=\frac{1}{2}\partial_{(x,y)}H(u,x,y),\,\,\,R^{4}_{1\,1\,4}=\frac{1}{2}\partial_{y}^2H(u,x,y),\,\,\,R^{4}_{1\,3\,1}=-\frac{1}{2}\partial_{(x,y)}H(u,x,y),\,\,\,\\ \notag &
R^{4}_{1\,4\,1}=-\frac{1}{2}\partial_{y}^2H(u,x,y)
\end{align}
|
1,108,101,564,405 | arxiv | \section{Introduction}\label{sec:introduction}
A graph is considered to be an \textit{expander} when the absolute value of all the eigenvalues of its transition matrix except one are bounded away from $1$. Expander graphs are one of the most useful combinatorial objects in theoretical computer science. They have a wide range of applications in areas such as derandomization, complexity theory, and coding theory. In particular, random walks on the vertices of expander graphs are typically used to generate sequences of vertices satisfying desirable pseudorandom properties. They serve as an efficient replacement for $t$ independent sample vertices chosen uniformly at random. It is then natural to study how good a replacement these sequences are, or equivalently, to measure the randomness of random walks on expander graphs. More precisely, we consider a \textit{balanced labelling} on a graph, that is, a map $\Val$ that assigns the value $0$ to half of the vertices, and $1$ to the other half. Given a test function $f\colon\{0,1\}^t \longrightarrow \mathbb{R}$, we want to compare $f(\Val(v_0),\ldots,\Val(v_{t-1}))$ when the vertices $v_0,\ldots,v_{t-1}$ are sampled either from a random walk on an expander, or independently and uniformly at random. This problem was studied by Guruswami and Kumar in \cite{guruswami2021pseudobinomiality} for sticky random walks, and later on, by Cohen, Peri, and Ta-Shma in \cite{cohen2021expander} for general expander graphs. Further significant progress on this problem can be found in \cite{cohen2022expander} and \cite{golowich2022pseudorandomness}. In this paper, we study the asymptotic behavior of $f(\Val(v_0),\ldots,\Val(v_{t-1}))$ for symmetric test functions as the size of the sample $t$ grows. Our results answer Question 3 in \cite{cohen2021expander} and Questions 2 and 3 in their follow-up paper \cite{cohen2022expander}.
Let $(X_i)$ be the simple random walk on the vertices of an expander graph $G$, where $X_0$ is uniformly distributed. Similarly, let $(U_i)$ be a sequence of independent vertices chosen uniformly at random. We are especially interested in the asymptotic behavior of the distributions $Z_t=\sum_{i=0}^{t-1} \Val(X_i)$ and $B_t=\sum_{i=0}^{t-1} \Val(U_i)$. The total variation distance between $Z_t$ and $B_t$ measures the best distinguishing probability a symmetric function can achieve on $(X_i)_{i=0}^{t-1}$ and $(U_i)_{i=0}^{t-1}$. Write $\mathcal{N}(\mu,\sigma^2)$ for a normal distribution with mean $\mu$ and variance $\sigma^2$, and $\phi$ for the density function of a standard normal distribution $\mathcal{N}(0,1)$. Our main result gives a local central limit theorem for $Z_t$.
\begin{theorem}\label{theo:main}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. Fix a balanced labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ and let $(X_i)_{i=0}^{t-1}$ be the simple random walk on $G$ started from a vertex chosen uniformly at random from $V$. For $Z_t= \sum_{i=0}^{t-1} \Val(X_i)$ we have that $(Z_t)$ satisfies the local central limit theorem, that is, there is some $\sigma^2>0$ for which
\[ t^{1/2}\sup_{u\in \mathbb{Z}} \left\{ \left |\prob{Z_t =u} - t^{-1/2}\sigma^{-1}\phi\left (\frac{u-t/2}{t^{1/2}\sigma}\right )\right | \right\}\to 0\quad \mbox{as } t \to \infty.\]
\end{theorem}
As a consequence of the previous result, as $t$ grows the sequence $(Z_t)$ converges in total variation distance to a discretized normal distribution with mean $\mathbb{E}(Z_t)=t/2$. More precisely, let $N_d(\mu,\sigma^2)$ be a random variable on $\mathbb{Z}$ whose density function $f_{N_d(\mu,\sigma^2)}$ is given by
\begin{equation}\label{eq:disc normal}
f_{N_d(\mu,\sigma^2)}(u)=C_{\sigma^2} \sigma^{-1}\phi\left (\frac{u-\mu}{\sigma}\right ) \quad \forall \, u \in \mathbb{Z},
\end{equation}
where $C_{\sigma^2}=(\sum_{u \in \mathbb{Z}} \sigma^{-1}\phi\left (\frac{u-\mu}{\sigma}\right ))^{-1}$ is a normalizing constant. Theorem \ref{theo:main} implies the following.
\begin{corollary}\label{cor:main}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. Fix a balanced labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ and let $(X_i)_{i=0}^{t-1}$ be the simple random walk on $G$ started from a vertex chosen uniformly at random from $V$. For $Z_t= \sum_{i=0}^{t-1} \Val(X_i)$ there is some $\sigma^2>0$ for which
\[ \lim_{t\to \infty}\left \|Z_t - N_d(t/2,t\sigma^2) \right \|_{TV}=0.\]
\end{corollary}
On the other hand, the asymptotic behavior of $(B_t)$ is well known. Indeed, $\Val(U_i)$ is a Bernoulli random variable with parameter $p=\frac{1}{2}$. Hence, $B_t$ follows a binomial $\operatorname{Bin}(t,\frac{1}{2})$, and so $t^{-1/2}(B_t-t/2)$ converges in distribution to the normal distribution $\mathcal{N}(0,1/4)$. Moreover, the local central limit theorem implies that $B_t$ converges in total variation distance to a discretized normal distribution. More precisely,
\[ \lim_{t \to \infty} \|B_t- N_d(t/2,t/4)\|_{TV}=0.\]
Even though both $(Z_t)$ and $(B_t)$ converge to discretized normal distributions with the same mean $t/2$, their variances may not be equal. As the size of the sample $t$ grows, we obtain
\begin{equation}\label{eq:distance Z_t and B_t}
\lim_{t \to \infty} \|B_t - Z_t\|_{TV} = \lim_{t \to \infty}\|N_d(t/2,t/4) - N_d(t/2,t\sigma^2)\|_{TV}=\lim_{t \to \infty}\|N_d(0,t/4) - N_d(0,t\sigma^2)\|_{TV}.
\end{equation}
Therefore, we cannot expect the distance to converge to $0$ as $t$ grows. Instead, the ability of a random walk $(X_i)$ on an expander graph to fool symmetric functions as $t$ grows is measured by the variance of $(Z_t)$. In fact, we can easily obtain a bound in terms of $\sigma^2$. The total variation distance between two normal distributions is standard and easy to compute:
\begin{equation}\label{eq:distance normals} \|\mathcal{N}(0,t/4)-\mathcal{N}(0,t\sigma^2)\|_{TV}=\|\mathcal{N}(0,1)-\mathcal{N}(0,4\sigma^2)\|_{TV}=
\begin{cases}
2 \prob{\frac{z}{2\sigma} \leq \mathcal{N}(0,1)\leq z} &\quad\mbox{if } 4\sigma^2>1; \\
2 \prob{z \leq \mathcal{N}(0,1)\leq \frac{z}{2\sigma}}&\quad \mbox{if } 4\sigma^2<1,
\end{cases}
\end{equation}
where $z>0$ is the point of intersection of the density functions of $\mathcal{N}(0,1)$ and $\mathcal{N}(0,4\sigma^2)$, namely
\[z=\sqrt{\frac{2\log 2\sigma}{1-\frac{1}{4\sigma^2}}}.\]
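For completeness, we record the short computation behind this expression (a routine calculation). Equating the densities of $\mathcal{N}(0,1)$ and $\mathcal{N}(0,4\sigma^2)$ at $x=z$ gives
\[ \phi(z)=\frac{1}{2\sigma}\phi\left (\frac{z}{2\sigma}\right ), \quad \mbox{that is,}\quad e^{-z^2/2}=\frac{1}{2\sigma}e^{-z^2/(8\sigma^2)}.\]
Taking logarithms yields $z^2\left (\frac{1}{2}-\frac{1}{8\sigma^2}\right )=\log 2\sigma$, which rearranges to the displayed formula for $z$.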
Since $N_d(0,t/4)$ and $N_d(0,t\sigma^2)$ are the discretized versions of $\mathcal{N}(0,t/4)$ and $\mathcal{N}(0,t\sigma^2)$, it is easy to check that the total variation distance between them is of the same order. The following result shows that the variance of $Z_t$ can be expressed in terms of the eigenvalues $\lambda_j$, eigenvectors $f_j$, and the labelling $\Val$.
\begin{proposition}\label{prop: variance}
Let $G$ be a $d$-regular graph with $n$ vertices. Fix a balanced labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ and let $(X_i)_{i=0}^{t-1}$ be the simple random walk on $G$ started from a vertex chosen uniformly at random from $V$. Consider $Z_t= \sum_{i=0}^{t-1} \Val(X_i)$. Then,
\begin{equation}\label{eq:var formula}
\Var(Z_t)= \frac{t}{4} + \frac{1}{2}\sum_{k=1}^{t-1}(t-k)\sum_{j=2}^n\langle \pi_B,f_j\rangle^2\lambda^{k}_j,
\end{equation}
where $B=\{x \in V \colon \Val(x)=1\}$ and $\pi_B$ is the uniform distribution on $B$. In particular, if $G$ is $\lambda$-expander, we have
\begin{equation}\label{eq:var bound}
\left |\Var(Z_t)-\frac{t}{4}\right |\leq \frac{t}{2}\frac{\lambda}{1-\lambda}.
\end{equation}
\end{proposition}
As a consequence, if $\sigma^2$ is the variance appearing in Theorem \ref{theo:main}, we have $|4\sigma^2-1|\leq\frac{2\lambda}{1-\lambda}$. In practice, we are interested in the case when $\lambda$ is small. In that case, we can write $2\sigma=1+O(\lambda)$, and so $z$ is close to $1$. If $4\sigma^2>1$, a trivial upper bound for $\prob{\frac{z}{2\sigma} \leq \mathcal{N}(0,1)\leq z}$ is $z-\frac{z}{2\sigma}=z\frac{2\sigma-1}{2\sigma}=O(\lambda)$. A similar argument when $4\sigma^2<1$ gives
\[ \lim_{t \to \infty} \|Z_t-B_t\|_{TV}= O(\lambda).\]
As we comment later on, the main result of \cite{cohen2022expander} shows that this bound is sharp and holds for any fixed $t$.
As we discussed above, in general the ability of a random walk on an expander graph to fool symmetric functions does not increase as $t$ grows. However, the sequence of bits generated by the random walk serves as a replacement for a sample of bits obtained from the sticky random walk for a convenient parameter $p$ as $t$ grows.
\begin{theorem}\label{theo: sticky}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. Fix a balanced labelling $\Val \colon V \longrightarrow \{0,1\}$ on $G$ and let $(X_i)$ be the simple random walk on $G$ started from a vertex chosen uniformly at random from $V$. Let $(Q_i)$ be the sticky random walk on $\{0,1\}$ with parameter $p=\frac{4\sigma^2-1}{4\sigma^2+1}$, where $\sigma^2$ is given by Theorem \ref{theo:main}. For $Z_t=\sum_{i=0}^{t-1} \Val(X_i)$ and $R_t=\sum_{i=0}^{t-1} Q_i$, we have
\[ \lim_{t \to \infty} \|Z_t-R_t\|_{TV} = 0.\]
\end{theorem}
The paper is organized as follows. In Section \ref{sec:preliminaries} we introduce notation and definitions that will be used throughout the paper. In Section \ref{sec:previous work} we discuss significant previous work of several authors on this topic. Section \ref{sec:main result} is dedicated to the proof of Theorem \ref{theo:main}. In Section \ref{sec:variance} we prove Proposition \ref{prop: variance} and Corollary \ref{cor:main}. Section \ref{sec:sticky} is devoted to the proof of Theorem \ref{theo: sticky}. Finally, in Section \ref{sec:extension} we extend our results to unbalanced labellings.
\section{Preliminaries}\label{sec:preliminaries}
Given two probability measures $\mu$, $\nu$ on a set $V$, their \textit{total variation distance} is
\[\|\mu-\nu\|_{TV} = \max_{A\subseteq V}|\mu(A)-\nu(A)|.\]
Let $G=(V,E)$ be a connected graph with $n$ vertices, where $V$ is the set of vertices and $E$ the set of edges. We say that $G$ is $d$\textit{-regular} if every vertex $v \in V$ has \textit{degree} $d$, that is, $v$ has exactly $d$ adjacent vertices. Let $(X_i)$ be the \textit{simple random walk} on $G$ started at a vertex chosen uniformly at random from $V$, i.e., at every step the chain moves to an adjacent vertex chosen uniformly at random. The \textit{transition matrix} of $(X_t)$ is denoted by $P$, and its \textit{stationary distribution} is denoted by $\pi$. Notice that $\pi$ is the uniform distribution on $V$ since $G$ is regular. It is well known that $P$ is a self-adjoint stochastic matrix, and so it has real eigenvalues $1=\lambda_1>\lambda_2\geq\ldots\geq \lambda_n\geq -1$. Write $\lambda^*=\max\{|\lambda_j|\colon j\geq 2\}$. We say that $G$ is a $\lambda$\textit{-expander} graph if $\lambda^* \leq \lambda$. Denote by $\langle\cdot,\cdot \rangle$ the usual inner product on $\mathbb{R}^V$, given by $\langle f,g \rangle= \sum_{x \in V} f(x)g(x)$. We will also consider the inner product $\langle \cdot, \cdot\rangle_\pi$ defined by
$$\langle f,g \rangle_\pi = \sum_{x \in V} f(x)g(x)\pi(x) \quad \forall \, f,g\colon V \longrightarrow \mathbb{R}.$$
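When $G$ is $d$-regular, $\pi$ is uniform, and so the two inner products are proportional (a simple observation that we will use repeatedly without further mention):
$$\langle f,g \rangle_\pi = \frac{1}{n}\langle f,g \rangle \quad \forall \, f,g\colon V \longrightarrow \mathbb{R}.$$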
The spectral theorem applied to $P$ gives an orthonormal basis (with respect to $\langle \cdot,\cdot\rangle_\pi$) of eigenvectors $(f_j)_{j=1}^n$ corresponding to the eigenvalues $(\lambda_j)$. As a consequence, for any $f \colon V \longrightarrow \mathbb{R}$ we have
\begin{equation}\label{eq:spectral form Pf}
P^t f (x)= \sum_{j=1}^n \langle f,f_j \rangle_\pi f_j(x)\lambda_j^t \quad \forall \, x \in V \quad \forall \, t \in \mathbb{N}.
\end{equation}
We refer to Lemma 12.2 in \cite{levin2017markov} for a more detailed explanation. We say that a random variable $X$ is \textit{integrable} if $\mathbb{E}|X|<\infty$. We say that $X$ has a \textit{lattice distribution} if there exists $h>0$ such that $(X-a)/h \in \mathbb{Z}$ almost surely for some $a \in \mathbb{R}$. If $X$ has a lattice distribution, then the largest such $h$ is called the \textit{span} of $X$, and is denoted by $h_X$. If $X$ is non-lattice, then we set $h_X=0$. To prove our main result we will use the following general local central limit theorem.
\begin{theorem}[Theorem 2.1 in \cite{penrose2011local}]\label{theo:Yuval}
Let $V,V_1,V_2,V_3,\ldots$ be independent identically distributed random variables with positive variance. Suppose for each $t \in \mathbb{N}$ that $(Y_t,S_t,Z_t)$ is a triple of integrable random variables on $\mathbb{Z}$ such that (i) $Y_t$ and $S_t$ are independent, with $S_t= \sum_{i=0}^t V_i$; (ii) both $t^{-1/2}\mathbb{E}|Z_t-(Y_t+S_t)|$ and $t^{1/2}\prob{Z_t\neq Y_t+S_t}$ tend to zero as $t\to\infty$; and (iii) for some $\sigma \in [0,\infty)$,
\begin{equation}\label{eq: Penrose-Yuval 1}
t^{-1/2}(Z_t-\mathbb{E}(Z_t))\xrightarrow{\mathcal{D}} \mathcal{N}(0,\sigma^2)\quad \mbox{as } t\to \infty.
\end{equation}
Then, $\Var(V)\leq \sigma^2$ and
\[ t^{1/2}\sup_{u\in\mathbb{R}} \left\{ \left |\prob{Z_t \in [u,u+h_V)} - t^{-1/2}\sigma^{-1}h_V\phi\left (\frac{u-\mathbb{E}(Z_t)}{t^{1/2}\sigma}\right )\right | \right\}\to 0\quad \mbox{as } t \to \infty.\]
\end{theorem}
In other words, if $Z_t$ satisfies a central limit theorem and can be expressed (with high probability) as a sum of two independent random variables, one of which satisfies a central limit theorem while the other satisfies a local central limit theorem, then $Z_t$ also satisfies a local central limit theorem. It is known that the simple random walk on expander graphs satisfies a central limit theorem (see Theorem C in \cite{kloeckner2017effective}). The strategy to prove Theorem \ref{theo:main} will be to decompose $Z_t$ as a sum of two independent random variables, one of which satisfies a local central limit theorem. The asymptotic behavior of $Z_t$ can then be easily studied using Theorem \ref{theo:main}. In fact, the expander Chernoff bound \cite{gillman1998a} tells us that $Z_t$ is highly concentrated around its mean. Therefore, a local central limit theorem for $Z_t$ implies convergence in total variation distance to a normal distribution. This idea is discussed in more detail in the proof of Corollary \ref{cor:main}.
\section{Previous work}\label{sec:previous work}
The \textit{sticky random walk} is a Markov chain on $\{0,1\}$. The initial state is chosen uniformly at random. At each step, the chain stays at the same state with probability $\frac{1+p}{2}$, and switches states with probability $\frac{1-p}{2}$. This straightforward chain can be seen as a simplified version of general random walks on expander graphs. If $(X_i)$ is the sticky random walk, the \textit{Hamming weight} of $(X_i)$ is given by $(\sum_{k=0}^i X_k)$. In the recent paper \cite{guruswami2021pseudobinomiality}, among other results Guruswami and Kumar showed that the total variation distance between the Hamming weight of the sticky random walk and the binomial distribution is $\Theta(\lambda)$. A major open problem they raise is whether the same is true for random walks on $\lambda$-expanders. This problem has been studied very recently by several authors. We discuss here the most significant advances on the matter.
Cohen, Peri, and Ta-Shma present a Fourier-analytic approach to study random walks on expander graphs in \cite{cohen2021expander}. Using our notation, let $(X_i)$ be the simple random walk on $G$ with uniform initial distribution, and $(U_i)$ be a sequence of independent vertices chosen uniformly at random. The main result in \cite{cohen2021expander} states that the simple random walk on $\lambda$-expander graphs ``fools" symmetric functions for small values of $\lambda$. More precisely, Theorem 1.1 in \cite{cohen2021expander} states that, given $t\in \mathbb{N}$, if $f \colon\{ 0,1\}^t \longrightarrow \{0,1\}$ is a symmetric function, then for every balanced labelling $\Val \colon V \longrightarrow\{0,1\}$ we have
\begin{equation}\label{eq:cohen symmetric funcions} |\mathbb{E}(f(\Val(X_0),\ldots,\Val(X_{t-1}))) - \mathbb{E}(f(\Val(U_0),\ldots,\Val(U_{t-1})))| = O(\lambda \cdot \log^{3/2}(1/\lambda)).
\end{equation}
Let $Z_t=\sum_{k=0}^{t-1}\Val(X_k)$ and $B_t=\sum_{k=0}^{t-1} \Val(U_k)$. In \cite{cohen2021expander} it is stated that the best distinguishing probability a symmetric function can achieve on $(X_i)$ and $(U_i)$ is the same as the total variation distance between $Z_t$ and $B_t$. Thus, Theorem 1.3 in \cite{cohen2021expander} gives the following equivalent result.
\begin{equation}\label{eq:cohen main}
\| Z_t - B_t\|_{TV}= O(\lambda \cdot \log^{3/2}(1/\lambda)).
\end{equation}
The authors then propose several open questions. On the one hand, they wonder if (\ref{eq:cohen main}) holds for unbalanced labellings. They also ask whether the above bound is sharp. These questions are addressed by Theorem 3 in \cite{cohen2022expander}. It states that for any labelling $\Val\colon V \longrightarrow\{1,-1\}$ with $\mathbb{E}(\Val)=\mu \in (-1,1)$ and $0<\lambda<\frac{1-|\mu|}{128e}$ we have
\begin{equation}\label{eq:cohen 2 theorem 3}
\|Z_t-B_t\|_{TV}\leq \frac{124}{\sqrt{1-|\mu|}}\lambda.
\end{equation}
An equivalent bound is achieved by Corollary 2 in \cite{golowich2022pseudorandomness}, which also provides interesting bounds for the tails of the distributions. Moreover, Corollary 4 in \cite{golowich2022pseudorandomness} extends these bounds from binary to arbitrary labellings. Finally, the authors show that the dependence on $\lambda$ in the above results is sharp up to a constant (see Theorem 5 in \cite{golowich2022pseudorandomness}).
On the other hand, while \cite{cohen2021expander} shows that the total variation distance between $Z_t$ and $B_t$ vanishes with $\lambda$, it leaves open the possibility that a better convergence exists, namely, for some fixed $\lambda$ the total variation distance goes to $0$ as the length of the walk $t$ grows. This is the case for some well-known symmetric functions, such as $\operatorname{AND}$, $\operatorname{OR}$, and $\operatorname{PARITY}$, where the error decreases exponentially with $t$ (see \cite{cohen2021expander} for details), and $\operatorname{MAJ}$, where the error goes down polynomially with $t$ (see Theorem 4.6 in \cite{cohen2021expander}). This question is addressed by Theorem 1 in \cite{cohen2022expander}, which presents a family of symmetric functions $f_t \colon\{1,-1\}^t \longrightarrow \{1,-1\}$ such that for every $\lambda$ there is a $\lambda$-expander graph and a balanced labelling $\Val \colon V \longrightarrow \{1,-1\}$ such that
\begin{equation}\label{eq:cohen2 theorem1} \|Z_t - B_t\|_{TV}=\Theta(\lambda) \quad \forall \, t \in \mathbb{N}.
\end{equation}
However, this bound is obtained using Cayley graphs over Abelian groups, which cannot provide constant degree expanders. An open question that the authors in \cite{cohen2022expander} propose is whether a similar bound holds for constant degree graphs. They also ask about the existence of a family of expander graphs that fools all symmetric functions with error going down to zero as the length of the walk $t$ grows, independently of the chosen labelling. Both of these questions are answered by Theorem \ref{theo:main}. Recall that the total variation distance between $Z_t$ and $B_t$ measures the best distinguishing probability of symmetric functions. In view of Corollary \ref{cor:main}, the error will go to zero as $t$ grows only when $t^{-1/2}Z_t$ and $t^{-1/2}B_t$ share the same variance, that is, $\sigma^2=\frac{1}{4}$. Taking a look at (\ref{eq:var of sum}) we see that this happens when the sum of the covariances is $0$, which is not true in general.
\section{Proof of Theorem \ref{theo:main}}\label{sec:main result}
This section is devoted to proving Theorem \ref{theo:main}. The following classical lemma will be useful for this purpose.
\begin{lemma}[Expander mixing lemma]\label{lem:expander mixing lemma}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. For any subsets $F_1$, $F_2$ of $V$ we have
\begin{equation}\label{eq: expander mixing lemma}
\left ||E(F_1,F_2)|-\frac{d}{n}|F_1||F_2|\right |\leq \lambda d \sqrt{\left (|F_1|-\frac{|F_1|^2}{n}\right )\left (|F_2|-\frac{|F_2|^2}{n}\right )},
\end{equation}
where $|E(F_1,F_2)|=|\{(x,y) \in F_1 \times F_2 \colon \{x,y\} \in E\}|$ is the number of edges connecting $F_1$ and $F_2$ (counting edges contained in the intersection of $F_1$ and $F_2$ twice).
\end{lemma}
We will need lower bounds for $|E(F_1,F_2)|$ when $F_1$ is a subset of $V$ and $F_2$ is its complement.
\begin{corollary}\label{cor:expander mixing lemma}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. For any $F_1\subseteq V$ and $F_2= \stcomp{F_1}$ we have
\[ |E(F_1,F_2)|\geq \frac{1}{2}(1-\lambda)d \min\{|F_1|,|F_2|\}.\]
\end{corollary}
\begin{proof}
Since $F_2=\stcomp{F_1}$, we have $|F_1|-\frac{|F_1|^2}{n}=|F_2|-\frac{|F_2|^2}{n}=\frac{|F_1||F_2|}{n}$. Thus, the right hand side in (\ref{eq: expander mixing lemma}) is equal to $\lambda\frac{d}{n}|F_1||F_2|$. The result follows from the fact that $\frac{x(n-x)}{n}\geq \frac{x}{2}$ for every $x\in\left [0,\frac{n}{2}\right ]$, which holds since $n-x\geq \frac{n}{2}$ in that range.
\end{proof}
Given a labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$, let $A=\{x \in V \colon \Val(x)=0\}$ and $B=\{x \in V \colon \Val(x)=1\}$. Clearly, if the labelling $\Val$ is balanced, then $|A|=|B|=\frac{n}{2}$. Given $x \in V$, write $q(x)$ for the number of neighbors $y$ of $x$ with $\Val(y)=0$. Define the sets
\[ A_j = \{x\in A\colon q(x)=j\} \quad \mbox{and} \quad B_j=\{x \in B \colon q(x)=j\} \quad \forall \, j \in \{0,\ldots,d\}.\]
The next lemma shows that for some $k \in \{1,\ldots,d-1\}$ either the set $A_k$ or the set $B_k$ is not small.
\begin{lemma}\label{lem:A_j or B_j}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices and fix a balanced labelling $\Val$ on $G$. Write $\delta=\frac{(1-\lambda)^2}{12}$. Then, there is $k \in \{1,\ldots,d-1\}$ such that either $$|A_k|\geq \frac{\delta|A|}{d-1}\quad \mbox{or} \quad |B_k|\geq \frac{\delta|B|}{d-1}.$$
\end{lemma}
\begin{proof}
Assume that the statement is false. Then, we must have
\begin{equation}
\label{eq:|A_0|+|A_d|} |A_0|+|A_d|>(1-\delta)|A| \quad \mbox{and} \quad |B_0|+|B_d|> (1-\delta)|B|.
\end{equation}
Take $F_1=A$ and $F_2=B$. Corollary \ref{cor:expander mixing lemma} gives $|E(A,B)|\geq (1-\lambda) d\frac{n}{4}.$ Notice also that
$$|E(A,B)|\leq d |A\setminus A_d|\leq d(|A_0|+\delta|A|).$$
Therefore, we obtain
\begin{equation}\label{eq:Ad}
|A_0|\geq (1-\lambda)\frac{n}{4}-\delta\frac{n}{2}=(1-\lambda-2\delta)\frac{n}{4}.
\end{equation}
A completely analogous argument replacing $A_d$ with $B_0$ and $A_0$ with $B_d$ gives
\begin{equation}\label{eq:B0}
|B_d|\geq (1-\lambda-2\delta)\frac{n}{4}.
\end{equation}
Next, take $F_1=F_2=A$. The expander mixing lemma gives $|E(A,A)|\geq (1-\lambda) d\frac{n}{4}.$ Observe also that
\[ |E(A,A)|\leq d|A\setminus A_0|\leq d(|A_d|+\delta|A|).\]
Thus, we get
\begin{equation}\label{eq:A0} |A_d|\geq (1-\lambda-2\delta)\frac{n}{4}.
\end{equation}
Similarly, taking $F_1=F_2=B$ we obtain
\begin{equation}\label{eq:Bd}
|B_0|\geq (1-\lambda-2\delta)\frac{n}{4}.
\end{equation}
Finally, take $F_1=A_d\cup B_0$ and $F_2=\stcomp{F_1}$. In view of (\ref{eq:Ad}), (\ref{eq:B0}), (\ref{eq:A0}), and (\ref{eq:Bd}) we have $\min\{|F_1|,|F_2|\}\geq (1-\lambda-2\delta)\frac{n}{2}$. Hence, Corollary \ref{cor:expander mixing lemma} gives
\begin{align*} |E(F_1,F_2)|&\geq (1-\lambda) d (1-\lambda-2\delta)\frac{n}{4}=(1-\lambda)^2 d\left (1- \frac{1-\lambda}{6}\right )\frac{n}{4}\geq \frac{5}{6}(1-\lambda)^2d\frac{n}{4}= \frac{5}{2}d\delta n\geq 2d\delta|A|.
\end{align*}
Therefore, we must have either $|E(A_d,F_2)|\geq d\delta |A|$ or $|E(B_0,F_2)|\geq d\delta |B|$. In the first case, for any $e=\{x,y\} \in E$ with $x \in A_d$ and $y \in F_2$, we must have $\Val(y)=0$. Thus, $y \in A\setminus A_d$. Moreover, $y$ cannot belong to $A_0$ since it is adjacent to $x$ and $\Val(x)=0$. Therefore, $y \in A\setminus(A_0\cup A_d)$. Consequently,
\[ |A\setminus(A_0\cup A_d)|\geq \frac{|E(A_d,F_2)|}{d}\geq \delta|A|,\]
which contradicts (\ref{eq:|A_0|+|A_d|}). If $|E(B_0,F_2)|\geq d \delta |B|$ we obtain $|B\setminus(B_0\cup B_d)|\geq \delta|B|$, also a contradiction.\qedhere
\end{proof}
Let $(X_i)$ be the simple random walk on a $d$-regular $\lambda$-expander graph $G$. Recall that $Z_t=\sum_{i=0}^{t-1} \Val(X_i)$. As we previously stated, we use Theorem \ref{theo:Yuval} to prove our main result Theorem \ref{theo:main}. The sets $A_k$ and $B_k$ provided by Lemma \ref{lem:A_j or B_j} will be used to decompose $Z_t$ as a sum of two independent random variables $Y_t$ and $S_t$, where $S_t$ satisfies a local central limit theorem. The strategy consists in using the fact that, as we run the chain, we frequently see cycles of length $2$. More precisely, $X_i=X_{i+2}$ with probability $\frac{1}{d}$. If we assume, for instance, that $|A_k|\geq \frac{\delta|A|}{d-1}$, then many of these $2$-cycles will start at a vertex of $A_k$. The contribution to $Z_t$ of each of these $2$-cycles is either $0$ with probability $\frac{k}{d}$, or $1$ with probability $\frac{d-k}{d}$, and the contributions are independent of each other. If $S_t$ represents the total contribution of the $2$-cycles starting from $A_k$, and $Y_t$ represents the contribution of the rest of the walk, then $S_t$ satisfies a local central limit theorem and $Z_t=S_t+Y_t$. Although the idea is simple, making it rigorous requires a careful analysis of these variables. The following Chernoff-type tail bound for the binomial distribution will be used for that purpose. For $a>0$ set $\varphi(a)=1-a+a\log a$. Then, $\varphi(1)=0$ and $\varphi(a)>0$ for $a\in(0,\infty)\setminus\{1\}$.
\begin{lemma}[Lemma 8.1 in \cite{penrose2011local}]\label{lem: binomial}
Let $N$ be a binomial distributed random variable with $\mathbb{E}(N)=\mu>0$. Then,
\[ \prob{N\leq x}\leq e^{-\mu\varphi\left (\frac{x}{\mu}\right )}\quad \forall \, 0<x\leq \mu.\]
\end{lemma}
We can now present the proof of our main result.
\begin{proof}[Proof of Theorem \ref{theo:main}]
First, apply Lemma \ref{lem:A_j or B_j} to find $k \in \{1,\ldots,d-1\}$ such that $\max\{|A_k|,|B_k|\}\geq \frac{\delta|A|}{d-1}$, where $\delta=\frac{(1-\lambda)^2}{12}$. By symmetry, we may assume that $|A_k|\geq \frac{\delta|A|}{d-1}$. Observe that $\prob{X_2=x |X_0=x}=\frac{1}{d}$ for every $x \in V$. For $t>0$, let $N_{t}$ be a random variable that counts the number of times that one of these cycles of length $2$ starting from a vertex of $A_k$ appears at an even time within the first $t$ steps. We claim that $N_{t}$ follows a binomial distribution. Let $(X^2_i)$ denote the $2$-steps simple random walk, that is, the Markov chain with transition matrix $P^2$. Then, $N_{t}$ counts the number of times that $X^2_i=X^2_{i+1}$ with $X^2_i \in A_{k}$ within the first $\lfloor \frac{t}{2} \rfloor$ steps of the chain. Since the initial distribution is the uniform one, which is the stationary distribution of $(X^2_i)$, we have that
\[ \prob{X^2_i \in A_{k}}=\frac{|A_k|}{n}\geq \frac{\delta}{2(d-1)}\quad \forall \, i \in \mathbb{N}\cup\{0\}.\]
Moreover $\prob{X^2_i=X^2_{i+1}}=\frac{1}{d}$ for every $i \in \mathbb{N}\cup\{0\}$. Since these two events are independent, we get
\[ \prob{X^2_i \in A_k \mbox{ and } X^2_i=X^2_{i+1}}=\frac{|A_k|}{dn}\geq \frac{\delta}{2d(d-1)} \quad \forall \, i \in \mathbb{N}\cup\{0\}.\]
For $i \in \mathbb{N}\cup\{0\}$, let $U_i$ be the random variable given by
\[
U_i=
\begin{cases}
1 &\quad\mbox{if } X^2_i \in A_k \mbox{ and } X^2_i=X^2_{i+1};\\
0 &\quad \mbox{ otherwise.} \\
\end{cases}
\]
The above calculations show that $(U_i)$ is a sequence of independent Bernoulli random variables with parameter $p=\frac{|A_k|}{dn}\geq \frac{\delta}{2d(d-1)}$. Finally, we can write
\[ N_{t}=\sum_{i=0}^{\lfloor \frac{t}{2} \rfloor -1} U_i.\]
Therefore $N_{t}$ follows a binomial $\operatorname{Bin}(\lfloor \frac{t}{2}\rfloor,p)$. Let $b_t=\frac{\mathbb{E}(N_t)}{2}=\frac{|A_k|}{2dn}\lfloor \frac{t}{2} \rfloor$. For every $i \in\{1,\ldots,\min\{b_t,N_t\}\}$, let $(x_i,y_i)$ be the vertices appearing in the $i$-th $2$-cycle, where $x_i\in A_k$ and $y_i$ is some neighbor of $x_i$. Then, let $V_i$ be the indicator of the event that $\Val(y_i)=1$ (which happens with probability $\frac{d-k}{d}$). Then, $V_1,V_2,\ldots$ are independent identically distributed random variables. Moreover, the $i$-th $2$-cycle adds $V_i$ to the sum of the labels. Consider as well $\widetilde{V}_1,\widetilde{V}_2,\ldots$, identically distributed random variables independent of all the previous ones, given by
\[
\widetilde{V}_1=
\begin{cases}
1 &\quad\mbox{with probability } \frac{d-k}{d};\\
0 &\quad \mbox{ otherwise.} \\
\end{cases}
\]
Define the random variables
\[ S'_t=\sum_{i=1}^{\min\{b_t,N_t\}} V_i, \quad Y_t= Z_t - S'_t, \quad \mbox{and}\quad S_t=S'_t+\sum_{i=1}^{(b_t-N_t)^+} \widetilde{V}_i. \]
We will conclude the proof by showing that $(Y_t,S_t,Z_t)$ satisfy the hypotheses of Theorem \ref{theo:Yuval} with $h_V=1$. First, $S_t$ and $Y_t$ are independent. Indeed, whenever we have a $2$-cycle $(x,y)$ with $x\in A_k$, the label of $y$ will not affect the rest of the walk. Second, notice that
\[ Z_t-(Y_t+S_t)=S'_t-S_t=-\sum_{i=1}^{(b_t-N_t)^+} \widetilde{V}_i.\]
Using Lemma \ref{lem: binomial} we obtain
\[ \prob{N_t\leq b_t}\leq e^{-2b_t\varphi(\frac{1}{2})}\leq e^{-C(d,\lambda)t},\]
for some constant $C(d,\lambda)>0$ that only depends on $d$ and $\lambda$. Therefore, both $t^{-1/2}\mathbb{E}|Z_t-(Y_t+S_t)|$ and $t^{1/2}\prob{Z_t\neq Y_t+S_t}$ tend to zero as $t\to\infty$. Finally, the fact that $(Z_t)$ satisfies the central limit theorem (as stated in (\ref{eq: Penrose-Yuval 1})) follows from Theorem C in \cite{kloeckner2017effective}.
\end{proof}
\section{Study of the variance}\label{sec:variance}
This section is dedicated to prove Proposition \ref{prop: variance} and Corollary \ref{cor:main}. Let $G=(V,E)$ be a $d$-regular $\lambda$-expander graph, let $\Val \colon V\longrightarrow \{0,1\}$ be a balanced labelling, and let $(X_i)$ be the simple random walk on $G$ starting from a vertex chosen uniformly at random. For simplicity, write $Y_i=\Val(X_i)$. Then, the variance of a sum of random variables is given by
\begin{equation}\label{eq:var of sum}
\Var(Z_t)=\sum_{i=0}^{t-1} \Var(Y_i) + 2\sum_{i<j} \Cov(Y_i,Y_j).
\end{equation}
Observe that $\Var(Y_i)=\mathbb{E}(Y_i^2)- \mathbb{E}(Y_i)^2 = \frac{1}{2} - \frac{1}{4}=\frac{1}{4}$ and $\Cov(Y_i,Y_j)=\mathbb{E}(Y_iY_j) - \frac{1}{4}$. Recall that $A=\{x \in V \colon \Val(x)=0\}$ and $B=\stcomp{A}$. We have
\begin{equation}\label{eq: E(Y_i Y_j)} \mathbb{E}(Y_iY_j)= \prob{Y_i=1,Y_j=1}=\frac{1}{2}\prob{Y_j=1|Y_i=1}=\frac{1}{2}\prob{X_{j-i}\in B|X_0 \in B}.
\end{equation}
Thus, we want to find the probability that the chain is at a vertex of $B$ after $j-i$ steps when the initial vertex is chosen uniformly at random from $B$.
\begin{lemma}\label{lem:prob Xk in A}
Let $G=(V,E)$ be a $d$-regular connected graph with $n$ vertices and let $B\subseteq V$ with $|B|=\frac{n}{2}$. If $(X_i)$ is the simple random walk on $V$ starting uniformly at random from $B$, we have
\[\prob{X_k \in B} = \pi(B)+\frac{1}{2} \sum_{j=2}^n \langle \pi_B,f_j\rangle^2 \lambda^k_j \quad \forall \, k \in \mathbb{N},\]
where $(f_j)$ is the orthonormal basis of eigenvectors corresponding to the eigenvalues $(\lambda_j)$ and $\pi_B$ is the uniform distribution on $B$.
\end{lemma}
\begin{proof}
Write $1_B$ for the indicator function of $B$, that is, $1_B(y)=1$ if $y\in B$ and $1_B(y)=0$ otherwise. Recall that $P$ denotes the transition matrix of $(X_i)$. By linearity, $P^k \pi_B$ is the vector of probabilities of the chain $(X_i)$ after $k$ steps. Therefore, $\prob{X_k \in B}=\langle P^k \pi_B,1_B\rangle$. We can use (\ref{eq:spectral form Pf}) to decompose $P$ and obtain
\[ P^k \pi_B= \sum_{j=1}^n \langle \pi_B,f_j\rangle_\pi f_j \lambda^k_j=\pi + \sum_{j=2}^n \langle \pi_B,f_j\rangle_\pi f_j \lambda^k_j.\]
Since $\pi_B=\frac{2}{n}1_B$, we conclude
\[ \langle P^k\pi_B,1_B\rangle = \pi(B) + \sum_{j=2}^n \langle \pi_B,f_j\rangle_\pi \langle f_j,1_B\rangle \lambda^k_j= \pi(B)+\frac{1}{2} \sum_{j=2}^n \langle \pi_B,f_j\rangle^2 \lambda^k_j.\qedhere\]
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop: variance}]
Recall that $A=\{x\in V \colon \Val(x)=0\}$ and $B=\stcomp{A}$. Fix $0\leq i<j\leq t-1$ and write $k=j-i$. Since $\prob{Y_i=1,Y_j=1}=\frac{1}{2}\prob{X_{j-i}\in B|X_0 \in B}$, Lemma \ref{lem:prob Xk in A} and (\ref{eq: E(Y_i Y_j)}) give
\[ \Cov(Y_i,Y_j)=\mathbb{E}(Y_i Y_j)- \frac{1}{4} = \frac{1}{4} \sum_{l=2}^n \langle \pi_B,f_l\rangle^2\lambda^{k}_l \quad \forall\, i<j.\]
Adding all covariances yields
\[ \sum_{i<j} \Cov (Y_i,Y_j) = \frac{1}{4}\sum_{k=1}^{t-1}(t-k)\sum_{j=2}^n\langle \pi_B,f_j\rangle^2\lambda^{k}_j.\]
The formula (\ref{eq:var formula}) follows now from (\ref{eq:var of sum}). Finally, if we assume that $G$ is $\lambda$-expander we get
\begin{align*}
\left |\Var(Z_t)-\frac{t}{4}\right |&\leq\frac{1}{2} \sum_{k=1}^{t-1}(t-k)\lambda^{k}\sum_{j=2}^n\langle \pi_B,f_j\rangle^2=\frac{1}{2} \sum_{k=1}^{t-1} (t-k) \lambda^k (n\|\pi_B\|_2^2-\langle \pi_B,f_1\rangle^2) \leq \frac{t}{2}\sum_{k=1}^{t-1}\lambda^k
\leq \frac{t}{2}\frac{\lambda}{1-\lambda}.\qedhere
\end{align*}
\end{proof}
We can use the bound of the variance provided by Proposition \ref{prop: variance} to get Corollary \ref{cor:main} as a consequence of Theorem \ref{theo:main} and the Chernoff-type bound for random walks on expander graphs given in \cite{gillman1998a}.
\begin{proof}[Proof of Corollary \ref{cor:main}]
Given $\sigma^2\geq1$, let $C_{\sigma^2}$ be the normalizing constant appearing in (\ref{eq:disc normal}). We claim that $C_{\sigma^2}=1+O(\sigma^{-1})$. Indeed,
\begin{align*}
\sum_{u \in\mathbb{Z}} \sigma^{-1}\phi\left (\frac{u}{\sigma}\right )&=\sigma^{-1}\phi(0) + 2\sum_{u \geq 1} \sigma^{-1}\phi\left (\frac{u}{\sigma}\right ) = 2\left (\sum_{u\geq 0} \sigma^{-1}\phi\left (\frac{u}{\sigma}\right )\right ) - \sigma^{-1}\phi(0)\\
&\geq 2 \int_0^\infty \sigma^{-1}\phi\left (\frac{u}{\sigma}\right )\, du - \sigma^{-1}\phi(0) =1- \sigma^{-1}\phi(0)=1-\frac{1}{\sqrt{2\pi}}\sigma^{-1}.
\end{align*}
Similarly,
\begin{align*}
\sum_{u \in\mathbb{Z}} \sigma^{-1}\phi\left (\frac{u}{\sigma}\right )&=\sigma^{-1}\phi(0) + 2\sum_{u \geq 1} \sigma^{-1}\phi\left (\frac{u}{\sigma}\right ) \leq \sigma^{-1}\phi(0)+ 2 \int_0^\infty \sigma^{-1}\phi\left (\frac{u}{\sigma}\right )\, du = 1+\frac{1}{\sqrt{2\pi}} \sigma^{-1}.
\end{align*}
This shows that $|C_{\sigma^2}^{-1}-1|\leq \frac{1}{\sqrt{2\pi}} \sigma^{-1}$, from which we easily get that $C_{\sigma^2}= 1+O(\sigma^{-1})$. In fact, a straightforward computation gives $|C_{\sigma^2}-1|\leq \frac{3}{2\sqrt{2\pi}}\sigma^{-1}$.
Recall that $B=\{x\in V \colon \Val(x)=1\}$. Notice that $Z_t$ is the number of times that $(X_i)$ visits $B$ within the first $t$ steps of the chain. Then, Theorem 2.1 in \cite{gillman1998a} gives that for any $\gamma\geq 0$,
\[ \prob{Z_t-\frac{t}{2}\geq \gamma} \leq \left (1+\frac{\gamma(1-\lambda)}{10t}\right )e^{-\gamma^2(1-\lambda)/20t}. \]
Applying \cite[Theorem 2.1]{gillman1998a} to $\stcomp{B}$ instead gives the same inequality for $\prob{Z_t-t/2\leq -\gamma}$. Take $\gamma=c\sqrt{t}$ for some $c>0$. Then,
\begin{equation}\label{eq:chernoff bound Nt}
\prob{\left |Z_t-\frac{t}{2}\right |\geq c\sqrt{t}} \leq (1+c)e^{-c^2\frac{(1-\lambda)}{20}},
\end{equation}
which converges to $0$ when $c$ goes to infinity. For simplicity, write
\[Q_t=\sup_{u\in\mathbb{Z}} \left\{ \left |\prob{Z_t=u} - t^{-1/2}\sigma^{-1}\phi\left (\frac{u-t/2}{t^{1/2}\sigma}\right )\right | \right\}.\]
Theorem \ref{theo:main} gives $Q_t=o(t^{-1/2})$. Recall that $N_d(t/2,t\sigma^2)$ is a random variable on $\mathbb{Z}$ with density function $f_{N_d(t/2,t\sigma^2)}(u)=C_{t\sigma^2} t^{-1/2}\sigma^{-1}\phi\left (\frac{u-t/2}{t^{1/2}\sigma}\right )$. We have
\begin{align*}
\|Z_t - N_d(t/2,t\sigma^2)\|_{TV}&= \frac{1}{2}\sum_{u\in \mathbb{Z}} |\prob{Z_t=u} - f_{N_d(t/2,t\sigma^2)}(u)|\\
&\leq \frac{1}{2} \sum_{|u-t/2|\geq c\sqrt{t}} (\prob{Z_t=u} + f_{N_d(t/2,t\sigma^2)}(u)) + \frac{1}{2}\sum_{|u-t/2|< c\sqrt{t}} |\prob{Z_t=u} - f_{N_d(t/2,t\sigma^2)}(u)|.
\end{align*}
The first sum can be bounded above by
\[ \frac{1}{2}\prob{|Z_t-t/2|\geq c\sqrt{t}} + \frac{C_{t\sigma^2}}{2}\prob{ |N_d(t/2,t\sigma^2)-t/2|\geq c\sqrt{t}}.\]
Since Proposition \ref{prop: variance} implies $\sigma^2\leq\frac{1}{2} \frac{1}{1-\lambda}$, we can use standard bounds for the tails of normal distributions to deduce that $\prob{ |N_d(t/2,t\sigma^2)-t/2|\geq c\sqrt{t}}$ also goes to $0$ as $c$ diverges. Finally, the second sum can be bounded above by
\begin{align*}
\sum_{|u-t/2|< c\sqrt{t}} |\prob{Z_t=u} - C_{t\sigma^2}^{-1}f_{N_d(t/2,t\sigma^2)}(u)| + |C_{t\sigma^2}^{-1}-1|\sum_{u \in\mathbb{Z}} f_{N_d(t/2,t\sigma^2)}(u)
\leq (2c\sqrt{t}+1)Q_t + \frac{1}{\sqrt{2\pi}} t^{-1/2}\sigma^{-1}.
\end{align*}
Thus, we can take $c=(Q_t\sqrt{t})^{-1/2}$ above to conclude that $\|Z_t -N_d(t/2,t\sigma^2)\|_{TV}$ goes to $0$ as $t$ grows.
\end{proof}
\section{Proof of Theorem \ref{theo: sticky}}\label{sec:sticky}
This section is devoted to proving Theorem \ref{theo: sticky}. We will show that sticky random walks satisfy the local central limit theorem. In view of Theorem \ref{theo:main}, we just need to match the mean and variance of $Z_t$ and $R_t$ to obtain the result. The mean and variance of the sticky random walk on $\{0,1\}$ are easy to calculate. We do it in the next lemma.
\begin{lemma}
Let $(Q_i)$ be the sticky random walk on $\{0,1\}$ with parameter $p \in (-1,1)$, and let $R_t=\sum_{i=0}^{t-1} Q_i$. Then, $\mathbb{E}(R_t)=\frac{t}{2}$ and
\[ \Var(R_t)=\frac{p^{t+1} -p(t+1)+t}{2(1-p)^2}-\frac{t}{4}.\]
In particular, $\lim_{t \to \infty} \Var(t^{-1/2}R_t)=\frac{1}{4}\frac{1+p}{1-p}$.
\end{lemma}
\begin{proof}
Recall that $Q_0$ is chosen uniformly at random on $\{0,1\}$. Hence, $\mathbb{E}(R_t)=\sum_{k=0}^{t-1} \mathbb{E}(Q_k)=\frac{t}{2}$. To calculate the variance,
let $P$ be the transition matrix of $(Q_i)$, that is,
\[ P =
\begin{pmatrix}
\frac{1+p}{2} & \frac{1-p}{2} \\
\frac{1-p}{2} & \frac{1+p}{2}
\end{pmatrix}.
\]
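The powers of $P$ are easily obtained by diagonalization (a standard computation that we include for the reader's convenience): $P$ has eigenvalues $1$ and $p$ with eigenvectors $(1,1)^\top$ and $(1,-1)^\top$, so
\[ P^k = \frac{1}{2}
\begin{pmatrix}
1+p^k & 1-p^k \\
1-p^k & 1+p^k
\end{pmatrix} \quad \forall \, k \in \mathbb{N}.\]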
In particular, $P^k(1,1)=\frac{1+p^k}{2}$ for every $k \in \mathbb{N}$. Consequently,
\[ \mathbb{E}(Q_0 Q_k)=\prob{Q_0=1, Q_k=1}=\frac{1+p^k}{4}.\]
Using stationarity and the Markov property we have
\begin{align*}
\mathbb{E}(R_t^2)&= \sum_{k=0}^{t-1} \mathbb{E}(Q_k^2) + 2\sum_{k=0}^{t-1} \sum_{j=k+1}^{t-1} \mathbb{E}(Q_k Q_j)
= \sum_{k=0}^{t-1} \mathbb{E}(Q_0^2) + 2\sum_{k=1}^{t-1}(t-k)\mathbb{E}(Q_0 Q_k)\\
&= \frac{t}{2} + \frac{1}{2}\sum_{k=1}^{t-1}(t-k)+\frac{1}{2}\sum_{k=1}^{t-1}(t-k)p^k
=\frac{t^2}{4} - \frac{t}{4} + \frac{1}{2}\sum_{k=0}^{t-1}(t-k)p^k= \frac{p^{t+1} -p(t+1)+t}{2(1-p)^2}-\frac{t}{4} + \frac{t^2}{4}.\qedhere
\end{align*}
\end{proof}
The next result shows that the local central limit theorem holds for sticky random walks.
\begin{lemma}\label{lemma:localsticky}
Let $(Q_i)$ be the sticky random walk on $\{0,1\}$ with parameter $p \in (-1,1)$, and let $R_t=\sum_{k=0}^{t-1} Q_k$. Then, for $\sigma^2=\frac{1}{4}\frac{1+p}{1-p}$ we have
\[ t^{1/2}\sup_{u\in\mathbb{Z}} \left\{ \left |\prob{R_t=u} - t^{-1/2}\sigma^{-1}\phi\left (\frac{u-t/2}{t^{1/2}\sigma}\right )\right | \right\}\to 0\quad \mbox{as } t \to \infty.\]
\end{lemma}
\begin{proof}
We will decompose $R_t$ into a sum of two independent random variables and use Theorem \ref{theo:Yuval} to obtain the result. Let $(Q_i^2)$ be the $2$-steps Markov chain, that is, the Markov chain with transition matrix $P^2$. Let $N_t$ be a random variable that counts the number of times that $Q_i^2\neq Q_{i+1}^2$ within $(Q_0,\ldots,Q_{t-1})$. Let $J=\{i \colon Q_i^2\neq Q_{i+1}^2\}$ and denote its elements as $j_1,\ldots, j_{N_t}$. Since $\prob{Q_i^2\neq Q_{i+1}^2}=\frac{1-p^2}{2}$ independently of previous values of the chain, we deduce that $N_t$ follows a binomial $\operatorname{Bin}(\lfloor \frac{t-1}{2} \rfloor, \frac{1-p^2}{2})$. For every $k \in \{1,\ldots,N_t\}$, let $V_k=Q_{2j_k+1}$ be the bit that we skip to go from $Q_{j_k}^2$ to $Q_{j_k+1}^2$. Then, $(V_k)$ is a sequence of independent Bernoulli random variables with parameter $\frac{1}{2}$. Take $b_t=\frac{\mathbb{E}(N_t)}{2}$ and define the random variables
\[ S'_t=\sum_{k=1}^{\min\{b_t,N_t\}} V_k, \quad Y_t=R_t - S'_t, \quad S_t=S'_t + \sum_{k=1}^{(b_t-N_t)^+} V'_k,\]
where $(V'_k)$ are Bernoulli random variables with parameter $\frac{1}{2}$ independent of everything else. We claim that $(Y_t,S_t,R_t)$ satisfies the hypotheses of Theorem \ref{theo:Yuval}. First, notice that $S_t$ and $Y_t$ are independent. Indeed, once we know that $Q_{2i}\neq Q_{2i+2}$, the value of $Q_{2i+1}$ does not affect the rest of the path. Moreover, $R_t= S_t+Y_t$ with high probability. More precisely, using Lemma \ref{lem: binomial} we have
\[ \prob{N_t \leq b_t}\leq e^{-\mathbb{E}(N_t) \varphi(1/2)}\leq e^{-C(p)(t-1)},\]
where $C(p)$ only depends on $p$. Therefore, both $t^{-1/2} \mathbb{E}|R_t -(Y_t+S_t)|$ and $t^{1/2} \prob{R_t\neq Y_t+S_t}$ tend to zero as $t \to \infty$. Finally, Lemma 17 in \cite{guruswami2021pseudobinomiality} shows that $t^{-1/2}(2R_t-t)$ converges in distribution to a normal $\mathcal{N}(0,\frac{1+p}{1-p})$, and so $t^{-1/2}(R_t-t/2)$ converges to $\mathcal{N}(0,\frac{1}{4}\frac{1+p}{1-p})$.
\end{proof}
We are ready to prove Theorem \ref{theo: sticky}.
\begin{proof}[Proof of Theorem \ref{theo: sticky}]
Recall that Corollary \ref{cor:main} says that $\|Z_t - N_d(t/2,t\sigma^2) \|_{TV}$ converges to $0$ as $t$ goes to infinity. We can repeat word for word the proof of Corollary \ref{cor:main} using $R_t$ instead of $Z_t$ and Lemma \ref{lemma:localsticky} instead of Theorem \ref{theo:main} to show that $\|R_t - N_d(t/2,\frac{t}{4} \frac{1+p}{1-p}) \|_{TV}$ also converges to $0$. Thus, if $\sigma^2=\frac{1}{4} \frac{1+p}{1-p}$ the result follows from the triangle inequality.
\end{proof}
\section{Generalizations to all labellings}\label{sec:extension}
In this section, we extend the previous results to labellings $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ with $\mathbb{E}(\operatorname{val})=\alpha \in [0,1]$. First, we need an extension of Lemma \ref{lem:A_j or B_j}.
\begin{lemma}\label{lem:A_j or B_j extended}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. Fix a labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ with $\mathbb{E}(\operatorname{val})=\alpha \in [0,1]$. Write $\delta=\frac{(1-\lambda)^2\min\{\alpha,1-\alpha\}}{8}$. Then, there is $k \in \{1,\ldots,d-1\}$ such that either
$$|A_k|\geq \frac{\delta(1-\alpha)n}{d-1}\quad \mbox{or} \quad |B_k|\geq \frac{\delta\alpha n}{d-1}.$$
\end{lemma}
\begin{proof}
First, assume that $\alpha\leq \frac{1}{2}$, so that $\delta=\frac{(1-\lambda)^2\alpha}{8}$. We will follow the argument in the proof of Lemma \ref{lem:A_j or B_j}. Suppose that the statement is false. Then,
\begin{equation}\label{eq:|A_0|+|A_d| extended}
|A_0|+|A_d|>(1-\delta)(1-\alpha)n \quad \mbox{and} \quad |B_0|+|B_d|> (1-\delta)\alpha n.
\end{equation}
Take $F_1=A$ and $F_2=B$. Corollary \ref{cor:expander mixing lemma} gives $|E(A,B)|\geq \frac{1}{2}(1-\lambda) d \alpha n$. Notice also that
$$|E(A,B)|\leq d |A\setminus A_d|\leq d(|A_0|+\delta|A|).$$
Therefore, we obtain
\begin{equation}\label{eq:Ad extended}
|A_0|\geq \frac{1}{2}(1-\lambda)\alpha n - \delta(1-\alpha)n\geq\frac{1}{2}(1-\lambda)\alpha n \left (1-\frac{1}{2}(1-\lambda)(1-\alpha)\right ) \geq \frac{1}{4}(1-\lambda)\alpha n.
\end{equation}
A completely analogous argument replacing $A_d$ with $B_0$ and $A_0$ with $B_d$ gives
\begin{equation}\label{eq:B0 extended}
|B_d|\geq \frac{1}{2}(1-\lambda)\alpha n - \delta \alpha n \geq \frac{1}{4}(1-\lambda) \alpha n.
\end{equation}
Next, take $F_1=A_d\cup B_0$ and $F_2=\stcomp{F_1}$. In view of (\ref{eq:Ad extended}) and (\ref{eq:B0 extended}) we have $\min\{|F_1|,|F_2|\}\geq \frac{1}{4}(1-\lambda)\alpha n$. Hence, Corollary \ref{cor:expander mixing lemma} gives
\begin{align*} |E(F_1,F_2)|&\geq \frac{1}{8}(1-\lambda)^2d\alpha n.
\end{align*}
Therefore, we must have either $|E(A_d,F_2)|\geq \frac{1}{8}(1-\lambda)^2d\alpha (1-\alpha)n$ or $|E(B_0,F_2)|\geq \frac{1}{8}(1-\lambda)^2d\alpha^2 n$. In the first case, for any $e=\{x,y\} \in E(A_d,F_2)$ with $x \in A_d$, we must have $\Val(y)=0$. Thus, $y \in A\setminus A_d$. Moreover, $y$ cannot belong to $A_0$ since it is adjacent to $x$ and $\Val(x)=0$. Therefore, $y \in A\setminus(A_0\cup A_d)$. Consequently,
\[ |A\setminus(A_0\cup A_d)|\geq \frac{|E(A_d,F_2)|}{d}\geq \delta (1-\alpha )n,\]
which contradicts (\ref{eq:|A_0|+|A_d| extended}). Similarly, if $|E(B_0,F_2)|\geq \frac{1}{8}(1-\lambda)^2d\alpha^2 n=d\delta\alpha n$ we obtain $|B\setminus(B_0\cup B_d)|\geq \delta\alpha n$, also a contradiction.
Finally, assume that $\alpha> \frac{1}{2}$, so that $\delta=\frac{(1-\lambda)^2(1-\alpha)}{8}$. We can consider the opposite labelling $\Val^*=1-\Val$, which satisfies $\mathbb{E}(\Val^*)=1-\alpha\leq \frac{1}{2}$. Then, applying the previous case to $\Val^*$ shows that there is $k \in \{1,\ldots,d-1\}$ such that
$$|A^*_k|\geq \frac{\delta \alpha n}{d-1}\quad \mbox{or} \quad |B^*_k|\geq \frac{\delta(1-\alpha) n}{d-1},$$
where $A_k^*$ and $B_k^*$ are the corresponding sets for $\Val^*$. It is easy to see that $A_k^*=B_{d-k}$ and $B_k^*=A_{d-k}$, from which the result follows.
\end{proof}
Now we can extend our main result for general labellings. The following theorem says that the asymptotic behavior of $Z_t$ is given by a discretized normal distribution with mean $t\alpha$ and variance $t \sigma^2$, for some $\sigma^2>0$.
\begin{theorem}\label{theo:main extension}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. Fix a labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ with $\mathbb{E}(\operatorname{val})=\alpha \in (0,1)$. Let $(X_i)_{i=0}^{t-1}$ be the simple random walk on $G$ started from a vertex chosen uniformly at random from $V$. For $Z_t= \sum_{i=0}^{t-1} \Val(X_i)$ we have that $(Z_t)$ satisfies the local central limit theorem, that is, there is some $\sigma^2>0$ for which
\[ t^{1/2}\sup_{u\in\mathbb{Z}} \left\{ \left |\prob{Z_t=u} - t^{-1/2}\sigma^{-1}\phi\left (\frac{u-t\alpha}{t^{1/2}\sigma}\right )\right | \right\}\to 0\quad \mbox{as } t \to \infty.\]
\end{theorem}
\begin{proof}
We just need to repeat the proof of Theorem \ref{theo:main} word for word, using Lemma \ref{lem:A_j or B_j extended} instead of Lemma \ref{lem:A_j or B_j}.
\end{proof}
Similarly, we can extend Corollary \ref{cor:main}. We just need to repeat its proof using Theorem \ref{theo:main extension} instead of Theorem \ref{theo:main}.
\begin{corollary}\label{cor:main extension}
Let $G$ be a $d$-regular $\lambda$-expander graph with $n$ vertices. Fix a labelling $\operatorname{val}\colon V \longrightarrow \{0,1\}$ on $G$ with $\mathbb{E}(\operatorname{val})=\alpha \in (0,1)$. Let $(X_i)_{i=0}^{t-1}$ be the simple random walk on $G$ started from a vertex chosen uniformly at random from $V$. For $Z_t= \sum_{i=0}^{t-1} \Val(X_i)$ there is some $\sigma^2>0$ for which
\[ \lim_{t\to \infty}\left \|Z_t - N_d(t\alpha,t\sigma^2) \right \|_{TV}=0.\]
\end{corollary}
\bibliographystyle{plain}
Deep neural networks (DNNs) have achieved state-of-the-art performance on many tasks, such as image classification. However, DNNs have been shown to be vulnerable to adversarial attacks \cite{szegedy2014intriguing,goodfellow2015explaining}. Adversarial attacks are carefully designed small perturbations of clean data which significantly change the predictions of target models. The lack of robustness of DNNs w.r.t.\ adversarial attacks raises security concerns.
In this paper, we focus on defending DNNs against adversarial attacks for image classification. Many algorithms have been proposed for this purpose. Roughly, these algorithms fall into three categories:
\begin{itemize}
\item data preprocessing, such as JPEG compression \cite{das2017keeping} and image denoising \cite{xu2018feature}.
\item adding stochastic components into DNNs to hide gradient information \cite{athalye2018obfuscated}.
\item adversarial training \cite{madry2018towards}.
\end{itemize}
Data preprocessing or stochastic components are usually combined with adversarial training since it is the most successful defense algorithm.
Recent works show that deep neural networks trained on image classification datasets are biased towards textures, which are the high-frequency components of images \cite{geirhos2019imagenet-trained}. Meanwhile, researchers have empirically found that the perturbations generated by adversarial attacks are also high-frequency signals. This means DNNs are mainly fooled by carefully designed textures. These facts suggest that suppressing the high-frequency components of images is helpful for reducing the effects of adversarial attacks and improving the robustness of DNNs. On the other hand, the basic information of clean images is retained when suppressing high-frequency components because their spectral energy is concentrated at low frequencies. In this paper, we aim to develop a high-frequency suppressing module which is expected to have the following properties:
\begin{enumerate}
\item \textbf{separability}: it should suppress high-frequency components while keeping low-frequency ones.
\item \textbf{efficiency}: it should have low computational costs compared with a standard DNN.
\item \textbf{differentiability}: it should be differentiable, which allows it to be jointly optimized with adversarial training.
\item \textbf{controllability}: it should be easy to control the degree of high-frequency suppression and the degree to which the original images are modified (e.g. $L_2$ distance).
\end{enumerate}
The discrete Fourier transform (DFT), which maps images into the frequency domain, is a good tool to achieve these goals. Based on the (inverse) DFT, we propose a high-frequency suppressing module which has all of these properties. We evaluate our method on the IJCAI-2019 Alibaba Adversarial AI Challenge \cite{IJCAI}. Our code is available at \textcolor{red}{{\small \url{https://github.com/zzd1992/Adversarial-Defense-by-Suppressing-High-Frequencies}}}.
\begin{figure*}[htb]
\centering
\subfigure[AAAC]{
\label{fig:cse_a}
\includegraphics[width=0.4\textwidth]{aaac_base.pdf}}
\subfigure[CIFAR-10]{
\label{fig:cse_b}
\includegraphics[width=0.4\textwidth]{cifar_base.pdf}}
\caption{Cumulative spectrum energy for $5,000$ images of AAAC in (a) and for test images of CIFAR-10 in (b). Blue line for clean images and orange line for adversarial perturbations.}
\end{figure*}
\section{Method}
\subsection{High-frequency suppression}
As mentioned earlier, suppressing high-frequency components is helpful for reducing the effects of adversarial attacks and improving the robustness of DNNs. Given an input image, we transform it into the frequency domain via the DFT. Then we reduce the high-frequency components in the frequency domain. Finally, we transform the modified frequency image back to the spatial domain.
Formally, denote $\mathbf{x} \in \mathcal{R}^{M\times N}$ as the input image and $\mathbf{\hat{x}} \in \mathcal{C}^{M\times N}$ as its frequency representation.
\begin{equation}
\mathbf{\hat{x}}_{u, v} = \sum_{a=0}^{M-1}\sum_{b=0}^{N-1} \mathbf{x}_{a, b}e^{-j2\pi \left( \frac{u}{M}a+ \frac{v}{N}b\right)}
\end{equation}
To suppress the high-frequency components, we modify $\mathbf{\hat{x}}$ as follows:
\begin{equation}
\mathbf{\hat{x}} \leftarrow \mathcal{M} \odot \mathbf{\hat{x}}
\end{equation}
where $\mathcal{M} \in \mathcal{R}^{M\times N}$ and $\odot$ is element-wise multiplication. $\mathcal{M}$ controls how each frequency is scaled. Intuitively, $\mathcal{M}$ should be close to $0$ for high-frequency components and close to $1$ for low-frequency ones. In this paper, we set $\mathcal{M}$ to a box window with fixed radius $r$. That is,
\begin{equation}
\mathcal{M}_{u, v} = \left\{
\begin{array}{lc}
1, \qquad & 0\leq|u|, |v|\leq r\\
0, \qquad & else
\end{array}
\right.
\end{equation}
To simplify the notation, we set $\mathcal{M}_{-u, \cdot} = \mathcal{M}_{M-u, \cdot}$ and $\mathcal{M}_{\cdot, -v} = \mathcal{M}_{\cdot, N-v}$. The overall function of our high-frequency suppression module is
\begin{equation}
\mathbf{x} \leftarrow \mathcal{F}^{-1}\left( \mathcal{M}\odot \mathcal{F}(\mathbf{x})\right)
\end{equation}
where $\mathcal{F}$ denotes the DFT. An image is processed by this module and then by a standard DNN.
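As an illustration, the following is a minimal PyTorch-style sketch of this module. The class name \texttt{HighFreqSuppress} and the use of the modern \texttt{torch.fft} API are our own illustrative choices; see the repository linked above for our official implementation.

```python
import torch
import torch.nn as nn


class HighFreqSuppress(nn.Module):
    """Suppress high frequencies with a centered box window of radius r."""

    def __init__(self, M, N, r):
        super().__init__()
        # Build the mask M_{u,v}: 1 when 0 <= |u|, |v| <= r, and 0 otherwise.
        # Negative frequency -u is stored at index M - u by the DFT convention.
        mask = torch.zeros(M, N)
        mask[:r + 1, :r + 1] = 1
        mask[:r + 1, -r:] = 1
        mask[-r:, :r + 1] = 1
        mask[-r:, -r:] = 1
        self.register_buffer("mask", mask)

    def forward(self, x):
        # x: (batch, channels, M, N). Every step is differentiable in x.
        x_hat = torch.fft.fft2(x)
        return torch.fft.ifft2(x_hat * self.mask).real


# Usage: place the module in front of any classifier, e.g.
# model = nn.Sequential(HighFreqSuppress(224, 224, r=25), resnet18)
```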
Now we analyze the properties of our proposed module.
\textbf{separability}: because $\mathcal{M}$ is a box window, high-frequency components are completely removed and low-frequency ones are perfectly preserved.
\textbf{efficiency}: the computational costs are dominated by the DFT. For an $M \times N$ (we suppose $M\geq N$) image, the time complexity of the DFT is $\mathcal{O}(MN\log_2 M)$. In practice, the DFT of a color image is faster than a convolutional layer. Thus the costs of our proposed module are cheap compared with DNNs.
\textbf{differentiability}: DFT can be expressed in matrix form:
\begin{equation}
\mathcal{F}(\mathbf{x}) = \mathbf{F_MxF_N}
\end{equation}
where $\mathbf{F}_M \in \mathcal{C}^{M\times M}$ is the so-called Fourier transform matrix. Clearly, the DFT is differentiable. Rather than being a fixed image pre-processing method, this property makes it possible to integrate our module into DNNs and optimize it jointly with adversarial training.
\textbf{controllability}: denote by $\mathbf{x}_o$ the output of the proposed module. Based on Parseval's theorem, we have
\begin{equation}
\lVert \mathbf{x} - \mathbf{x}_o\rVert_2^2= \lVert \mathbf{\hat{x}} - \mathcal{M} \odot \mathbf{\hat{x}}\rVert_2^2
\end{equation}
Thus the degree of high-frequency suppression and the $L_2$ norm between the original image and the modified image are easily controlled by varying the radius $r$ of the box window. For natural images, spectral energy is concentrated in low-frequency regions. Thus $\lVert \mathbf{x} - \mathbf{x}_o\rVert_2$ is small enough even when most of the frequency components are suppressed ($r$ is small).
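The snippet below (a numerical sanity check with arbitrary constants, not part of the training pipeline) verifies this identity, up to the $1/MN$ normalization used by \texttt{torch.fft}:

```python
import torch

M = N = 32
r = 8
mask = torch.zeros(M, N)
mask[:r + 1, :r + 1] = mask[:r + 1, -r:] = 1
mask[-r:, :r + 1] = mask[-r:, -r:] = 1

x = torch.rand(3, M, N)
x_hat = torch.fft.fft2(x)
x_o = torch.fft.ifft2(mask * x_hat).real

lhs = ((x - x_o) ** 2).sum()                             # ||x - x_o||_2^2
rhs = (((1 - mask) * x_hat).abs() ** 2).sum() / (M * N)  # suppressed energy
print(torch.allclose(lhs, rhs, rtol=1e-4))               # expected: True
```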
\subsection{Adversarial training}
The idea of adversarial training is to optimize DNNs w.r.t.\ both clean samples and adversarial samples.
\begin{equation}
\min_{\mathbf{w}} \left\{
\mathcal{L}(f_{\mathbf{w}}(\mathbf{x}), y) + \beta \max_{\lVert \mathbf{\delta} \rVert<\epsilon} \mathcal{L}(f_{\mathbf{w}}(\mathbf{x+\delta}), y)
\right\}
\end{equation}
where $f$ maps an image to classification probabilities, $\mathbf{w}$ denotes the parameters of $f$ and $\mathcal{L}$ is the cross-entropy loss. $\mathbf{\delta}$ is obtained by (iterated) projected gradient descent (PGD). $\beta$ controls the tradeoff between clean samples and adversarial samples.
Recently, \cite{zhang2019theoretically} propose a novel adversarial training method called TRADES. TRADES is formalized as follows:
\begin{equation}
\min_{\mathbf{w}} \left\{
\mathcal{L}(f_{\mathbf{w}}(\mathbf{x}), y) + \beta \max_{\lVert \mathbf{\delta} \rVert<\epsilon} \mathcal{L}(f_{\mathbf{w}}(\mathbf{x}), f_{\mathbf{w}}(\mathbf{x+\delta}))
\right\}
\end{equation}
Instead of minimizing the difference between $f_{\mathbf{w}}(\mathbf{x+\delta})$ and the true label, TRADES minimizes the difference between $f_{\mathbf{w}}(\mathbf{x+\delta})$ and $f_{\mathbf{w}}(\mathbf{x})$, which encourages the output to be smooth. In this paper, we use TRADES as the adversarial training method because it achieves a better tradeoff between robustness and accuracy. Refer to \cite{zhang2019theoretically} for more information.
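To make the procedure concrete, the following is a simplified sketch of one TRADES training step in PyTorch. We use the KL divergence for the robustness term, as in the reference implementation of \cite{zhang2019theoretically}; the hyper-parameter names (\texttt{eps}, \texttt{step\_size}, \texttt{n\_steps}, \texttt{beta}) are illustrative, and details such as clamping images to $[0,1]$ are omitted.

```python
import torch
import torch.nn.functional as F


def trades_loss(model, x, y, eps, step_size, n_steps, beta):
    """One TRADES step: the inner PGD maximizes KL(f(x) || f(x + delta))."""
    p_clean = F.softmax(model(x), dim=1).detach()

    # Inner maximization: PGD on the KL term inside an L_inf ball of radius eps.
    delta = 0.001 * torch.randn_like(x)
    for _ in range(n_steps):
        delta.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + delta), dim=1),
                      p_clean, reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()

    # Outer minimization: natural cross-entropy plus beta times the KL term.
    logits = model(x)
    robust = F.kl_div(F.log_softmax(model(x + delta), dim=1),
                      F.softmax(logits, dim=1), reduction="batchmean")
    return F.cross_entropy(logits, y) + beta * robust
```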
\begin{table*}[htb]
\centering
\Large
\begin{tabular}{cccr}
\hline
High-frequency suppression & Adversarial training & Model ensemble & Score\\
\hline
$\times$ & $\times$ & $\times$ & 2.0350 \\
$\times$ & $\surd$ & $\times$ & 9.9880 \\
$\surd$ & $\times$ & $\times$ & 14.9736 \\
$\surd$ & $\surd$ & $\times$ & 19.0510 \\
$\surd$ & $\surd$ & $\surd$ & 19.7531 \\
\hline
\end{tabular}
\caption{Ablation study for three strategies and their combinations. }
\label{tab:plain}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{bar.pdf}
\caption{We show the trade-off between robustness and accuracy for high-frequency suppression modules with different $r$. Robustness is measured by the score of AAAC.}
\label{fig:bar}
\end{figure}
\section{Experiments}
We first analyze the statistics of clean images and adversarial images in the frequency domain. Then we evaluate the proposed method on the IJCAI-2019 Alibaba Adversarial AI Challenge (AAAC).
In AAAC, models are evaluated on an image classification task for e-commerce. There are about $11,000$ color images from $110$ classes for training and $550$ images for testing. Given an image, the score of a defense model is calculated as follows:
\begin{equation}
score = \left\{
\begin{array}{cl}
0, \qquad & P_y \neq y \\
mean(\mathbf{\lVert\delta}\rVert_2), \qquad & P_y = y
\end{array}
\right.
\end{equation}
where $P_y$ is the predicted label. The final score is averaged over all images and all black-box attack models. Note that before computing the score, images are resized to $299\times 299$.
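For clarity, the scoring rule for a single (image, attack) pair can be sketched as follows; the function name and the final averaging step are our own reading of the challenge rules.

```python
import numpy as np


def pair_score(pred_label, true_label, x_clean, x_adv):
    """Score for one (image, attack) pair: 0 if the defense is fooled,
    otherwise the L2 norm of the perturbation on the 299x299 image."""
    if pred_label != true_label:
        return 0.0
    return float(np.linalg.norm((x_adv - x_clean).ravel()))


# The final score averages pair_score over all test images and attack models.
```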
We use ResNet-18 as the DNN architecture for all experiments. Our method is implemented with PyTorch.
\subsection{Statistics in the frequency domain}
We analyze the statistics of clean samples and adversarial samples in the frequency domain. Specifically, we study the distributions of cumulative spectrum energy (CSE) w.r.t.\ frequency. Given a 2D signal $\mathbf{\hat{x}} \in \mathcal{C}^{M\times N}$ in the frequency domain, we define CSE as follows:
\begin{equation}
CSE(r) = \sum_{i=-r}^{r}\sum_{j=-r}^{r} \mathbf{\hat{x}}_{i, j}^* \mathbf{\hat{x}}_{i, j}
\end{equation}
where $r\leq\frac{\min(M, N)}{2}$. We randomly select $5,000$ images from AAAC. We calculate the CSE score of each image and average all scores. We also calculate the averaged CSE score for the corresponding adversarial perturbations, which are generated by iterated PGD. The results are shown in Fig.~\ref{fig:cse_a}. As we can see, the CSE of clean images is concentrated in low-frequency regions while the CSE of adversarial perturbations is nearly uniform. Thus, when we suppress the high-frequency components, the effects of adversarial attacks will be significantly reduced while most of the information of clean images will be retained. This is the main motivation of our work.
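A direct NumPy sketch of this quantity for a single-channel 2D signal (the function name is ours; \texttt{fftshift} moves the zero frequency to the center so the low-frequency block can be sliced):

```python
import numpy as np


def cumulative_spectrum_energy(x, r):
    """CSE(r): energy of the centered (2r+1) x (2r+1) low-frequency block."""
    x_hat = np.fft.fftshift(np.fft.fft2(x))
    cu, cv = x_hat.shape[0] // 2, x_hat.shape[1] // 2
    block = x_hat[cu - r:cu + r + 1, cv - r:cv + r + 1]
    return float(np.sum(np.abs(block) ** 2))
```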
We also calculate the CSE score for CIFAR-10, as shown in Fig.~\ref{fig:cse_b}. The distribution is similar to AAAC's.
\subsection{AAAC results}
As analyzed earlier, when we remove high-frequency components, the model becomes more robust w.r.t.\ adversarial attacks while the accuracy on clean images decreases. We evaluate this phenomenon with different $r$ \emph{without} adversarial training. The accuracy is obtained on $5,000$ clean validation images and the robustness is measured by the score of AAAC. We show the results in Fig.~\ref{fig:bar}. As $r$ decreases, the robustness w.r.t.\ adversarial attacks substantially increases.
Then we conduct an ablation study for three strategies and their combinations: 1) the proposed high-frequency suppression module; 2) adversarial training via TRADES; 3) ensembles of models with different $r$. As we can see in Tab.~\ref{tab:plain}, our proposed module is even better than adversarial training in this challenge, and the two methods are complementary to each other. The best score is obtained by ensembling models with different $r$, each of which is trained with the proposed module and adversarial training. We secured 5th place in this challenge (the score of the 1st solution is $20.13$).
\section{Conclusions and discussions}
Motivated by the difference in frequency spectrum distributions between clean images and adversarial perturbations, we have proposed a high-frequency suppression module to improve the robustness of DNNs. This module is efficient, differentiable and easy to control. We have evaluated our method on AAAC.
We list several directions and questions which are worth exploring further:
\begin{itemize}
\item Is it helpful to change the radius of the box window dynamically?
\item Is it helpful to suppress the high-frequency components of intermediate convolutional features?
\item We evaluate our method for image classification. Does this method work for other tasks or other kinds of data, such as speech recognition?
\end{itemize}
\bibliographystyle{named}
\label{sec:intro}
With the availability of cheap computing power, modern cameras can rely on computational post-processing to extend their capabilities under the physical constraints of existing sensor technology. Sophisticated techniques, such as those for denoising~\cite{ndnz,epll}, deblurring~\cite{dblr2,dblr1}, etc., are increasingly being used to improve the quality of images and videos that were degraded during acquisition. Moreover, researchers have posited novel sensing strategies that, when combined with post-processing algorithms, are able to produce higher quality and more informative images and videos. For example, coded exposure imaging~\cite{codexp} allows better inversion of motion blur, coded apertures~\cite{codap1,codap2} allow passive measurement of scene depth from a single shot, and compressive measurement strategies~\cite{cs0,cs2,cs1} combined with sparse reconstruction algorithms allow the recovery of visual measurements with higher spatial, spectral, and temporal resolutions.
Key to the success of these latter approaches is the co-design of sensing strategies and inference algorithms, where the measurements are designed to provide information complementary to the known statistical structure of natural scenes. So far, sensor design in this regime has largely been either informed by expert intuition (\emph{e.g.},~ \cite{cfz}), or based on the decision to use a specific image model or inference strategy---\emph{e.g.},~ measurements corresponding to random~\cite{cs0}, or dictionary-specific~\cite{csD1}, projections are a common choice for sparsity-based reconstruction methods. In this paper, we seek to enable a broader data-driven exploration of the joint sensor and inference method space, by learning both sensor design and the computational inference engine end-to-end.
We leverage the successful use of back-propagation and stochastic gradient descent (SGD)~\cite{lecun-98b} in learning deep neural networks for various tasks~\cite{imagenet,fcn,overfeat,wang2015designing}. These networks process a given input through a complex cascade of layers, and training is able to jointly optimize the parameters of all layers to enable the network to succeed at the final inference task. Treating optical measurement and computational inference as a cascade, we propose using the same approach to learn both jointly. We encode the sensor's design choices into the learnable parameters of a ``sensor layer'' which, once trained, can be instantiated by camera optics. This layer's output is fed to a neural network that carries out inference computationally on the corresponding measurements. Both are then trained jointly.
We demonstrate this approach by applying it to the sensor-inference design problem in a standard digital color camera. Since image sensors can physically measure only one color channel at each pixel, cameras spatially multiplex the measurement of different colors across the sensor plane, and then computationally recover the missing intensities through a reconstruction process known as demosaicking. We jointly learn the spatial pattern for multiplexing different color channels---that requires making a hard decision to use one of a discrete set of color filters at each pixel---along with a neural network that performs demosaicking. Together, these enable the recovery of high-quality color images of natural scenes. We find that our approach significantly outperforms the traditional Bayer pattern~\cite{bayer1976color} used in most color cameras. We also compare it to a recently introduced design~\cite{cfz} based on making sparse color measurements, that has superior noise performance and fewer aliasing artifacts. Interestingly, our network automatically learns to employ a similar measurement strategy, but is able to outperform this design by finding a more optimal spatial layout for the color measurements.
\begin{figure}[!t]
\centering
\includegraphics[width=0.943\textwidth]{Fig/net0.pdf}
\caption{We propose a method to learn the optimal color multiplexing pattern for a camera through joint training with a neural network for reconstruction. ({\bf Top}) Given $C$ possible color filters that could be placed at each pixel, we parameterize the incident light as a $C-$channel image. This acts as input to a ``sensor layer'' that learns to select one of these channels at each pixel. A reconstruction network then processes these measurements to yield a full-color RGB image. We jointly train both for optimal reconstruction quality. ({\bf Bottom left}) Since the hard selection of individual color channels is not differentiable, we encode these decisions using a Soft-max layer, with a ``temperature'' parameter $\alpha$ that is increased across iterations. ({\bf Bottom right}) We use a bifurcated architecture with two paths for the reconstruction network. One path produces $K$ possible values for each color intensity through multiplicative and linear interpolation, and the other produces weights to combine these into a single estimate.}
\label{fig:teaser}
\end{figure}
\section{Background}
\label{sec:prelim}
Since both CMOS and CCD sensors can measure only the total intensity of visible light incident on them, color is typically measured by placing an array of color filters (CFA) in front of the sensor plane. The CFA pattern determines which color channel is measured at which pixel, with the most commonly used pattern in RGB color cameras being the Bayer mosaic~\cite{bayer1976color} introduced in 1976. This is a $2\times 2$ repeating pattern, with two measurements of the green channel and one each of red and blue. The color values that are not directly measured are then reconstructed computationally by demosaicking algorithms. These algorithms~\cite{li2008image} typically rely on the assumption that different color channels are correlated and piecewise smooth, and reason about locations of edges and other high-frequency image content to avoid creating aliasing artifacts.
This approach yields reasonable results, and the Bayer pattern remains in widespread use even today. However, the choice of the CFA pattern involves a trade-off. Color filters placed in front of the sensor block part of the incident light energy, leading to longer exposure times or noisier measurements (in comparison to grayscale cameras). Moreover, since every channel is regularly sub-sampled in the Bayer pattern, reconstructions are prone to visually disturbing aliasing artifacts even with the best reconstruction methods. Most consumer cameras address this by placing an anti-aliasing filter in front of the sensor to blur the incident light field, but this leads to a loss of sharpness and resolution.
To address this, Chakrabarti \emph{et al.}~\cite{cfz} recently proposed the use of an alternative CFA pattern in which a majority of the pixels measure the total unfiltered visible light intensity. Color is measured only sparsely, using $2\times 2$ Bayer blocks placed at regularly spaced intervals on the otherwise unfiltered sensor plane. The resulting measured image corresponds to an un-aliased full resolution luminance image (\emph{i.e.},~ the unfiltered measurements) with ``holes'' at the color sampling sites, and point-wise color information on a coarser grid. The reconstruction algorithm in \cite{cfz} is significantly different from traditional demosaicking, and involves first recovering missing luminance values by hole-filling (which is computationally easier than up-sampling since there is more context around the missing intensities), and then propagating chromaticities from the color measurement sites to the remaining pixels using edges in the luminance image as a guide. This approach was shown to significantly improve upon the capabilities of a Bayer sensor---in terms of better noise performance, increased sharpness, and reduced aliasing artifacts.
That \cite{cfz}'s CFA pattern required a very different reconstruction algorithm illustrates the fact that both the sensor and inference method need to be modified together to achieve gains in performance. In \cite{cfz}'s case, this was achieved by applying an intuitive design principle---making high SNR non-aliased measurements of one color channel. However, such principles are tied to a specific reconstruction approach, and do not tell us, for example, whether regularly spaced $2\times 2$ blocks are the optimal way of measuring color sparsely.
While learning-based methods have been proposed for demosaicking~\cite{ldm1,ldm3,ldm2} (as well as for joint demosaicking and denoising~\cite{ldmz2,ldmz1}), these work with a pre-determined CFA pattern and training is used only to tune the reconstruction algorithm. In contrast, our approach seeks to learn, automatically from data, \emph{both} the CFA pattern and reconstruction method, so that they are jointly optimal in terms of reconstruction quality.
\section{Jointly Learning Measurement and Reconstruction}
\label{sec:method}
We formulate our task as that of reconstructing an RGB image $y(n) \in \mathbb{R}^3$, where $n \in \mathbb{Z}^2$ indexes pixel location, from a measured sensor image $s(n) \in \mathbb{R}$. Along with this reconstruction task, we also have to choose a multiplexing pattern which determines the color channel that each $s(n)$ corresponds to. We let this choice be between one of $C$ channels---a parameterization that takes into account which spectral filters can be physically synthesized. We use $x(n) \in \mathbb{R}^C$ to denote the intensity measurements corresponding to each of these color channels, and a zero-one selection map $I(n) \in \{0,1\}^C, |I(n)| = 1$ to encode the multiplexing pattern, such that the corresponding sensor measurements are given by $s(n) = I(n)^Tx(n)$. Moreover, we assume that $I(n)$ repeats periodically every $P$ pixels, and therefore only has $P^2$ unique values.
Given a training set consisting of pairs of output images $y(n)$ and $C$-channel input images $x(n)$, our goal then is to learn this pattern $I(n)$, jointly with a reconstruction algorithm that maps the corresponding measurements $s(n)$ to the full color image output $y(n)$. We use a neural network to map sensor measurements $s(n)$ to an estimate $\hat{y}(n)$ of the full color image. Furthermore, we encode the measurement process into a ``sensor layer'', which maps the input $x(n)$ to measurements $s(n)$, and whose learnable parameters encode the multiplexing pattern $I(n)$. We then learn both the reconstruction network and the sensor layer simultaneously, with respect to a squared loss $\|\hat{y}(n)-y(n)\|^2$ between the reconstructed and true color images.
\subsection{Learning the Multiplexing Pattern}
\label{sec:sensor}
The key challenge to our joint learning problem lies in recovering the optimal multiplexing pattern $I(n)$, since it is ordinal-valued and requires learning to make a hard non-differentiable decision between $C$ possibilities. To address this, we rely on the standard soft-max operation, which is traditionally used in multi-class classification tasks.
However, we are unable to use the soft-max operation directly---unlike in classification tasks where the ordinal labels are the final output, and where the training objective prefers hard assignment to a single label, in our formulation $I(n)$ is used to generate sensor measurements that are then processed by a reconstruction network. Indeed, when using a straight soft-max, we find that the reconstruction network converges to real-valued $I(n)$ maps that correspond to measuring different weighted combinations of the input channels. Thresholding the learned $I(n)$ to be ordinal valued leads to a significant drop in performance, even when we further train the reconstruction network to work with this thresholded version.
Our solution to this is fairly simple. We use a soft-max with a temperature parameter that is increased slowly through training iterations. Specifically, we learn a vector $w(n) \in \mathbb{R}^C$ for each location $n$ of the multiplexing pattern, with the corresponding $I(n)$ given during training as:
\begin{equation}
\label{eq:tempmax}
I(n) = \mbox{Soft-max}\left[\alpha_t w(n)\right],
\end{equation}
where $\alpha_t$ is a scalar factor that we increase with iteration number $t$.
Therefore, in addition to changes due to the SGD updates to $w(n)$, the effective distribution of $I(n)$ becomes ``peakier'' at every iteration because of the increasing $\alpha_t$, and as $\alpha_t\rightarrow\infty$, $I(n)$ becomes a zero-one vector. Note that the gradient magnitudes of $w(n)$ also scale-up, since we compute these gradients at each iteration with respect to the current value of $t$. This ensures that the pattern can keep learning in the presence of a strong supervisory signal from the loss, while retaining a bias to drift towards making a hard choice for a single color channel.
As illustrated in Fig.~\ref{fig:teaser}, our sensor layer contains a parameter vector $w(n)$ for each pixel of the $P\times P$ multiplexing pattern. During training, we generate the corresponding $I(n)$ vectors using \eqref{eq:tempmax} above, and the layer then outputs sensor measurements based on the $C$-channel input $x(n)$ as $s(n)=I(n)^Tx(n)$. Once training is complete (and for validation during training), we replace $I(n)$ with its zero-one version as $I(n)^c = 1$ for $c = \arg \max_c w^c(n)$, and $0$ otherwise.
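The following numpy sketch summarizes the mechanics of the sensor layer: the temperature-scaled soft-max of \eqref{eq:tempmax} (with the quadratic schedule for $\alpha_t$ used later in Sec.~\ref{sec:exp}), the measurement $s(n)=I(n)^Tx(n)$ with the $P$-periodic pattern tiled across the sensor plane, and the zero-one hardening applied once training is complete. It is an illustrative re-implementation rather than our training code.
\begin{verbatim}
import numpy as np

def selection_map(w, t, gamma=2.5e-5):
    """Soft selection map I(n) from logits w (P, P, C) at iteration t."""
    alpha = 1.0 + (gamma * t) ** 2            # quadratic temperature schedule
    z = alpha * w
    z = z - z.max(axis=-1, keepdims=True)     # numerically stable soft-max
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sense(x, I):
    """s(n) = I(n)^T x(n) for a C-channel image x (H, W, C) and a
    P-periodic pattern I (P, P, C), tiled across the sensor plane."""
    H, W, _ = x.shape
    P = I.shape[0]
    reps = (-(-H // P), -(-W // P), 1)        # ceil division for tiling
    tiled = np.tile(I, reps)[:H, :W, :]
    return (tiled * x).sum(axis=-1)

def harden(w):
    """Zero-one version of I(n) used once training is complete."""
    I = np.zeros_like(w)
    idx = w.argmax(axis=-1)
    np.put_along_axis(I, idx[..., None], 1.0, axis=-1)
    return I
\end{verbatim}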
As we report in Sec.~\ref{sec:exp}, our approach is able to successfully learn an optimal sensing pattern, which adapts during training to match the evolving reconstruction network. We would also like to note here two alternative strategies that we explored to learn an ordinal $I(n)$, which were not as successful. We considered using a standard soft-max approach with a separate entropy penalty on the distribution $I(n)$---however, this caused the pattern $I(n)$ to stop learning very early during training (or for lower weighting of the penalty, had no effect at all). We also tried to incrementally pin the lowest $I(n)$ values to zero after training for a number of iterations, in a manner similar to Han \emph{et al.}'s~\cite{han2015deep} approach to network compression. However, even with significant tuning, this approach caused large parts of the pattern search space to be eliminated early, and was not able to adapt to the fact that a channel with a low weight at a particular location might eventually become desirable based on changes to the pattern at other locations, and corresponding updates to the reconstruction network.
\subsection{Reconstruction Network Architecture}
Traditional demosaicking algorithms~\cite{li2008image} produce a full color image by interpolating the missing color values from neighboring measurement sites, and by exploiting cross-channel dependencies. This interpolation is often linear, but in some cases takes the form of transferring chromaticities or color ratios (\emph{e.g.},~ in \cite{cfz}). Moreover, most demosaicking algorithms reason about image textures and edges to avoid smoothing across boundaries or creating aliasing artifacts.
We adopt a simple bifurcated network architecture that leverages these intuitions. As illustrated in Fig.~\ref{fig:teaser}, our network reconstructs each $P\times P$ patch in $y(n)$ from a receptive field that is centered on that patch in the measured image $s(n)$, and thrice as large in each dimension. The network has two paths, both of which operate on the entire input, and both output $(P\times P \times 3K)$ values, \emph{i.e.},~ $K$ values for each output color intensity. We denote these outputs as $\lambda(n,k), f(n,k) \in \mathbb{R}^3$.
One path produces $f(n,k)$ by first computing multiplicative combinations of the entire $3P\times 3P$ input patch---we instantiate this using a fully-connected layer without a bias term that operates in the log-domain---followed by linear combinations across the $3K$ values at each location. We interpret these $f(n,k)$ values as $K$ proposals for each $y(n)$. The second path uses a more standard cascade of convolution layers---all of which have $F$ outputs with the first layer having a stride of $P$---followed by a fully connected layer that produces the outputs $\lambda(n,k)$ with the same dimensionality as $f(n,k)$. We treat $\lambda(n,k)$ as gating values for the proposals $f(n,k)$, and generate the final reconstructed patch $\hat{y}(n)$ as $\sum_k \lambda(n,k)f(n,k)$.
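A minimal PyTorch-style sketch of this bifurcated architecture is given below. It is illustrative only: the depth of the convolutional cascade in the second path and the exact form of the per-location linear mix in the first path are simplifications, and the class and variable names are ours.
\begin{verbatim}
import torch
import torch.nn as nn

class BifurcatedNet(nn.Module):
    """Sketch of the two-path reconstruction network (P=8, K=24, F=128)."""
    def __init__(self, P=8, K=24, F=128):
        super().__init__()
        self.P, self.K = P, K
        # Path 1: bias-free FC in the log domain (multiplicative
        # combinations), then a shared linear mix over the 3K values
        # at each of the P x P output locations.
        self.log_fc = nn.Linear((3 * P) ** 2, P * P * 3 * K, bias=False)
        self.mix = nn.Linear(3 * K, 3 * K, bias=False)
        # Path 2: conv cascade (first layer has stride P; a 3P input
        # then always reduces to a 3 x 3 feature map), then an FC layer
        # producing the gating values lambda(n, k).
        self.convs = nn.Sequential(
            nn.Conv2d(1, F, 3, stride=P, padding=1), nn.ReLU(),
            nn.Conv2d(F, F, 3, padding=1), nn.ReLU())
        self.gate = nn.Linear(F * 3 * 3, P * P * 3 * K)

    def forward(self, s):                   # s: (B, 1, 3P, 3P) sensor patch
        B, P, K = s.shape[0], self.P, self.K
        logs = torch.log(s.clamp_min(1e-6)).flatten(1)
        f = self.mix(torch.exp(self.log_fc(logs)).view(B, P * P, 3 * K))
        lam = self.gate(self.convs(s).flatten(1)).view(B, P * P, 3 * K)
        y = (lam.view(B, -1, 3, K) * f.view(B, -1, 3, K)).sum(-1)
        return y.view(B, P, P, 3).permute(0, 3, 1, 2)   # (B, 3, P, P)

# Usage: y = BifurcatedNet()(torch.rand(16, 1, 24, 24))  # -> (16, 3, 8, 8)
\end{verbatim}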
\section{Experiments}
\label{sec:exp}
We follow a similar approach to \cite{cfz} for training and evaluating our method. Like \cite{cfz}, we use the Gehler-Shi database~\cite{gehler,shi} that consists of 568 color images of indoor and outdoor scenes, captured under various illuminants. These images were obtained from RAW sensor images from a camera employing the Bayer pattern with an anti-aliasing optical filter, by using the different color measurements in each Bayer block to construct a single RGB pixel. These images are therefore at half the resolution of the original sensor image, but have statistics that are representative of aliasing-free full color images of typical natural scenes. Unlike \cite{cfz} who only used 10 images for evaluation, we use the entire dataset---using 56 images for testing, 461 images for training, and the remaining 51 images as a validation set to fix hyper-parameters.
We treat the images in the dataset as the ground truth for the output RGB images $y(n)$. As sensor measurements, we consider $C=4$ possible color channels. The first three correspond to the original sensor RGB channels. Like \cite{cfz}, we choose the fourth channel to be white or panchromatic, and construct it as the sum of the RGB measurements. As mentioned in \cite{cfz}, this corresponds to a conservative estimate of the light-efficiency of an unfiltered channel. We construct the $C$-channel input image $x(n)$ by including these measurements, followed by addition of different levels of Gaussian noise, with high noise variances simulating low-light capture.
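Concretely, the input construction can be sketched as follows; the helper name is ours, and intensities are assumed to be scaled to $[0,1]$ before noise is added.
\begin{verbatim}
import numpy as np

def make_input(rgb, noise_std, rng=None):
    """Build the C = 4 channel input x(n): R, G, B plus a panchromatic
    channel constructed as their sum, followed by Gaussian noise.
    rgb: (H, W, 3) image with intensities scaled to [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    white = rgb.sum(axis=-1, keepdims=True)  # conservative unfiltered channel
    x = np.concatenate([rgb, white], axis=-1)
    return x + rng.normal(0.0, noise_std, x.shape)
\end{verbatim}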
We learn a repeating pattern with $P=8$. In our reconstruction network, we set the number of proposals $K$ for each output intensity to 24, and the number of convolutional layer outputs $F$ in the second path of our network to 128. When learning our sensor multiplexing pattern, we increase the scalar soft-max factor $\alpha_t$ in \eqref{eq:tempmax} according to a quadratic schedule as $\alpha_t = 1 + (\gamma t)^2$, where $\gamma=2.5\times10^{-5}$ in our experiments. We train a separate reconstruction network for each noise level (positing that a camera could select between these based on the ISO settings). However, since it is impractical to employ different sensors for different settings, we learn a single spatial multiplexing pattern, optimized for reconstruction under moderate noise levels with standard deviation (STD) of $0.01$ (with respect to intensity values in $x(n)$ scaled to be between $0$ and $1$).
We train our sensor layer and reconstruction network jointly at this noise level on sets of $8\times 8$ $y(n)$ patches and corresponding $24\times 24$ $x(n)$ patches sampled randomly from the training set. We use a batch-size of 128, with a learning rate of 0.001 for 1.5 million iterations. Then, keeping the sensor pattern fixed to our learned version, we train reconstruction networks from scratch for other noise levels---training again with a learning rate of 0.001 for 1.5 million iterations, followed by another 100,000 iterations with a rate of $10^{-4}$. We also train reconstruction networks at all noise levels in a similar way for the Bayer pattern, as well as the pattern of \cite{cfz} (with a color sampling rate of 4). Moreover, to allow consistent comparisons, we re-train the reconstruction network for our pattern at the $0.01$ noise level from scratch following this regime.
\subsection{Evaluating the Reconstruction Network}
We begin by comparing the performance of our learned reconstruction networks to traditional demosaicking algorithms for the standard Bayer pattern, and the pattern of \cite{cfz}. Note that our goal is not to propose a new demosaicking method for existing sensors. Nevertheless, since our sensor pattern is being learned jointly with our proposed reconstruction architecture, it is important to determine whether this architecture can learn to reason effectively with different kinds of sensor patterns, which is necessary to effectively cover the joint sensor-inference design space.
We compare our learned networks to Zhang and Wu's method~\cite{dmtrad} for the Bayer pattern, and Chakrabarti \emph{et al.}'s method~\cite{cfz} for their own pattern. We measure performance in terms of the reconstruction PSNR of all non-overlapping $64\times 64$ patches from all test images (roughly 40,000 patches).
Table \ref{tab:trad} compares the median PSNR values across all patches for reconstructions using our network to those from traditional methods, at two noise levels---low noise corresponding to an STD of $0.0025$, and moderate noise corresponding to $0.01$. For the pattern of \cite{cfz}, we find that our network performs similar to their reconstruction method at the low noise level, and significantly better at the higher noise level. On the Bayer pattern, our network achieves much better performance at both noise levels. We also note here that reconstruction using our network is significantly faster---taking 9s on a six core CPU, and 200ms when using a Titan X GPU, for a 2.7 mega-pixel image. In comparison, \cite{cfz} and \cite{dmtrad}'s reconstruction methods take 20s and 1 min.~respectively on the CPU.
\subsection{Visualizing Sensor Pattern Training}
\begin{figure}[!t]
\centering
\renewcommand{\arraystretch}{0.1}
{\small
\begin{tabular}{cccccc}
\includegraphics[width=0.13\textwidth]{Fig/001.png}&
\includegraphics[width=0.13\textwidth]{Fig/002.png}&
\includegraphics[width=0.13\textwidth]{Fig/003.png}&
\includegraphics[width=0.13\textwidth]{Fig/004.png}&
\includegraphics[width=0.13\textwidth]{Fig/005.png}&
\includegraphics[width=0.13\textwidth]{Fig/010.png}\\~\\
It \# 2,500 & It \# 5,000 & It \# 7,500 &
It \# 10,000 & It \# 12,500 & It \# 25,000\\
Entropy: 1.38 & Entropy: 1.38 & Entropy: 1.38 & Entropy: 1.38 & Entropy: 1.38&Entropy: 1.37\\~\\~\\~\\~\\~\\~\\~\\
\includegraphics[width=0.13\textwidth]{Fig/040.png}&
\includegraphics[width=0.13\textwidth]{Fig/080.png}&
\includegraphics[width=0.13\textwidth]{Fig/120.png}&
\includegraphics[width=0.13\textwidth]{Fig/160.png}&
\includegraphics[width=0.13\textwidth]{Fig/200.png}&
\includegraphics[width=0.13\textwidth]{Fig/240.png}\\~\\
It \# 100,000 & It \# 200,000 & It \# 300,000 &
It \# 400,000 & It \# 500,000 & It \# 600,000\\
Entropy: 1.02 & Entropy: 0.78 & Entropy: 0.75 & Entropy: 0.82 & Entropy: 0.86&Entropy: 0.85\\~\\~\\~\\~\\~\\~\\~\\
\includegraphics[width=0.13\textwidth]{Fig/400.png}&
\includegraphics[width=0.13\textwidth]{Fig/440.png}&
\includegraphics[width=0.13\textwidth]{Fig/481.png}&
\includegraphics[width=0.13\textwidth]{Fig/522.png}&
\includegraphics[width=0.13\textwidth]{Fig/560.png}&
\includegraphics[width=0.13\textwidth]{Fig/600.png}\\~\\
It \# 1,000,000 & It \# 1,100,000 & It \# 1,200,000 &
It \# 1,300,000 & It \# 1,400,000 & It \# 1,500,000\\~\\
Entropy: 0.57 & Entropy: 0.37 & Entropy: 0.35 & Entropy: 0.25 & Entropy: 0.18
& (Final)\\
\end{tabular}
}
\caption{Evolution of sensor pattern through training iterations. We find that our network's color sensing pattern changes qualitatively through the training process. In initial iterations, the sensor layer learns to sample color channels directly. As training continues, these color measurements are replaced by panchromatic (white) pixels. The final iterations see fine refinements to the pattern. We also report the mean (across pixels) entropy of the underlying distribution $I(n)$ for each pattern. Note that, as expected, this entropy decreases across iterations as the distributions $I(n)$ evolve from being soft selections of color channels, to zero-one vectors that make hard ordinal decisions.}
\label{fig:evolve}
\end{figure}
\begin{table}
\caption{Median Reconstruction PSNR (dB) using Traditional demosaicking and Proposed Network}
\centering
{\small
\begin{tabular}{|c||c|c||c|c|}
\hline
&\multicolumn{2}{|c|}{Bayer}&\multicolumn{2}{|c|}{CFZ~\cite{cfz}}\\\hline
&Noise STD=0.0025 & Noise STD=0.01 & Noise STD=0.0025 & Noise STD=0.01\\\hline\hline
Traditional & 42.69 & 32.44 & 48.84 & 39.55\\\hline
Network & 47.55 & 43.72 & 49.08 & 44.64\\\hline
\end{tabular}}
\label{tab:trad}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{Fig/ex.pdf}
\caption{Example reconstructions from (noisy) measurements with different sensor multiplexing patterns. Best viewed at higher resolution in the electronic version.}
\label{fig:qual}
\end{figure}
In Fig.~\ref{fig:evolve}, we visualize the evolution of our sensor pattern during the training process, while it is being jointly learned with the reconstruction network. In the initial iterations, the sensor layer displays a preference for densely sampling the RGB channels, with very few panchromatic measurements---in fact, in the first row of Fig.~\ref{fig:evolve}, we see panchromatic pixels switching to color measurements. This is likely because early on in the training process, the reconstruction network has not yet learned to exploit cross-channel correlations, and therefore needs to measure the output channels directly.
However, as training progresses, the reconstruction network becomes more sophisticated, and we see the color measurements become sparser and sparser, in favor of panchromatic pixels that offer the advantage of higher SNR. Essentially, the sensor layer begins to adopt one of the design principles of \cite{cfz}. However, it distributes the color measurement sites across the pattern, instead of concentrating them into separated blocks like \cite{cfz}. In the last 500K iterations, we see that most changes correspond to fine refinements of the pattern, with a few individual pixels swapping the channels they measure.
While the patterns themselves in Fig.~\ref{fig:evolve} correspond to the channel at each pixel with the maximum value in the selection map $I(n)$, remember that these maps themselves are soft. Therefore, we also report the mean entropy of the underlying $I(n)$ for each pattern in Fig.~\ref{fig:evolve}. We see that this entropy decreases across iterations, as the choice of color channel for more and more pixels becomes fixed, with their distributions in $I(n)$ becoming peakier and closer to being zero-one vectors.
\subsection{Evaluating Learned Pattern}
Finally, we evaluate the performance of neural network-based reconstruction from measurements with our learned pattern against those with the Bayer pattern and the pattern of \cite{cfz}. Table~\ref{tab:psnr} shows different quantiles of reconstruction PSNR for various noise levels, with noise STDs ranging from 0 to 0.04. Even though our sensor pattern was trained at the noise level of STD=0.01, we find it achieves the highest reconstruction quality over a large range of noise levels. Specifically, it always outperforms the Bayer pattern, by fairly significant margins at higher noise levels. The improvement in performance over \cite{cfz}'s pattern is less pronounced, although we do achieve consistently higher PSNR values for all quantiles at most noise levels. Figure~\ref{fig:qual} shows examples of color patches reconstructed from our learned sensor, and compares these to those from the Bayer pattern and \cite{cfz}.
We see that the reconstructions from the Bayer pattern are noticeably worse. This is because it makes lower SNR measurements, and the reconstruction networks learn to smooth their outputs to reduce this noise. Both \cite{cfz} and our pattern yield significantly better reconstructions. Indeed, most of our gains over the Bayer pattern come from choosing to make most measurements panchromatic, a design principle shared by \cite{cfz}. However, remember that our sensor layer learns this principle entirely automatically from data, without expert supervision. Moreover, we see that \cite{cfz}'s reconstructions tend to have a few more instances of ``chromaticity noise'', in the form of contiguous regions with incorrect hues, which explains its slightly lower PSNR values in Table~\ref{tab:psnr}.
\begin{table}
\caption{Network Reconstruction PSNR (dB) Quantiles for various CFA Patterns}
\centering
{\small
\begin{tabular}{|c|c||c|c|c|}
\hline
~Noise STD~&~Percentile~&
~Bayer~\cite{bayer1976color}~&
~CFZ~\cite{cfz}~&
~Learned~\\\hline\hline
& 25\% & 47.62 & \bf 48.04 & 47.97\\
0 & 50\% & 51.72 & \bf 52.17 & 52.12\\
& 75\% & 54.97 & \bf 55.32 & 55.30\\\hline
& 25\% & 44.61 & 46.05 & \bf 46.08\\
0.0025 & 50\% & 47.55 & 49.08 & \bf 49.17\\
& 75\% & 50.52 & 51.57 & \bf 51.76\\\hline
& 25\% & 42.55 & 44.33 & \bf 44.37\\
0.0050 & 50\% & 45.63 & 47.01 & \bf 47.19\\
& 75\% & 48.73 & 49.68 & \bf 49.94\\\hline
& 25\% & 41.34 & 42.92 & \bf 43.08\\
0.0075 & 50\% & 44.48 & 45.60 & \bf 45.85\\
& 75\% & 47.77 & 48.41 & \bf 48.69\\\hline
& 25\% & 40.58 & 41.97 & \bf 42.16\\
0.0100 & 50\% & 43.72 & 44.64 & \bf 44.94\\
& 75\% & 47.10 & 47.56 & \bf 47.80\\\hline
& 25\% & 40.29 & 41.17 & \bf 41.41\\
0.0125 & 50\% & 43.36 & 43.88 & \bf 44.22\\
& 75\% & 46.65 & 47.04 & \bf 47.27\\\hline
& 25\% & 39.97 & 40.54 & \bf 40.85\\
0.0150 & 50\% & 43.03 & 43.29 & \bf 43.69\\
& 75\% & 46.25 & 46.69 & \bf 46.86\\\hline
& 25\% & 39.60 & 40.03 & \bf 40.31\\
0.0175 & 50\% & 42.62 & 42.83 & \bf 43.12\\
& 75\% & 45.82 & 46.39 & \bf 46.45\\\hline
& 25\% & 39.31 & 39.49 & \bf 39.96\\
0.0200 & 50\% & 42.39 & 42.39 & \bf 42.78\\
& 75\% & 45.56 & 46.14 & \bf 46.23\\\hline
& 25\% & 38.18 & 38.31 & \bf 38.92\\
0.0300 & 50\% & 41.17 & 41.48 & \bf 41.85\\
& 75\% & 44.23 & 45.61 & \bf 45.63\\\hline
& 25\% & 37.14 & 37.43 & \bf 38.00\\
0.0400 & 50\% & 39.98 & 40.86 & \bf 41.02\\
& 75\% & 43.17 & \bf 45.11 & 44.98\\\hline
\end{tabular}}
\label{tab:psnr}
\end{table}
\section{Conclusion}
In this paper, we proposed learning sensor design jointly with a neural network that carried out inference on the sensor's measurements, specifically focusing on the problem of finding the optimal color multiplexing pattern for a digital color camera. We learned this pattern by joint training with a neural network for reconstructing full color images from the multiplexed measurements. We used a soft-max operation with an increasing temperature parameter to model the non-differentiable color channel selection at each point, which allowed us to train the pattern effectively. Finally, we demonstrated that our learned pattern enabled better reconstructions than past designs. An implementation of our method, along with trained models, data, and results, is available at our project page at \url{http://www.ttic.edu/chakrabarti/learncfa/}.
Our results suggest that learning measurement strategies jointly with computational inference is both useful and possible. In particular, our approach can be used directly to learn other forms of optimized multiplexing patterns---\emph{e.g.},~ spatio-temporal multiplexing for video, viewpoint multiplexing in light-field cameras, etc. Moreover, these patterns can be learned to be optimal for inference tasks beyond reconstruction. For example, a sensor layer jointly trained with a neural network for classification could be used to discover optimal measurement strategies for say, distinguishing between biological samples using multi-spectral imaging, or detecting targets in remote sensing.
\subsubsection*{Acknowledgments}
We thank NVIDIA corporation for the donation of a Titan X GPU used in this research.
\input{dcam.bbl}
\end{document}
\section{Introduction}
\label{sec:intro}
Multiple Sclerosis (MS) is an autoimmune disease of the central nervous system, in which
inflammatory demyelination
of axons causes focal lesions to occur in the brain. White matter lesions in MS can be detected
with standard magnetic resonance imaging (MRI) acquisition protocols without contrast injection. It
has been shown that many features of lesions, such as volume \cite{kalincik2012} and location
\cite{sati2016}, are important biomarkers of MS, and can be used to detect disease onset or track
its progression. Therefore accurate segmentation of white matter lesions is important in
understanding the progression and prognosis of the disease. With $T_2$-w FLAIR (fluid attenuated
inversion recovery) imaging sequences, most lesions appear as bright regions in MR images, which
helps their automatic segmentation. Therefore FLAIR is the most common imaging contrast for detection
of MS lesions and is often used in conjunction with other structural MR contrasts, including
$T_1$-w, $T_2$-w, or $PD$-w images. Although manual delineations are considered as the gold
standard, manually segmenting lesions from 3D images is tedious, time consuming, and often not
reproducible. Therefore automated lesion segmentation from MRI is an active area of development in
MS research.
Automated lesion segmentation in MS is a challenging task for various reasons: (1) the
lesions are highly variable in terms of size and location, (2) lesion boundaries are often not well
defined, particularly on FLAIR images, and (3) clinical quality FLAIR images may possess low
resolution and often have imaging artifacts. It has also been observed that there is very high
inter-rater variability even with experienced raters \cite{carass2017,egger2017}. Therefore there
is an inherent reliability challenge associated with lesion segmentation. This problem is
accentuated by the fact that, unlike CT, MRI does not have a uniform intensity scale; acquiring
images on different scanners and with different contrast properties therefore adds to the
complexity of segmentation.
Many automated lesion segmentation methods have been proposed in the past decade
\cite{lorenzo2013}. These methods generally fall into two broad categories: supervised and
unsupervised. Unsupervised lesion segmentation methods rely on intensity models of brain tissue,
where image voxels containing high intensities in FLAIR images are modeled as outliers
\cite{lorenzo2011,shiee2009} based on the intensity distributions. The outlier voxels then become
potential candidates for lesions and then the segmentation can be refined by a simple threshold
\cite{souplet2008,llado2015,jain2015}. Alternatively, Bayesian models such as mixtures
of Gaussians \cite{schmidt2012,strumia2016,leemput2001,sudre2015} or Student's t mixture models
\cite{ferrari2016} can be applied on the intensity distributions of potential lesions and normal
tissues. Optimal segmentation is then achieved via an expectation-maximization algorithm. Additional
information about intensity distributions and expected locations of normal tissues via a collection
of healthy subjects \cite{warfield2015} can be included to determine the lesions more accurately.
Local intensity information can also be included via Markov random field to obtain a smooth
segmentation \cite{harmouche2006,harmouche2015}.
Supervised lesion segmentation methods make use of atlases or templates, which typically consist of
multi-contrast MR images and their manually delineated lesions. As seen in the
\texttt{ISBI-2015}\footnote[1]{\url{https://smart-stats-tools.org/lesion-challenge-2015}} lesion
segmentation
challenge \cite{carass2017}, supervised methods have become more popular and are usually superior
to unsupervised ones, with $4$ out of top $5$ methods being supervised. These methods learn the
transformation from the MR image intensities to lesion labels (or memberships) on atlases, and then
the learnt transformation is applied onto a new unseen image to generate its lesion labels. Logistic
regression \cite{sweeney2013,sweeney2016} and support vector machines \cite{christos2008} have
been used in lesion classification, where features include voxel-wise intensities from
multi-contrast images and the classification task is to label an image voxel as lesion or
non-lesion. Instead of using voxel-wise intensities, patches have been shown to be a robust and
useful feature \cite{roy2014spie1}. Random forest \cite{maier2015,geremia2011,jog2015} and
k-nearest neighbors \cite{griffanti2016} based algorithms have used patches and other features,
computed at a particular voxel, to predict the label of that voxel. Dictionary based methods
\cite{roy2015,roy2015mlmi,guizard2015,deshpande2015} use image patches from atlases to learn a
patch dictionary that can sufficiently describe potential lesion and non-lesion patches. For a new
unseen patch, similar patches are found from the dictionary and combined with weights based on the
similarity.
In recent years, convolutional neural networks (CNN), also known as deep learning
\cite{hinton2015}, have been successfully applied to many medical image processing applications
\cite{summers2016,litjens2017}. CNN based methods produce state-of-the-art results in many computer
vision problems such as object detection and recognition \cite{szegedy2015}. The primary advantage
of neural networks over traditional machine learning algorithms is that CNNs do not need
hand-crafted features, making it applicable to a diverse set of problems when it is not obvious what
features are optimal. Because neural networks can handle 3D images or image patches, both 2D
\cite{roth2016} and 3D \cite{brosch2016} algorithms have been proposed, with 2D
patches often being preferred for memory and speed efficiency. With advancements in graphics
processor units (GPU), neural network models can be trained on a GPU in a fraction of the time taken
with multiple CPUs. Also, CNNs can handle very large datasets without incurring too much
increase in processing time. Therefore they have gained popularity in the medical imaging community
in solving increasingly difficult problems.
CNNs have been shown to be better or on par with both probabilistic and multi-atlas label fusion
based methods for whole brain segmentation on adult \cite{wachinger2017,isgum2016a,chen2017} and
neonatal brains \cite{zhang2015,isgum2016b}. They have been especially successful in tumor
segmentations \cite{kamnitsas2017,pereira2015,veronica2016}, as seen on the \texttt{BRATS 2015}
challenge \cite{menze2015}. They have recently been applied for brain extraction in the presence
of tumors \cite{kleesiek2016}. Missing image contrasts pose a significant challenge in medical
imaging, where not all image contrasts may be acquired for every subject.
Traditional
CNN architectures can be modified to include image statistics in addition to image intensities to
circumvent missing image contrasts \cite{bengio2016} without sacrificing too much accuracy. CNN
models have also been applied to segment both cross-sectional
\cite{prieto2017,yoo2014,ghafoorian2017a,ghafoorian2017b,moeskops2017} and longitudinal
\cite{birenbaum2016} lesions from multi-contrast MR images. Recently, a two-step cascaded CNN
architecture \cite{llado2017} has been
proposed, where two separate networks are learnt; the first one computes an initial lesion
membership based on MR images and manual segmentations, while the second one refines the
segmentation from the first network by including its false positives in the training samples.
In this paper, we propose a fully convolutional neural network model, called Fast Lesion EXtraction
using COnvolutional Neural Networks (FLEXCONN), to segment MS lesions,
where parallel pathways of convolutional filters are first applied to multiple contrasts. The
outputs of those pathways are then concatenated and another
set of convolutional filters is applied on the joined output. Similar to
\cite{ghafoorian2017b}, we use large 2D patches and show that larger patches produce more accurate
results compared to smaller patches. The paper is organized
as follows. First the experimental data is described in Sec.~\ref{sec:materials}. The proposed
FLEXCONN network architecture and its various parameter optimizations are described in
Sec.~\ref{sec:method}. The segmentation results and the comparison with other methods are described
in Sec.~\ref{sec:results}.
\begin{table}[!bt]
\caption{
A short description of the four datasets is presented here. Details can be found in Sec.~\ref{sec:materials}. For \texttt{ISBI-21} and \texttt{ISBI-61}, each image has two manual
lesion masks from two raters.
}
\tabcolsep 3pt
\begin{center}
\begin{tabular}{ccccc}
\toprule[2pt]
Dataset & \#Images & \#Masks & Usage & Availability \\
\cmidrule[2pt](lr){1-5}
\texttt{ISBI-21} & 21 & 42 & Training & Public \\
\texttt{VAL-28} & 28 & 28 & Validation & Private \\
\texttt{ISBI-61} & 61 & 122 & Testing & Private \\
\texttt{MS-100} & 100 & 100 & Testing & Private \\
\bottomrule[2pt]
\end{tabular}
\end{center}
\label{tab:dataset}
\end{table}
\begin{table}[!tb]
\caption{
Imaging parameters, such as repetition time $T_R$ (ms), echo time $T_E$(ms), inversion time $T_I$
(ms), flip angle, and resolution (mm\ts{3}) are shown. These parameters are the same for all datasets
described in Sec.~\ref{sec:materials}.
}
\tabcolsep 2pt
\begin{center}
\begin{tabular}{cccccc}
\toprule[2pt]
& $T_R$ & $T_E$ & $T_I$ & Flip & Resolution \\
& & & & Angle & \\
\cmidrule[2pt](lr){1-6}
{3D MPRAGE} & 10.3 & 6 & 835 & 8\degree & $0.82\times 0.82\times 1.17$\\
$T_2$-w & 4177 & 12.31 & N/A & 90\degree & $0.82\times 0.82\times 2.2$\\
$PD$-w & $4177$ & 80 & N/A & 90\degree & $0.82\times 0.82\times 2.2$\\
2D FLAIR & $11000$ & 68 & 2800 & 90\degree & $0.82\times 0.82\times 2.2$ \\
& & & & & \& $0.82\times 0.82\times 4.4$\\
\bottomrule[2pt]
\end{tabular}
\end{center}
\label{tab:scan_param}
\end{table}
\section{Materials}
\label{sec:materials}
Two sets of data are used to evaluate the proposed algorithm. The first dataset is from the
\texttt{ISBI 2015} challenge \cite{carass2017}, which includes two groups, training and
testing. The training group, denoted by \texttt{ISBI-21}, is publicly available and comprises
$21$ scans from $5$ subjects. Four of the subjects have $4$ time-points and one has $5$ time-points,
each time-point separated by approximately a year. The test group, denoted by \texttt{ISBI-61}, is
not public and has $14$ subjects with $61$ images, each subject with $4-5$ time-points, each
time-point also being separated by a year. Although these images actually contain longitudinal scans
of the same subject, we treat the dataset as a cross-sectional study and report numbers on each
image separately since longitudinal information is not used within our approach.
A short description of the datasets is provided in Table~\ref{tab:dataset}.
The second dataset consists of $128$ patients enrolled in a natural history study of MS, $79$ with
relapsing-remitting, $30$ with secondary progressive, and $19$ with primary progressive MS. For
experimentation purpose, we arbitrarily divided this dataset into two groups, validation ($n=28$)
and test ($n=100$), denoted as \texttt{VAL-28} and \texttt{MS-100} respectively. The proposed
algorithm as well as the other competing methods were trained using \texttt{ISBI-21} as
training data. Then various parameters, as described in Sec.~\ref{sec:param}, were optimized using
\texttt{VAL-28} as the validation set. Finally the optimized algorithms were compared on the
\texttt{ISBI-61} and \texttt{MS-100} datasets, as detailed in Sec.~\ref{sec:ms100} and
Sec.~\ref{sec:isbi61}.
Each subject from both datasets had $T_1$-w MPRAGE, $T_2$-w, $PD$-w, and FLAIR images acquired in a
Philips 3T scanner. The imaging parameters are listed in Table~\ref{tab:scan_param}. Each image in
\texttt{MS-100} and \texttt{VAL-28} has one manually delineated lesion segmentation mask. Every
image in \texttt{ISBI-21} and \texttt{ISBI-61} has two masks, drawn by two different raters, as
explained in \cite{carass2011}.
\section{Method}
\label{sec:method}
\subsection{Image Preprocessing}
\label{sec:preprocessing}
The $T_1$-w images of every subject in the \texttt{MS-100} and \texttt{VAL-28} dataset were first
rigidly registered \cite{avants2011} to the axial $1$ mm\ts{3} MNI template \cite{mori2009}. They
were then skullstripped \cite{carass2011,roy2017} and corrected for any intensity inhomogeneity by
N4 \cite{tustison2010}. The other contrasts, i.e. $T_2$-w, $PD$-w, and FLAIR images were then
registered to the $T_1$-w image in MNI space, stripped with the same skull-stripping mask, and
corrected by N4 after stripping.
The preprocessing steps for the \texttt{ISBI-21} and \texttt{ISBI-61} datasets were very similar
and detailed in \cite{carass2017}. Briefly, the $T_1$-w images of the baseline of every
subject were rigidly registered to the MNI template, skullstripped \cite{carass2011}, and corrected
by N4. Then the other contrasts of the baseline and all contrasts of the followup time-points were
rigidly registered to the baseline $T_1$-w and corrected by N4. Lesions for both data sets were
manually delineated on pre-processed FLAIR images, although the other contrasts were available for
reference.
\begin{figure*}[!tbh]
\begin{center}
\includegraphics[height=0.55\textwidth]{figs/network}
\end{center}
\caption{Proposed neural network architecture for lesion segmentation is shown. 2D patches from
multi-channel images are used as features and convolutional filters ($f_j,j=1,2,\ldots$) are first
applied in parallel. Here $j$ denotes the index of the filter. Note that each ``filter ($f_j$)"
includes a convolution and a ReLU module. The filter outputs are concatenated and passed through
another convolutional pathway to predict a membership function of the patch. The filter number and
sizes are shown as $128\ @ \ 3^2$, indicating the corresponding filter bank contains $128$ filters,
each with size $3\times 3$. }
\label{fig:Fig1}
\end{figure*}
\subsection{CNN Architecture}
\label{sec:cnn}
Cascade type neural network architectures have become popular in medical image
segmentation, where features are either 2D slices or 3D patches from MR images. Typically,
multi-channel patches are first independently passed through convolutional filter banks, then a
fully connected (FC) layer is applied to predict the voxel-wise membership at the center of the
patches \cite{wachinger2017} from the concatenated outputs of the filters. We follow a similar
architecture, shown in Fig.~\ref{fig:Fig1}, where multi-channel 2D $p_1\times p_2$ patches are
convolved with multiple filter banks of various sizes (called a ``convolutional
pathway"), and the outputs of the convolutional pathways are concatenated. The details
of a convolutional pathway is given in Table~\ref{tab:conv_architecture}. After concatenation,
instead of an FC layer to predict the membership or probability of the center voxel of the
$p_1\times p_2$ patch, we add another convolutional pathway that predicts a membership value of the
whole $p_1\times p_2$ patch. Note that with variable pad sizes (see
Table~\ref{tab:conv_architecture}), the sizes of the input and outputs of the filters are kept
identical to the original MR image patch size. The training memberships are generated by simply
convolving the manual hard segmentations with a $3\times 3$ (denoted $3^2$) Gaussian kernel. We
observed that larger patches produce more accurate segmentations compared to smaller patches, and
determined that a $35\times 35$ patch produced the best results based on the \texttt{VAL-28}
dataset. The estimation of the optimal patch size is described in Sec.~\ref{sec:param}.
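For illustration, the training memberships can be generated as in the following sketch; the width $\sigma$ of the $3\times 3$ Gaussian kernel is an assumption here, since only the kernel support is specified above.
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def training_membership(hard_seg, sigma=1.0):
    """Smooth training memberships from a binary lesion mask by convolving
    a 2-D slice with a normalised 3 x 3 Gaussian kernel.
    hard_seg: (H, W) binary slice; sigma is an assumed kernel width."""
    ax = np.array([-1.0, 0.0, 1.0])
    g1 = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k = np.outer(g1, g1)
    k /= k.sum()                       # normalise so memberships stay in [0, 1]
    return convolve(hard_seg.astype(float), k, mode='constant')
\end{verbatim}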
\begin{table*}[htb]
\caption{Filter parameters of a convolutional pathway, as shown in Fig.~\ref{fig:Fig1}, are
provided. Every size is in voxels. Note that with variable pad sizes, the input and output size of
the filters are kept identical to the training patch size $p_1\times p_2$.}
\begin{center}
\begin{tabular}{clccccc}
\toprule[1pt]
Filter & Type & Number & Filter Size & Pad Size & Parameters & \# Parameters \\
Bank & & of Filters & & & & \\
\cmidrule[1.5pt](lr){1-7}
1 & Convolution & 128 & $3^2$ & $1^2$ & $3\times 3\times 128$ & 1152 \\
2 & Convolution & 64 & $5^2$ & $2^2$ & $5\times 5\times 128 \times 64$ & 204800\\
3 & Convolution & 32 & $3^2$ & $1^2$ & $3\times 3\times 64 \times 32$ & 18432\\
4 & Convolution & 16 & $5^2$ & $2^2$ & $5\times 5\times 32\times 16$ & 12800 \\
5 & Convolution & 8 & $3^2$ & $1^2$ & $3\times 3\times 16 \times 8$ & 1152\\
\bottomrule[2pt]
\end{tabular}
\end{center}
\label{tab:conv_architecture}
\end{table*}
\begin{figure*}[!tbh]
\begin{center}
\tabcolsep 0pt
\begin{tabular}{ccccc}
\textbf{MPRAGE} & \textbf{FLAIR} & \textbf{Manual} & \textbf{With FC} & \textbf{Without FC} \\
\includegraphics[width=0.19\textwidth]{figs/2000310_T1} &
\includegraphics[width=0.19\textwidth]{figs/2000310_FLAIR} &
\includegraphics[width=0.19\textwidth]{figs/2000310_manual} &
\includegraphics[width=0.19\textwidth]{figs/2000310_fc_mem} &
\includegraphics[width=0.19\textwidth]{figs/2000310_cnn_mem} \\
\end{tabular}
\end{center}
\caption{Examples of lesion memberships are shown when generated with a fully connected (FC) layer
predicting memberships at a voxel, compared to the proposed model where patch based memberships
are predicted using convolutional pathways. Note that a FC layer produces fuzzier memberships and
potentially more false positives. Memberships are scaled between $0$ (dark blue) and $1$ (red).
}
\label{fig:Fig2}
\end{figure*}
Improved segmentation results were achieved using a set of $5$ convolutional filter banks with
decreasing numbers of filters in one convolutional pathway, as shown in Fig.~\ref{fig:Fig1} and
Table~\ref{tab:conv_architecture}. The optimal number of filter banks in a pathway was also
estimated from a validation strategy discussed in Sec.~\ref{sec:param}. Each convolution is followed
by a rectified linear unit (ReLU) \cite{hinton2010}. The combination of convolution and ReLU is
indicated by $f_j$ in Fig.~\ref{fig:Fig1}. Our experiments showed that smaller filter sizes such as
$3^2$ and $5^2$ generally produce better segmentation than bigger filters (such as $7^2$ and $9^2$),
which was also observed before \cite{karen2015}. We hypothesize that since lesion boundaries are
often not well defined, small filters tend to capture the boundaries better. Also the number of free
parameters ($9$ for $3^2$) increases for larger filters ($49$ for $7^2$), which in
turn can either decrease the stability of the result or incur overfitting. However, smaller filters
may perform worse for larger lesions. Therefore we empirically used a combination of $3^2$ and $5^2$
filters based on our validation set \texttt{VAL-28}.
As noted, a major difference in the network architecture proposed here
in contrast to other popular CNN based segmentation methods is the use of a convolutional
layer to predict membership functions. The advantages of such a configuration compared to a FC layer
are as follows:
\begin{enumerate}
\item Depending on the number of convolutions and the patch size, the number of
free parameters for a FC layer can be large, thereby increasing the possibility of overfitting.
Recent successful deep learning networks such as ResNet \cite{he2016b} and GoogLeNet
\cite{szegedy2015} have put more focus on fully convolutional networks, with ResNet
having no FC layer at all. Although dropout \cite{srivastava2014} has been proposed to reduce the
effect of overfitting a network to the training data, the mechanism of randomly turning off
different neurons inherently results in slightly different segmentations every time the training is
performed even with the same training data.
\item We observed that memberships predicted with an FC layer result
in more false positives compared to a fully convolutional network. An example is shown in
Fig.~\ref{fig:Fig2}, where lesion memberships are generated from MPRAGE and FLAIR using
the proposed model of convolutional pathways and a comparable model where the last convolutional
pathway after concatenation (see Fig.~\ref{fig:Fig1}) is replaced with a FC layer predicting
voxel-wise memberships. The membership image
generated with an FC layer, although being close to $1$ inside the lesions, has high values ($\ge
0.5$) in the left and right frontal cortex where the FLAIR image shows some artifacts. However, the
membership obtained with the proposed method shows relatively low values near the frontal cortex.
\item With an FC layer, voxel-wise predictions are performed for each voxel of a new image. Therefore
the prediction time for the whole image comprising millions of voxels can take some time even on a
GPU, as mentioned in \cite{wachinger2017}. In contrast, with fully
convolutional prediction, lesion membership estimation of a $1$ mm\ts{3} MR volume of size
$181\times 217\times 181$ takes only a couple of seconds. Note that although patches are used for
training, the final trained model contains only convolution filters and does not depend in any way
on the input patch size. Therefore during testing, the lesion membership of a whole 2D slice,
irrespective of the slice size, is predicted in a single pass by applying convolutions on the whole slice (a minimal sketch of this patch-size independence follows this list).
Without an FC layer, the images need not be decomposed into sub-regions, e.g.,
\cite{kamnitsas2017}. Consequently, there is no need to employ membership smoothing between
sub-regions. In addition, since the training memberships, generated by Gaussian blurring of hard
segmentations, are smooth, the resultant predicted memberships are also smooth (Fig.~\ref{fig:Fig2}
last column).
\end{enumerate}
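The sketch below illustrates the patch-size independence noted above: a single convolutional pathway with the filter sizes of Table~\ref{tab:conv_architecture} accepts a $35\times 35$ training patch and an entire slice with the same weights. It is a simplified, single-contrast illustration rather than the full trained model.
\begin{verbatim}
import torch
import torch.nn as nn

# One convolutional pathway (Table 3): 128, 64, 32, 16, 8 filters with
# alternating 3x3 / 5x5 kernels and "same" padding, so the output tracks
# the input size.  Being convolution-only, weights trained on 35 x 35
# patches apply unchanged to a whole 2-D slice.
pathway = nn.Sequential(
    nn.Conv2d(1, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 64, 5, padding=2), nn.ReLU(),
    nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 8, 3, padding=1), nn.ReLU())

patch = torch.rand(1, 1, 35, 35)       # training-time input
whole = torch.rand(1, 1, 217, 181)     # test-time: an entire slice
print(pathway(patch).shape)            # torch.Size([1, 8, 35, 35])
print(pathway(whole).shape)            # torch.Size([1, 8, 217, 181])
\end{verbatim}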
MS lesions are heavily under-represented as a tissue class in a brain MR image,
compared to GM or WM. In the training dataset
\texttt{ISBI-21}, lesions represent on average $1$\% of all brain tissue voxels. For a binary
lesion classification, most supervised machine learning algorithms thus require balanced
training data \cite{he2009}, where the number of patches with lesions is approximately equal to the
number of lesion-free patches. Therefore normal tissue patches are randomly undersampled \cite{llado2017,roy2015}
to generate a balanced training dataset. This is true for a small $5^2$ or $7^2$ patch, which may
have all or most voxels as lesions, thereby requiring some other patches with all or most voxels as
normal tissue. In Sec.~\ref{sec:param}, we show that using larger patches, such as $25\times 25$ or
$35\times 35$, produce more accurate segmentations compared to smaller $9^2$ or $13^2$ patches.
Since we use large patches which cover most of the largest lesions, the effect of data imbalance
is reduced.
With large patches, our training data consists of patches where the center voxel
of a patch has a lesion label, i.e., all lesion patches are included in the training data with a
stride of $1$. We do not include any normal tissue patches, where none of the voxels have a lesion
label. Experiments showed that inclusion of the normal tissue patches does not improve segmentation
accuracy, but incurs longer training time by requiring more training epochs to achieve similar
accuracy. However, one drawback of only including patches with lesions is that generally more
training data are required, especially when the number of lesions becomes much smaller than the
number of parameters to be optimized, as shown in Table~\ref{tab:conv_architecture}.
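A sketch of this patch extraction for one 2D slice follows; the two-contrast stacking and the helper name are illustrative.
\begin{verbatim}
import numpy as np

def lesion_patches(flair, mprage, mask, p=35):
    """Extract all p x p 2-D patches whose centre voxel is a lesion
    (stride 1).  flair, mprage, mask: (H, W) slices; returns an
    (N, p, p, 2) patch stack.  Assumes at least one lesion voxel lies
    at least p // 2 voxels away from the image border."""
    h = p // 2
    centres = np.argwhere(mask[h:-h, h:-h] > 0) + h
    feats = np.stack([flair, mprage], axis=-1)
    return np.stack([feats[i-h:i+h+1, j-h:j+h+1] for i, j in centres])
\end{verbatim}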
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.47\textwidth]{figs/optimal_threshold_v2}
\end{center}
\caption{
Dice coefficients of segmentations from the \texttt{VAL-28} dataset are shown at different
membership thresholds from $0.05$ to $0.85$. The highest median Dice was observed at $0.30$. See
Sec.~\ref{sec:param} for details.
}
\label{fig:Fig3}
\end{figure}
\begin{figure*}[!bth]
\begin{center}
\tabcolsep 2pt
\begin{tabular}{ccccc}
\texttt{MPRAGE} & \texttt{T2} & \texttt{PD} & \texttt{FLAIR} & \texttt{Manual} \\
\includegraphics[width=0.18\textwidth]{figs/2121101_T1} &
\includegraphics[width=0.18\textwidth]{figs/2121101_T2} &
\includegraphics[width=0.18\textwidth]{figs/2121101_PD} &
\includegraphics[width=0.18\textwidth]{figs/2121101_FLAIR} &
\includegraphics[width=0.18\textwidth]{figs/2121101_manual} \\
\texttt{$13\times 13$} & \texttt{$17\times 17$} & \texttt{$19\times 19$}
& \texttt{$31\times 31$} & \texttt{$35\times 35$} \\
\includegraphics[width=0.18\textwidth]{figs/2121101_13x13x13} &
\includegraphics[width=0.18\textwidth]{figs/2121101_17x17x13} &
\includegraphics[width=0.18\textwidth]{figs/2121101_19x19x11} &
\includegraphics[width=0.18\textwidth]{figs/2121101_31x31x3} &
\includegraphics[width=0.18\textwidth]{figs/2121101_35x35x3}
\end{tabular}
\end{center}
\caption{Memberships of a subject from the \texttt{VAL-28} dataset with various patch sizes are shown.
As the patch size increases, the false positives in the cortex begin to decrease.
}
\label{fig:diff_patch_size}
\end{figure*}
\subsection{Comparison Metrics}
\label{sec:metrics}
We chose $4$ comparison metrics: Dice coefficient, lesion false positive rate (LFPR), positive
predictive value (PPV), and absolute volume difference (VD) to compare segmentations. For a manual
and an automated binary segmentation $\mathcal{M}$ and $\mathcal{A}$
respectively, Dice is a voxel-wise overlap measure defined as,
\begin{eqnarray*}
\mathrm{Dice}(\mathcal{A},\mathcal{M})=\frac{2|\mathcal{A}
\cap \mathcal{M}|}{|\mathcal{A}|+|\mathcal{M}|},
\end{eqnarray*}
where $|\cdot|$ denotes number of non-zero voxels. Since lesions are often small and their total
volumes are typically very small ($1-2$\%) compared to the whole brain volume, Dice can be affected
by the low volume of the segmentations \cite{geremia2011}. Therefore LFPR is defined based on
distinct lesion counts. A distinct lesion is defined as an $18$-connected object,
although such a description of lesions may or may not be biologically accurate.
LFPR is the number of lesions in the automated segmentation that do not overlap with any lesions in
the manual segmentation, divided by the total number of lesions in the automated segmentation. Two
lesions are considered overlapped when they share at least one voxel. PPV is defined as the ratio of
the number of true positive voxels to the total number of positive voxels in the automated
segmentation, expressed as
\begin{eqnarray*}
\mathrm{PPV}(\mathcal{A},\mathcal{M})=\frac{|\mathcal{A} \cap \mathcal{M}|}{|\mathcal{A}|}.
\end{eqnarray*}
Absolute volume difference is defined as
\begin{eqnarray*}
\mathrm{VD}(\mathcal{A},\mathcal{M}) =
\frac{\mathrm{abs}(|\mathcal{A}| - |\mathcal{M}|)}{|\mathcal{M}|}.
\end{eqnarray*}
All statistical tests were performed with a non-parametric paired Wilcoxon signed rank test.
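The following Python sketch illustrates the four metrics for boolean 3D masks $\mathcal{A}$ (automated) and $\mathcal{M}$ (manual); the use of SciPy for the $18$-connected labeling is our implementation choice, not something prescribed by the text.
\begin{verbatim}
import numpy as np
from scipy import ndimage

S18 = ndimage.generate_binary_structure(3, 2)   # 18-connectivity

def dice(A, M):
    return 2.0 * (A & M).sum() / (A.sum() + M.sum())

def lfpr(A, M):
    lab, n = ndimage.label(A, structure=S18)    # distinct lesions in A
    if n == 0:
        return 0.0
    # a lesion is a false positive if it shares no voxel with M
    fp = sum(1 for k in range(1, n + 1) if not (M & (lab == k)).any())
    return fp / n

def ppv(A, M):
    return (A & M).sum() / A.sum()

def vd(A, M):
    return abs(int(A.sum()) - int(M.sum())) / M.sum()
\end{verbatim}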
\subsection{Parameter Optimization}
\label{sec:param}
In this section, we describe a validation strategy to optimize the user-selectable parameters of the
proposed network: (1) patch size, (2) number of filter banks in a convolutional pathway, and (3) the
final threshold to create hard segmentations from memberships. After training with \texttt{ISBI-21},
the network was applied to the images of \texttt{VAL-28} to generate their lesion membership images.
Memberships were thresholded and then masked with a cerebral white matter mask \cite{shiee2009} to
remove any residual false positives. Dice was used as the primary metric for optimizing the
parameters, with LFPR used as a secondary metric for patch size optimization. Although our model is
capable of using all four available contrasts, initial experiments on \texttt{VAL-28} data showed
negligible improvement in segmentation accuracy with $T_2$-w and $PD$-w images. Therefore all
results were obtained with only MPRAGE and FLAIR contrasts.
\begin{figure*}[tbh]
\begin{center}
\includegraphics[width=0.6\textwidth]{figs/dice_exp_patchsize} \\
\includegraphics[width=0.6\textwidth]{figs/lfpr_exp_patchsize}
\end{center}
\caption{Dice coefficients (top) and LFPR (bottom) are plotted for the segmentations of
\texttt{VAL-28} data when trained with \texttt{ISBI-21} for various patch sizes. See
Sec.~\ref{sec:param} for details.
}
\label{fig:Fig5}
\end{figure*}
To optimize the membership threshold, we trained a network with $35\times 35$ patches. Memberships
generated on \texttt{VAL-28} were segmented with thresholds from $0.05$ to $0.85$ with an increment
of $0.05$. The range of Dice coefficients is shown in Fig.~\ref{fig:Fig3}.
The highest median Dice coefficient was observed for a threshold of
$0.30$. This is intuitively reasonable because during training, the lesion memberships of atlases
were generated from their hard segmentations using a $3\times 3$ Gaussian kernel, and it can
be shown that the half max of a $3\times 3$ discrete Gaussian is at $0.31$.
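A hypothetical sketch of this threshold sweep is given below; \texttt{memberships} and \texttt{manuals} stand for the \texttt{VAL-28} membership images and manual masks, and \texttt{dice} is the function from the metric sketch in Sec.~\ref{sec:metrics}.
\begin{verbatim}
import numpy as np

ts = np.arange(0.05, 0.851, 0.05)    # thresholds 0.05, 0.10, ..., 0.85
med = [np.median([dice(mb >= t, mn)
                  for mb, mn in zip(memberships, manuals)])
       for t in ts]
best = ts[int(np.argmax(med))]       # 0.30 in our experiments
\end{verbatim}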
Next we varied the depth of a convolutional pathway from $2$ to $6$ filter banks while keeping
the number of filters as a multiple of $2$, with the last filter bank having $8$ filters.
The highest median Dice coefficient was observed at a depth of $5$, which is
significantly larger than Dice coefficients with depths $3$ and $4$ ($p<0.05$). Although the
differences in Dice coefficients were small between various depths, we used a depth of $5$ for the
rest of the experiments. With more than $6$ filter banks, the Dice slowly decreases, which can be
attributed to overfitting the training data.
Patch size is another important parameter of the network. In computer vision applications such as
object detection, usually a whole 2D image is used as a feature. However, full 3D medical images
typically cannot be used because of memory limitations. Fig.~\ref{fig:diff_patch_size} shows examples
of lesion memberships obtained with differently sized 2D patches. As the patch size increases, the
false positives that are mostly observed in the cortex tend to decrease.
Fig.~\ref{fig:Fig5} shows a plot of Dice and LFPR with various patch sizes, ordered from left to
right according to their increasing size. Note that smaller patches ($9^2$ to $17^2$) produced
significantly lower Dice and higher LFPR compared to other patches ($p<0.001$), as seen from the
memberships in Fig.~\ref{fig:diff_patch_size}. Also some of the highest Dice and lowest LFPR were
observed for patches with large in-plane size, i.e., $31\times 31$, $27\times 27$, and $35\times
35$. It was observed in Fig.~\ref{fig:Fig5} that there is no significant difference between Dice
coefficients for $31\times
31$, $35\times 35$, or $27\times 27$, but LFPR of both $35\times 35$ and $31\times 31$ are
significantly lower than that of $27 \times 27$ ($p<0.05$). We chose $35\times 35$ as the optimal
patch size. Other choices of smaller $5^2$ and $7^2$ patches (not shown) yielded worse results.
Note that although training was performed with different patch sizes, the
memberships were generated slice by slice, as the trained model consisted only of convolutions
and did not need any information about patch sizes.
\subsection{Competing Methods}
\label{sec:competing_method}
We compared FLEXCONN with LesionTOADS \cite{shiee2009}, OASIS \cite{sweeney2013},
LST \cite{schmidt2012}, and S3DL \cite{roy2015}. LesionTOADS (Topology-preserving Anatomy-Driven
Segmentation) does not need any parameter tuning and uses MPRAGE and FLAIR. OASIS (OASIS is
Automated Statistical Inference for Segmentation) has a threshold parameter that is used to
threshold the memberships to create a hard segmentation. It was optimized as
$0.15$ by training a logistic regression on the \texttt{ISBI-21} and applying the regression model
to \texttt{VAL-28}. A similar value was reported in the original paper. OASIS requires all four
contrasts, MPRAGE, $T_2$-w, $PD$-w, and FLAIR. LST (Lesion Segmentation Toolbox) has a parameter
$\kappa$, which initializes the lesion segmentation. Lower values of $\kappa$ produce bigger
lesions. We optimized $\kappa$ to maximize the Dice coefficient on \texttt{VAL-28} data and found
that $\kappa=0.10$ yielded the highest median Dice. LST uses MPRAGE and FLAIR images. S3DL has two
parameters, number of atlases and membership threshold. We observed that adding more than $4$
atlases did not improve Dice coefficients significantly, as was reported in the original paper.
Hence we used $5$ atlases, namely the last time-points of the $5$ subjects from the
\texttt{ISBI-21} dataset. The optimal membership threshold for S3DL was found to be $0.80$. S3DL used
MPRAGE and FLAIR as adding $T_2$-w and $PD$-w images did not improve the segmentation.
\subsection{Implementation Details}
\label{sec:implementation}
Our model was implemented in Tensorflow and Keras\footnote[7]{\url{https://keras.io/}}. We used
Adam \cite{kingma2015} as the optimizer, which has been shown to produce fast convergence in
neural network parameter optimization. The optimization was run with fixed learning rate of $0.0001$
for $20$ epochs, which was empirically found to produce sufficient convergence without overfitting.
During training with $35\times 35$ patches using lesions from $21$ subjects of \texttt{ISBI-21}
dataset, we used $20$\% of the total number of patches for validation and the remaining $80$\% for
training. Training with minibatches of size $128$ required about $6$ hours on an Nvidia Titan X GPU
with $12$ GB memory. Segmenting a new subject took $3-5$ seconds.
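A minimal sketch of this training setup in Keras is shown below; \texttt{model} and the patch arrays are placeholders, and the mean-squared-error loss is an illustrative assumption, since only the optimizer, learning rate, number of epochs, batch size, and the $80$/$20$ split are restated here.
\begin{verbatim}
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-4),  # fixed rate 0.0001
              loss='mean_squared_error')           # assumed loss
model.fit(train_patches, train_memberships,
          batch_size=128, epochs=20, validation_split=0.2)
\end{verbatim}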
\section{Results}
\label{sec:results}
In this section, we compare FLEXCONN with other methods on the two datasets \texttt{MS-100}
and \texttt{ISBI-61} (see Section~\ref{sec:materials}). Research
code\footnote[6]{\url{http://www.nitrc.org/projects/flexconn}} implementing our method is freely
available.
\subsection{\texttt{MS-100} Dataset}
\label{sec:ms100}
For this dataset, the training was performed separately with two sets of masks from the two raters
of \texttt{ISBI-21} data. Then two memberships were generated for each of the $100$ images. For each
image, the two memberships were averaged and thresholded to form the final segmentation.
Fig.~\ref{fig:ms100-example} shows MR images and segmentations of $3$ subjects from the
\texttt{MS-100} dataset, where the subjects have high ($22$cc), moderate ($8$cc), and low ($1$cc)
lesion loads. For the subject with high lesion load (\#1), all $5$ methods performed comparably,
although OASIS and LST underestimated some small and subtle lesions (yellow arrow). For the subject
with moderate lesion load (\#2), OASIS and S3DL underestimated some lesions (orange arrow) and
LesionTOADS overestimated some (green arrow). When the lesion load is small and the FLAIR image has
some artifacts (subject \#3), LesionTOADS, S3DL, and OASIS produce a false positive (yellow arrow)
in the cortex. LST shows
underestimation, but FLEXCONN does not produce the false positive. The reason is partly the
use of large patches, which can successfully distinguish between bright voxels in the cortex and
in peri-ventricular regions.
\begin{figure*}[!tbh]
\begin{center}
\tabcolsep 0pt
\begin{tabular}{ccccc}
\textsc{LesionTOADS} & \textsc{S3DL} & \textsc{OASIS} & \textsc{LST} & \textsc{Proposed} \\
\includegraphics[width=0.2\textwidth]{figs/LTOADSplot} &
\includegraphics[width=0.2\textwidth]{figs/S3DLplot} &
\includegraphics[width=0.2\textwidth]{figs/OASISplot} &
\includegraphics[width=0.2\textwidth]{figs/LSTplot} &
\includegraphics[width=0.2\textwidth]{figs/CNNplot}
\end{tabular}
\end{center}
\caption{Manual vs automated lesion volumes for the $5$ methods on the \texttt{MS-100} dataset. The solid lines
show robust linear fits of the points and the dotted black line represents the unit slope line.
Numbers are in mm\ts{3}.}
\label{fig:ms100-scatterplot}
\end{figure*}
\begin{table*}[!tbh]
\caption{Slopes and intercepts from Fig.~\ref{fig:ms100-scatterplot} are shown for the
\texttt{MS-100} dataset. Bold indicates the largest absolute value among the $5$ methods.}
\begin{center}
\tabcolsep 5pt
\begin{tabular}{ccc}
\toprule[2pt]
Method & Slope & Intercept (cc) \\
\cmidrule[2pt](lr){1-3}
LesionTOADS \cite{shiee2009} & 0.6112 & 7783.2 \\
S3DL \cite{roy2015} & 0.7488 & 1570.1 \\
OASIS \cite{sweeney2013} & 0.8002 & 1163.1\\
LST \cite{schmidt2012} & 0.4650 & \textbf{-44.9} \\
FLEXCONN & \textbf{0.9421} & 143.2 \\
\bottomrule[2pt]
\end{tabular}
\end{center}
\label{tab:ms100-stats}
\end{table*}
\begin{table*}[!tbh]
\caption{Median values of Dice, lesion false positive rate (LFPR), positive predictive value (PPV), and
volume difference (VD) are shown for competing methods on the \texttt{MS-100} dataset. Bold indicates
the significantly highest/lowest $(p<0.05)$ number. See Sec.~\ref{sec:metrics} for the definition of the
metrics.}
\begin{center}
\tabcolsep 5pt
\begin{tabular}{ccccc}
\toprule[2pt]
& Dice & LFPR & PPV & VD \\
\cmidrule[1pt](lr){1-5}
LesionTOADS & 0.4678 & 0.6865 & 0.3968 & 0.4718 \\
S3DL & 0.5526 & 0.4164 & 0.5968 & 0.2755 \\
OASIS & 0.4993 & 0.5081 & 0.6242 & 0.2681 \\
LST & 0.4239 & 0.4409 & \textbf{0.7820} & 0.5623 \\
FLEXCONN & \textbf{0.5639} & \textbf{0.3077} & 0.6040 & \textbf{0.1978} \\
\bottomrule[1pt]
\end{tabular}
\end{center}
\label{tab:ms100-volstats}
\end{table*}
\begin{figure*}[!tbh]
\begin{center}
\tabcolsep 1pt
\begin{tabular}{cccccc}
& \textbf{MPRAGE} & \textbf{FLAIR} & $\mathbf{T_2}$-w & $\mathbf{PD}$-w & \textbf{Manual} \\
{\rotatebox{90}{\hspace{1em}\texttt{Subject \#1}}} &
\includegraphics[width=0.13\textwidth]{figs/2023902_T1} &
\includegraphics[width=0.13\textwidth]{figs/2023902_FLAIR_arrow} &
\includegraphics[width=0.13\textwidth]{figs/2023902_T2} &
\includegraphics[width=0.13\textwidth]{figs/2023902_PD} &
\includegraphics[width=0.13\textwidth]{figs/2023902_manual_arrow}
\end{tabular}
\tabcolsep 1pt
\begin{tabular}{cccccc}
\textbf{LesionTOADS} & \textbf{S3DL} & \textbf{OASIS} & \textbf{LST} & \textbf{FLEXCONN} & \textbf{Membership} \\
\includegraphics[width=0.13\textwidth]{figs/2023902_LTOADS} &
\includegraphics[width=0.13\textwidth]{figs/2023902_S3DL} &
\includegraphics[width=0.13\textwidth]{figs/2023902_OASIS} &
\includegraphics[width=0.13\textwidth]{figs/2023902_LST} &
\includegraphics[width=0.13\textwidth]{figs/2023902_CNN} &
\includegraphics[width=0.13\textwidth]{figs/2023902_CNN_memb}
\end{tabular}
\tabcolsep 1pt
\begin{tabular}{cccccc}
& \textbf{MPRAGE} & \textbf{FLAIR} & $\mathbf{T_2}$-w & $\mathbf{PD}$-w & \textbf{Manual} \\
{\rotatebox{90}{\hspace{1em}\texttt{Subject \#2}}} &
\includegraphics[width=0.13\textwidth]{figs/2000607_T1} &
\includegraphics[width=0.13\textwidth]{figs/2000607_FLAIR_arrow} &
\includegraphics[width=0.13\textwidth]{figs/2000607_T2} &
\includegraphics[width=0.13\textwidth]{figs/2000607_PD} &
\includegraphics[width=0.13\textwidth]{figs/2000607_manual}
\end{tabular}
\tabcolsep 1pt
\begin{tabular}{cccccc}
\textbf{LesionTOADS} & \textbf{S3DL} & \textbf{OASIS} & \textbf{LST} & \textbf{FLEXCONN} & \textbf{Membership} \\
\includegraphics[width=0.13\textwidth]{figs/2000607_LTOADS} &
\includegraphics[width=0.13\textwidth]{figs/2000607_S3DL} &
\includegraphics[width=0.13\textwidth]{figs/2000607_OASIS} &
\includegraphics[width=0.13\textwidth]{figs/2000607_LST} &
\includegraphics[width=0.13\textwidth]{figs/2000607_CNN} &
\includegraphics[width=0.13\textwidth]{figs/2000607_CNN_memb}
\end{tabular}
\tabcolsep 1pt
\begin{tabular}{cccccc}
& \textbf{MPRAGE} & \textbf{FLAIR} & $\mathbf{T_2}$-w & $\mathbf{PD}$-w & \textbf{Manual} \\
{\rotatebox{90}{\hspace{1em}\texttt{Subject \#3}}} &
\includegraphics[width=0.13\textwidth]{figs/2011604_T1} &
\includegraphics[width=0.13\textwidth]{figs/2011604_FLAIR} &
\includegraphics[width=0.13\textwidth]{figs/2011604_T2} &
\includegraphics[width=0.13\textwidth]{figs/2011604_PD} &
\includegraphics[width=0.13\textwidth]{figs/2011604_manual}
\end{tabular}
\tabcolsep 1pt
\begin{tabular}{cccccc}
\textbf{LesionTOADS} & \textbf{S3DL} & \textbf{OASIS} & \textbf{LST} & \textbf{FLEXCONN} & \textbf{Membership} \\
\includegraphics[width=0.13\textwidth]{figs/2011604_LTOADS_arrow} &
\includegraphics[width=0.13\textwidth]{figs/2011604_S3DL} &
\includegraphics[width=0.13\textwidth]{figs/2011604_OASIS} &
\includegraphics[width=0.13\textwidth]{figs/2011604_LST} &
\includegraphics[width=0.13\textwidth]{figs/2011604_CNN} &
\includegraphics[width=0.13\textwidth]{figs/2011604_CNN_memb}
\end{tabular}
\end{center}
\vspace{-1em}
\caption{
$T_1$-w, $T_2$-w, $PD$-w, and FLAIR images of three subjects with high,
medium, and low lesion load from \texttt{MS-100} dataset, along with segmentations from $5$
competing methods and one lesion membership from the proposed CNN based FLEXCONN.
}
\label{fig:ms100-example}
\end{figure*}
\begin{figure*}[!tbh]
\begin{center}
\tabcolsep 1pt
\begin{tabular}{cccc}
\textbf{MPRAGE} & \textbf{FLAIR} & \textbf{Rater 1} & \textbf{Rater 2} \\
\includegraphics[width=0.2\textwidth]{figs/test01_02_T1} &
\includegraphics[width=0.2\textwidth]{figs/test01_02_FLAIR} &
\includegraphics[width=0.2\textwidth]{figs/test01_02_rater1} &
\includegraphics[width=0.2\textwidth]{figs/test01_02_rater2}
\end{tabular}
\begin{tabular}{ccc}
\textbf{Membership\#1} & \textbf{Membership\#2} & \textbf{FLEXCONN} \\
\includegraphics[width=0.2\textwidth]{figs/test01_02_CNN_membership1} &
\includegraphics[width=0.2\textwidth]{figs/test01_02_CNN_membership2} &
\includegraphics[width=0.2\textwidth]{figs/test01_02_CNN_seg}
\end{tabular}
\end{center}
\caption{A typical segmentation example for one subject from the \texttt{ISBI-61} dataset.
Membership\#1 and \#2 refer to the lesion memberships obtained using training data from rater 1 and
2, respectively.
}
\label{fig:isbi-example}
\end{figure*}
\begin{table*}[tbh]
\caption{Mean values of comparison metrics are shown for various competing methods on the \texttt{ISBI-61}
dataset. Bold indicates the highest or lowest value. See Sec.~\ref{sec:metrics}
for the definition of the metrics. The score was computed as a weighted average of the other
metrics.}
\begin{center}
\tabcolsep 3pt
\begin{tabular}{cccccc}
\toprule[2pt]
& Dice & LFPR & PPV & VD & Score\\
\cmidrule[1pt](lr){1-6}
Birenbaum \textit{et al.}\cite{birenbaum2016} & 0.6271 & 0.4976 & 0.7890 & 0.3523 & 90.07\\
Jain \textit{et al.}\cite{jain2015} & 0.5243 & 0.4005 & 0.6947 & 0.3886 & 88.74\\
Tomas-Fernandez \textit{et al.}\cite{warfield2015} & 0.4317 & 0.4116 & 0.6974 & 0.5110 & 87.07\\
Ghafoorian \textit{et al.}\cite{ghafoorian2017a}& 0.5009 & 0.5766 & 0.5942 & 0.5708 & 86.92\\
Sudre \textit{et al.}\cite{sudre2015} & 0.5226 & 0.6776 & 0.6690 & 0.3887 & 86.44\\
Maier \textit{et al.}\cite{maier2015} & 0.6050 & 0.2658 & 0.7746 & 0.3654 & 90.28\\
Deshpande \textit{et al.}\cite{deshpande2015} & 0.5920 & 0.2806 & 0.7622 & \textbf{0.3214} & 89.81\\
Valverde \textit{et al.}\cite{llado2017} & \textbf{0.6305} & 0.1529 & 0.7867 & 0.3385 & \textbf{91.33} \\
FLEXCONN & 0.5243 & \textbf{0.1103} & \textbf{0.8660} & 0.5207 & 90.48\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\label{tab:isbi61-volstats}
\end{table*}
Since lesion volume is an important outcome measure for evaluating disease progression, we compared
automated lesion volume vs the manual lesion volume in Fig.~\ref{fig:ms100-scatterplot}. Solid lines
represent a robust linear fit of the points, and the black dotted line represents unit slope. It is
observed that LesionTOADS (blue) overestimates lesions when the lesion load is small, and LST (magenta)
underestimates lesions when the lesion load is high. S3DL, OASIS, and FLEXCONN show less bias
with respect to lesion load, while FLEXCONN has the slope closest to unity ($0.94$). The
slopes and intercepts with manual lesion volumes are also shown in Table~\ref{tab:ms100-stats}.
Table~\ref{tab:ms100-volstats} shows median values of various comparison metrics for the $5$
competing methods. FLEXCONN produces significantly better Dice, LFPR, and VD ($p<0.01$)
than the other four methods. LST produces the highest PPV.
\subsection{\texttt{ISBI-61} Dataset}
\label{sec:isbi61}
Although \texttt{ISBI-61} includes longitudinal images, we performed the segmentation in a
cross-sectional manner. The segmentations were generated in a similar fashion as the \texttt{MS-100}
dataset (Sec.~\ref{sec:ms100}) by averaging two memberships obtained using two sets of training.
A typical segmentation example is shown in Fig.~\ref{fig:isbi-example}, where the subject has high
lesion load ($40$cc).
Table~\ref{tab:isbi61-volstats} shows a comparison with some of the methods that participated in
the \texttt{ISBI 2015} challenge. The proposed method achieves the lowest LFPR ($0.1103$)
and the highest PPV ($0.8660$) compared to the others, while the highest Dice was produced by another
recent CNN based method \cite{llado2017}. The lowest VD was achieved by a dictionary based
method \cite{deshpande2015}. To rank a method, a score was computed using a weighted average of
various metrics including Dice, LFPR, PPV, and VD, as detailed in \cite{carass2017}. For the two
raters, the inter-rater score was $0.67$, which is scaled to $90$. Therefore, a score of $90$ or
more indicates segmentation accuracy similar to the consistency between two human raters. FLEXCONN
achieved a score of $90.48$, while the other CNN based methods, \cite{llado2017} and
\cite{birenbaum2016}, achieved scores of $91.33$ and $90.07$, respectively, indicating their
performance to be comparable to human raters. Most of the top scoring methods in the challenge were
based on CNN.
\section{Discussion}
We have proposed a simple end-to-end fully convolutional neural network based method to segment
MS lesions from multi-contrast MR images. Our network does not have any fully connected layers
and, after training, takes only a couple of seconds to segment lesions on a new image. Although we
validated using only $T_1$-w and FLAIR contrasts, other contrasts can easily be included in the
framework. We have shown that using large, two-dimensional patches provides significantly better
segmentations than smaller patches. Comparisons with four other publicly available lesion
segmentation methods, two supervised and two unsupervised, showed superior performance over $100$
images.
During training, there were several parameters that were empirically determined. First, for a
$(2w+1)^2$ filter, we used zero padding of width $w$ at each convolution so as to have a uniform input and
output patch size to all filters. Without padding, the output size after every filter bank decreases
and care should be taken to keep the input and output patches properly aligned. With padding, we can
add or remove filter banks without worrying about alignment. Another important parameter is the
batch size. With too small a batch size, the gradient computation becomes noisy and the stochastic
gradient descent optimization may not lead to a local minimum. With too large a batch size, the
optimization may lead to a sharp local minimum, making the model not generalizable to new data
\cite{keskar2016}. Therefore, an appropriate batch size should be chosen based on the data. We
empirically chose a batch size of $128$ for training and $64$ for testing.
With the removal of fully connected layers, the proposed fully convolutional network can generate
the membership of a 2D slice without the need for dividing images into sub-regions
\cite{kamnitsas2017}. With large
enough patches, the contextual information of a lesion voxel can be obtained from within the
patch. This is representative of a human observer looking at a large neighborhood while considering
a voxel to be lesion or not. Note that although the training is performed with patches, the
prediction step does not need the patch information because the trained convolutions are applied to
a whole 2D slice. As a consequence, the memberships are inherently smooth, and the problem of
possible discontinuities between sub-regions does not arise.
MS lesion segmentation is associated with high inter-rater variability in manual delineations, as
seen on both the \texttt{MICCAI 2008} and \texttt{ISBI 2015} challenges. For
example, in the \texttt{MICCAI 2008} lesion segmentation challenge, the average Dice overlap between
two raters was $0.25$, and in the \texttt{ISBI 2015} challenge, the inter-rater Dice overlap was
$0.63$ \cite{carass2017}. Therefore it is expected that the average Dice coefficients of the
proposed segmentations are as low as $0.5$ and sometimes are even lower. However, Dice coefficients
can be artificially low when the actual lesion volume is small; therefore, having fewer false
positives can be more desirable than having a high Dice. Our proposed model had the lowest false
positive rate compared to all other methods on both test datasets while maintaining good
sensitivity.
In our experiments, we used large 2D patches similar to \cite{ghafoorian2017b}, in comparison to
isotropic 3D patches as used before, e.g., $11^3$ in \cite{llado2017}, $23^3$ in
\cite{wachinger2017}, and $17^3$ in \cite{kamnitsas2017}. The rationale behind using large
anisotropic patches is twofold. First, experiments with full 3D isotropic $9^3$ or $11^3$ patches
showed little or no improvement in Dice and led to increased false positives, with memberships
similar to the one with $13\times 13$ patches, as shown in
Fig.~\ref{fig:diff_patch_size}. Larger isotropic patches, e.g. $19^3$ or $25^3$, showed inferior
segmentation, and in some cases, optimization did not converge. The reason is that the FLAIR
images in the test datasets had inherently low resolution in the inferior-superior direction,
$2.2$ mm and $4.4$ mm compared to in-plane resolution of $0.82\times 0.82$ mm. Therefore 2D axial
patches capture the high resolution in-plane information that represents the original thick axial
slices. Second, the lesions are usually focal and small in size, unlike other brain
structures. Therefore a very large isotropic patch around a small lesion can include superfluous
information about the lesion, which can increase the amount of false positives.
Note that for more recent studies employing high-resolution 3D FLAIR sequences, it is
straightforward to extend the algorithm to accommodate 3D patches.
One drawback of the proposed method is that it requires a large number of training patches. With the
\texttt{ISBI-21} as training data, there are approximately only $270,000$ training patches.
Patch
rotation \cite{guizard2015} is a standard data augmentation technique where training patches are
rotated by $45$\degree, $90$\degree, $135$\degree, and $180$\degree{} in the axial plane, and added to
the training patch set in addition to the non-rotated patches. Our initial experiments on the
\texttt{VAL-28} dataset showed only a $1-2$\% increase in average Dice coefficients with rotated
patches, at the cost of significantly more memory and training time, indicating that the network is
already sufficiently generalizable with the original training data. Therefore we did not use rotated
patches in the final segmentation. However, further experiments are needed to understand the full
scope of performance improvement with respect to the available training data and other augmentation
techniques, such as patch cropping or adding visually imperceptible jitters to images.
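For reference, the rotation augmentation discussed above could be sketched as follows; the SciPy/NumPy calls are our implementation choice.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def augment(patch):                                  # 2D axial patch
    rotated = [ndimage.rotate(patch, a, reshape=False, order=1)
               for a in (45, 135)]                   # oblique angles
    rotated += [np.rot90(patch, k) for k in (1, 2)]  # 90 and 180 degrees
    return [patch] + rotated
\end{verbatim}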
Table~\ref{tab:isbi61-volstats} shows that no single method achieves the best value for all the
metrics. This is consistent with previously reported results \cite{carass2017} on
the same \texttt{ISBI-61} data. There are several methods with scores above
$90$, such as \cite{birenbaum2016,maier2015,llado2017}. \cite{llado2017} produced the highest Dice,
while FLEXCONN produced the lowest LFPR and the highest PPV. Both these methods are based on CNN, outperforming
other traditional machine learning based algorithms. Note that FLEXCONN has a very simple network
architecture and does not have a longitudinal component like \cite{birenbaum2016} or a two-pass
correction mechanism like \cite{llado2017}. Still, it was able to achieve similar overall
performance. Future work will include further comparison with other CNN based methods such
as \cite{ghafoorian2017a,llado2017,birenbaum2016}. We will also explore more recent and
state-of-the-art networks such as \cite{szegedy2015,he2016,he2016b} to achieve better accuracy and
temporal consistency in segmentations.
\section{Acknowledgements}
Support for this work included funding from the Department of Defense in the Center for Neuroscience
and Regenerative Medicine and the intramural research programs of NIH and NINDS.
This work was also partially supported by grants from the National MS Society (RG-1507-05243) and
NIH (R01NS082347).
\bibliographystyle{elsarticle-harv}
\section{Introduction}\label{intro}
The smallest enclosing ball (SEB) problem is considered in this paper. The SEB
problem is to find a ball with the smallest radius that can enclose
the union of all given balls $B_i~(i=1,2,\cdots,m)$ with center
$c_i$ and radius $r_i\geq0$ in $\mathbb{R}^n$, i.e., $$B_i=\left\{x\in
\mathbb{R}^n\,|\,\left\|x-c_i\right\|\leq r_i\right\}.$$ Define $$f(x)=\max_{1\leq
i\leq m}\left\{f_i(x)\right\},$$ where
\begin{equation*}
f_i(x)=\|x-c_i\|+r_i,\,i=1,2,\ldots,m.
\end{equation*}
Then, the SEB problem can be
formulated as the following nonsmooth convex optimization problem \cite{16}:
\begin{equation}\label{nonsmooth}
\displaystyle \min_{x\in \mathbb{R}^n} f(x).
\end{equation} It is shown in \cite{16} that problem (\ref{nonsmooth}) has a unique solution.
The SEB problem arises in a large number of important applications,
often requiring it to be solved in large dimensions,
such as location analysis \cite{location}, gap tolerant classifiers in machine learning
\cite{1,machinelearning,machinelearning2}, tuning support vector machine parameters
\cite{2}, support vector clustering \cite{4,support}, $k$-center clustering
\cite{6},
testing of radius clustering \cite{7}, pattern recognition \cite{1,pattern},
and it is itself of interest as a
problem in computational geometry \cite{9,10,circle,11,12,13}.
Many algorithms have been proposed for the special case of problem (\ref{nonsmooth}) with all $r_i$ degenerating into
zero, i.e., the problem of the smallest enclosing ball of points. To
the best of our knowledge, if the points lie in low $n$-dimensional
space, methods \cite{14,15,8,fischer,points} from the computational geometry community can
yield quite satisfactory solutions in both theory and practice.
Nevertheless, these approaches cannot handle most of the very recent
applications in connection with machine learning \cite{machinelearning,machinelearning2} and support vector machines \cite{4,support}
that require the problem to be solved in higher dimensions.
Obviously, the non-differentiable convex SEB problem \eqref{nonsmooth} can be solved directly by the subgradient method \cite{subgradient,subgradient2}. With an appropriate step size rule, the subgradient method is globally convergent. However, the subgradient method suffers from a quite slow
convergence rate, and it is very sensitive to the choice of the initial step size. By introducing additional slack variables $r\in\mathbb{R},\left\{s_i\in\mathbb{R}^n\right\}_{i=1}^m,\left\{t_i\in\mathbb{R}\right\}_{i=1}^m,$ the SEB problem \eqref{nonsmooth} can be equivalently reformulated as a second order cone program (SOCP) as follows:
\begin{equation*}
\begin{array}{cl}
\displaystyle\min_{\left\{x,\,r,\,\left\{s_i\right\}_{i=1}^m,\,\left\{t_i\right\}_{i=1}^m\right\}} & r \\
\mbox{s.t.} & r-t_i=r_i,~i=1,2,\ldots,m,\\
& x-s_i=c_i,~i=1,2,\ldots,m,\\
&\|s_i\|\leq t_i,~i=1,2,\ldots,m.
\end{array}\end{equation*} As shown in \cite{16}, while the above SOCP reformulation of the SEB problem can be efficiently solved by using a standard software package like SDPT3 \cite{SDPT3} with the special structures taken into account, it typically requires too much memory to store intermediate data, which makes the approach prohibitive for solving the SEB problem with large dimensions.
Recently, various smooth approximation-based methods \cite{16,21,20,smooth,yubo,polak,polak2,smoothing,jorsc1,jorsc2} have been proposed for solving the SEB problem in high dimensions. For instance, the log-exponential aggregation function \cite{17,aggregate} was used in \cite{16} to smooth the maximum function, and then the limited-memory BFGS algorithm \cite{lbfgs} was applied to solve the resulting smoothing problem. In \cite{20}, the authors used the Chen-Harker-Kanzow-Smale (CHKS) function \cite{chks1,chks2,chks3} to approximate the maximum function, and again applied the limited-memory BFGS algorithm to solve the smoothing approximation problem.
The goal of this paper is to develop a computationally efficient algorithm that
can be used to solve SEB problems with large $mn$. Different from the existing literature \cite{16,20,17,aggregate,chks1,chks2,chks3}, our emphasis is not to develop new smoothing techniques but to design efficient algorithms for solving the existing smoothing approximation problems by exploiting their special structures.
The main contribution of this paper is as follows. We propose a computationally efficient inexact Newton-CG algorithm that can efficiently solve the SEB problems with large $mn.$ At each iteration, the proposed algorithm first applies the CG method to approximately solve the inexact Newton equation and obtain the search direction; and then a line search is performed along the obtained direction. The distinctive advantage of the proposed inexact Newton-CG algorithm over the classical Newton-CG algorithm is that the gradient and the Hessian-vector product are inexactly computed, which makes it more suitable to be used to solve the SEB problem of large dimensions. Under an appropriate choice of parameters, we establish global convergence of the proposed inexact Newton-CG algorithm. Numerical simulations show that the proposed algorithm takes substantially less CPU time to solve the SEB problems than the classical Newton-CG algorithm and the state-of-the-art algorithm in \cite{16}.
The rest of the paper is organized as follows. In Section \ref{log-exp}, we briefly review the log-exponential aggregation function. In Section \ref{our algorithm},
by taking the special structure of the log-exponential aggregation function into consideration, the inexact Newton-CG algorithm is proposed for solving the SEB problem. Global convergence of the proposed algorithm is established in Section \ref{convergence}. Numerical results are reported in Section \ref{experiment} to illustrate the efficiency of the proposed algorithm, and the conclusion is
drawn in Section \ref{conclusion}.
\section{Review of Log-Exponential Aggregation Function}
\label{log-exp} For any $\mu>0,$ the smooth log-exponential
aggregation function of $f(x)$ in \eqref{nonsmooth} is defined
as\begin{equation}\label{smoothfunction}
f(x;\mu)=\mu\ln\left(\sum_{i=1}^m\exp\left(f_i(x;\mu)/\mu\right)\right),
\end{equation}
where
\begin{equation*}
f_i(x;\mu)=\sqrt{\|x-c_i\|^2+\mu^2}+r_i,\,i=1,2,\ldots,m.
\end{equation*}
\begin{lemma}[\cite{16,21,17,aggregate}]\label{yinli}
The function $f(x;\mu)$ in \eqref{smoothfunction} has the following
properties:
\begin{enumerate}
\item [(i)] For any $x\in \mathbb{R}^n$ and $0<\mu_1<\mu_2,$ we have
$f(x;\mu_1)<f(x;\mu_2);$
\item [(ii)] For any $x\in \mathbb{R}^n$ and $\mu>0,$ we have
$f(x)<f(x;\mu)\leq f(x)+\mu\left(1+\ln{m}\right);$
\item [(iii)] For any $\mu>0,$ $f(x;\mu)$ is
continuously differentiable and strictly convex in $x\in \mathbb{R}^n$, and its gradient
and Hessian are given as follows:
\begin{align}
\nabla f(x;\mu)=&\sum_{i=1}^m \lambda_i(x;\mu) \nabla f_i(x;\mu),
\label{gradient}\\
\displaystyle \nabla^2
f(x;\mu)=&\displaystyle\sum_{i=1}^m\left(\lambda_i(x;\mu)\nabla^2f_i(x;\mu)+\frac{1}{\mu}\lambda_i(x;\mu)\nabla
f_i(x;\mu)\nabla f_i(x;\mu)^T\right)\label{hessian}\\%[5pt]
&\displaystyle-\frac{1}{\mu}\nabla f(x;\mu)\nabla
f(x;\mu)^T,\nonumber
\end{align}
where \begin{align}
\lambda_i(x;\mu)&=\frac{\exp\left(f_i(x;\mu)/\mu\right)}{\displaystyle\sum_{j=1}^m\exp\left(f_j(x;\mu)/\mu\right)}\in
(0,\,1),\,i=1,2,\ldots,m,\label{lambda}\\
\nabla f_i(x;\mu)&=\dfrac{x-c_i}{{g_i(x;\mu)}},\,i=1,2,\ldots,m,
\label{ggi}\\
\nabla^2f_i(x;\mu)&=\displaystyle\frac{I_n}{{g_i(x;\mu)}}-\dfrac{(x-c_i)(x-c_i)^T}{g_i(x;\mu)^{3}},\,i=1,2,\ldots,m,\label{hi}\\
g_i(x;\mu)&=\sqrt{\|x-c_i\|^2+\mu^2},\,i=1,2,\ldots,m,\label{gi}
\end{align}
and $I_n$ denotes the $n\times n$ identity matrix.
\end{enumerate}
\end{lemma}
It can be easily seen from \eqref{lambda}, \eqref{ggi}, and \eqref{gi} that, for any $\mu>0,$
\begin{align}
\sum_{i=1}^m\lambda_{i}(x;\mu)=&~1,\label{1sum}\\
\left\|\nabla f_i(x;\mu)\right\|<&~1,~i=1,2,\ldots,m.\label{1gradient}
\end{align}Combining \eqref{gradient}, \eqref{1sum}, and \eqref{1gradient}, we further obtain
\begin{equation}\label{1gra}
\left\|\nabla
f(x;\mu)\right\|<1.
\end{equation}
The algorithm proposed in \cite{16} is based on the log-exponential aggregation function \eqref{smoothfunction}. We restate it as Algorithm \ref{alg1} below.
\begin{algorithm}\caption{}\label{alg1}
\begin{algorithmic}[1]
\STATE Let $\sigma\in(0,1),~\epsilon_1,~\epsilon_2\geq 0,~x_0\in
\mathbb{R}^n,$ and $\mu_0>0$ be given, and set $k=0.$
\REPEAT
\STATE Use the limited-memory BFGS algorithm \cite{lbfgs} to solve problem
\begin{equation}\label{sub}
\min_{x\in\mathbb{R}^n} f(x;\mu_k),
\end{equation}
and obtain $x_k$ such that $\left\|\nabla f(x_k;\mu_k)\right\|\leq \epsilon_2.$
\STATE Set $\mu_{k+1}=\sigma\mu_k$ and $k=k+1.$
\UNTIL {$\mu_k\leq\epsilon_1$}
\end{algorithmic}
\end{algorithm}
\begin{theorem}[\cite{16}]\label{convergenceZhou} Let $\epsilon_1=\epsilon_2=0$ in Algorithm \ref{alg1}.
Let $\{x_k\}_{k\geq1}$ be the sequence generated by
Algorithm \ref{alg1} and $x^*$ be the unique solution to problem
\eqref{nonsmooth}. Then $$\lim_{k\rightarrow+\infty}x_k=x^*.$$
\end{theorem}
In the next section, we shall exploit the special (approximate) sparsity property of the log-exponential aggregation function $f(x;\mu)$ and propose an inexact Newton-CG algorithm for solving the smoothing approximation problem \eqref{sub} and thus the SEB problem \eqref{nonsmooth}.
\section{Inexact Newton-CG Algorithm} \label{our algorithm}
As can be seen from \eqref{gradient} and \eqref{hessian}, the gradient $\nabla f(x;\mu)$ and Hessian $\nabla^2 f(x;\mu)$ of $f(x;\mu)$ are (convex) combinations of $\nabla f_i(x;\mu)$ and $\nabla^2 f_i(x;\mu)~(i=1,2,\ldots,m)$ with the vector $$\lambda(x;\mu)=\left(\lambda_1(x;\mu),\lambda_2(x;\mu),\ldots,\lambda_m(x;\mu)\right)^T$$ being the combination coefficients. As the parameter $\mu$ gets smaller, a large number of $\lambda_i(x;\mu)~(i=1,2,\ldots,m)$ become close to zero and thus are negligible. To see this clearly, we define
\begin{equation*}\label{fxmu}
f_{\infty}(x;\mu)=\max_{1\leq i\leq m} \left\{f_i(x;\mu)\right\}.
\end{equation*}
Since
$$1<\displaystyle\sum_{i=1}^m\exp\left(\left(f_i(x;\mu)-f_{\infty}(x;\mu)\right)/\mu\right)\leq m$$ and
\begin{align*}
\lambda_i(x;\mu)=\frac{\displaystyle\exp\left(f_i(x;\mu)/\mu\right)}{\displaystyle\sum_{j=1}^m\exp\left(f_j(x;\mu)/\mu\right)}=\frac{\displaystyle\exp\left(\left(f_i(x;\mu)-f_{\infty}(x;\mu)\right)/\mu\right)}{\displaystyle\sum_{j=1}^m\exp\left(\left(f_j(x;\mu)-f_{\infty}(x;\mu)\right)/\mu\right)},
\end{align*}it follows that \begin{equation}\label{sparse}
\displaystyle\frac{\exp\left(\left({f_i(x;\mu)-f_{\infty}(x;\mu)}\right)/{\mu}\right)}{m}\leq\lambda_i(x;\mu)<\displaystyle\exp\left(\left({f_i(x;\mu)-f_{\infty}(x;\mu)}\right)/{\mu}\right),~i=1,2,\ldots,m.
\end{equation} The second inequality of \eqref{sparse} shows that if $\mu$ is sufficiently small or $f_i(x;\mu)$ is much smaller than $f_{\infty}(x;\mu)$, then $\lambda_i(x;\mu)$ is approximately equal to zero, and $f_i(x;\mu)$ has little contribution to $f(x;\mu)$ in \eqref{smoothfunction}.
Motivated by the above observation of the (approximate) sparsity of the vector $\lambda(x;\mu)$, we propose to compute $\nabla f(x;\mu)$ and $\nabla^2 f(x;\mu)$ in an inexact way by judiciously neglecting some terms associated with very small $\lambda_i(x;\mu).$ In such a way, the computational cost is significantly reduced (compared to compute $\nabla f(x;\mu)$ and $\nabla^2 f(x;\mu)$ exactly). Then, we propose an inexact Newton-CG algorithm to solve the smoothing approximation problem \eqref{sub}. The search direction in the inexact Newton-CG algorithm is computed by applying the CG method to solve the inexact Newton equation in an inexact fashion.
\subsection{An adaptive criterion and error analysis}\label{sectruncated}
In this subsection, we give an adaptive criterion of inexactly computing the gradient/Hessian and analyze the errors between the inexact gradient/Hessian and the true ones. For any given $\epsilon_3\in(0,1],\;\mu\in(0,1],$ define \begin{equation}\label{set}
{S(x;\mu,\epsilon)}=\left\{\,i\,|\,\lambda_i(x;\mu)\geq\epsilon\right\}
\end{equation}
with \begin{equation}\label{epsilon}
\epsilon=\frac{\mu\epsilon_3}{10m}.
\end{equation} It is simple to see that $S(x;\mu,\epsilon)\neq \emptyset.$ Indeed, suppose to the contrary that $S(x;\mu,\epsilon)=\emptyset.$ Then it follows from \eqref{epsilon} and the facts $\epsilon_3\leq 1$ and $\mu\leq 1$ that
$$\sum_{i=1}^m\lambda_i(x;\mu)<\sum_{i=1}^m\epsilon=\frac{\mu\epsilon_3}{10}<1,$$ which contradicts \eqref{1sum}.
Hence, it makes sense to define \begin{align}
\tilde f(x;\mu)=&~\mu\ln\left(\sum_{i\in
{S(x;\mu,\epsilon)}}\exp\left(
f_i(x;\mu)/\mu\right)\right),\label{tildef} \\
\nabla\tilde f(x;\mu)=&\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\tilde\lambda_i(x;\mu)\nabla
f_i(x;\mu),\label{tildeg}\\
\displaystyle
\nabla^2 \tilde f(x;\mu)=&\displaystyle\sum_{i\in
S(x;\mu,\epsilon)}\left(\tilde\lambda_i(x;\mu)\nabla^2f_i(x;\mu)+\frac{1}{\mu}\tilde\lambda_i(x;\mu)\nabla
f_i(x;\mu)\nabla f_i(x;\mu)^T\right)\label{tildehessian}\\%[20pt]
&\displaystyle-\frac{1}{\mu}\nabla\tilde f(x;\mu)\nabla\tilde
f(x;\mu)^T,\nonumber
\end{align}
where \begin{equation*}\label{tildel} \tilde
\lambda_i(x;\mu)=\frac{\exp\left(f_i(x;\mu)/\mu\right)}{\displaystyle\sum_{j\in
{S(x;\mu,\epsilon)}}\exp\left(f_j(x;\mu)/\mu\right)}\in
(0,\,1),\,i\in {S(x;\mu,\epsilon)}.
\end{equation*}
According to (ii) of Lemma \ref{yinli}, we have\begin{equation}\label{pointappro} f(x)<\tilde f(x;\mu)\leq
f(x;\mu)\leq f(x)+\mu\left(1+\ln{m}\right),
\end{equation}where the second inequality holds with ``='' if and only if
${S(x;\mu,\epsilon)}$ defined in \eqref{set} equals the set
$\left\{1,2,\ldots,m\right\}.$ Inequality \eqref{pointappro} gives a nice explanation of $\tilde f(x;\mu)$ defined in \eqref{tildef}. For any given $\mu\in(0,1],$ (ii) of Lemma \ref{yinli} shows $f(x;\mu)$ is a uniform approximation to $f(x)$, while $\tilde f(x;\mu)$ could be explained as a ``better'' point-wise approximation to $f(x)$ (compared to $f(x;\mu)$).
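A minimal NumPy sketch of the truncated set \eqref{set} and the inexact gradient \eqref{tildeg} is given below; all names are ours, and the subtraction of $f_{\infty}(x;\mu)$ in the weights anticipates the overflow-safe formulas \eqref{fcom} and \eqref{lamcom} of Section \ref{secother}.
\begin{verbatim}
import numpy as np

def inexact_grad(x, C, rad, mu, eps3):
    g = np.sqrt(((x - C)**2).sum(axis=1) + mu**2)  # g_i(x;mu)
    f = g + rad                                    # f_i(x;mu)
    w = np.exp((f - f.max()) / mu)                 # overflow-safe weights
    lam = w / w.sum()                              # lambda_i(x;mu)
    S = lam >= mu * eps3 / (10 * len(rad))         # the set S(x;mu,eps)
    lam_t = w[S] / w[S].sum()                      # tilde lambda_i(x;mu)
    grad = ((lam_t / g[S])[:, None] * (x - C[S])).sum(axis=0)
    return grad, g, lam_t, S
\end{verbatim}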
The error estimates associated with \eqref{set} are given in the following theorem.
\begin{theorem}\label{errorth}Given $\epsilon_3\in(0,1],\;\mu\in(0,1],$ let $\epsilon,$ $S(x;\mu,\epsilon),$ $\tilde f(x;\mu),$ $\nabla \tilde
f(x;\mu),$ and $\nabla^2\tilde
f(x;\mu)$ be defined as in \eqref{epsilon}, \eqref{set},~\eqref{tildef},~\eqref{tildeg},~and~\eqref{tildehessian}, respectively.
Then, there hold \begin{align}
f(x;\mu)-\tilde f(x;\mu)\leq&~\mu^2\epsilon_3/9,\label{ferror}\\
\left\|\nabla f(x;\mu)-\nabla \tilde
f(x;\mu)\right\|\leq&~\mu\epsilon_3/5,\label{gerror}\\
\left\|\nabla^2f(x;\mu)-\nabla^2\tilde
f(x;\mu)\right\|\leq&~4\epsilon_3/5.\label{herror}
\end{align}
\end{theorem}
\begin{proof}
We first prove \eqref{ferror} holds true. It follows from \eqref{1sum} and \eqref{set} that
\begin{equation}\label{lambdabound}\displaystyle\sum_{i\notin {S(x;\mu,\epsilon)}}\lambda_i(x;\mu)\leq m\epsilon,~
\displaystyle\sum_{i\in {S(x;\mu,\epsilon)}}\lambda_i(x;\mu)\geq
1-m\epsilon.\end{equation}
Recalling the definitions of $f(x;\mu)$ and $\tilde
f(x;\mu)$ (cf. \eqref{smoothfunction} and \eqref{tildef}), we obtain
\begin{align*}
f(x;\mu)-\tilde
f(x;\mu)~&=~\mu\ln\left(\displaystyle\sum_{i=1}^m\exp\left(f_i(x;\mu)/\mu\right)\right)-\mu\ln\left(\displaystyle\sum_{i\in {S(x;\mu,\epsilon)}}\exp\left(
f_i(x;\mu)/\mu\right)\right)\\%[25pt]
&\overset{(a)}{\leq}~\mu\dfrac{\displaystyle\sum_{i\notin
{S(x;\mu,\epsilon)}}\exp\left(
f_i(x;\mu)/\mu\right)}{\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\exp\left(
f_i(x;\mu)/\mu\right)}=~\mu\dfrac{\displaystyle\sum_{i\notin
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)}{\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)}\\
&~\leq~\mu\dfrac{m\epsilon}{1-m\epsilon}\leq\frac{\mu^2\epsilon_3}{9},~(\text{from}~\eqref{lambdabound}~\text{and}~\text{\eqref{epsilon}})
\end{align*}where $(a)$ comes from the fact that $\ln(1+x)\leq x$ for any $x\geq 0.$
Now we prove \eqref{gerror} holds true. Since
\begin{equation}\label{lambdab2}
\displaystyle\sum_{i\in {S(x;\mu,\epsilon)}}\tilde\lambda_i(x;\mu)-\sum_{i\in
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)
=1-\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)
=\displaystyle\sum_{i\notin
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)\geq0,
\end{equation}
and $$\tilde \lambda_i(x;\mu)>\lambda_i(x;\mu),~\forall~i\in S(x;\mu,\epsilon),$$
it follows from \eqref{gradient} and \eqref{tildeg} that
\begin{align}
&~~~\left\|\nabla f(x;\mu)-\nabla\tilde f(x;\mu)\right\|\nonumber\\
&=\left\|\displaystyle\sum_{i\in {S(x;\mu,\epsilon)}}\left(\lambda_i(x;\mu)-\tilde\lambda_i(x;\mu)\right)\nabla f_i(x;\mu)+\displaystyle\sum_{i\notin {S(x;\mu,\epsilon)}}\lambda_i(x;\mu)\nabla
f_i(x;\mu)\right\|\nonumber\\
&\leq\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\left(\tilde\lambda_i(x;\mu)-\lambda_i(x;\mu)\right)\left\|\nabla
f_i(x;\mu)\right\|+\displaystyle\sum_{i\notin
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)\left\|\nabla
f_i(x;\mu)\right\|\nonumber\\
&\leq2\displaystyle\sum_{i\notin
{S(x;\mu,\epsilon)}}\lambda_i(x;\mu)~(\text{from}~\eqref{1gradient}~\text{and}~\eqref{lambdab2})\nonumber\\
&\leq 2m\epsilon=\mu\epsilon_3/5.~(\text{from}~\eqref{lambdabound}~\text{and}~\text{\eqref{epsilon}})\label{in2}
\end{align}
Finally, we show \eqref{herror} is also true. Combining \eqref{hessian} and \eqref{tildehessian} yields
\begin{equation}\label{term}
\begin{array}{rl}
&~~~\left\|\nabla^2f(x;\mu)-\nabla^2\tilde
f(x;\mu)\right\|\\[8pt]
&\leq \underbrace{\left\|\displaystyle\sum_{i=1}^m\lambda_i(x;\mu)\nabla^2f_i(x;\mu)-\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\tilde\lambda_i(x;\mu)\nabla^2f_i(x;\mu)\right\|}_{\text{Term
A}}\\
&+\displaystyle\frac{1}{\mu}\underbrace{\left\|\displaystyle\sum_{i=1}^m\lambda_i(x;\mu)\nabla
f_i(x;\mu)\nabla f_i(x;\mu)^T-\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\tilde\lambda_i(x;\mu)\nabla f_i(x;\mu)\nabla
f_i(x;\mu)^T\right\|}_{\text{Term B}}\\[15pt]
&+\displaystyle\frac{1}{\mu}\underbrace{\left\|\nabla
f(x;\mu)\nabla f(x;\mu)^T-\nabla\tilde
f(x;\mu)\nabla\tilde f(x;\mu)^T \right\|}_{\text{Term C}}.
\end{array}
\end{equation}
Noticing that all eigenvalues of
$\displaystyle\nabla^2f_i(x;\mu)$ (cf. \eqref{hi}) are
$$\rho_1=\rho_2=\cdots=\rho_{n-1}=\dfrac{1}{g_i(x;\mu)},~\rho_n=\dfrac{\mu^2}{g_i(x;\mu)^{3}},$$
it follows from \eqref{gi} that \begin{equation*}\label{hbound}
\left\|\nabla^2f_i(x;\mu)\right\|\leq \frac{1}{\mu},~
i=1,2,\ldots,m.
\end{equation*}
The same argument as in \eqref{in2} shows
\begin{equation}\label{terma}\text{Term
A}\leq \dfrac{2m\epsilon}{\mu},
~\text{Term B}\leq {2m\epsilon}.\end{equation}
Combining \eqref{1gra}, \eqref{in2}, and the fact $\left\|\nabla\tilde
f(x;\mu)\right\|\leq1,$ we have
\begin{equation}\label{termc}
\begin{array}{rcl}
\text{Term C}&\leq& \left\|\nabla
f(x;\mu)\left(\nabla f(x;\mu)-\nabla\tilde f(x;\mu)\right)^T+\left(\nabla f(x;\mu)-\nabla\tilde f(x;\mu)\right)\nabla\tilde
f(x;\mu)^T\right\|\\[10pt]
&\leq&\displaystyle \left\|\nabla
f(x;\mu)\right\|\left\|\nabla f(x;\mu)-\nabla\tilde
f(x;\mu)\right\|+\left\|\nabla f(x;\mu)-\nabla\tilde f(x;\mu)\right\|\left\|\nabla\tilde
f(x;\mu)\right\|\\[10pt]
&\leq&2\left\|\nabla f(x;\mu)-\nabla\tilde
f(x;\mu)\right\|\leq{4m\epsilon}.
\end{array}
\end{equation}
Now we can use \eqref{term}, \eqref{terma}, and \eqref{termc} to conclude
\begin{equation*}\label{in3}
\left\|\nabla^2f(x;\mu)-\nabla^2\tilde f(x;\mu)\right\|\leq
\dfrac{8m\epsilon}{\mu}=\frac{4\epsilon_3}{5}.
\end{equation*} This completes the proof of Theorem \ref{errorth}.
\end{proof}
\subsection{Solving inexact Newton equation}
In the classical (line search) Newton-CG algorithm \cite{program,Yuan}, the
search direction is computed by applying the CG method to the Newton
equation
\begin{equation}\label{newton}
\nabla^2f(x;\mu)d=-\nabla f(x;\mu),
\end{equation}
until a direction $d$ is found to satisfy
\begin{equation*}
\left\|\nabla^2f(x;\mu)d+\nabla f(x;\mu)\right\|\leq
\eta(x;\mu)\left\|\nabla f(x;\mu)\right\|,
\end{equation*}
where $\eta(x;\mu)$ controls the solution accuracy. For instance, $\eta(x;\mu)$ can be chosen to be $$\min\left\{0.5,\,\sqrt{\left\|\nabla f(x;\mu)\right\|}\right\}.$$
A drawback of the classical Newton-CG algorithm when applied to solve the SEB problem with large $m$ and $n$ is that it is computationally expensive to obtain the Hessian and the Hessian-vector product.
Fortunately, Theorem \ref{errorth} shows that $\nabla^2 \tilde f(x;\mu)$ and $\nabla \tilde f(x;\mu)$ are good approximations to $\nabla^2 f(x;\mu)$ and $\nabla f(x;\mu)$, respectively. Therefore, it is reasonable to replace the (exact) Newton equation \eqref{newton} with the inexact Newton equation
\begin{equation}\label{newtonapp}
\nabla^2\tilde f(x;\mu) \tilde d=-\nabla \tilde f(x;\mu).
\end{equation} Using the similar idea as in the classical Newton-CG algorithm, we do not solve \eqref{newtonapp} exactly but attempt to find a direction $\tilde d$ satisfying
\begin{equation}\label{termination}
\left\|\nabla^2\tilde f(x;\mu)\tilde d+\nabla \tilde f(x;\mu)\right\|\leq
\tilde\eta(x;\mu) \left\|\nabla \tilde f(x;\mu)\right\|,
\end{equation}
where $\tilde\eta(x;\mu)$ controls the solution accuracy. For instance, we can set $\tilde \eta(x;\mu)$ to be $$\tilde \eta(x;\mu)=\min\left\{0.5,\,\sqrt{\left\|\nabla \tilde f(x;\mu)\right\|}\right\}.$$
We apply the CG method to inexactly solve the linear equation \eqref{newtonapp} to obtain a search direction $\tilde d$ satisfying \eqref{termination}. The reasons for choosing the CG method for solving \eqref{newtonapp} are as follows. First, the matrix $\nabla^2\tilde f(x;\mu)$ is positive definite, which can be shown in the same way as in (iii) of Lemma \ref{yinli}, and the CG method is one of the most useful techniques for solving linear systems with positive definite coefficient matrices \cite{Yuan}. Second, in the inner CG iteration,
only the Hessian-vector product $\nabla^2\tilde f(x;\mu)\tilde d$ is required, not the Hessian $\nabla^2\tilde f(x;\mu)$ itself. This property makes the CG method particularly amenable to solving the linear equation \eqref{newtonapp}. Specifically, due to the special structure of
$\nabla^2\tilde f(x;\mu)$, the product $\nabla^2\tilde f(x;\mu)\tilde d$ can be obtained very fast for any given $\tilde d$. From
\eqref{ggi}, \eqref{hi}, \eqref{tildeg}, and \eqref{tildehessian},
simple calculations yield
\begin{equation}\label{hessian-vector}
\begin{array}{rcl}
\nabla^2\tilde f(x;\mu)\tilde d&=&\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\left(\left(\frac{1}{\mu}-\frac{1}{{g_i(x;\mu)}}\right)\frac{\tilde\lambda_i(x;\mu)}{g_i(x;\mu)^2}(x-c_i)(x-c_i)^T\right)\tilde d\\
&&+\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\frac{\tilde\lambda_i(x;\mu)}{{g_i(x;\mu)}}\tilde d-\frac{1}{\mu}\nabla
\tilde f(x;\mu)\nabla \tilde f(x;\mu)^T\tilde d\\
&=&\displaystyle\sum_{i\in
{S(x;\mu,\epsilon)}}\left[\left(\frac{1}{\mu}-\frac{1}{{g_i(x;\mu)}}\right)\frac{\tilde\lambda_i(x;\mu)}{g_i(x;\mu)^2}(x-c_i)\right](x-c_i)^T\tilde d\\
&&+\displaystyle\left[\sum_{i\in
{S(x;\mu,\epsilon)}}\frac{\tilde \lambda_i(x;\mu)}{{g_i(x;\mu)}}\right]\tilde d-{\nabla \tilde f(x;\mu)^T\tilde d}\left[\frac{\nabla
\tilde f(x;\mu)}{\mu}\right].
\end{array}
\end{equation} The way of calculating $\nabla^2\tilde f(x;\mu)\tilde d$ by
\eqref{hessian-vector}\footnote{The terms in square brackets in \eqref{hessian-vector} are constants in the inner CG iteration, since they are not related to the variable $\tilde d$.} is typically different from the way of first
calculating $\nabla^2\tilde f(x;\mu)$ and then calculating
$\nabla^2\tilde f(x;\mu)\tilde d.$ The complexities of computing $\nabla^2\tilde f(x;\mu)\tilde d$ in the above two ways are $O(|S(x;\mu,\epsilon)|n)$ and $O\left((|S(x;\mu,\epsilon)|+n)n^2\right),$ respectively. It is worthwhile remarking that the computational complexities of calculating $\nabla^2 f(x;\mu)d$ in the above-mentioned two ways are $O(mn)$ and $O\left((m+n)n^2\right),$ respectively. Notice that $|S(x;\mu,\epsilon)|$ is usually much less than $m.$ Hence, computing $\nabla^2\tilde f(x;\mu)\tilde d$ by \eqref{hessian-vector} can sharply reduce the computational cost and simultaneously save a lot of memory (since we do not need to store the $n\times n$ matrix $\nabla^2\tilde f(x;\mu)$).
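As a concrete illustration, the following NumPy/SciPy sketch (all names are ours) implements the matrix-free product \eqref{hessian-vector} and feeds it to the CG solver through a \texttt{LinearOperator}; \texttt{g}, \texttt{lam\_t}, \texttt{S}, and \texttt{grad} are the quantities returned by the gradient sketch above.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def make_hv(x, C, g, lam_t, S, grad, mu):
    D = x - C[S]                                   # rows x - c_i, i in S
    coef = (1.0/mu - 1.0/g[S]) * lam_t / g[S]**2   # bracketed weights
    scal = (lam_t / g[S]).sum()                    # scaled-identity term
    def hv(d):
        return (D.T @ (coef * (D @ d))             # low-rank part
                + scal * d
                - (grad @ d) / mu * grad)          # rank-one correction
    return hv

# usage: solve the inexact Newton equation H d = -grad approximately
n = x.size
H = LinearOperator((n, n), matvec=make_hv(x, C, g, lam_t, S, grad, mu))
eta = min(0.5, np.sqrt(np.linalg.norm(grad)))      # forcing term
d, info = cg(H, -grad, rtol=eta)                   # `tol=` in older SciPy
\end{verbatim}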
Let $\tilde d_f$ be the direction satisfying \eqref{termination} obtained by applying the CG method to solve the linear equation \eqref{newtonapp} with the starting point $\tilde d_0=0.$ In the sequel, we state two properties of the direction $\tilde d_f$. These two properties shall be used later in the global convergence analysis of the proposed algorithm.
\begin{lemma}\label{descent}
Consider applying the CG method to solve \eqref{newtonapp} with the starting point $\tilde d_0=0.$ Suppose $\nabla \tilde f(x;\mu)\neq 0$ and $\tilde d_f$ is the obtained search direction satisfying \eqref{termination}. Then \begin{equation}\label{descentproperty}
{\tilde d}_f^T\nabla \tilde f(x;\mu) = -\tilde d_f^T\nabla^2\tilde f(x;\mu)\tilde d_f<0.
\end{equation}
\end{lemma}
\begin{proof} Since the starting point $\tilde d_0=0,$ the final point $\tilde d_f$ in the CG iteration must have the form $\tilde d_f=\sum_{j=0}^{f-1}\tilde \alpha_j\tilde p_j$ \cite{program,Yuan}, where $\left\{\tilde \alpha_j\right\}_{j=0}^{f-1}$ and $\left\{\tilde p_j\right\}_{j=0}^{f-1}$ are the step sizes and search directions in the CG iteration. Since $\nabla \tilde f(x;\mu)\neq 0,$ we must have $\tilde d_f\neq 0.$ Otherwise, substituting $\tilde d_f=0$ into \eqref{termination}, we would get
$$\left\|\nabla \tilde f(x;\mu)\right\|\leq
\tilde\eta(x;\mu) \left\|\nabla \tilde f(x;\mu)\right\|\leq 0.5\left\|\nabla \tilde f(x;\mu)\right\|,$$ which contradicts the fact $\nabla \tilde f(x;\mu)\neq 0.$ Let \begin{equation}\label{tilder}\tilde r_f=\nabla^2\tilde f(x;\mu)\tilde d_f+\nabla \tilde f(x;\mu).\end{equation}
Then, it follows from \cite[Theorem 5.2]{program} that \begin{equation}\label{ortho}\tilde r_f^T\tilde p_j=0,~j=0,1,\ldots,f-1.\end{equation} Hence, \begin{align*}
\tilde d_f^T\nabla \tilde f(x;\mu)=&~\tilde d_f^T\left(\tilde r_f-\nabla^2\tilde f(x;\mu)\tilde d_f\right)~(\text{from}~\eqref{tilder})\\
=&~\tilde d_f^T\tilde r_f-\tilde d_f^T\nabla^2\tilde f(x;\mu)\tilde d_f\\
=&~\sum_{j=0}^{f-1}\tilde \alpha_j\tilde p_j^T\tilde r_f-\tilde d_f^T\nabla^2\tilde f(x;\mu)\tilde d_f~(\text{substituting}~\tilde d_f=\sum_{j=0}^{f-1}\tilde \alpha_j\tilde p_j)\\
=&~-\tilde d_f^T\nabla^2\tilde f(x;\mu)\tilde d_f~(\text{from}~\eqref{ortho})\\
<&~0,
\end{align*}where the last inequality is due to the positive definiteness of~$\nabla^2\tilde f(x;\mu)$ and the fact that $\tilde d_f\neq0.$ The proof is completed. \end{proof}
\begin{lemma}
Suppose $\tilde d$ satisfies \eqref{termination}, and $\tilde \sigma_{\max}(x;\mu)$ and $\tilde \sigma_{\min}(x;\mu)>0$ are the maximum and minimum eigenvalues of $\nabla^2\tilde f(x;\mu),$ respectively. Then
\begin{equation}\label{length}
\dfrac{1-\tilde\eta(x;\mu)}{\tilde \sigma_{\max}(x;\mu)}\left\|\nabla\tilde f(x;\mu)\right\|\leq \left\|\tilde d\right\|\leq \frac{1+\tilde\eta(x;\mu)}{\tilde \sigma_{\min}(x;\mu)}\left\|\nabla\tilde f(x;\mu)\right\|.
\end{equation}
\end{lemma}
\begin{proof}
Suppose the second inequality in \eqref{length} does not hold true, i.e., $$\left\|\tilde d\right\|> \frac{1+\tilde\eta(x;\mu)}{\tilde \sigma_{\min}(x;\mu)}\left\|\nabla\tilde f(x;\mu)\right\|.$$ Then
\begin{align*}
\left\|\nabla^2\tilde f(x;\mu)\tilde d+\nabla \tilde f(x;\mu)\right\|\geq &~\left\|\nabla^2\tilde f(x;\mu)\tilde d\right\|-\left\|\nabla \tilde f(x;\mu)\right\|\\
\geq &~\tilde \sigma_{\min}(x;\mu)\left\|\tilde d\right\|-\left\|\nabla \tilde f(x;\mu)\right\|\\
> &~\left(1+\tilde\eta\left(x;\mu\right)\right)\left\|\nabla\tilde f(x;\mu)\right\|-\left\|\nabla \tilde f(x;\mu)\right\|\\
=&~\tilde\eta(x;\mu)\left\|\nabla\tilde f(x;\mu)\right\|,
\end{align*}which contradicts \eqref{termination}. Hence, the second inequality in \eqref{length} holds true.
A similar argument shows that the first inequality in \eqref{length} also holds. The proof is completed.
\end{proof}
\subsection{Inexact Newton-CG algorithm}\label{secother}
When the smoothing parameter $\mu$ approaches zero,
$\exp\left(f_i(x;\mu)/\mu\right)$ tends to be very large.
Special care should be taken in computing $f(x;\mu)$ and $\tilde\lambda_i(x;\mu)$ to prevent overflow \cite{21}, i.e.,
\begin{align}
f(x;\mu)&=f_{\infty}(x;\mu)+\mu\ln\left(\sum_{i=1}^m\exp(\left(f_i(x;\mu)-f_{\infty}(x;\mu)\right)/\mu)\right),\label{fcom}\\
\tilde\lambda_i(x;\mu)&=\dfrac{\exp\left(\left(f_i(x;\mu)-f_{\infty}(x;\mu)\right)/\mu\right)}{\sum_{j\in
{S(x;\mu,\epsilon)}}\exp(\left(f_j(x;\mu)-f_{\infty}(x;\mu)\right)/\mu)},~i\in {S(x;\mu,\epsilon)}.\label{lamcom}
\end{align}
Based on the above discussions, the specification of the proposed inexact Newton-CG algorithm for solving the SEB problem is given as follows.
\begin{algorithm}\caption{}\label{alg2}
\begin{algorithmic}[1]
\STATE Let $\epsilon_1,~c_1\in(0,1),~\beta\in(0,1),~\left\{\mu_k,~\epsilon_2(\mu_k),~\epsilon_3(\mu_k)\right\}_k,~x_{0,0}\in
\mathbb{R}^n$ be given and set $k=j=0.$
\REPEAT
\REPEAT
\STATE Compute $S(x_{k,j};\mu_k,\frac{\mu_k\epsilon_3(\mu_k)}{10m})$ according to \eqref{set}.\label{line:epsilon3}
\STATE Compute the search direction $\tilde d_{k,j}$ by applying
the CG method to the inexact Newton equation $\nabla^2\tilde
f(x_{k,j};\mu_k)\tilde d=-\nabla\tilde f(x_{k,j};\mu_k)$ such that \begin{equation}\label{direction}\left\|\nabla^2\tilde f(x_{k,j};\mu_k)\tilde d_{k,j}+\nabla \tilde f(x_{k,j};\mu_k)\right\|\leq \tilde\eta_{k,j}\left\|\nabla \tilde f(x_{k,j};\mu_k)\right\|,\end{equation} where \begin{equation}\label{eta}
\tilde \eta_{k,j}=\min\left\{0.5,\,\sqrt{\left\|\nabla \tilde f(x_{k,j};\mu_k)\right\|}\right\},
\end{equation}
the Hessian-vector product $\nabla^2\tilde f(x_{k,j};\mu_k)\tilde d$ in the inner CG iteration, $\nabla\tilde
f(x_{k,j};\mu_k),$ and
$\tilde\lambda_i(x_{k,j};\mu_k)$ are computed by \eqref{hessian-vector}, \eqref{tildeg}, and \eqref{lamcom}, respectively.
\STATE Set $x_{k,j+1}=x_{k,j}+\alpha_{k,j}\tilde d_{k,j},$ where
$\alpha_{k,j}=\beta^l,$ with $\beta\in(0,1)$ and $l$ being the smallest integer satisfying the sufficient decrease condition
\begin{equation}\label{sufficient}
f(x_{k,j}+\beta^l\tilde d_{k,j};\mu_k)\leq f(x_{k,j};\mu_k)+c_1\beta^l \tilde d_{k,j}^T\nabla \tilde f(x_{k,j};\mu_k),
\end{equation}where $f(x_{k,j};\mu_k)$ is computed by \eqref{fcom}.
\STATE Set $j=j+1.$
\UNTIL {$\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|\leq {\epsilon_2(\mu_k)}$}
\STATE Set $x_{k+1,0}=x_{k,j}$ and $k=k+1.$
\UNTIL {$\mu_k\leq\epsilon_1$}
\end{algorithmic}
\end{algorithm}
The actual parameter values used for $\epsilon_1,~c_1,~\beta,~\left\{\mu_k,~\epsilon_2(\mu_k),~\epsilon_3(\mu_k)\right\}$ in Algorithm \ref{alg2} shall be given in Section \ref{experiment}. As we can see, all parameters in Algorithm \ref{alg2} are updated adaptively. For instance, the final iterate $x_{k,j}$ is set to be a warm starting point for the problem $\min_{x\in\mathbb{R}^n}f(x;\mu_{k+1}),$ and the tolerance $\epsilon_2(\mu_k)$ is set to be related to the approximation parameter $\mu_k.$
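To make the double-loop structure of Algorithm \ref{alg2} concrete, the following Python sketch implements the continuation over $\mu_k$, the CG solve of the inexact Newton equation with the forcing term \eqref{eta}, and the backtracking line search \eqref{sufficient}. It is only an illustration, not the C implementation used in Section \ref{experiment}: the smoothing $f_i(x;\mu)=\sqrt{\|x-c_i\|^2+\mu^2}+r_i$, the finite-difference Hessian-vector products, and the omission of the working-set truncation $S(x;\mu,\epsilon)$ are our own simplifying assumptions.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg  # SciPy >= 1.12 for rtol=

def fvals(x, C, r, mu):
    # Assumed smoothing: f_i(x;mu) = sqrt(||x - c_i||^2 + mu^2) + r_i.
    return np.sqrt(((x - C) ** 2).sum(axis=1) + mu ** 2) + r

def fval(x, C, r, mu):
    fi = fvals(x, C, r, mu)
    f_inf = fi.max()                      # overflow-safe aggregation
    return f_inf + mu * np.log(np.exp((fi - f_inf) / mu).sum())

def grad(x, C, r, mu):
    d = x - C
    norms = np.sqrt((d ** 2).sum(axis=1) + mu ** 2)
    w = np.exp((norms + r - (norms + r).max()) / mu)
    lam = w / w.sum()                     # weights \tilde\lambda_i(x;mu)
    return (lam[:, None] * d / norms[:, None]).sum(axis=0)

def inexact_newton_cg(C, r, x0, c1=1e-4, beta=0.5, eps1=1e-6):
    x, n, mu = x0.copy(), x0.size, 1.0
    while mu > eps1:                      # outer loop: shrink mu_k
        eps2 = max(1e-5, min(1e-1, mu / 10))
        g = grad(x, C, r, mu)
        while np.linalg.norm(g) > eps2:   # inner Newton-CG loop
            # Hessian-vector products via forward differences of the gradient.
            hess = LinearOperator((n, n), dtype=float, matvec=lambda v:
                   (grad(x + 1e-7 * v, C, r, mu) - g) / 1e-7)
            eta = min(0.5, np.sqrt(np.linalg.norm(g)))
            d, _ = cg(hess, -g, rtol=eta)  # inexact Newton direction
            alpha, fx = 1.0, fval(x, C, r, mu)
            while fval(x + alpha * d, C, r, mu) > fx + c1 * alpha * (d @ g):
                alpha *= beta             # Armijo backtracking
            x = x + alpha * d
            g = grad(x, C, r, mu)
        mu *= 0.1                         # warm start at mu_{k+1}
    return x
\end{verbatim}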
It is worthwhile pointing out that if we set $\epsilon_3(\mu_k)$ to be zero in line \ref{line:epsilon3} of the proposed Algorithm 2, then the proposed algorithm reduces to applying the classical Newton-CG algorithm to the smoothing approximation problem \eqref{sub}. Hence, the sequences generated by the proposed Algorithm \ref{alg2} converge to the unique solution of problem \eqref{nonsmooth} according to \cite[Theorem 3]{16}. In the next section, we shall show that even though the parameters $\epsilon_3(\mu_k)$ are positive, i.e., the gradient and Hessian-vector product are inexactly computed to reduce the computational cost, the proposed inexact Newton-CG Algorithm 2 is still globally convergent.
\section{Convergence Analysis}\label{convergence}
In this section, we establish global convergence of the proposed Algorithm \ref{alg2} with an appropriate choice of parameters. For any $\mu>0,$ since $f(x;\mu)$ is strictly convex (see Lemma \ref{yinli}) and coercive in $x$, the level set \begin{equation}\label{omega}\Omega{(\mu)}=\left\{x\,|\,f(x;\mu)\leq f(x_{0,0};\mu_0)\right\}\end{equation} must be convex and bounded, where $x_{0,0}$ is the initial point in Algorithm \ref{alg2}. Furthermore, since the set $\left\{1,2,\ldots,m\right\}$ has only a finite number of subsets,
there must exist $\sigma_{\max}(\mu)\geq\sigma_{\min}(\mu)>0$ such that, for $\nabla^2\tilde f(x;\mu)$ defined on any proper subset of $\left\{1,2,\ldots,m\right\},$ we have \begin{equation}\label{sigma}\sigma_{\max}(\mu)I_n\succeq\nabla^2\tilde f(x;\mu)\succeq \sigma_{\min}(\mu) I_n,~\forall~x\in\Omega(\mu).\end{equation} As a particular case, we have$$\sigma_{\max}(\mu)I_n\succeq\nabla^2f(x;\mu)\succeq \sigma_{\min}(\mu) I_n,~\forall~x\in\Omega(\mu).$$ Before establishing global convergence of the proposed Algorithm \ref{alg2}, we first show that it is well defined. In particular, we prove that the proposed algorithm can always find a step length $\alpha_{k,j}=\beta^l$ satisfying \eqref{sufficient} in finite steps (see Lemma \ref{step}) and there exists $j_k$ such that $\left\|\nabla\tilde f(x_{k,j_k};\mu_k)\right\|\leq {\epsilon_2(\mu_k)}$ (see Lemma \ref{lemma-termination}).
\begin{lemma}\label{step}
Suppose $\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|> {\epsilon_2(\mu_k)},$ and set \begin{equation}\label{epsilon3}\epsilon_3(\mu_k)\leq \frac{\epsilon_2(\mu_k)}{c_2(\mu_k)},\end{equation} where \begin{equation}\label{c2}c_2(\mu_k)>\frac{6\mu_k\sigma_{\max}^2(\mu_k)}{5(1-c_1)\sigma_{\min}^2(\mu_k)}.\end{equation} Then the step length $\alpha_{k,j}$ satisfying the sufficient decrease condition \eqref{sufficient} can be found in
$\left\lceil\frac{\ln\left(\bar\alpha(\mu_k)\right)}{\ln\left(\beta\right)}\right\rceil$ steps, and \begin{equation}\label{lowerbound}
\alpha_{k,j}\geq \beta\bar\alpha(\mu_k),~\forall~j,
\end{equation}where \begin{equation}\label{alphaupper}\bar\alpha(\mu_k)=\frac{2(1-c_1)\sigma_{\min}^3(\mu_k)}{9\sigma_{\max}^3(\mu_k)}-\frac{4\mu_k\sigma_{\min}(\mu_k)}{15\sigma_{\max}(\mu_k)c_2(\mu_k)}>0.\end{equation} \end{lemma}
\begin{proof} By the mean value theorem, there exists $s\in(0,1)$ such that
\begin{equation}\label{ABC}
\begin{array}{rl}
&f(x_{k,j}+\alpha \tilde d_{k,j};\mu_k)-f(x_{k,j};\mu_k)-c_1\alpha \tilde d_{k,j}^T\nabla\tilde f(x_{k,j};\mu_k)\\[5pt]
=&\alpha \underbrace{\tilde d_{k,j}^T\left(\nabla f(x_{k,j};\mu_k)-\nabla\tilde f\left(x_{k,j};\mu_k\right)\right)}_{\text{Term A}}+\left(1-c_1\right)\alpha \underbrace{\tilde d_{k,j}^T\nabla\tilde f(x_{k,j};\mu_k)}_{\text{Term B}}\\
&+\frac{\alpha^2}{2}\underbrace{\tilde d_{k,j}^T\nabla^2 f\left(x_{k,j}+\alpha s \tilde d_{k,j};\mu_k\right)\tilde d_{k,j}}_{\text{Term C}}.
\end{array}
\end{equation}
Next, we upper bound Term A, Term B, and Term C in the above, respectively.
It follows from \eqref{gerror} and \eqref{epsilon3} that
\begin{equation}\label{mepsilon}\left\|\nabla f(x;\mu_k)-\nabla \tilde
f(x;\mu_k)\right\|\leq \frac{\mu_k\epsilon_3(\mu_k)}{5}\leq \frac{\mu_k\epsilon_2(\mu_k)}{5c_2(\mu_k)}.\end{equation} Furthermore, since
\begin{equation}\label{rate1}\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|> {\epsilon_2(\mu_k)},\end{equation}there holds
$$\left\|\nabla f(x_{k,j};\mu_k)-\nabla\tilde f(x_{k,j};\mu_k)\right\|\leq \frac{\mu_k}{5c_2(\mu_k)}\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|.$$
Combining the above inequality, the second inequality of \eqref{length}, and \eqref{eta}, we have
\begin{align}
\tilde d_{k,j}^T\left(\nabla f(x_{k,j};\mu_k)-\nabla\tilde f(x_{k,j};\mu_k)\right) &\leq \displaystyle\left\|\tilde d_{k,j}\right\|\left\|\nabla f(x_{k,j};\mu_k)-\nabla\tilde f(x_{k,j};\mu_k)\right\|\nonumber\\
&\leq {\frac{3\mu_k}{10c_2(\mu_k)\sigma_{\min}(\mu_k)} \left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|^2}.\label{bb1}
\end{align}
From the first inequality of \eqref{length}, \eqref{descentproperty}, \eqref{eta} and \eqref{sigma}, we obtain
\begin{align}\label{bb2}
\tilde d_{k,j}^T\nabla\tilde f(x_{k,j};\mu_k)~=&~-\tilde d_{k,j}^T\nabla^2\tilde f(x_{k,j};\mu_k)\tilde d_{k,j}~(\text{from}~\eqref{descentproperty})\\\nonumber
\leq &~-\sigma_{\min}(\mu_k)\left\|\tilde d_{k,j}\right\|^2~(\text{from}~\eqref{sigma})\\\nonumber
\leq &~\frac{-\sigma_{\min}(\mu_k)}{4\sigma_{\max}^2(\mu_k)}\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|^2.~(\text{from}~\eqref{length}~\text{and}~\eqref{eta})\nonumber
\end{align}A similar argument to that in \eqref{bb2} shows that Term C in \eqref{ABC} can be upper bounded by
\begin{equation}\label{bb3}
\tilde d_{k,j}^T\nabla^2 f(x_{k,j}+\alpha s \tilde d_{k,j};\mu_k)\tilde d_{k,j}\leq\frac{9\sigma_{\max}(\mu_k)}{4\sigma_{\min}^2(\mu_k)} \left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|^2.
\end{equation}
By combining \eqref{ABC},~\eqref{bb1},~\eqref{bb2}, and \eqref{bb3}, we obtain
\begin{align*}\label{errABC}
&~f(x_{k,j}+\alpha \tilde d_{k,j};\mu_k)-f(x_{k,j};\mu_k)-c_1\alpha \tilde d_{k,j}^T\nabla\tilde f(x_{k,j};\mu_k)\\\nonumber
\leq&~\alpha\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|^2\left(\frac{3\mu_k}{10c_2(\mu_k)\sigma_{\min}(\mu_k)}- \frac{(1-c_1)\sigma_{\min}(\mu_k)}{4\sigma_{\max}^2(\mu_k)}+{\alpha}{\frac{9\sigma_{\max}(\mu_k)}{8\sigma_{\min}^2(\mu_k)}}\right).\nonumber\end{align*}
Consequently, the right-hand side above is nonpositive whenever $\alpha\leq\bar\alpha(\mu_k)$ with $\bar\alpha(\mu_k)$ defined in \eqref{alphaupper}. Hence any $\beta^l\leq \bar\alpha(\mu_k)$ satisfies the inequality \eqref{sufficient}; the backtracking therefore terminates after at most $\left\lceil{\ln\left(\bar\alpha(\mu_k)\right)}/{\ln\left(\beta\right)}\right\rceil$ trials, and the accepted step length satisfies \eqref{lowerbound}.
This completes the proof of Lemma \ref{step}.
\end{proof}
\begin{lemma}\label{lemma-termination}
Let $\mu=\mu_k$ and let $\left\{x_{k,j}\right\}$ be the sequence generated by Algorithm \ref{alg2}. Then there must exist $j_k$ such that \begin{equation}\label{epsilon2}\left\|\nabla\tilde f(x_{k,j_k};\mu_k)\right\|\leq {\epsilon_2(\mu_k)}.\end{equation}
\end{lemma}
\begin{proof}
We prove Lemma \ref{lemma-termination} by contradiction, i.e., suppose \begin{equation}\label{contradiction}\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|> {\epsilon_2(\mu_k)},~\forall~j=1,2,\ldots.\end{equation}
Since $f(x;\mu_k)$ is lower bounded (by zero), it follows that $$+\infty>\sum_{j=0}^{+\infty}\left(f(x_{k,j};\mu_k)-f(x_{k,j+1};\mu_k)\right)=\sum_{j=0}^{+\infty}\left(f(x_{k,j};\mu_k)-f(x_{k,j}+\alpha_{k,j}\tilde d_{k,j};\mu_k)\right).$$ Moreover, since
\begin{align*}
+\infty>&~\sum_{j=0}^{+\infty}\left(f(x_{k,j};\mu_k)-f(x_{k,j}+\alpha_{k,j}\tilde d_{k,j};\mu_k)\right)\\
\geq&~\sum_{j=0}^{+\infty}\left(-c_1\alpha_{k,j} \tilde d_{k,j}^T\nabla \tilde f(x_{k,j};\mu_k)\right)~(\text{from}~\eqref{sufficient})\\
\geq &~\sum_{j=0}^{+\infty}\left(c_1 \beta \bar\alpha(\mu_k) \tilde d_{k,j}^T\nabla^2 \tilde f(x_{k,j};\mu_k)\tilde d_{k,j}\right)~(\text{from}~\eqref{descentproperty}~\text{and}~\eqref{lowerbound})\\
\geq &~{c_1 \beta \bar\alpha(\mu_k)\sigma_{\min}(\mu_k)}\sum_{j=0}^{+\infty}\|\tilde d_{k,j}\|^2,
\end{align*}it follows that
$\displaystyle \lim_{j\rightarrow+\infty}\|\tilde d_{k,j}\|=0.$ Taking limits from both sides of \eqref{direction}, we obtain
\begin{equation*}\label{limitgradient}\lim_{j\rightarrow+\infty}\left\|\nabla\tilde f(x_{k,j};\mu_k)\right\|= 0,\end{equation*} which contradicts \eqref{contradiction}. Hence, Lemma \ref{lemma-termination} is true.
\end{proof}
Now, we are ready to present the global convergence result of Algorithm \ref{alg2}.
\begin{theorem}\label{thm-convergence} Let $\epsilon_1=0,$ $\displaystyle \lim_{k\rightarrow+\infty}\epsilon_2(\mu_k)= 0,$ and let $\epsilon_3(\mu_k)$ satisfy \eqref{epsilon3} for all $k$ in Algorithm \ref{alg2}. Suppose that $\left\{x_{k,j_k}\right\}$ is the sequence generated by Algorithm \ref{alg2} satisfying \eqref{epsilon2} and that $x^*$ is the unique solution to problem \eqref{nonsmooth}. Then $$\lim_{k\rightarrow \infty} x_{k,j_k}=x^*.$$
\end{theorem}
\begin{proof}
Recalling the definition of $\Omega(\mu)$ (cf. \eqref{omega}), it follows from part 2 of Lemma \ref{yinli} that
$$\Omega(\mu_k)\subset\Omega:=\left\{x\,|\,f(x)\leq f(x_{0,0};\mu_0)\right\},~\forall~k\geq 0.$$ Since $f(x)$ is coercive, we know that $\Omega$ is bounded.
From part 1 of Lemma \ref{yinli} and \eqref{sufficient}, we have
$$f(x_{k,j_k};\mu_k)=f(x_{k+1,0};\mu_{k})>f(x_{k+1,0};\mu_{k+1})\geq f(x_{k+1,1};\mu_{k+1})\geq\cdots\geq f(x_{k+1,j_{k+1}};\mu_{k+1}).$$
Hence, the function values $\{f(x_{k,j_k};\mu_k)\}$ are decreasing, and the sequences $\{x_{k,j_k}\}$ lie in the bounded set $\Omega$.
Then there must exist an accumulation point for $\{x_{k,j_k}\}$. Let $\bar{x}$ denote an accumulation point such that
$$\bar{x}=\lim_{k \in {\cal {K}}, k\rightarrow\infty}x_{k,j_k}$$
for some subsequence indexed by $\cal {K}$. Since $\{f(x_{k,j_k};\mu_k)\}$ are decreasing and bounded below (by zero), it follows that $\displaystyle \lim_{k\rightarrow +\infty}f(x_{k,j_k};\mu_k)=f(\bar{x}).$
Next, we show that $\|\nabla f(x_{k,j_k};\mu_k)\|\rightarrow 0.$ In fact, it follows from \eqref{c2}, \eqref{mepsilon}, and \eqref{epsilon2} that
\begin{align}\label{errorbound}\|\nabla f(x_{k,j_k};\mu_k)\|\leq \|\nabla f(x_{k,j_k};\mu_k)-\nabla \tilde f(x_{k,j_k};\mu_k)\|+\|\nabla \tilde f(x_{k,j_k};\mu_k)\| \leq 2\epsilon_2(\mu_k).
\end{align}
Letting $k$ go to infinity, we obtain the desired result $\|\nabla f(x_{k,j_k};\mu_k)\|\rightarrow 0.$ According to \cite[Lemma 2, Theorem 3]{16}, we know $\bar x=x^*.$ This completes the proof of Theorem \ref{thm-convergence}.
\end{proof}
\section{Numerical Results}\label{experiment}
In this section, the proposed inexact Newton-CG algorithm (Algorithm \ref{alg2}) was implemented and the
numerical experiments were done on a personal computer with an Intel Core i7-4790K CPU (4.00 GHz) and 16\,GB of memory. We implemented our codes in the C language and compared them with the state-of-the-art Algorithm \ref{alg1} in \cite{16} and the classical Newton-CG algorithm.
The test problems are generated randomly. Similar to \cite{16}, we use the following pseudo-random sequences:
$$\psi_0=7,~\psi_{i+1}=\left(445\psi_i+1\right)\bmod 4096,~\bar\psi_i=\psi_i/40.96,~i=1,2,\ldots$$
The elements of $r_i$ and $c_i,~i=1,2,\ldots,m,$ are successively set to
$\bar\psi_1,\bar\psi_2,\ldots,$ in the order:
$$r_1,c_1(1),c_1(2),\ldots,c_1(n),r_2,c_2(1),c_2(2),\ldots,c_2(n),\ldots,r_m,c_m(1),c_m(2),\ldots,c_m(n).$$
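In Python, this generator can be sketched as follows (the array names are ours):
\begin{verbatim}
import numpy as np

def seb_test_data(m, n):
    # Radii r_i and centers c_i from the pseudo-random sequence above.
    count = m * (n + 1)               # one radius plus n center entries per ball
    psi, vals = 7.0, np.empty(count)  # psi_0 = 7
    for i in range(count):
        psi = (445.0 * psi + 1.0) % 4096.0
        vals[i] = psi / 40.96         # successive values bar-psi_1, bar-psi_2, ...
    block = vals.reshape(m, n + 1)    # order: r_1, c_1(1..n), r_2, c_2(1..n), ...
    return block[:, 0], block[:, 1:]
\end{verbatim}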
Different
scales of the SEB problem are tested and
the parameters used in Algorithm \ref{alg2} are set to be
$$\epsilon_1=1\text{E}-6,~c_1=1\text{E}-4,~\beta=0.5,~x_{0,0}=0,$$$$\mu_k=\left(0.1\right)^k,~\epsilon_2(\mu_k)=\max\left\{1\text{E}-5,\min\left\{1\text{E}-1,\mu_k/10\right\}\right\},~\epsilon_3(\mu_k)=1\text{E}-2,~k=0,1,\ldots,6.$$
The simulation results are summarized in Table \ref{table1}, Table \ref{table2}, and Table \ref{table3},
where $n$ denotes the dimension of the Euclidean space, $m$ denotes the number
of balls, \textbf{Obj Value} denotes the value of the objective
function in \eqref{nonsmooth} at the final iterate, and \textbf{Time} denotes the CPU
time in seconds for solving the corresponding SEB
problem.
\begin{table*}
\caption{{Performance comparison of proposed Algorithm \ref{alg2}, classical Newton-CG algorithm, and Algorithm \ref{alg1} in \cite{16} with different large $m$ and $n=1000/2000.$}} \label{table1} \centering
\fontsize{7.0pt}{\baselineskip}\selectfont{\begin{tabular}{ccccccc}
\hline {Problem}& \multicolumn{2}{c}{Proposed Algorithm \ref{alg2}} & \multicolumn{2}{c}{Classical Newton-CG Algorithm}& \multicolumn{2}{c}{Algorithm \ref{alg1} \cite{16}}\\
\cline{1-1}\cline{2-3}\cline{4-5}\cline{6-7}
$(m,n)$&Time&Obj Value&Time&Obj Value&Time&Obj Value\\\hline
(10000,1000)& 6.01552E+00 &1.0228463348E+03 & 6.00472E+01 & 1.0228463348E+03 & 5.90720E+01 & 1.0228463348E+03 \\
(20000,1000)& 1.10170E+01 &1.0228463347E+03 & 1.16408E+02 & 1.0228463347E+03 & 1.08360E+02 & 1.0228463347E+03 \\
(30000,1000)& 1.92800E+01 &1.0228463347E+03 & 1.70676E+02 & 1.0228463347E+03 & 1.59982E+02 & 1.0228463347E+03 \\
(40000,1000)& 2.35979E+01 &1.0228463346E+03 & 2.49327E+02 & 1.0228463346E+03 & 2.14136E+02 & 1.0228463346E+03\\
(50000,1000)& 3.02345E+01 &1.0228463347E+03 & 2.92019E+02 & 1.0228463347E+03 & 2.88675E+02 & 1.0228463347E+03\\
(100000,1000)& 5.61113E+01 &1.0228463347E+03 & 5.58410E+02 & 1.0228463347E+03 & 5.86357E+02 & 1.0228463347E+03\\
(10000,2000)& 1.52080E+01 &1.3984577651E+03 & 1.63971E+02 & 1.3984577651E+03 & 1.46725E+02 & 1.3984577651E+03 \\
(20000,2000)& 3.04284E+01 &1.3984577651E+03 & 3.35675E+02 & 1.3984577651E+03 & 2.43072E+02 & 1.3984577651E+03 \\
(30000,2000)& 4.53796E+01 &1.3984577649E+03 & 4.41727E+02 & 1.3984577649E+03 & 4.17976E+02 & 1.3984577649E+03 \\
(40000,2000)& 5.89272E+01 &1.3984577650E+03 & 5.84628E+02 & 1.3984577650E+03 & 5.57352E+02 & 1.3984577650E+03\\
(50000,2000)& 7.38869E+01 &1.3984577650E+03 & 7.32414E+02 & 1.3984577650E+03 & 7.18307E+02 & 1.3984577650E+03\\
(100000,2000)& 1.45665E+02 &1.3984577650E+03 & 1.68460E+03 & 1.3984577650E+03 & 1.32608E+03 & 1.3984577650E+03\\
\hline
\end{tabular}
}
\end{table*}
\begin{table*}
\caption{{Performance comparison of proposed Algorithm \ref{alg2}, classical Newton-CG algorithm, and Algorithm \ref{alg1} in \cite{16} with different large/huge $m$ and $n=100.$}} \label{table2} \centering
\fontsize{7.0pt}{\baselineskip}\selectfont{\begin{tabular}{ccccccc}
\hline {Problem}& \multicolumn{2}{c}{Proposed Algorithm \ref{alg2}} & \multicolumn{2}{c}{Classical Newton-CG Algorithm} & \multicolumn{2}{c}{Algorithm \ref{alg1} \cite{16}}\\
\cline{1-1}\cline{2-3}\cline{4-5}\cline{6-7}
$(m,n)$&Time&Obj Value&Time&Obj Value&Time&Obj Value\\\hline
(16000,100)& 5.92168E-01 &4.0409180661E+02 & 5.07334E+00 & 4.0409180661E+02 & 7.54851E+00 & 4.0409180661E+02 \\
(32000,100)& 1.27262E+00 &4.0409180660E+02 & 1.09639E+01 & 4.0409180660E+02 & 1.30855E+01 & 4.0409180660E+02 \\
(64000,100)& 2.49998E+00 &4.0409180660E+02 & 2.03190E+01 & 4.0409180660E+02 & 2.87963E+01 & 4.0409180660E+02 \\
(128000,100)& 4.75222E+00 &4.0409180660E+02 & 4.17729E+01 & 4.0409180660E+02 & 5.39767E+01 & 4.0409180660E+02\\
(256000,100)& 1.05534E+01 &4.0409180662E+02 & 9.10787E+01 & 4.0409180662E+02 & 1.12828E+02 & 4.0409180662E+02\\
(512000,100)& 2.14911E+01 &4.0409180662E+02 & 1.76339E+02 & 4.0409180662E+02 & 2.10324E+02 & 4.0409180662E+02\\
(1024000,100)& 4.51870E+01 &4.0409180662E+02 & 3.38687E+02 & 4.0409180662E+02 & 4.28966E+02 & 4.0409180662E+02\\
(2048000,100)& 9.00532E+01 &4.0409180662E+02 & 6.74268E+02 & 4.0409180662E+02 & 9.56397E+02 & 4.0409180662E+02\\
\hline
\end{tabular}
}
\end{table*}
\begin{table*}
\caption{{Performance comparison of proposed Algorithm \ref{alg2}, classical Newton-CG algorithm, and Algorithm \ref{alg1} in \cite{16} with different large $m$ and different large $n.$}} \label{table3} \centering
\fontsize{7.0pt}{\baselineskip}\selectfont{\begin{tabular}{ccccccc}
\hline {Problem}& \multicolumn{2}{c}{Proposed Algorithm \ref{alg2}} & \multicolumn{2}{c}{Classical Newton-CG Algorithm} & \multicolumn{2}{c}{Algorithm \ref{alg1} \cite{16}}\\
\cline{1-1}\cline{2-3}\cline{4-5}\cline{6-7}
$(m,n)$&Time&Obj Value&Time&Obj Value&Time&Obj Value\\\hline
(2000,5000)& 1.62869E+01 &2.1340381607E+03 & 1.04678E+02 & 2.1340381607E+03 & 6.79232E+01 & 2.1340381608E+03 \\
(2000,10000)& 4.07598E+01 &2.9778347203E+03 & 2.43027E+02 & 2.9778347203E+03 & 1.41727E+02 & 2.9778347203E+03 \\
(5000,5000)& 3.21717E+01 &2.1377978300E+03 & 3.34972E+02 & 2.1377978300E+03 & 1.60278E+02 & 2.1377978300E+03 \\
(5000,10000)& 1.18895E+02 &2.9814491291E+03 & 9.20224E+02 & 2.9814491291E+03 & 3.69496E+02 & 2.9814491291E+03\\
(8000,7000)& 1.56516E+02 &2.5108384309E+03 & 1.50597E+03 & 2.5108384309E+03 & 4.17278E+02 & 2.5108384309E+03\\
(100000,8000)& 1.54017E+02 &2.6749515342E+03 & 1.24485E+03 & 2.6749515342E+03 & 5.43961E+02 & 2.6749515342E+03\\
(100000,10000)& 2.16453E+02 &2.9814491291E+03 & 1.66959E+03 & 2.9814491291E+03 & 8.13515E+02 & 2.9814491291E+03\\
(200000,10000)& 4.13139E+02 &2.9814491290E+03 & 3.04470E+03 & 2.9814491290E+03 & 1.59488E+03 & 2.9814491290E+03\\
\hline
\end{tabular}
}
\end{table*}
It can be seen from the three tables that
the proposed Algorithm \ref{alg2} significantly outperforms Algorithm \ref{alg1} in \cite{16} and the classical Newton-CG algorithm in terms of the CPU time to find the same solution. In particular, Algorithm \ref{alg1} and the classical Newton-CG algorithm take $8$ and $10$ times more CPU time than the proposed Algorithm \ref{alg2} on average to find the same solution, respectively. The proposed algorithm is able to solve the SEB problem with $m=2048000$ and $n=100$ within about $90$ seconds, while the classical Newton-CG algorithm and Algorithm \ref{alg1} in \cite{16} need $674$ and $956$ seconds to do so, respectively. The proposed inexact Newton-CG algorithm significantly improves on the classical Newton-CG algorithm by computing the gradient and Hessian-vector product in an inexact fashion, which dramatically reduces the CPU time compared to the exact computations.
\begin{figure}
\includegraphics[width=0.85\textwidth]{SEB-Time2.pdf}
\caption{Time comparison of proposed Algorithm \ref{alg2}, Algorithm \ref{alg1} in \cite{16}, and classical Newton-CG algorithm with different large $m$ and fixed $n=2000.$}
\label{Time}
\end{figure}
We also plot the CPU time comparison of the proposed Algorithm \ref{alg2}, Algorithm \ref{alg1} in \cite{16}, and the classical Newton-CG algorithm with different large $m$ and fixed $n=2000$ in Fig. \ref{Time}. It can be observed from Fig. \ref{Time} that for fixed $n=2000$, the CPU time of all three algorithms grows (approximately) linearly with $m.$ However, the CPU time of both Algorithm \ref{alg1} and the classical Newton-CG algorithm grows much faster than that of the proposed Algorithm \ref{alg2}.
In a nutshell, our numerical simulation results show that the proposed inexact Newton-CG algorithm is particularly well suited to solving the SEB problem of large dimensions. First, the gradient and Hessian-vector product are inexactly computed at each iteration of the proposed algorithm by exploiting the (approximate) sparsity structure of the log-exponential aggregation function. This dramatically reduces the computational cost compared to computing the gradient and Hessian-vector product exactly and thus makes the proposed algorithm efficient for the SEB problem with large $m.$
Second,
at each iteration, the proposed algorithm computes the search direction by applying the CG method to solve the inexact Newton equation in an inexact fashion.
This also makes the proposed algorithm very attractive for solving the SEB problem with large $n.$
\section{Conclusions}\label{conclusion}
In this paper, we developed a computationally efficient inexact Newton-CG algorithm for the SEB problem of large dimensions, which finds wide applications in pattern recognition, machine learning, support vector machines and so on. The key difference between the proposed inexact Newton-CG algorithm and the classical Newton-CG algorithm is that the gradient and the Hessian-vector product are inexactly computed in the proposed algorithm by exploiting the special (approximate) sparsity structure of its log-exponential aggregation function. We proposed an adaptive criterion of inexactly computing the gradient/Hessian and also established global convergence of the proposed algorithm. Simulation results show that the proposed algorithm significantly outperforms the classical Newton-CG algorithm and the state-of-the-art algorithm in \cite{16} in terms of the computational CPU time. Although we focused on the SEB problem in this paper, the proposed algorithm can be applied to solve other min-max problems in \cite{minimax1,minimax2,minimax3,coordinated,simo,ICC}.
\section{Acknowledgments}
The authors wish to thank Professor Ya-xiang Yuan and Professor Yu-Hong Dai of State Key Laboratory of
Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences,
for their helpful comments on the paper. The authors also thank Professor Guanglu Zhou of Department of Mathematics and Statistics, Curtin University, for sharing the code of Algorithm \ref{alg1} in \cite{16}.
Although solar flares have been investigated for decades, there are still open questions, such as how the released energy is transported throughout the atmosphere and deposited in the lower atmospheric layers. By studying chromospheric spectral lines, we probe the underlying physical processes and the response of the lower atmosphere to flare heating. NASA's most recent solar mission, the {\it Interface Region Imaging Spectrograph} \citep[{\it IRIS};][]{2014SoPh..289.2733D} showed puzzling spectra of the near-UV (NUV) \ion{Mg}{2}~h~and~k resonance lines in flares, which could not be explained by any modeling efforts so far.
The \ion{Mg}{2}~h~and~k lines are an important contributor to the UV emission during flares \citep{1984SoPh...90...63L}. Their unexplained characteristics in flares are: 1)~a lack of their central self-reversal, which is unusual for lines dominated by scattering; 2)~very broad line wings (line center~$\pm$~1.5~\AA); 3)~the presence of the subordinate \ion{Mg}{2} lines (3p-3d transition) in emission, which in the quiet Sun have been reported to be sensitive to heating in the low chromosphere \citep{2015ApJ...806...14P}, but may form differently in flares; and 4)~redshifts often occurring due to strong downward velocities \citep{2015A&A...582A..50K, 2015SoPh..290.3525L, 2015ApJ...807L..22G}.
The only direct comparison so far of observed flare \ion{Mg}{2} spectra and hydrodynamic simulations \citep{2016ApJ...827...38R} showed that the peculiar shape of flare spectra cannot be reproduced yet and that simulations always obtain profiles with a central reversal, more similar to the quiet Sun than to flares. There have been several attempts to understand the behavior of the \ion{Mg}{2} spectra during solar flares \citep{1980ApJ...242..336M, 1983A&A...125..241L, 2016ApJ...827..101K, 2016ApJ...827...38R, 2015SoPh..290.3525L, 2016ApJ...830L..30D, 2017ApJ...836...12K, 2017arXiv170104213R}, but no simulation to date has reproduced any of the observed profiles and it is unclear which physical mechanisms are responsible for the observed line profiles. Our goal for this paper is to carry out a parameter study to investigate the origin of the \ion{Mg}{2} spectral shapes during flares and to understand the atmospheric conditions needed to reproduce the observed flaring UV spectra. \citet{2015ApJ...809L..30C} performed a similar parameter study with the RH code, modifying the thermodynamic parameters in their model atmosphere, but enforcing hydrostatic equilibrium, to study the \ion{Mg}{2}~k profiles in plages with the objective of matching the observations. Here, we investigate even unlikely variations, such as velocities or microturbulences significantly higher than those reported in observations, but we cannot provide an unambiguous solution for the conditions in a flaring atmosphere, which would require comparisons with multiple spectral lines and which will be our next step.
There are currently three codes commonly used for flaring hydrodynamic simulations: RADYN \citep{1997ApJ...481..500C}, FLARIX \citep{2009A&A...499..923K}, and HYDRAD \citep{2013ApJ...770...12B}. The HYDRAD code does not account for optically-thick radiative losses, an important energy term in the chromosphere. Therefore, it is not a suitable tool to study the response of the lower atmosphere to flare heating and in particular the \ion{Mg}{2} emission in the UV. The FLARIX code considers radiative transfer for hydrogen, calcium and more recently magnesium, but currently it does not consider the radiative losses of the helium atom, which may be up to 10\% of the total energy radiative loss \citep{1989ApJ...341.1067M, 2009ApJ...702.1553L}, affecting the modeled chromospheric emission. In contrast, RADYN solves the hydrodynamic equations together with the detailed radiative transfer for the atoms dominating the radiative losses in the chromosphere; i.e. hydrogen, helium, calcium and magnesium. Here we therefore use a model atmosphere from a RADYN simulation to start with the most realistic flare atmosphere and vary its thermodynamic parameters to investigate their influence on the \ion{Mg}{2} lines. The non-LTE radiative transfer code RH \citep{2001ApJ...557..389U} is used to calculate the \ion{Mg}{2} lines from the modified atmospheres. The advantage of RH is the proper treatment of the effects of angle-dependent partial frequency redistribution (PRD), improving the assumption of complete frequency redistribution (CRD) used by RADYN, which has previously been demonstrated to play an important role in the formation of the \ion{Mg}{2}~h~and~k line profiles \citep[see e.g.][]{2013ApJ...772...89L, 2013ApJ...772...90L}. Additionally, RH accounts for possible frequency overlap from bound-bound transitions \citep{1991A&A...245..171R, 1992A&A...262..209R}, whereas RADYN does not.
\begin{figure}[!tb]
\centering
\includegraphics[width=.5\textwidth]{fig1.eps}
\caption{Top: {\it IRIS}~SJI~1400 image of the X1 flare on 2014 March 29 with the spectrograph slit during the 8 raster positions drawn as vertical lines. The green contours show RHESSI HXR emission and the magenta crosses denote the positions where H$\alpha$ and Ca 8542 \AA\ spectra were compared in \citet{2016ApJ...827...38R}. The bottom panels show selected \ion{Mg}{2} flare spectra. The colors indicate the location on the slits and correspond to the small horizontal lines drawn in the SJI image. Typical flare profiles have single peaks, but may also have broad red wings. The barely visible gray profiles show typical quiet Sun profiles. }
\label{examplemg}
\end{figure}
The paper is organized as follows: Section~\ref{Sect:motivation} describes the motivation for our study of the Mg line spectra during solar flares; Sections~\ref{sect:radyn} and~\ref{sect:rh} summarize the RADYN and RH codes; Section~\ref{Sect:synthetic_profs} studies the response of the \ion{Mg}{2}~h~and~k line profiles to different atmospheric conditions; Section~\ref{Sect:contr_fct} presents a more detailed analysis of the line formation; and Section~\ref{sect:conclusions} summarizes and discusses the results.
\section{Motivation}\label{Sect:motivation}
The observed shapes of the \ion{Mg}{2} lines are very diverse and variations exist on pixel-to-pixel scales. Some examples from the X1 flare on March 29, 2014 \citep{2015ApJ...806....9K} are shown in Figure~\ref{examplemg}. Flare profiles range from single peaks with broad wings (e.g.\,middle panel red and purple spectra), to small central reversals (bottom panel purple), to strongly asymmetric red components (e.g. top panel red and blue, bottom panel blue). The locations of the profiles are indicated with color-coded horizontal lines in the image on top. Grey spectra show the quiet Sun for comparison. {\it RHESSI} HXR contours from 32-100 keV are also drawn. Because the non-thermal counts were comparatively low at this time, we needed to integrate for 30 s (17:47:20-17:47:50 UT), using detectors 2 to 7. The {\it RHESSI} contours are therefore not strictly simultaneous with the {\it IRIS} data, but the sources did not move significantly at this time \citep{2015ApJ...813..113B}. The blue profile of slit position 7 corresponds to a time just when accelerated electrons were detected at the same location. This spectral profile shows a reversal, though shifted to the blue, as well as a downflow, and its intensity is significantly increased. A few seconds later, such profiles turn into the broad single-peak profiles. For example, the HXR source passed the locations of the red and purple profiles of slit 4 about 30 to 90 seconds earlier.
The line ratios given in the images are calculated as integrals from 2795--2798 \AA\ (k line), 2802--2805 \AA\ (h line), and 2798--2800 \AA\ (subordinate line at 2799 \AA).
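For reference, this band integration can be sketched in Python (the function and array names are ours; wavelengths in \AA):
\begin{verbatim}
import numpy as np

def line_ratios(wl, spec):
    # Integrate a Mg II spectrum over the k, subordinate, and h bands.
    def band(lo, hi):
        m = (wl >= lo) & (wl <= hi)
        # trapezoidal integration over the selected band
        return np.sum(0.5 * (spec[m][1:] + spec[m][:-1]) * np.diff(wl[m]))
    k   = band(2795.0, 2798.0)   # Mg II k
    sub = band(2798.0, 2800.0)   # subordinate blend near 2799 A
    h   = band(2802.0, 2805.0)   # Mg II h
    return k / h, k / sub
\end{verbatim}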
The magenta crosses are for reference only and indicate locations where the H$\alpha$ and \ion{Ca}{2} 8542 \AA\ were compared in \citet{2016ApJ...827...38R}. These spectral lines matched relatively well to the model atmosphere that is used for the parameter study in this paper.
Single-peaked profiles seem universal for all analyzed flares and are also commonly observed in sunspots, though in sunspots they appear without any broad wings and with only tiny, if any, emission from the subordinate lines. Our goal is therefore to simulate single-peak broad \ion{Mg}{2}~h~and~k profiles (e.g. red or purple profile of slit position 4) and simultaneously produce emission in the subordinate 3p-3d \ion{Mg}{2} lines at the correct intensity ratios, to understand the characteristics of the atmospheric parameters necessary to reproduce the observed typical line profiles during solar flares.
\subsection{Formation of the \ion{Mg}{2}~h~and~k Spectral Lines}\label{Sect:formation_lines}
The \ion{Mg}{2}~h~and~k lines are resonance lines formed under conditions of non-LTE. Considering that the radiative damping is the largest contribution to the total damping in the chromosphere (at $z>500$~km), that Van der Waals broadening is the dominant contribution in the photosphere and that quadratic Stark broadening is only of minor importance, the radiative damping component seems to be the main contribution to the formation of the \ion{Mg}{2}~h~and~k line core, while the Van der Waals broadening seems to be the main contribution to the broad line wings \citep{2013ApJ...772...89L}. 3D effects have been reported to be very important, especially in the line core. Unfortunately, there are no simulations yet that are able to calculate \ion{Mg}{2} realistically in 3D.
The 3p-3d level subordinate lines usually appear in absorption but show a significant enhancement during solar flares \citep[e.g.][]{2015A&A...582A..50K}. Their emission was analyzed by \citet{2015ApJ...806...14P}, who concluded that they are optically thick and form in the lower chromosphere in the quiet Sun. They found the lines to turn into emission only rarely in their quiet Sun simulations and concluded that a large temperature gradient ($\ge~1500$~K) must be present in the lower chromosphere for the subordinate lines to turn into emission.
\section{Modeling the line profiles}\label{sect:modeling}
We start with a modeled flare atmosphere from a RADYN simulation and modify its parameters to investigate their influence on the \ion{Mg}{2} lines, using the non-LTE radiative transfer code RH to properly treat the PRD effects. In the following sections~\ref{sect:radyn} and \ref{sect:rh} we will briefly introduce each code individually.
\subsection{RADYN Code}\label{sect:radyn}
We used the RADYN code of \citet{1997ApJ...481..500C}, including the modifications of \citet{1999ApJ...521..906A} and \citet{2005ApJ...630..573A}, to simulate the radiative-hydrodynamic response of the lower atmosphere to energy deposition by non-thermal electrons in a flare loop. We use the atmosphere from the run described in \citet{2016ApJ...827...38R} at 17:45:31~UT and manually modify its temperature, density, non-thermal broadening, or plasma velocity before recomputing the spectra with the RH code (see Section~\ref{sect:rh}). Additionally, we used RADYN simulations with electron beam fluxes from 10$^9$ to 10$^{12}$ erg s$^{-1}$ cm$^{-2}$ to investigate the dependence of the spectral line shape on beam flux. We also compared the spectra resulting from the RADYN simulations described in \citet{2015ApJ...813..133R}, including particle transport and stochastic acceleration, with the ones assuming an ad-hoc single power-law, derived from fitting the non-thermal component of the hard X-ray spectra to a single power-law and applying the collisional thick-target modeling \citep{1971SoPh...18..489B, 1978ApJ...224..241E}.
Considering that the \ion{Mg}{2}~h~and~k lines are strongly affected by the effect of PRD, their simulations with RADYN, which only assume CRD, are not sufficiently realistic. We therefore recalculate the \ion{Mg}{2} lines with the RH code (see Section~\ref{sect:rh}) including PRD effects. As an input to RH we use snapshots of the atmosphere calculated with RADYN with manually modified parameters.
\subsection{RH Code}\label{sect:rh}
We performed non-LTE radiative transfer computations with a modified version of the RH code \citep{2001ApJ...557..389U, 2015A&A...574A...3P}, which includes the heating due to the injection of electrons in the atmosphere. The modification (performed by J. Allred; private communication) includes the computation of the collisional ionization rate from non-thermal electrons, following \citet{1983ApJ...272..739R}, given the electron heating rate estimated from the RADYN code. RH can treat the effects of angle-dependent partial frequency redistribution using the fast approximation by \citet{2012A&A...543A.109L}. We use a 10 level plus continuum model atom for \ion{Mg}{2} as described in \citet{2013ApJ...772...89L}.
One of the differences between RH and RADYN is that RADYN uses the Uppsala opacity package \citep{Gustafsson}, while RH calculates the opacity of the transitions in the indicated atoms. Additionally, by using snapshots of the atmospheric model from RADYN as input into the RH code, we recalculate the ionization populations using statistical equilibrium, instead of using the non-equilibrium ionization already estimated by RADYN. This changes the plasma density and, as a consequence, shifts the height scale in comparison with the atmosphere produced by RADYN, leading to differences in the formation height in units of length. Although statistical equilibrium can be a poor assumption at the beginning of the impulsive phase of the simulation, the effect should be smaller at later times. Unfortunately, there is currently no option to avoid statistical equilibrium in RH. We would like to emphasize that the goal is to study the behavior of the \ion{Mg}{2} line for different atmospheric parameter changes. Even though we use the ionization population based on the statistical equilibrium assumption for excitation processes, we still keep the non-equilibrium electron density estimated from RADYN, rather than recalculating the equilibrium electron density, since it is the one resulting from the hydrodynamic equations. We therefore believe it to be more realistic, as the electron density is far from equilibrium in a flaring atmosphere. By not solving for hydrostatic equilibrium, we have not enforced the charge conservation in RH. We find that the velocity variations in the atmosphere do not displace the charge balance of positive vs. negative charges significantly ($<$ 4\%). The temperature and density variations affect the charge conservation more, with the exact numbers listed in the respective sections below.
These points may make a difference when comparing the line profiles with the observations; for instance, the continuum emission at 2992~\AA\ resulting from RH is less than 20\% lower than the one estimated from RADYN. Another possibility that can explain the lower continuum emission estimated with RH in comparison with RADYN is the lower opacity in the wings of the \ion{Mg}{2}~h and k lines. \citet{2017ApJ...836...12K} have included the \ion{Mg}{2}~h and k wing opacity in the calculations of the excess continuum intensity, finding that the opacity at these wavelengths is important for an accurate treatment of the \ion{Mg}{2} wing emission as well as the continuum in the upper photosphere. An underestimation of the opacities can result in a lower continuum emission, by 15--30\%.
Each line profile is calculated with up to 100 frequency points across the transition, using a Voigt profile and line broadening due to the Stark effect. Throughout the paper we use $\mu$=0.77 to be consistent with the observations in Figure~\ref{examplemg}.
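As an aside, a Voigt profile of this kind can be evaluated through the Faddeeva function; the following sketch (our own, not the RH implementation) returns the area-normalized profile:
\begin{verbatim}
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z)

def voigt_profile(delta_nu, dnu_d, gamma):
    # dnu_d: Doppler width; gamma: total damping rate
    # (radiative + Van der Waals + Stark contributions).
    a = gamma / (4.0 * np.pi * dnu_d)   # damping parameter
    v = delta_nu / dnu_d                # frequency offset in Doppler units
    return np.real(wofz(v + 1j * a)) / (np.sqrt(np.pi) * dnu_d)
\end{verbatim}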
\begin{figure*}[!tbh]
\centering
\includegraphics[width=\textwidth]{fig2.eps}
\caption{Variation of the temperature structure just below the transition region (middle panel) while keeping the electron density from RADYN (right panel) may lead to \ion{Mg}{2} lines with single peaks (e.g. yellow, left panel), similar to those observed in flares (example flare spectrum multiplied by 5 plotted as dotted gray line). The ratio of the h and k lines to the subordinate lines ($\approx$ 15) is higher here than in the observations ($\approx$ 4). Note that the height scale, given in Mm, is not linear in the plots because $\log_{10}$(column mass) is linear.}
\label{vart}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig3.eps}
\caption{Increases of the chromospheric temperatures by up to 3000~K (red, blue) slightly broaden the line wings, but not enough compared to the observations. Temperature spikes in the middle chromosphere lead to irregular, unrealistic profiles (yellow). Temperature spikes in the upper chromosphere lead to single-peak profiles. The temperature behavior (e.g. drop or increase) above the line core formation height does not influence the lines (light blue in comparison with Figure~\ref{vart}).}
\label{truns}
\end{figure*}
\section{Synthetic line profiles}\label{Sect:synthetic_profs}
In this section we modify one thermodynamic parameter at a time. While this probably is not representative of real solar changes during flares, it allows us to investigate the dependence of the \ion{Mg}{2} spectra on different parameters. We ran variations of several hundred atmospheres and show the main results in the following sections.
\subsection{Varying temperature}\label{Sect:vart}
We varied the temperature structure, including {(1)}~temperature steps at different heights below the transition region, {(2)}~higher chromospheric temperatures, and {(3)}~localized heating at certain heights. The electron density and plasma velocities were kept constant for all runs. Figure~\ref{vart} shows some representative runs in which the temperature increases just below the transition region, where the cores of the \ion{Mg}{2}~k and h lines are formed. A strong temperature increase in this region leads to single-peaked profiles whose intensity is lower than in the original atmosphere. However, the intensities of the subordinate lines are too low compared to the h and k lines. The dotted gray line corresponds to the red spectrum at slit 4 in Figure~\ref{examplemg} with its intensity multiplied by 5. The observed line wings are clearly much broader.
We also increased the temperature of the chromosphere (red and blue lines in Figure~\ref{truns}) or introduced strong heating at limited heights (yellow and light blue lines in Figure~\ref{truns}). Even when unrealistically increasing the chromospheric temperature by 3000~K (blue), the line wings are only slightly broadened, far less than the observed broadening. Temperature spikes in the middle chromosphere (yellow) lead to sharp spikes in the line wings, which to our knowledge have not been found in observations. Temperature spikes in the upper chromosphere (light blue) lead to a similar behavior as shown in Figure~\ref{vart}.
As long as there is a steep temperature increase at the formation height of the line core, the profile turns into a single peak and the line is insensitive to the temperature behavior above these heights.
Because the charge conservation has not been enforced, the charge balance is displaced. Considering the atoms of hydrogen, helium, and magnesium, the maximum ratio between positive and negative charges is 1.16 at 10000~K and 1.01 at 50000~K in the models of Figure~\ref{vart}.
\begin{figure*}[!hbt]
\centering
\includegraphics[width=\textwidth]{fig4.eps}
\caption{Variation of electron density. To obtain a single peak, a high density (factor of 10 increase) at the formation height of the line core (here -3.8 $\le$ log$_{10}$(column mass) $\le$ -3.6~g~cm$^{-2}$) is required as illustrated by the red and yellow examples. The intensity is too high compared to the observations, but the k/h ratio is similar and the k/2799 ratio is within a factor of 2. The dotted gray line is an example flare spectrum.}
\label{vard}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig5.eps}
\caption{Linear increases of the density closer to the transition region than in Figure~\ref{vard}. Too strong an increase in density (red) not only leads to excessively high intensities, but also worsens the ratio of the k line vs. the subordinate lines.}
\label{lind}
\end{figure*}
\subsection{Varying electron density}\label{Sect:vard}
In this set of models, we kept the temperature constant and increased the electron density at different heights. Figure~\ref{vard} shows that heights just below the transition region have no effect on the shape of the line core (yellow and red lines). The comparison of yellow and light blue lines shows that the density at $\log_{10}$(column mass) between $-3.8$ and $-3.6$~g~cm$^{-2}$, which corresponds to 1.15-1.17~Mm, is responsible for the single peak of the h and k lines. By increasing the density by at least a factor of 10, the reversed profile turns into a single peak (black vs. blue vs. red profiles). The k/h ratio is similar to the observations and the k/2799 ratio differs by a factor of 2; however, the intensity is significantly too high in the models. Therefore, the observed spectrum (dotted gray) was multiplied by a factor of 10.
Figure~\ref{lind} shows the effect of more smoothly increasing and decreasing densities. Similarly to the previous figure, only densities at the formation height of the line core matter for obtaining single-peaked profiles.
\citet{2017ApJ...836...12K} have recently studied the continuum emission together with the asymmetries observed by {\it IRIS} in the \ion{Fe}{2}~2832.39~\AA\ line profile during the impulsive phase of an X1.0 class flare. They suggest a model producing two flaring regions (a condensation and stationary layers) in the lower atmosphere. The condensation, due to the high densities in the chromosphere, compresses the chromosphere and therefore enhances the continuum emission; while the stationary flare layers in the chromosphere explain the bright redshifted \ion{Fe}{2} emission component, reported to be observed during solar flares. This plausible scenario with two flaring regions in the chromosphere, could explain the sudden density increase within a narrow region.
Another possible physical mechanism that can explain the increase of the electron density without elevating the temperature is the non-thermal ionization resulting from a beam of non-thermal electrons, creating a compressed region in the chromosphere for short timescales. The lack of charge conservation leads to unrealistic values (increase of negative charges by an order of magnitude at T=10000~K) in the yellow and light blue examples in Figure~\ref{vard}, because we modified the density starting low in the atmosphere. For all other atmospheres in Figure~\ref{vard} and \ref{lind}, the ratios are maximally 1.19 and 1.01 at T=10000~K and 50000~K, respectively.
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig6.eps}
\caption{Variations of the Doppler velocities. Negative values denote redshifts. For the light blue example we approximated the velocity profile observed by \citet{2009ApJ...699..968M}. While single-peak profiles can be reproduced, the resulting spectral lines are not symmetric, contrary to the observations (dotted gray, multiplied by a factor of 6), and may include small extra components on the blue side of the h and k lines.}
\label{varv}
\end{figure*}
\subsection{Varying Doppler velocity}\label{Sect:varv}
The Doppler velocities may change strongly during flares, generally from strong upflows at coronal temperatures to downflows of a few tens of km s$^{-1}$ in the chromosphere \citep[e.g.][]{2009ApJ...699..968M}. For these tests, we vary the magnitude of the downflow at and near the line core formation height. Figure~\ref{varv} shows the approximate Doppler velocities from \citet{2009ApJ...699..968M} in light blue (negative velocities represent downflows), which lead to an asymmetric profile with a stronger blue peak. It is possible to obtain a single-peak profile with its center at the line core just by varying velocities (e.g. yellow). But such variations do not lead to a symmetric profile.
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig7.eps}
\caption{Variation of the Doppler velocities to match the blue profile from Figure~\ref{examplemg}. The red and dark blue profiles were combined into the light blue profile, which agrees well with the {\it IRIS} observations from near the HXR footpoint. The peak, the extended red wing, and the location of the blue peak at 2796~\AA\ can be reproduced, but the subordinate lines only roughly agree in shape, not intensity. The yellow profile shows the RH simulation using the combined velocity profile, indicating that a single component cannot match the {\it IRIS} observations and that there likely are unresolved downflows in one pixel.}
\label{combine}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig8.eps}
\caption{Variation of the Doppler up- and downflowing velocities to match the red profile from Figure~\ref{examplemg}, slit position 4. The red and dark blue profiles were combined into the yellow profile, which agrees well with the {\it IRIS} observations from the ribbon. The broad line wings, the shift of the line, and the narrow peak can be reproduced, but the modeled subordinate lines have lower intensities and broader profiles. Considering that such symmetric profiles cannot be obtained with any 1D velocity structure, this may indicate that unresolved up- and downflows exist in the same pixel.}
\label{combine2}
\end{figure*}
All single-peak profiles plotted in Figure~\ref{varv} (yellow, blue, red) show an extra enhancement on the blue side of the line, at around 2795.9~\AA, similarly to the blue wing enhancement observed by {\it IRIS} in the bottom panel of Figure~\ref{examplemg} (blue profile). Such {\it IRIS} profiles seem to occur during only a few seconds and exactly at locations where HXR footpoints are observed. Figure~\ref{combine} shows the ``HXR-footpoint'' {\it IRIS} profile in dotted gray. We noticed that by starting the velocity variation lower in the chromosphere the emission becomes red-shifted (dark blue vs. red in Figure~\ref{combine}), but the location of the dip at $\sim$2796~\AA\ remains. We combined the red and blue profiles (35\% blue + 65\% red) into a new profile (light blue), simulating what would be observed if these two components existed within one pixel. The light blue profile qualitatively matches the ``HXR-footpoint'' {\it IRIS} profile very well, both in the shape and extent of the red wing and in the location of the dip. We re-ran RH with the velocities derived from this combination (yellow), but the resulting profile shows that a single component cannot explain the observations. While the imperfect fit of the emission feature at $\sim$2796~\AA\ and of the subordinate lines indicates that the middle-lower chromosphere is not fully reproduced yet, our results indicate that ``HXR-footpoint'' {\it IRIS} profiles are due to unresolved strong downflows.
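The two-component combination used here amounts to a simple intensity-weighted sum applied wavelength by wavelength; as a sketch (the function name is ours, with the 35\%/65\% weights from the text):
\begin{verbatim}
def mix_components(I_blue, I_red, w_blue=0.35):
    # Unresolved two-component pixel: weighted sum of the emergent
    # intensities, evaluated at each wavelength of the two spectra.
    return w_blue * I_blue + (1.0 - w_blue) * I_red
\end{verbatim}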
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig9.eps}
\caption{Variation of the transition region, simultaneously for the temperature and electron density. The non-thermal broadening and Doppler velocities were not changed. None of these variations leads to realistic flare profiles. Either the central reversal remains (blue, red) or, in the case of the yellow example, the subordinate lines are higher than the \ion{Mg}{2}~h~and~k lines, which is unrealistic.}
\label{movetr}
\end{figure*}
\begin{figure*}[!tb]
\centering
\includegraphics[width=\textwidth]{fig10.eps}
\caption{Increase of the non-thermal broadening in the upper chromosphere, up to (probably unrealistic) 40~km~s$^{-1}$. The electron density was taken from the red atmosphere of Figure~\ref{vard} to show the effect of the microturbulence parameter on an already existing single-peak profile. None of these examples shows sufficiently broad wings compared to the observations; additionally, high microturbulent values unrealistically broaden the line core.}
\label{vturb}
\end{figure*}
To study how the emission of the \ion{Mg}{2}~k line responds to the combination of plasma motions in opposite directions, we combined a strong downflow (red line in Figure~\ref{combine2}) with a moderate upflow (blue) into a new profile (yellow, consisting of 35\% blue + 65\% red). The existence of unresolved up- and downflows in the same pixel could explain the broad wings. We increased the turbulent velocity to 15~km~s$^{-1}$, otherwise the absorption feature (as seen in Figure~\ref{combine}) affected the shape of the line wings. Also, the downflow velocities have to be larger than the upflows, because the red wing is slightly more extended in observations. Since nearly all {\it IRIS} flare ribbon pixels have these typical profiles, this result may indicate that there are similar unresolved up- and downflows throughout the ribbons.
\subsection{Varying the radiation field}
By artificially including a UV radiation field in the upper corona in the RH code, with emission lines such as \ion{He}{1}, \ion{He}{2}, and \ion{Mg}{10}, we aim to compensate for the energy losses of optically thin lines not considered in RH, which may influence the resulting synthetic emission during solar flares, especially in the UV.
We considered a wavelength range between 302 and 613~\AA\ and estimated the total irradiance \citep{2004ApJ...606.1239P, 2004ApJ...606.1258J}. Comparing the resulting \ion{Mg}{2}~h and k line profiles with the original ones without external irradiation field in the upper corona, we found that the differences are minimal, especially in the wings. The emission in the line core is 0.02\% higher when including the external irradiation field in the upper corona. Therefore, the inclusion of coronal irradiance does not contribute enough to the total \ion{Mg}{2} UV emission, to explain the differences between the synthetic line profiles and those observed by {\it IRIS}.
\subsection{Varying the transition region}
For this set of models we vary the column mass at which the transition region is located. Figure~\ref{movetr} shows the modifications of temperature and density. The lower the transition region, the lower the intensity of \ion{Mg}{2}~h~and~k. These variations do not reproduce typical flare profiles. Even the yellow single-peaked profiles are not similar to observed profiles because the subordinate lines have higher intensities than the h and k lines.
\subsection{Varying the non-thermal broadening}\label{Sect:vturb}
Since the resulting synthetic line profiles are narrower than the observed flaring \ion{Mg}{2} profiles, we also tried to modify the non-thermal broadening parameter by increasing it in the upper chromosphere, at a column mass higher than $-3.1$~g~cm$^{-2}$. Figure~\ref{vturb} shows the broadening of the line profile for different test runs, while increasing the non-thermal broadening and varying the region where this occurs. For these simulations, we also increased the electron density in a narrow layer in the upper chromosphere, where the line core is formed, to obtain single-peaked line-core emission and thus better compare the effects of microturbulence with observed flare profiles.
Considering that typical values for microturbulence to compensate for the lack of small-scale random motions in the model are $\sim 3$~km~s$^{-1}$ \citep{2012A&A...543A..34D, 2016ApJ...830L..30D}, even unrealistically high velocities of 40~km~s$^{-1}$ are not enough to explain the broad wings observed by {\it IRIS}. For instance, \citet{2016ApJ...830L..30D} use 8~km~s$^{-1}$, describing this value as very high. Moreover, the line profile not only becomes broader, but also flatter in the core of the line, contrary to the observations.
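For reference, the microturbulent velocity $\xi$ enters the line broadening through the Doppler width in the standard way,
\begin{equation*}
\Delta\nu_D=\frac{\nu_0}{c}\sqrt{\frac{2k_BT}{m_{\rm Mg}}+\xi^2},
\end{equation*}
so even $\xi=40$~km~s$^{-1}$ mainly widens the Gaussian core of the absorption profile, while the far wings remain controlled by the damping part of the Voigt profile.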
\subsection{Varying the flux of non-thermal electrons}
We studied how the variation of the non-thermal electron flux influences the \ion{Mg}{2}~UV emission by using atmospheres from different RADYN simulations and calculating the profiles with RH. The resulting atmospheres correspond to four different simulations with a constant total non-thermal electron energy flux of $10^9$, $10^{10}$, $10^{11}$ and $10^{12}$ erg~cm$^{-2}$~s$^{-1}$ and assuming that the electron spectra follow a single power law at non-thermal energies, with a cutoff energy of $E_c=25$~keV and a spectral index $\delta=4$ for the first three simulations and $\delta=3$ for the simulation with the highest electron energy flux.
\begin{figure}[!tb]
\centering
\epsscale{1.2}
\plotone{fig11.eps}
\vspace{-0.2cm}
\caption{\ion{Mg}{2} spectra resulting from the different atmospheres of the RADYN simulations, varying the non-thermal electron flux. A very high flux (10$^{12}$ erg cm$^{-2}$~s$^{-1}$) leads to unusual profiles with a strong red component; however, they do not match the observations. None of these simulations showed single-peak profiles at any time step.}
\label{mgii_varF}
\end{figure}
Since the atmosphere evolves differently for each electron energy flux simulation, it is hard to make a direct comparison of the line profiles and assess a direct contribution of the electron flux. We tried to minimize this effect by selecting a timestep at the beginning of the simulation, when the atmosphere has not reached the ``explosive'' chromospheric evaporation described by \citet{2005ApJ...630..573A}. We selected a timestep of $t=5$~seconds after the onset of heating for the first two runs (F09 and F10); $t=6.1$~seconds in the case of F11, with very strong upflow velocities in the lower corona of almost 1200~km~s$^{-1}$ as well as downflowing velocities close to 25~km~s$^{-1}$; and $t=2.8$~seconds for F12, with downflowing velocities of 70~km~s$^{-1}$. Figure~\ref{mgii_varF} shows how much the atmospheres differ within the runs.
In general, increasing the beam strength leads to higher intensities in the \ion{Mg}{2}~UV wavelength range, especially in the continuum regime and in the subordinate line profiles. The asymmetries (higher red peaks) of the F09 and F10 runs (red and blue lines of Figure~\ref{mgii_varF}) are mostly due to the downflowing velocities (negative values) in a column mass region of $\approx -4$~g~cm$^{-2}$, where the line core is formed. The line profile resulting from the F11 run (green line) is the strongest among the four cases, this run also having the highest electron density and temperature, with an increase of more than one order of magnitude in the upper chromosphere, between column masses of $-3.4$ and $-4$~g~cm$^{-2}$. The strong, slightly blueshifted line core is due to the upflowing (positive) velocities of almost 25~km~s$^{-1}$ in the upper chromosphere. The F12 simulation (violet line) shows lower intensities in the line core and a lack of reversal at 2796.35~\AA\ due to the sudden temperature increase below a column mass of $-4$~g~cm$^{-2}$, which is decoupled from the electron density, since the electron density shows no correspondingly sudden change. The secondary emission peak at higher wavelengths (2796.8~\AA) is due to the sudden change in the velocity stratification, from upflowing to downflowing plasma motions, located at a column mass of $-4$~g~cm$^{-2}$. These changes are associated with the sudden temperature increase located at that height.
Even though we only show the line profile from a specific time step of the RADYN simulations, we have performed a study of the temporal evolution of the line profiles. We found that none of the simulations showed \ion{Mg}{2}~h~and~k as single-peaked profiles for any beam strength at any time step.
\subsection{Including particle transport and acceleration}
By considering the coupling of particle transport and stochastic acceleration using the FLARE code \citep{2002ApJ...569..459P} with the radiative transfer and hydrodynamics from the RADYN code, we obtain a spectrum formed by a quasi-thermal component and a power-law tail, as explained by \citet{2015ApJ...813..133R}. Coupling the particle transport and acceleration equations with the hydrodynamics results in stronger chromospheric evaporation as well as greater up- and downflowing velocities, when compared with the atmosphere resulting from the single power-law run.
By taking the atmosphere from the hydrodynamic simulations of \citet{2015ApJ...813..133R} after 60 seconds of heating, we studied how the \ion{Mg}{2}~h~and~k~line profiles are affected when coupling particle transport and stochastic acceleration. We find that although the intensity decreases by almost a factor of 2, the line profiles still show narrow wings and a core reversal at the line center (see~Figure~\ref{mgii_var_SA_PL}).
\begin{figure}[!tb]
\centering
\epsscale{1.25}
\plotone{fig12.eps}
\vspace{-0.2cm}
\caption{\ion{Mg}{2} spectra resulting from the PL and SA1 runs of \citet{2015ApJ...813..133R}, at the peak of the electron energy flux, after 60 seconds of heating. SA leads to lower intensities, but single-peaked profiles cannot be reproduced with either method.}
\label{mgii_var_SA_PL}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=.49\textwidth]{fig13a.eps}
\includegraphics[width=.49\textwidth]{fig13b.eps}
\includegraphics[width=.49\textwidth]{fig13c.eps}
\includegraphics[width=.49\textwidth]{fig13d.eps}
\caption{Intensity contribution function resulting from the different atmospheres. Darker regions indicate areas of strong contribution. The green line shows the $\tau_{\nu}=1$ location and the black dashed line shows the vertical velocity as a function of height. The black dotted line in the lower left panel represents the Planck function $B_{\nu}$ and the red dashed-dotted line, the source function $S_{\nu}$ calculated at the line center. The black line in the lower right panel shows the spectral line profile. Top left: $C_i$ from the original RADYN atmosphere, without any modification. Top right: $C_i$ from the modified RADYN atmosphere, after increasing the temperature (yellow atmosphere from Figure~\ref{vart}). Bottom left: $C_i$ from the modified RADYN atmosphere, after increasing the density (yellow atmosphere from Figure~\ref{vard}). Bottom right: $C_i$ from the modified RADYN atmosphere, after increasing the plasma velocity (dark blue atmosphere from Figure~\ref{combine}).}
\label{ci_images}
\end{figure*}
The line profiles resulting from the power-law run (red line) have stronger intensities in the line core, but almost the same contribution in the continuum as the stochastic acceleration run. This is because, although the temperature and electron densities are about the same order of magnitude, the atmosphere resulting from the power-law run shows slightly higher densities and temperatures at a column mass between -3.4 and -4~g~cm$^{-2}$.
The inclusion of particle transport and acceleration results in stronger upflowing plasma close to the transition region (below a column mass of -6~g~cm$^{-2}$) and stronger downflowing velocities in the upper chromosphere (at a column mass of -5~g~cm$^{-2}$). The line core is formed lower in the atmosphere, at a column mass close to -3.6~g~cm$^{-2}$; therefore the plasma motions that significantly differ between the two runs do not affect the line formation. As a result, the line profiles are symmetric, as the synthetic spectra in Figure~\ref{mgii_var_SA_PL} show.
\section{Formation of the \ion{Mg}{2}~k line profile: intensity contribution function}\label{Sect:contr_fct}
To investigate the behavior of the source function and where in the atmosphere the line is formed, we calculate the so-called contribution function. By writing the formal solution of the transfer equation for emergent intensity \citep[Equation~\ref{eq_contribution_function}; ][]{1997ApJ...481..500C}, we can investigate how the atmospheric stratification affects the formation of the line profile and where exactly the line is formed. The line emergent intensity $I_{\nu}^{0}$ can be written as:
\begin{equation}
I_{\nu}^{0} =
\frac{1}{\mu} \int_{z} S_{\nu} \chi_{\nu} e^{-\frac{\tau_{\nu}}{\mu}} dz = \frac{1}{\mu} \int_{z} C_i \,dz \,\,,
\label{eq_contribution_function}
\end{equation}
where $z$ is the atmospheric height; $\tau_{\nu}$ is the monochromatic optical depth; $\chi_{\nu}$ is the monochromatic opacity per unit volume; $S_{\nu}$ is the source function, defined as the ratio of the emissivity to the opacity of the atmosphere; and $C_i$ is the intensity contribution function, indicating how much emergent intensity originates at a certain height $z$.
The intensity contribution function, $C_i$, can be expressed as the product of $S_{\nu}$, $\frac{\chi_{\nu}}{\tau_{\nu}}$ and $\tau_{\nu}e^{-\frac{\tau_{\nu}}{\mu}}$, as the panels in Figure~\ref{ci_images} show \citep[for a more detailed explanation see][]{2013ApJ...772...90L}. Since both \ion{Mg}{2}~h and k line profiles have a similar behavior, we will focus on the \ion{Mg}{2}~k line profile in the following sections.
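For illustration, the decomposition in Equation~\ref{eq_contribution_function} can be evaluated numerically on a discretized atmosphere. The following Python sketch is not the RH implementation, but a minimal illustration assuming hypothetical input arrays for the height grid, source function and opacity at a single frequency:
\begin{verbatim}
import numpy as np

def emergent_intensity(z, S, chi, mu=1.0):
    # z: heights ordered from the top of the atmosphere downward;
    # S, chi: source function and opacity per unit volume on the
    # same grid. Returns I_nu^0 and the contribution function C_i.
    dz = np.abs(np.gradient(z))
    tau = np.cumsum(chi * dz)          # monochromatic optical depth
    C_i = S * chi * np.exp(-tau / mu)  # intensity contribution function
    return np.sum(C_i * dz) / mu, C_i
\end{verbatim}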
\subsection{Original RADYN atmosphere}\label{Sect:contr_fct_orig}
The \ion{Mg}{2}~k line profile resulting from the simulation using the original RADYN atmosphere shows a reversal at the line center (top left panel of Figure~\ref{ci_images}). The asymmetry in the blue wing of the line is associated with the asymmetry of the $\tau_{\nu}=1$ layer at different frequencies, as the term $\tau_{\nu}e^{-\frac{\tau_{\nu}}{\mu}}$ in the upper right panel indicates, since its contribution to the total intensity at each frequency has an asymmetry towards blue wavelengths. The source function $S_{\nu}$ in the lower left panel, calculated for the line center wavelength, closely follows the Planck function (dotted line) in the lower atmosphere, up to 1.14~Mm, in contrast with the sudden increase at higher layers due to the temperature stratification. The departure of the source function from the Planck function means that the line core is formed under non-LTE conditions, because the two functions respond differently to temperature.
The intensity contribution function $C_i$ is the product of the terms in the previous three panels: the main contribution comes from the upper chromosphere, between 1.06 and 1.14~Mm, since the ratio of the opacity to the optical depth ($\frac{\chi_{\nu}}{\tau_{\nu}}$) is very small at lower heights. The integration of the intensity contribution function at each frequency results in the black line profile displayed in the lower right panel. The core reversal is due to the sudden decrease of the source function, caused by the decoupling of the source and Planck functions at 1.14~Mm, where the photons of the line core are formed.
By studying the formation of the subordinate lines between the 3p$^2P$ and 3d$^2D$ states (figure not shown), we found that they are formed in the upper chromosphere, between 0.83 and 1.14~Mm, at higher layers than the values reported by \citet{2015ApJ...806...14P} for the quiet Sun.
\subsection{Atmosphere with increased temperature}\label{sectempatmos}
By increasing the temperature in the upper chromosphere, as described by the yellow atmosphere of Figure~\ref{vart}, we find that the formation height of the line center is reduced by $\sim$200~km (see top right panel of Figure~\ref{ci_images}), explaining the steeper peak at the line core. The $\tau_{\nu}=1$ layer in the line wings has the same shape as in the top left panel of Figure~\ref{ci_images}, with the exception of the plateau at wavelengths close to the line core. The redshifted line core is due to the downflowing velocities of $\sim$14~km~s$^{-1}$ present at the line core formation height (between 1.06 and 1.14~Mm). Although the ratio between the opacity and the optical depth is almost the same as in the previous atmospheric example, there is no clear emissivity above $\tau_{\nu}=1$ close to the line center. The temperature increase results in a coupling of the source function to the Planck function up to greater heights, in particular where the line core is formed (lower left panel). This indicates LTE conditions for the whole line and explains the single-peaked profile.
The asymmetry of the line profile is reflected by the frequency distribution of the intensity contribution function. There is a secondary intensity contribution function component at a height of 1.13~Mm, above $\tau_{\nu}=1$, which indicates that some contribution to the core emission is formed under optically thin conditions.
\subsection{Atmosphere with increased density} \label{Sect:contr_fct_dens}
The bottom left panel of Figure~\ref{ci_images} shows the different terms of the total intensity contribution function for the yellow atmosphere of Figure~\ref{vard}. Here the electron density increase results in a slightly stronger opacity component $\chi_{\nu}$ at a height of 1.14~Mm, in comparison with the opacity resulting from the increased atmospheric temperature of Section~\ref{sectempatmos}. The term $\chi_{\nu}/\tau_{\nu}$ is also stronger in the line center, in comparison with the original atmosphere of Section~\ref{Sect:contr_fct_orig}, because $\chi_{\nu}$ is proportional to the density of emitting particles. Therefore, $\frac{\chi_{\nu}}{\tau_{\nu}}$ is higher when there is a large number of emitters at low optical depth (i.e., the produced photons can escape), explaining the stronger emissivity component in comparison with the top right panel of Figure~\ref{ci_images}.
The main difference in comparison with the temperature increase is that the coupling between the source function and the Planck function is located at slightly greater heights, as is the formation height ($\tau_{\nu}=1$) of the line core. By increasing the electron density, the source function $S_{\nu}$ keeps increasing with the temperature at greater heights, although it is decoupled from the Planck function $B_{\nu}$ there. As in Section~\ref{sectempatmos}, the source function is still coupled to the Planck function at the core formation height, indicating LTE conditions. The line core shows a flatter profile because the intensity contribution function has a more symmetric frequency distribution, with a fainter blue wing asymmetry.
\subsection{Atmosphere with increased velocities} \label{Sect:contr_fct_vel}
Previous simulations have shown that asymmetries in chromospheric line profiles during flares are strongly affected by the velocity field in the flaring atmosphere \citep{2015ApJ...813..125K}. The bottom right panel of Figure~\ref{ci_images} shows that by increasing the downflowing plasma velocity to about 200~km~s$^{-1}$, as in the blue atmosphere of Figure~\ref{combine}, the line profile is formed in a broader region, in comparison with the original atmosphere of the top left panel of Figure~\ref{ci_images}. The intensity contribution function also covers a much broader frequency range, broadening the wings of the line profile (note the different scaling of the x-axis). The source function behaves similarly to the Planck function, increasing with temperature in the lower atmosphere and departing from it above 1.12~Mm, as in the original atmosphere. At the core formation height, just above 1.12~Mm, the source function starts to decouple from the Planck function, although both still increase with temperature, resulting in a single core in emission. Due to the increased downflowing plasma motions, the line shows an asymmetric profile, especially towards redder wavelengths.
By increasing the plasma velocity, the formation height at which $\tau_{\nu}=1$ (green line) is no longer flat near the line core, as in the previous cases, but increases linearly within the wavelength range $-0.2$ to $1.6$~\AA.
For other variations of the velocity, the Planck and source functions can either remain coupled (e.g. red line in Figure~\ref{varv}), or the source function can already be decoupled and decreasing with temperature (e.g. red line in Figure~\ref{combine}). Yet we do not observe a reversal in any of these cases because, even with a decreasing source function, the large velocities fill in the intensity where a reversal is usually observed.
It is also worth noting the secondary contribution coming from heights close to 1.2~Mm (not shown), above the $\tau_{\nu}=1$ curve, indicating that there is a contribution to the intensity in the red wing (at wavelengths between 0.6 and 2.1~\AA) formed under optically-thin conditions.
\section{Summary and Discussion}\label{sect:conclusions}
\subsection{Options to simulate flare-like Mg profiles}
Our parameter study has shown several ways to obtain more flare-like \ion{Mg}{2}~h~and~k profiles, with a single peak in emission, in simulations. In Section~\ref{Sect:vart} we have seen that by increasing the temperature at the line core formation height, we not only obtain a single peak in emission, but also decrease the peak intensity. This is favorable, considering that synthetic line profiles tend to show higher intensities than {\it IRIS} observations \citep{2016ApJ...827...38R, 2017ApJ...836...12K}. This modification also results in symmetric line profiles, similar to the typical observed flare ribbon profiles. While the line ratio between the \ion{Mg}{2}~k and \ion{Mg}{2}~h lines looks reasonable for most of our simulations, the ratio between them and the subordinate triplet lines is too large for the atmosphere with a modified temperature (cf. Figure~\ref{vart}). This may indicate that the temperature or its gradient in the lower chromosphere may also need to be modified. However, this can be tricky, as it might affect other lines, such as H\textalpha\ or \ion{Ca}{2}~8542~\AA, which already show a good agreement with observations (see Figure~12 in \citet{2015ApJ...804...56R} or Figure~11 in \citet{2016ApJ...827...38R}; for an easier comparison, we marked the locations in Figure~\ref{examplemg}).
The increase of the electron density results in a line core in emission as well, but as can be seen in Figure~\ref{vard}, the intensity of the line increases significantly, to values not reached by observations. The line ratio of \ion{Mg}{2}~k to subordinate lines is closer to the observations for the electron density variation than for the temperature variation.
In the case of increased downflowing velocities, the line core may show a single peak in emission as well. The formation height at which the optical depth $\tau_{\nu}=1$ increases linearly near the center of the line, peaking at 1.15~Mm and 1.6~\AA. At this height, $S_{\nu}$ and $B_{\nu}$ are already decoupled. The source function at the line core formation heights for the velocity variations can either be coupled to or decoupled from the Planck function, depending on the velocity stratification.
The intensity contribution function $C_i$ shows a secondary component in emission, formed at heights above $\tau_{\nu}=1$ and therefore under optically-thin conditions. However, the resulting line profile is highly asymmetric, contrary to standard flare profiles. Only by combining profiles with different velocity structures can one reproduce the different {\it IRIS} profiles, indicating that the {\it IRIS} profiles may be unresolved. By combining downflow velocities starting at different heights, the shape of the short-lived {\it IRIS} profiles from HXR emission sites can be reproduced. The single peak and broad wings of typical {\it IRIS} flare profiles can be reproduced by combining up- and downflows. The unresolved components could also be explained by a combination of several threads overlaying in space \citep{2006ApJ...637..522W, 2016ApJ...827...38R, 2016ApJ...827..145R}. The subordinate lines are comparably low in this case, and would require further modifications in the lower chromosphere, as discussed above.
\begin{figure}[!tb]
\centering
\includegraphics[width=.45\textwidth]{fig14.eps}
\vspace{-0.4cm}
\caption{Planck and source functions resulting from the three different atmospheres discussed in Section~\ref{Sect:contr_fct}. The vertical dashed line marks the formation height of the \ion{Mg}{2}~k line core for each atmosphere.}
\label{src_fct}
\end{figure}
By increasing the temperature or density (Sections~\ref{Sect:vart} and \ref{Sect:vard}; purple and blue lines in Figure~\ref{src_fct}), the line core is formed under LTE conditions, while the velocity variations may result in a decoupling of the source function from the Planck function, and therefore in non-LTE conditions (see green line in Figure~\ref{src_fct}). In these cases we still obtain single-peak profiles due to the large asymmetries of the line. It is possible that these conditions occur during flares, possibly through a combination of all the parameters discussed above.
\subsection{Options that fail to simulate flare-like Mg profiles}
Neither the increase of the chromospheric temperature (Figure~\ref{truns}) nor the shift of the transition region towards lower heights (Figure~\ref{movetr}) results in realistic line profiles. Either the line core is still reversed, or the subordinate lines have stronger intensities than the \ion{Mg}{2}~h~and~k lines, in discrepancy with {\it IRIS} flare observations.
As discussed in Section~\ref{Sect:vturb}, we also discarded a higher non-thermal broadening as an explanation for the broad observed line wings. Although the Van der Waals broadening seems to be the main contribution to the line wings in our calculations, this term alone does not explain the widely broadened observed line wings. Although the RH code considers quadratic Stark broadening in the calculation, the adiabatic approximation used in RH for the quadratic Stark effect for the Mg atom underestimates the broadening \citep{1978stat.book.....M}; we should therefore keep in mind that the implementation of the quadratic Stark effect may not be sufficiently accurate. Very wide \ion{Mg}{2} line profiles have also been observed in stellar flares \citep[e.g. on YZ CMi, observed by][]{2007PASP..119...67H}. \citet{2007PASP..119...67H} discussed non-thermal beam excitations as the reason for the broadening of the \ion{Mg}{2} wings. They argue that unresolved flows with high velocities ($\sim$~200~km~s$^{-1}$), which would result in high kinetic energies that may exceed the radiated energy in the flare, are a possible explanation of the broad wings. Further studies with higher non-thermal electron fluxes will clarify this point.
A larger opacity in the photosphere would result in broader wings, and this could be accomplished by increasing the temperature or the density, but it would probably affect other spectral lines whose fit is currently reasonably good. 3D effects, as discussed by \citet{2013ApJ...772...89L}, may contribute significantly to the formation of the \ion{Mg}{2}~h~and~k line emission, especially in the line core, but are currently not computationally feasible.
The importance of the advection term in the statistical equilibrium equation for \ion{Mg}{2} is still unclear. Cool flows flowing into hot plasma, or hot flows into cool plasma, could result in non-equilibrium ionization and excitation.
We would like to emphasize that this is purely a parameter study and we do not claim that the density or temperature must increase by an order of magnitude in flares. We do not have a physical explanation yet as to what is occurring in a flaring atmosphere. Nevertheless, we can conclude that a critical piece is missing in current hydrodynamic simulations, possibly more particles that may increase densities, more heat dissipation, or unresolved and stronger velocities. For future work, we plan to simultaneously compare many spectral lines, which will better constrain the atmospheric parameters at different heights and will potentially allow us to better constrain the conditions in flaring atmospheres.
\acknowledgments
We would like to thank P. Judge, P. Heinzel, W. Liu, R. Rutten, G. Kerr, M. Carlsson, T. Pereira, and the anonymous referee for their helpful discussions. Work performed by F.R.dC. is supported by NASA grants NNX13AF79G, NNX14AG03G, 8100003073 and NNX17AC99G. L.K. was partially supported by NASA grant NNX13AI63G. F.R.dC. and L.K. thank ISSI and ISSI-BJ for the support of the team ``Diagnosing heating mechanisms in solar flares through spectroscopic observations of flare ribbons''. We gratefully acknowledge the use of supercomputer resources provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center.
{\it IRIS} is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), and the University of Cambridge (UK).
\bibliographystyle{apj}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{figures/teaser}
\caption{\textbf{Our few-shot video classification setting}. %
Pairs of semantically matched frames are connected with a blue dashed line. The arrows show the direction of the temporal alignment path.}
\label{fig:teaser}
\end{figure}
The emergence of deep learning has greatly advanced the frontiers of action recognition \cite{wang2016temporal,carreira2017quo}. The main focus tends to center on learning effective video representations for classification using large amounts of labeled data. In order to recognize novel classes that a pretrained network has not seen before, we typically need to manually collect hundreds of video samples for knowledge transfer. Such a procedure is rather tedious and labor intensive, especially for videos, where the difficulty and cost of labeling are much higher than for images.
There is a growing interest in learning models capable of effectively adapting themselves to recognize novel classes with only a few labeled examples. This is known as the few-shot learning task \cite{garcia2017few,chen19closerfewshot}. Under the setup of meta-learning based few-shot learning, the model is explicitly trained to deal with scarce training data for previously unseen classes across different episodes. While the majority of recent few-shot learning works focus on image classification, adapting them to video data is not a trivial extension.
Videos are much more complicated than images, as recognizing some specific actions, such as opening a door, usually requires complete modeling of temporal information. In the previous literature on video classification, 3D convolution and optical flow are two of the most popular methods to model temporal relations. The direct output of neural network encoders is always a temporal sequence of deeply encoded features.
State-of-the-art approaches commonly apply a temporal pooling module (usually mean pooling) in order to make the final prediction. As observed before, averaging the deep features captures only the co-occurrence rather than the temporal ordering of patterns, which inevitably results in information loss.
Loss of information is even more severe for few-shot learning.
It is hard to learn the local temporal patterns that are useful for few-shot classification with a limited amount of data. Utilizing long-term temporal ordering information, which is often neglected in previous works on video classification, might help with few-shot learning. For example, if the model could verify that there is a procedure of pouring water before a close-up view of freshly made tea, as shown in Fig.~\ref{fig:teaser}, it would become quite confident about predicting the class of this query video to be making tea, rather than some other potential prediction such as boiling water or serving tea.
In addition, Fig.~\ref{fig:teaser} also shows that for two videos in the same class, even though they both contain a procedure of pouring water followed by a close-up view of tea, the exact temporal duration of each atomic step can vary dramatically. These non-linear temporal variations of videos pose great challenges for few-shot video learning.
With these insights, we propose the Temporal Alignment Module (TAM) for few-shot video classification, a novel temporal-alignment based approach that learns to estimate the temporal alignment score of a query video with corresponding proxies in the support set. To be specific, we compute the temporal alignment score for each potential query-support pair by averaging per-frame distances along a temporal alignment path, which enforces that the score we use to make predictions preserves temporal ordering. Furthermore, TAM is fully differentiable, so the model can be trained end-to-end and optimize the few-shot objective directly. This in turn helps the model to better utilize long-term temporal information when making few-shot predictions. This module allows us to better model the temporal evolution of videos, while enabling stronger data efficiency in the few-shot setting.
We evaluate our model on the few-shot video classification task on two action recognition datasets: Kinetics \cite{kay2017kinetics} and Something-Something V2 \cite{goyal2017something}. We show that when there is only a single example available, our method outperforms the mean pooling baseline, which does not consider temporal ordering information, by approximately 8\% in top-1 accuracy. We also show qualitatively that the proposed framework is able to learn meaningful alignment paths in an end-to-end manner.
In summary, our main contributions are: (i) We are the first to explicitly address the non-linear temporal variations issue in the few-shot video classification setting. (ii) We propose Temporal Alignment Module (TAM), a data-efficient few-shot learning framework that can dynamically align two video sequences while preserving the temporal ordering, which is often neglected in previous works. (iii) We use continuous relaxation to make our model fully differentiable and show that it outperforms previous state-of-the-art methods by a large margin on two challenging datasets.
\section{Related Work}
\noindent \textbf{Few-Shot Learning.}
To address few-shot learning, a direct approach is to train a model on the training set and fine-tune it with the few data available in the novel classes. Since the data in novel classes are not enough to fine-tune the model with general learning techniques, methods have been proposed to learn a good initialization model \cite{finn2017model,nichol2018reptile,rusu2018meta} or develop a novel optimizer \cite{ravi2016optimization,munkhdalai2017meta}. These works aim to relieve the difficulty of fine-tuning the model with limited samples. However, such methods suffer from overfitting when the training data in novel classes are scarce but the variance is large. Another branch of works, which learns a common metric for both seen and novel classes, can avoid overfitting to some extent. Convolutional Siamese Net \cite{koch2015siamese} trains a Siamese network to compare two samples. Matching Networks \cite{vinyals2016matching} employ an attention kernel to measure the distance. Prototypical Network \cite{snell2017prototypical} utilizes the Euclidean distance to the class center. Graph Neural Networks \cite{garcia2017few} construct a weighted graph to represent all the data and measure the similarity between samples. Other methods use data augmentation, learning to augment labeled data in unseen classes for supervised training \cite{hariharan2017low,wang2018low}. However, video generation is still an under-explored problem, at least for generating videos conditioned on a specific category. Thus, in this paper, we employ the metric learning approach and design a temporally-aligned video metric for few-shot video classification.
There are also works exploring few-shot video recognition. OSS-Metric Learning \cite{kliper2011one} proposes a novel OSS-Metric to measure the similarity of video pairs and enable one-shot video classification. \cite{mishra2018generative} introduces a zero-shot method that learns a mapping function from an attribute to a class center, and extends it to few-shot learning by integrating labeled data on unseen classes. CMN \cite{zhu2018compound} is the most related work to ours. They introduce a multi-saliency embedding algorithm to encode videos into a fixed-size matrix representation, and then propose a compound memory network (CMN) to compress and store the representation and classify videos by matching and ranking. However, previous works collapse the order of frames in their representations \cite{kliper2011one,mishra2018generative,zhu2018compound}. Thus, the learned model is sub-optimal for video datasets where sequence order is important. In this paper, we preserve the frame order in the video representation and estimate distance with temporal alignment, which utilizes the sequence order of videos to solve few-shot video classification.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/arch.pdf}
\caption{\textbf{Overview of our method}. We first extract per-frame deep features using the embedding network. We then compute the distance matrices between the query video and videos in the support set. Next, an alignment score is computed from the matrix representation. Finally, we apply a softmax operator over the alignment scores of the novel classes.}
\label{fig:pipeline}
\end{figure*}
\noindent \textbf{Video Classification.}
A significant amount of research has tackled the problem of video classification. State-of-the-art video classification methods have evolved from hand-crafted representation learning \cite{3dhog, sift3d, idt} to deep-learning based models. C3D \cite{tran2015learning} utilizes 3D spatial-temporal convolutional filters to extract deep features from sequences of RGB frames. TSN \cite{wang2016temporal} and I3D \cite{carreira2017quo} use larger two-stream 2D or 3D CNNs on both RGB and optical flow sequences. By factorizing 3D convolutional filters into separate spatial and temporal components, P3D \cite{p3d} and R(2+1)D \cite{r2+1d} yield models with comparable or superior classification accuracy but smaller size. An issue of these video representation learning methods is their dependence on large-scale video datasets for training. Models with an excessive number of learnable parameters tend to fail when only a small number of training samples are available.
Another concern of video representation learning is the lack of temporal relational reasoning. Classification of videos sensitive to temporal ordering poses a more significant challenge to the above networks, which are tailored to capture short-term temporal features. Non-local neural networks \cite{wang2018non} introduce self-attention to aggregate temporal information in the long term. Wang \etal \cite{wang2018videos} further employ space-time region graphs to model spatial-temporal reasoning. Recently, TRN \cite{zhou2018temporal} proposed a temporal relation module to achieve superior performance. Still, these networks inevitably pool/fuse features from different frames in the last layers to extract a single feature vector representing the whole video. In contrast, our model learns video representations without loss of temporal ordering in order to generate more accurate final predictions.
\noindent \textbf{Sequence Alignment.}
Sequence alignment is of great importance in the field of bioinformatics, which describes the way of arrangement of DNA/RNA or protein sequences, in order to identify the regions of similarity among them~\cite{altschul1997gapped}. In the vision community, researchers have growing interests in tackling the sequence alignment problem with high dimensional multi-modal data, such as finding the alignment between untrimmed video sequence and the corresponding textual action sequence~\cite{chang2019d, dogan2018neural,richard2018neuralnetwork}. The main technique that has been applied to this line of work is dynamic programming. While dynamic programming is guaranteed to find the optimal alignment between two sequences given a prescribed distance function, the discrete operations used in dynamic programming are non-differentiable and hence prevent learning distance functions with gradient-based methods. Our work is closely related to recent progress on using continuous relaxation of discrete operations to tackle sequence alignment problem~\cite{chang2019d} and hence allow us to train our entire model end-to-end.
\section{Methods}
Our goal is to learn a model which can classify novel classes of videos with only a few labeled examples. The wide range of intra-class spatial-temporal variations of videos poses great challenges for few-shot video classifications. We address this challenge by proposing a few-shot learning framework with Temporal Alignment Module (TAM), which is to our best knowledge the first model that can explicitly learn a distance measure independent of non-linear temporal variations in videos. The use of TAM sets our approach apart from previous works that fail to preserve temporal ordering and relation during meta training and meta testing. Fig.\ref{fig:pipeline} shows the outline of our model.
In the following, we will first provide a problem formulation of few-shot video classification task, and then define our model and show how it can be used at training and test time.
\subsection{Problem Formulation}
In the few-shot video classification setting, we split the classes for which we have annotations into $\mathcal{C}_{train}$: the base classes that have sufficient data for representation learning, and $\mathcal{C}_{test}$: the novel or unseen classes that have only a few labeled samples during the testing stage. The goal of few-shot learning is then to train a network that can generalize well to new episodes over novel classes. In an $n$-way, $k$-shot problem, for each episode the support set contains $n$ novel classes, and each class has a very small number of samples ($k$ in our setting). The algorithm has to classify videos from the query set into one of the $n$ novel classes in the support set. Episodes are randomly drawn from a larger collection of data, which we hereby denote as a meta set. In our setting, we introduce three splits over classes: the meta-training set $\mathcal{T}_{train}$, the meta-validation set $\mathcal{T}_{val}$ and the meta-testing set $\mathcal{T}_{test}$.
We formulate few-shot learning as a representation learning problem through a distance function $\phi(f_{\varphi}(x_1), f_{\varphi}(x_2))$, where $x_1$ and $x_2$ are two samples drawn from $\mathcal{C}_{train}$ and $f_{\varphi}(\cdot)$ is an embedding function that maps samples to their representations. The difference between our problem formulation and the majority of previous few-shot learning research lies in the fact that we are now dealing with higher dimensional inputs, i.e. (2+1)D volumes instead of 2D images. The addition of the time dimension in the few-shot setting demands that the model be able to learn temporal ordering and relations with limited data in order to generalize to novel classes, a challenge that has not been properly addressed by previous works.
\subsection{Model}
With the above problem formulation, our goal is to learn a video distance function by minimizing the few-shot learning objective. Our key insight is that we want to explicitly learn a distance function independent of non-linear temporal variations by aligning the frames of two videos. Unlike previous works which use weighted average or mean pooling along the time dimension~\cite{wang2016temporal,tran2015learning,wang2018non,xie2018rethinking,carreira2017quo,zhu2018compound}, our model is able to infer temporal ordering and relationship during meta training or meta testing in an explicit and data efficient manner. In this subsection, we will breakdown our model following the pipeline as illustrated in Fig.~\ref{fig:pipeline}.
\noindent
\textbf{Embedding Module:} The purpose of the embedding module $f_{\varphi}$ is to generate a compact representation of a trimmed action video that encapsulates its visual content. A raw video usually consists of hundreds of frames, whose information could be redundant were to perform per frame inference. Thus frame sampling is usually adopted as a preproccessing stage for video inputs. Existing frame sampling schemes can be mainly divided into two categories: dense sampling (randomly cropping out
$T$ consecutive frames from the original full-length video and
then dropping every other frame) \cite{tran2015learning, carreira2017quo, wang2018non, xie2018rethinking} and sparse sampling (samples distribute uniformly along the temporal dimension) \cite{wang2016temporal, zolfaghari2018eco, zhou2018temporal, lin2018temporal}. We follow the sparse sampling protocol first described in TSN \cite{wang2016temporal}, which divides the video sequence into $T$ segments and extracts a short snippets in each segment. The sparse sampling scheme allows each video sequence to be represented by a fix number of snippets. The sampled snippets span the whole video, which enables long-term temporal modeling.
Given an input sequence $S = \{ X_1, X_2, ..., X_T \}$, we apply a CNN backbone network $f_{\varphi}$ to process each of the snippets individually. After the encoding, the raw video snippets turn into a sequence of feature vectors $f_{\varphi}(S) = \{f_{\varphi}(X_1), f_{\varphi}(X_2), ..., f_{\varphi}(X_T)\}$. It is worth noting that the embedding of each video $f_{\varphi}(S)$ has dimension $T \times D_f$, rather than the $D_f$ of an image embedding, which is usually chosen as the activation before the final fully-connected layer of a CNN.
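As a sketch of the sparse sampling scheme described above (assuming a random offset within each segment during training, as in TSN, and the segment center at test time):
\begin{verbatim}
import numpy as np

def sparse_sample_indices(num_frames, T=8, training=True):
    # Split the video into T equal segments and pick one frame
    # per segment: a random offset during training, the segment
    # center during inference.
    seg_len = num_frames / T
    if training:
        offsets = np.random.uniform(0, seg_len, size=T)
    else:
        offsets = np.full(T, seg_len / 2.0)
    return (np.arange(T) * seg_len + offsets).astype(int)
\end{verbatim}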
\noindent
\textbf{Distance Measure with Temporal Alignment Module (TAM):}
Given two videos $S_i, S_j$ and their embedded features $f_{\varphi}(S_i), f_{\varphi}(S_j) \in \mathbb{R}^{T \times D}$, we can calculate the frame-level distance matrix $D \in \mathbb{R}^{T \times T}$ as
\begin{align}
D_{l,m} = 1 - \frac{f_{\varphi}(S_i)_{l,} \cdot f_{\varphi}(S_j)_{m,}}{\lvert\lvert f_{\varphi}(S_i)_{l,}\rvert \rvert \; \lvert\lvert f_{\varphi}(S_j)_{m,}\rvert \rvert},
\label{eq:similarity}
\end{align}
where $D_{l,m}$ is the frame-level distance value between the $l$th frame of video $S_i$ and the $m$th frame of video $S_j$.
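As a minimal PyTorch-style sketch of Eq.~\eqref{eq:similarity} (assuming the per-frame features of both videos have already been extracted):
\begin{verbatim}
import torch
import torch.nn.functional as F

def frame_distance_matrix(feat_q, feat_s, eps=1e-8):
    # feat_q, feat_s: [T, D] per-frame embeddings of two videos.
    # Returns D with D[l, m] = 1 - cosine(feat_q[l], feat_s[m]).
    q = F.normalize(feat_q, dim=1, eps=eps)  # unit-norm rows
    s = F.normalize(feat_s, dim=1, eps=eps)
    return 1.0 - q @ s.t()                   # [T, T] distance matrix
\end{verbatim}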
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{figures/DTW.png}
\caption{\textbf{Methods for calculating alignment score}. Each subplot shows a distance matrix. The darker the color of an entry, the smaller the distance between the corresponding pair of frames. The entries with green borders denote the entries contributing to the final alignment score.}
\label{fig:DTW}
\end{figure*}
We further define $\mathcal{W} \subset \{0,1\}^{T\times T}$ to be the set of possible binary alignment matrices, where $\forall W \in \mathcal{W}$, $W_{ij} = 1$ if the $i$th frame of video $S_i$ is aligned to the $j$th frame of video $S_j$. Our goal is to find the best alignment $W^* \in \mathcal{W}$
\begin{align}
W^{*} = \underset{W \in \mathcal{W}}{\mathrm{argmin}} \langle W, D(f_{\varphi}(S_i),f_{\varphi}(S_j))\rangle,
\label{eqn:dtw_objective}
\end{align}
which minimizes the inner product between the alignment matrix $W$ and the frame-level distance matrix $D$ defined in Eq.~\eqref{eq:similarity}. The video distance measure is thus given by
\begin{align}
\phi(f_{\varphi}(S_i), f_{\varphi}(S_j)) = \langle W^*, D \rangle.
\label{eq:video_score}
\end{align}
We propose to use a variant of Dynamic Time Warping (DTW) algorithm~\cite{muller2007dynamic} to solve Eq.~\eqref{eqn:dtw_objective}. This is achieved by solving for a cumulative distance function
{\small \begin{align}
\gamma(i,j) = D_{ij} + \min \{ \gamma(i-1, j-1), \gamma(i-1,j), \gamma(i,j-1) \}.
\end{align}}
In this setting of plain DTW, an alignment path is a contiguous set of matrix elements which defines a mapping between two sequences that satisfies the following conditions: boundary conditions, continuity and monotonicity. The boundary condition poses constraints on the alignment matrix $W$ such that $W_{11} = 1$ and $W_{TT} = 1$ must hold in all possible alignment paths.
In our alignment formulation, although the videos are trimmed, the action in the query video does not have to match the start and end of the action in the proxy exactly. For example, consider the action of making coffee: there might be a snippet of stirring coffee at the end of the action, or there might not. To address this issue, we propose to relax the boundary condition. Instead of having a path aligning the two videos from start to end, we allow the algorithm to find a path with flexible starting and ending points, while maintaining continuity and monotonicity. To achieve this, we pad two columns of zeros at the start and end of the distance matrix, which enables the alignment process to start and end at an arbitrary position. So for our method, instead of computing the alignment score on a $T\times T$ matrix, we work with the padded matrix of size $T\times(T+2)$. For simplicity, we denote the indices of the first dimension as $1, 2, ..., T$, and the indices of the second dimension as $0, 1, 2, ..., T, T+1$. The cumulative distance function then becomes
{\small \begin{align}
& \gamma(i, j) = \nonumber \\
& D_{ij} +
\begin{cases}
\min \{ \gamma(i-1, j-1), \gamma(i-1,j), \gamma(i,j-1) \}, \\
\qquad \qquad \qquad \qquad \qquad j = 0 \text{ or } j = T + 1 \\
\min \{ \gamma(i-1, j-1), \gamma(i,j-1) \}, \qquad \text{otherwise}
\end{cases}
\label{cumul distance}
\end{align}}
Note that if we follow Eq.~\eqref{cumul distance} to compute the alignment score, the score is naturally normalized. Since at each step, except in the padding columns $j=0$ and $j=T+1$, the recursion forces a move from $\gamma(\cdot,j-1)$ to $\gamma(\cdot,j)$, the final alignment score is a summation of exactly $T$ frame distances. In this light, alignment scores computed from different pairs of query and support videos are normalized, which means that the scale is not affected by the path chosen.
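To make the recursion concrete, the following Python sketch implements Eq.~\eqref{cumul distance} on the zero-padded distance matrix with the hard minimum operator (the differentiable relaxation is introduced next). This is a minimal reference implementation under our stated conventions, not the training code itself:
\begin{verbatim}
import numpy as np

def alignment_score(D):
    # D: [T, T] frame-level distance matrix between two videos.
    T = D.shape[0]
    Dp = np.zeros((T, T + 2))
    Dp[:, 1:T + 1] = D                   # zero padding at j = 0, T+1
    G = np.full((T + 1, T + 2), np.inf)  # cumulative gamma(i, j)
    G[0, 0] = 0.0                        # virtual start before frame 1
    for i in range(1, T + 1):            # frames of the first video
        for j in range(T + 2):           # padded second video
            if j == 0 or j == T + 1:     # padding: vertical moves allowed
                cands = [G[i - 1, j - 1] if j > 0 else np.inf,
                         G[i - 1, j],
                         G[i, j - 1] if j > 0 else np.inf]
            else:                        # interior: diagonal or horizontal
                cands = [G[i - 1, j - 1], G[i, j - 1]]
            G[i, j] = Dp[i - 1, j] + min(cands)
    return G[T, T + 1]                   # gamma(T, T+1)
\end{verbatim}
Each of the $T$ interior columns contributes exactly one entry of $D$ to the returned score, matching the normalization argument above.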
\noindent
\textbf{Differentiable TAM with Continuous Relaxation:} While the above formulation is straightforward, the key technical challenge is that $\gamma$ is not differentiable with respect to the distance function $D$. Following the recent works on continuous relaxation of discrete operation and its application in video temporal segmentation~\cite{chang2019d, mensch2018differentiable}, we introduce a continuous relaxation to our Temporal Alignment Module (TAM). We use log-sum-exp with a smoothing parameter $\lambda > 0$ to approximate the non-differentiable minimum operator in Eq.~\eqref{cumul distance}
\begin{align}
\min (x_1, x_2, ..., x_n) \approx - \lambda \log \sum_{i=1}^n e^{-x_i/\lambda} \text{ if } \lambda \rightarrow 0.
\label{eq:relaxation}
\end{align}
While the use of continuous relaxation in Eq.~\eqref{eq:relaxation} does not convexify the objective function, it helps the optimization process and allows gradients to be backpropagated through TAM.
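As an illustration of Eq.~\eqref{eq:relaxation}, a smoothed minimum can be written as below; substituting it for the hard \texttt{min} in the sketch above (e.g. \texttt{soft\_min(cands, lam)} in place of \texttt{min(cands)}) makes the alignment score differentiable with respect to the distance matrix:
\begin{verbatim}
import numpy as np

def soft_min(values, lam=0.1):
    # Smoothed minimum: -lam * log(sum(exp(-x / lam))).
    # Approaches min(values) as lam -> 0; shifting by the true
    # minimum keeps the exponentials numerically stable.
    x = np.asarray(values, dtype=float)
    m = np.min(x)
    return m - lam * np.log(np.sum(np.exp(-(x - m) / lam)))
\end{verbatim}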
\noindent
\textbf{Training and Inference:}
We have shown how to compute the cumulative distance function $\gamma$ and use continuous relaxation to make it differentiable given a pair of input videos $(S_i, S_j)$. The video distance measure is given by
\begin{align}
\phi(f_{\varphi}(S_i), f_{\varphi}(S_j)) = \gamma(T, T+1).
\end{align}
In training time, given ground-truth video pair $(S, \hat{S})$ and support set $\mathcal{S}$, we train our entire model end-to-end by directly minimizing the loss function
\begin{align}
& \mathcal{L} = - \log \frac{\exp(-\phi(f_{\varphi}(S), f_{\varphi}(\hat{S})))}{\sum_{Z \in \mathcal{S}} \exp(-\phi(f_{\varphi}(S), f_{\varphi}(Z)))}.
\label{eq:loss}
\end{align}
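As a minimal PyTorch-style sketch of Eq.~\eqref{eq:loss} (assuming \texttt{dists} holds the alignment distances $\phi$ between the query video and each video in the support set, and \texttt{target\_idx} is the index of the ground-truth match $\hat{S}$):
\begin{verbatim}
import torch
import torch.nn.functional as F

def episode_loss(dists, target_idx):
    # dists: 1-D tensor of alignment distances phi(query, Z) for
    # every Z in the support set. Softmax over negative distances
    # followed by cross-entropy reproduces the loss above.
    logits = -dists.unsqueeze(0)                        # [1, |support|]
    target = torch.tensor([target_idx], device=dists.device)
    return F.cross_entropy(logits, target)
\end{verbatim}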
At test time we are given an unseen query video $Q$ and its support set $\mathcal{S}$, our goal is to find the video $S^* \in \mathcal{S}$ that minimize the video distance function
\begin{align}
S^* = \underset{S \in \mathcal{S}}{\mathrm{argmin}} \; \phi(Q, S).
\end{align}
\section{Experiments}
In this work, our task is few-shot video classification, where the objective is to classify novel classes with only a few examples from the support set. We divide our experiments into the following sections.
\subsection{Datasets}
As pointed out by \cite{xie2018rethinking,zhou2018temporal}, existing action recognition datasets can be roughly classified into two groups: Youtube type videos: UCF101 \cite{soomro2012ucf101}, Sports 1M \cite{karpathy2014large}, Kinetics \cite{kay2017kinetics}, and crowd-sourced videos: Jester\cite{jester}, Charades \cite{sigurdsson2016hollywood}, Something-Something V1\&V2 \cite{goyal2017something}, in which the videos are collected by asking the crowd-source workers to record themselves performing instructed activities. Crowd-sourced videos usually focus more on modeling the temporal relationships, since visual contents among different classes are more similar than those of Youtube type videos. To demonstrate the effectiveness of our approach on these two groups of video data, we base our few-shot evaluation on two action recognition datasets, Kinetics \cite{kay2017kinetics} and Something-Something V2 \cite{goyal2017something}.
Kinetics \cite{kay2017kinetics} and Something-Something V2 \cite{goyal2017something} were constructed as standard action recognition datasets, so we build few-shot versions of them. For the Kinetics dataset, we follow the same split as CMN \cite{zhu2018compound} and sample 64 classes for meta training, 12 classes for validation and 24 classes for meta testing. Since there is no existing split for few-shot classification on Something-Something V2, we construct a few-shot dataset following the same rule as CMN \cite{zhu2018compound}. We randomly selected 100 classes from the whole dataset. The 100 classes are then split into 64, 12 and 24 classes as the meta-training, meta-validation and meta-testing sets, respectively.
\subsection{Implementation Details}
For an $n$-way, $k$-shot test setting, we randomly sample $n$ classes, with each class containing $k$ examples, as the support set. We construct the query set to have $n$ examples, where each unlabeled sample in the query set belongs to one of the $n$ classes in the support set. Thus each episode has a total of $n(k+1)$ examples. We report the mean accuracy over 10,000 randomly sampled episodes in the following experiments.
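For clarity, an episode under this protocol can be sampled as in the following sketch, where \texttt{videos\_by\_class} is a hypothetical mapping from a class label to its list of videos (each class is assumed to contain at least $k+1$ videos):
\begin{verbatim}
import random

def sample_episode(videos_by_class, n_way=5, k_shot=1):
    # Draw one n-way, k-shot episode: k support videos and one
    # query video per class, n * (k + 1) videos in total.
    classes = random.sample(list(videos_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        vids = random.sample(videos_by_class[cls], k_shot + 1)
        support += [(v, label) for v in vids[:k_shot]]
        query.append((vids[k_shot], label))
    return support, query
\end{verbatim}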
We follow the video preprocessing procedure introduced in TSN \cite{wang2016temporal}. During training we first resize each frame in the video to $256 \times 256$ and then randomly crop a $224 \times 224$ region from the video clip. For inference we replace the random crop with a center crop. For the Kinetics dataset we randomly apply horizontal flips during training. Since the labels in the Something-Something V2 dataset incorporate a notion of left and right, e.g. pulling something from left to right and pulling something from right to left, we do not use horizontal flips for this dataset.
Following the experiment setting of CMN, we use ResNet-50 \cite{he2016deep} as the backbone network for TSN. We initialize the network with models pre-trained on ImageNet \cite{deng2009imagenet}. We optimize our model with SGD \cite{bottou2010large}, with a starting learning rate of $0.001$, decayed by a factor of $0.1$ every 30 epochs. We use the meta-validation set to tune the parameters, and stop the training process when the accuracy on the meta-validation set is about to decrease. We implemented the whole framework in PyTorch \cite{paszke2017automatic}. Training the whole model takes 10 hours on 4 TITAN Xp GPUs.
\subsection{Evaluating Few-Shot Learning}
\begin{table}[bt]
\small
\caption{\textbf{Few-shot video classification results.} We report 5-way video classification accuracy on meta-testing set.}
\vspace{-10pt}
\begin{center}
\begin{tabular}{c|cc|cc}
\hline
& \multicolumn{2}{c|}{Kinetics} & \multicolumn{2}{c}{Something V2} \\ \hline
Method & \multicolumn{1}{c|}{1-shot} & 5-shot & \multicolumn{1}{c|}{1-shot} & 5-shot \\ \hline
Matching Net \cite{zhu2018compound} & 53.3 & 74.6 & - & - \\
MAML \cite{zhu2018compound} & 54.2 & 75.3 & - & - \\
CMN \cite{zhu2018compound} & 60.5 & 78.9 & - & - \\
TSN++ & 64.5 & 77.9 & 33.6 & 43.0 \\
CMN++ & 65.4 & 78.8 & 34.4 & 43.8 \\
TRN++ & 68.4 & 82.0 & 38.6 & 48.9 \\
TAM (ours) & \textbf{73.0} & \textbf{85.8} & \textbf{42.8} & \textbf{52.3} \\
\hline
\end{tabular}
\end{center}
\label{main_table}
\end{table}
We now evaluate the representations we learned after optimizing few-shot learning objective. We compare our method with the two following categories of baselines:
\subsubsection{Train from ImageNet Pretrained Features}
For baselines that use ImageNet pretrained features, we follow the same setting as described in CMN. Since previous few-shot learning algorithms are all designed to deal with images, they usually take image-level features encoded by some backbone network as input. To circumvent this discrepancy, we first feed the frames of a video to a ResNet-50 network pretrained on ImageNet, and then average the frame-level features to obtain a video-level feature. The averaged video-level feature then serves as the input to the few-shot algorithms.
\noindent
\textbf{Matching Net \cite{vinyals2016matching}} We use the FCE classification layer from the original paper, without fine-tuning, in all experiments. The FCE module uses a bidirectional LSTM, and each training example could be viewed as an embedding conditioned on all the other examples.
\noindent
\textbf{MAML \cite{finn2017model}} Given the video-level feature as the input, we train the model following the default hyper-parameter and other settings described in \cite{finn2017model}.
\noindent
\textbf{CMN \cite{zhu2018compound}} As CMN is specially designed for few-shot video classification, it could handle video feature inputs directly. The encoded feature sequence is first fed into a multi-saliency embedding function to get a video-level feature. Final few-shot prediction is done by a compound memory structure similar to \cite{kaiser2017learning}.
For the experiment results using ImageNet pretrained backbones, we directly take the numbers from CMN \cite{zhu2018compound} to ensure fair comparison.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/vis.pdf}
\caption{\textbf{Visualization of our learning results.} Comparison of our matched result with CMN's matched result in an episode. Although the averaged score between the query video and the false match is quite high, our algorithm is able to find the correct alignment path that minimizes the alignment score, which ultimately results in the correct prediction.}
\label{fig:vis}
\vspace{-0.5cm}
\end{figure*}
\subsubsection{Finetune from Backbone on Meta Training Set}
As raised by \cite{chen19closerfewshot,gidaris2018dynamic,qi2018low}, using cosine distances between the input feature and a trainable proxy for each class can explicitly reduce intra-class variations among features during training. The rigorous experiments in \cite{chen19closerfewshot} have shown that the Baseline++ model is competitive with, or even surpasses, other few-shot learning methods. So in the finetuned settings we adapt several previous approaches to the structure of Baseline++ to serve as strong baselines.
\noindent
\textbf{TSN++} For the TSN++ baseline, we also use episode-based training to simulate the few-shot setting at the meta-train stage, directly optimizing for generalization to unseen novel classes. In order to get a video-level representation, we average the extracted per-frame features over the temporal dimension for both the query and support sets. The video-level features from the support set then serve as proxies for each novel class. We obtain the prediction probability for each class by normalizing the cosine distances to these proxies with a softmax function. For inference during the meta-testing stage, we first forward each video in the support set to get proxies for each class; given the proxies, we then make predictions for the videos in the query set.
\noindent
\textbf{CMN++} We follow the setting of CMN and reimplement the method ourselves. The only difference between CMN++ and CMN is that we replace the ImageNet pretrained features with the features extracted by the TSN++ model described above.
\noindent
\textbf{TRN++} We also compare our approach against methods that attempt to learn a compact video-level representation given a sequence of image-level features. TRN \cite{zhou2018temporal} proposes a temporal relation module, which uses multilayer perceptrons (MLPs) to fuse features of different frames. We refer to the baseline obtained by replacing the average consensus module in TSN++ with the temporal relation module as TRN++.
By default we conduct 5-way few-shot classification unless otherwise specified. The 1-shot and 5-shot video classification results on both the Kinetics and Something-Something V2 datasets are listed in Table~\ref{main_table}. Our approach significantly outperforms all the baselines on both datasets. In the CMN paper, the experimental observations show that fine-tuning the backbone module on the meta-training set does not improve the few-shot video classification performance. In contrast, we find that with proper data augmentation and training strategy, a model can be trained to generalize better on unseen classes in a new domain given the meta-training set. By comparing the results of TSN++ and TRN++, we conclude that considering temporal relations explicitly helps the model generalize to unseen classes. Compared to TSN++, the improvement brought by CMN++ is not as large as the gap on ImageNet pretrained features reported in the original paper. This may be because we now use a more suitable distance function (cosine distance) during meta-training, so that the frame-level features are more discriminative among unseen classes, which in turn makes it harder to improve the final prediction given those strong features as input. Finally, it is worth noting that TAM outperforms all the finetuned baselines by a large margin. This demonstrates the importance of taking temporal ordering information into consideration when dealing with few-shot video classification.
\subsection{Qualitative Results and Visualizations}
We show qualitative results comparing CMN and TAM in Fig.~\ref{fig:vis}. In particular, we observe that CMN has difficulty in differentiating two actions from different classes with very similar visual clues among all the frames, e.g., backgrounds. As can be seen from the distance matrices in Fig.~\ref{fig:vis}, though our method cannot alter the fact that two visually similar action clips will have a lower average frame-wise distance, it is able to find a temporal alignment that minimizes the cumulative distance between the query action video and the true support class video even when the per-frame visual clues are not evident enough. Though the mean score of TAM's match is lower than that of CMN's match, TAM succeeds in making the right prediction by computing a lower alignment score from the distance matrix.
\subsection{Ablation Study}
\begin{table}[bt]
\small
\caption{\textbf{Temporal matching ablation study.} We compare our method to temporal-agnostic and temporal-aware baselines.}
\vspace{-10pt}
\begin{center}
\begin{tabular}{c|cc|cc}
\hline
& \multicolumn{2}{c|}{Kinetics} & \multicolumn{2}{c}{Something V2} \\ \hline
matching type & \multicolumn{1}{c|}{1-shot} & 5-shot & \multicolumn{1}{c|}{1-shot} & 5-shot \\ \hline
Min & 52.4 & 71.6 & 29.7 & 38.5 \\
Mean & 67.8 & 78.9 & 35.2 & 45.3 \\
Diagonal & 66.2 & 79.3 & 38.3 & 48.7 \\
Plain DTW & 69.2 & 80.6 & 39.6 & 49.0 \\
TAM(Ours) & \textbf{73.0} & \textbf{85.8} & \textbf{42.8} & \textbf{52.3} \\
\hline
\end{tabular}
\end{center}
\vspace{-0.5cm}
\label{temporal}
\end{table}
Here we perform ablation experiments to demonstrate the effectiveness of our selections of the final model. We have shown in Section 4.3 that explicitly modeling the temporal ordering plays an important role for generalization to unseen classes. We now analyze the effect of different temporal alignment approaches.
Given the cosine distance matrix $D$, there are several choices for extracting an alignment score from the matrix, as visualized in Fig. \ref{fig:DTW}. In addition to our proposed method, we consider several heuristics for generating the scores. The first is ``Min'', where we use the minimum element of the matrix $D$ as the video distance value. The second is ``Mean'', for which we average the cosine distance values over all pairs of frames. These two choices both neglect temporal ordering. We then introduce a few potential choices that explicitly consider sequence ordering when computing the temporal alignment score. An immediate scheme is to take the average over the diagonal of the distance matrix. The assumption behind this approach is that the query video sequence is perfectly aligned with its corresponding support proxy of the same class, which can be unrealistic in real-world applications. To allow for a more adaptive alignment strategy, we introduce Plain DTW and our method. Here, Plain DTW in Table~\ref{temporal} means that there is no padding, so that $W_{11}$ and $W_{TT}$ are assumed to be on the alignment path, and at each time step of the alignment-score computation we allow a movement choice among $\longrightarrow$, $\searrow$ and $\downarrow$.
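The following NumPy sketch makes the four heuristic scores concrete (our own illustrative restatement of the definitions above, not the evaluation code used in the experiments):

\begin{verbatim}
import numpy as np

def alignment_scores(D):
    """Score a T x T frame-distance matrix D under each heuristic."""
    T = D.shape[0]
    scores = {"min": D.min(),
              "mean": D.mean(),
              "diagonal": np.trace(D) / T}
    # Plain DTW: cumulative cost from D[0,0] to D[T-1,T-1], moving
    # right, down, or diagonally at each step (no padding).
    W = np.full((T, T), np.inf)
    W[0, 0] = D[0, 0]
    for i in range(T):
        for j in range(T):
            if i == 0 and j == 0:
                continue
            prev = min(W[i - 1, j] if i > 0 else np.inf,
                       W[i, j - 1] if j > 0 else np.inf,
                       W[i - 1, j - 1] if min(i, j) > 0 else np.inf)
            W[i, j] = D[i, j] + prev
    scores["plain_dtw"] = W[-1, -1]
    return scores
\end{verbatim}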
The results are shown in Table~\ref{temporal}. It can be observed that we are able to improve few-shot learning by considering temporal ordering explicitly. There are slight performance differences between the Diagonal and Mean methods across the two datasets. Each frame of Something-Something V2 contains fewer visual cues than a frame of Kinetics, so the improvement of Diagonal over Mean is prominent on Something-Something V2, while the gap closes on Kinetics. However, through adaptive temporal alignment, our method consistently improves over the baselines on both datasets by more than 3\% across both the 1-shot and 5-shot settings. This shows that by encouraging the model to learn an adaptive alignment path between query videos and proxies, the final model learns to encode better representations for the video, as well as a more accurate alignment score, which in turn helps few-shot classification.
The next ablation study concerns the sensitivity of the smoothing parameter $\lambda$. Previous works~\cite{chang2019d, mensch2018differentiable} have shown that using $\lambda$ empirically helps optimization in many tasks. Intuitively, a smaller $\lambda$ makes the operator behave more like the hard min, while a larger $\lambda$ gives a heavier smoothing effect over the values in nearby positions. We experimented with $\lambda \in \{0.01, 0.05, 0.1, 0.5, 1\}$.
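The smoothed minimum at the heart of such relaxations (cf. \cite{mensch2018differentiable}) can be sketched as below; we assume here the standard log-sum-exp form, with $\lambda$ controlling the smoothing:

\begin{verbatim}
import numpy as np

def soft_min(values, lam):
    """Smoothed minimum: -lam * log(sum(exp(-v / lam))).
    lam -> 0 recovers the hard min; larger lam smooths more."""
    v = np.asarray(values, dtype=float)
    m = v.min()  # subtract the min for numerical stability
    return m - lam * np.log(np.exp(-(v - m) / lam).sum())
\end{verbatim}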
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/kinetics_iccv.pdf}
\includegraphics[width=1.0\linewidth]{figures/ssv2_iccv.pdf}
\vspace{-2mm}
\caption{\textbf{Smoothing factor sensitivity.} We compare the effect of using different smoothing factors.}
\vspace{-5mm}
\label{fig:lambda}
\end{figure}
The results are shown in Fig. \ref{fig:lambda}. In general, the performance is stable across values of $\lambda$. We observe that in practice, values of $\lambda$ between 0.05 and 0.1 work relatively well on both datasets. A suitable $\lambda$ is thus essential for representation learning. When $\lambda$ is too small, although the operator behaves most similarly to the true min, the gradient is too imbalanced, so some pairs of frames are not adequately trained. Conversely, a large $\lambda$ may smooth too heavily, so that the differences among the possible alignments are no longer notable enough.
\section{Conclusion}
We propose the Temporal Alignment Module (TAM), a novel few-shot framework that can explicitly learn a distance measure and representation independent of non-linear temporal variations in videos using very little data. In contrast to previous works, TAM dynamically aligns two video sequences while preserving the temporal ordering, and it further uses continuous relaxation to directly optimize the few-shot learning objective in an end-to-end fashion. Our results and ablations show that our model significantly outperforms a wide range of competitive baselines and achieves state-of-the-art results on two challenging real-world datasets.
\paragraph{Acknowledgements} This work has been partially supported by JD.com American Technologies Corporation (``JD'') under the SAIL-JD AI Research Initiative. This article solely reflects the opinions and conclusions of its authors and not JD or any entity associated with JD.com.
{\small
\bibliographystyle{ieee}
In recent years, an increasing number of transients have been observed to peak with optical luminosities between those of novae (typically $M_V>-10$) and supernovae (SNe; typically $M_V<-15$). This is due in large part to the increasing coverage, depth, and cadence of astronomical surveys. Some of these intermediate luminosity objects are particularly red in colour and those can typically be placed in one of two categories:
\textit{Luminous red novae} (LRNe) -- The first well-observed LRN was V838 Monocerotis in our Galaxy in 2002 \citep{2002A&A...389L..51M,2005MNRAS.360.1281R}. The pre-outburst light curve of red nova V1309 Scorpii revealed it to be the result of a compact binary merger \citep{2011A&A...528A.114T}. A number of similar events have now been observed, including extragalactic examples in M31 (e.g.\ \citealp{2015A&A...578L..10K,2015ApJ...805L..18W,Williams_2019}) and M101 \citep{2015A&A...578L..10K,2017ApJ...834..107B}.
\textit{Intermediate luminosity red transients} (ILRTs) -- Several extragalactic examples of ILRTs have been observed over the last decade or so. See, for example, M85~OT~2006-1 \citep{2007Natur.447..458K}, SN~2008S \citep{2009MNRAS.398.1041B,2009ApJ...697L..49S}, NGC~300~OT~2008-1 (hereafter NGC~300~OT; \citealp{2009ApJ...699.1850B,2009ApJ...695L.154B}), and AT~2017be \citep{2018MNRAS.480.3424C}. These are generally more luminous than LRNe and, in some cases, they have been found to be associated with dusty progenitor stars \citep{2008ApJ...681L...9P,2009ApJ...699.1850B,2009ApJ...695L.154B}. SN~2008S and NGC~300~OT are also discussed in detail by \citet{2011ApJ...741...37K}.
The spectra of LRNe and ILRTs at peak are broadly similar, showing Balmer emission on a F-type supergiant-like spectrum. However, after peak, the two classes deviate, with the LRNe rapidly reddening and becoming cool enough for strong TiO absorption bands to appear in the spectrum \citep[e.g.][]{2005MNRAS.360.1281R,2015ApJ...805L..18W}. ILRTs have shown slower spectroscopic evolution and can typically be identified by strong and narrow [Ca~{\sc ii}] emission lines, although these have also been observed in LRNe \citep{2019arXiv190913147C}. To unambiguously distinguish between the two categories of object requires the transients to be followed over the course of several months in some cases.
The nature of ILRTs has not been settled, with plausible scenarios including, for example, an electron capture SN (see e.g. \citealp{2009MNRAS.398.1041B}) or a giant eruption from a luminous blue variable (LBV; see e.g. \citealp{2009ApJ...697L..49S}). Both SN~2008S and NGC~300~OT eventually became fainter than their respective progenitors \citep{2016MNRAS.460.1645A}. \citet{2016MNRAS.460.1645A} suggest that this may point to a weak SN scenario, as extreme dust models were needed to reconcile the very late-time observations with a surviving star. \citet{2010MNRAS.403..474W} modelled the pre-existing dust around SN~2008S as amorphous carbon grains and indeed found that the observations were inconsistent with silicate grains making up a significant component of the dust. See \citet{2019NatAs...3..676P} for a recent review of the different classes of intermediate luminosity transients.
AT 2019abn (ZTF19aadyppr) was first detected as a transient in M51 by the Zwicky Transient Facility (ZTF; \citealp{2019PASP..131a8002B}) on 2019 Jan 22.56~UT and announced following the second detection on 2019 Jan 25.51, at $13^{\mathrm{h}}29^{\mathrm{m}}42^{\mathrm{s}}\!.394$ $+47^{\circ}11^{\prime}16^{\prime\prime}\!\!.99$ (J2000; \citealp{2019TNSTR.141....1N}), by \texttt{AMPEL} (Alert Management, Photometry and Evaluation of Lightcurves; \citealp{2019arXiv190405922N}). In this work we refer to this first optical detection on 2019 Jan 22.56~UT as the discovery date ($t=0$).
\citet{2019arXiv190407857J} presented the discovery of this object and its variable progenitor system, along with its evolution over the first $\sim$80 days after discovery. Here we present detailed photometric and spectroscopic observations of the first $\sim$200 days after discovery, including the best observed early light curve of any ILRT to date. Our photometry begins 3.7\,days after discovery (0.7\,d after it was announced) and our spectra span the range from 7.7 to 165.4\,days after discovery.
\section{Observations and data reduction}
\subsection{Liverpool Telescope photometry}
We obtained multi-colour follow-up using the IO:O \citep{smith_steele_2017} and IO:I \citep{2016JATIS...2a5002B} imagers on the 2\,m Liverpool Telescope (LT; \citealp{2004SPIE.5489..679S}). We used SDSS $u'r'i'z'$ and Bessel \textit{BV} filters in IO:O and \textit{H}-band imaging with IO:I. AT~2019abn is coincident with significant dust absorption in M51, meaning the background at the position of the transient is not captured well by an annulus and template subtraction is required. For the $u'r'i'z'$ observations, we used archival Sloan Digital Sky Survey (SDSS) DR12 \citep{2015ApJS..219...12A} images for template subtraction and for the \textit{BV} observations, we used archival LT observations. The template subtraction was performed using standard routines in \texttt{IRAF}\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.} \citep{1986SPIE..627..733T,1993ASPC...52..173T}. Template subtraction was not performed on the \textit{H}-band images, however the effect in this band is expected to be relatively small as the interstellar dust absorption will be lower at these wavelengths.
As M51 takes up a large fraction of the IO:I imager's field of view (FoV), the dithering of the telescope did not result in a good sky subtraction. We therefore obtained offset\footnote{The offset sky observations were centred at the position $13^{\mathrm{h}}28^{\mathrm{m}}24^{\mathrm{s}}\!.66$ $+47^{\circ}05^{\prime}10^{\prime\prime}\!\!.8$ (J2000).} sky frames immediately after each \textit{H}-band observation (except for the first observation). These offset sky frames were combined, then scaled to and subtracted from each individual science frame. The sky-subtracted frames were then aligned and combined to produce the final \textit{H}-band images on which photometry was performed.
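Schematically, this offset-sky correction amounts to the following steps (an illustrative NumPy sketch of the procedure described above, not the reduction code actually used):

\begin{verbatim}
import numpy as np

def sky_subtract(science_frames, sky_frames):
    """Combine offset sky frames, scale to each science frame's
    background level, and subtract."""
    master_sky = np.median(np.stack(sky_frames), axis=0)
    corrected = []
    for frame in science_frames:
        scale = np.median(frame) / np.median(master_sky)
        corrected.append(frame - scale * master_sky)
    return corrected  # these frames are then aligned and combined
\end{verbatim}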
\subsection{Nordic Optical Telescope photometry}
AT~2019abn was observed with the near-infrared instrument NOTCam at the 2.56\,m Nordic Optical Telescope (NOT) on 2019 Jun~13.94\,UT in the $K_{\mathrm{s}}$-band and on 2019 Jul 21.89 in the \textit{J}, \textit{H}, and $K_{\mathrm{s}}$-bands. As the angular extent of M51 on the sky is significantly larger than the FoV of NOTCam, beam-switching with a sky offset of 5\,arcmin was used to allow for good sky subtraction. A 9-point dither pattern was used with $5\times4$\,s ramp-sampling exposures, resulting in a total on-source time of 180\,s in each filter, with the same time spent on the sky observation. The images were reduced using a custom pipeline written in \texttt{IDL}. For sky subtraction, a median combination of the sky exposures was used, scaled to the background level of the object exposures. The sky-subtracted images were then aligned by their WCS coordinates and mean-combined. Bad pixels were treated by discarding them in the combination.
\subsection{Gran Telescopio Canarias photometry}
Near-infrared images of the transient were also obtained at the 10.4\,m Gran Telescopio Canarias (GTC) with the EMIR instrument \citep{2007hsa..conf...81G} as part of an outreach programme. AT~2019abn was observed on 2019 April 12.97\,UT through the $JHK_{\mathrm{s}}$ 2MASS filters under grey sky (moon at 51\% illumination), 1.1$^{\prime\prime}$ seeing and variable atmospheric transparency. In each filter, a series of exposures was taken, adopting a seven-point dither pattern with 10$^{\prime\prime}$ offsets. Since the host galaxy covers a large part of the EMIR imaging-mode FoV ($6.67\times6.67$\,arcmin), sky frames with a telescope offset of 7\,arcmin to the east were also obtained. Total exposure times were 280, 140, and 84\,s on source in the \textit{J}, \textit{H} and $K_{\mathrm{s}}$ filters, with individual frame durations of 10, 5, and 3\,s, respectively. The dark current of the EMIR detector is below 0.15 electrons\,s$^{-1}$, so the individual frames were directly divided by the corresponding twilight sky flats and stacked together into the final image using the Cambridge Astronomy Survey Unit \texttt{imstack} routine.
\subsection{Photometric Calibration}
Optimal photometry was performed on the processed images, except on a few occasions when there were tracking or guiding issues with the observations, which caused elongated and irregular PSFs. For these images, aperture photometry was used. All optimal and aperture photometry was performed using the \texttt{STARLINK} package \texttt{GAIA}\footnote{\url{http://star-www.dur.ac.uk/~pdraper/gaia/gaia.html}} \citep{2014ASPC..485..391C}.
All \textit{u$'$BVr$'$i$'$z$'$} photometry presented in this work was calibrated against SDSS DR9 \citep{2012ApJS..203...21A}. Due to the lack of stars with SDSS photometry in our FoV, we used the star J132918.06+471616.9, which has a magnitude of $u=19.137\pm0.023$, $g=18.297\pm0.007$, $r=17.902\pm0.007$, $i=17.741\pm0.007,$ and $z=17.695\pm0.016$\,mag \citep{2012ApJS..203...21A}. The star also has very similar magnitude measurements in Pan-STARRS DR1 \citep{2016arXiv161205560C}. From the magnitude of this star, we calibrated a sequence of eight further standard stars in the FoV across several nights of observations. Each observation of AT~2019abn was then calibrated against the calculated magnitudes of these stars. The \textit{B} and \textit{V}-band magnitudes of the standard stars were computed using the transformations from \citet{2006A&A...460..339J} and then used to calibrate the \textit{BV} photometry of AT~2019abn. All \textit{JHK} photometry was calibrated against several stars from 2MASS \citep{2006AJ....131.1163S}. Template subtraction was not performed on any of our $J$, $H,$ or $K_{\mathrm{s}}$-band data.
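In essence, the calibration derives a zero point from the secondary standards and applies it to the target, as in this minimal sketch (which assumes negligible colour terms):

\begin{verbatim}
import numpy as np

def zero_point(instrumental_mags, catalogue_mags):
    """Median zero point from a set of calibrated field stars."""
    diffs = np.asarray(catalogue_mags) - np.asarray(instrumental_mags)
    return np.median(diffs)

def calibrate(target_instrumental_mag, zp):
    """Apply the zero point to the target's instrumental magnitude."""
    return target_instrumental_mag + zp
\end{verbatim}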
\subsection{Gran Telescopio Canarias spectroscopy}
AT~2019abn was observed on 2019~Feb~25.27, Apr~10.93, May~31.04, and Jul~6.92\,UT with the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS) instrument mounted on GTC as part of a Director's Discretionary Time award. On each night, exposures of 945\,s were taken through a 0.6\arcsec{} wide long slit at the parallactic angle, first with the R2500V grating (4500\,\AA~$<\lambda<$~6000\,\AA), then the R2500R grating (5600\,\AA~$<\lambda<$~7600\,\AA), both of which provide a resolving power, $R\sim2500$. The resulting spectra were then debiased, wavelength calibrated against HgAr, Xe and Ne arc lamp spectra, sky-subtracted, and optimally-extracted using the algorithm of \citet{1986PASP...98..609H}.
Flux calibration was performed using observations of the standard star Ross 640 \citep{1974ApJS...27...21O}, taken on the same nights using the same instrumental set-up and reduced in the same manner. We performed an absolute flux calibration using our LT \textit{V} and $r'$-band photometry, linearly warping the spectra to match the two photometry bands. This correction was generally small, with the initial flux calibrations already agreeing well across the two filters.
\subsection{Liverpool Telescope spectroscopy}
We obtained eight low-resolution (R~$\sim$~350) spectra with the SPectrograph for the Rapid Acquisition of Transients (SPRAT) on the LT. The spectra were reduced using the SPRAT pipeline \citep{2014SPIE.9147E..8HP} and an absolute flux calibration made using the \textit{V} and $r'$-band light curves in the same way as for the GTC spectra. A log of our spectra of AT~2019abn is shown in Table~\ref{tab:log}. Line identification for our spectra was aided by the multiplet tables of \citet{1945CoPri..20....1M} and the NIST Atomic Spectra Database \citep{NIST}.
\begin{table}
\caption{Log of Gran Telescopio Canarias and Liverpool Telescope spectra taken of AT~2019abn.} \label{tab:log}
\centering
\begin{tabular}{l c c c}
\hline\hline
Instrument &Date [UT] &$t$ [days] &Resolution\\
\hline
LT SPRAT &2019 Jan 30.21 &7.7 &350\\
LT SPRAT &2019 Feb 03.22 &11.7 &350\\
LT SPRAT &2019 Feb 06.23 &14.7 &350\\
LT SPRAT &2019 Feb 10.11 &18.6 &350\\
LT SPRAT &2019 Feb 11.11 &19.6 &350\\
LT SPRAT &2019 Feb 20.25 &28.7 &350\\
LT SPRAT &2019 Feb 24.07 &32.5 &350\\
GTC OSIRIS &2019 Feb 25.27 &33.7 &2500\\
GTC OSIRIS &2019 Apr 10.93 &78.4 &2500\\
LT SPRAT &2019 Apr 15.96 &83.4 &350\\
GTC OSIRIS &2019 May 31.04 &128.5 &2500\\
GTC OSIRIS &2019 Jul 06.92 &165.4 &2500\\
\hline
\end{tabular}
\end{table}
\section{Photometric evolution}
Our typically daily $BVr'i'$ photometry of AT~2019abn during the rise to peak optical brightness yields the best early-time light curve of any ILRT to date, beginning $>$2\,mag prior to peak in each filter. The full light curve of AT~2019abn is shown in Figure~\ref{fig:lc}. We find that both the first stage of the decline and the early rise are well described by linear (in magnitude vs time) fits. Here we refer to the initial rise as the phase before the light curve begins to turn over as it approaches peak ($t<9$\,d; our first six data points). These linear rises and declines are summarised in Table~\ref{tab:decl}. The initial rise rate shows no strong colour dependence and is generally at the level of $\sim$0.26\,mag\,day$^{-1}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{2019abnlc}
\includegraphics[width=\columnwidth]{2019abncol}
\caption{\textit{Top:} Multi-colour light curve of AT~2019abn, with: $u'$, purple; \textit{B}, blue; \textit{V}, green; $r'$, orange; $i'$, red; $z'$, grey; \textit{H}, black (they appear on the light curve from faintest to brightest in that order). The \textit{J}-band photometry is in cyan, with $K_{\mathrm{s}}$-band in magenta. \textit{Bottom:} $(B-V)$ and $(r'-i')$ colour evolution (again in AB mags) of AT~2019abn, compared to that of AT~2017be. The photometry for both objects is corrected for Galactic reddening only ($E_{B-V}=0.03$ and $E_{B-V}=0.05$\,mag respectively), and Vega \textit{BV} mags are converted to the AB system using \citet{1998A&A...333..231B}.}
\label{fig:lc}
\end{figure}
\begin{table*}
\caption{Light-curve parameters of AT~2019abn in each band. Rise times are computed based on the first six data points ($t<9$\,d), before the light curve starts to turn over as it gets closer to peak. The plateau decline rates are fit between 25 and 130 days after discovery.} \label{tab:decl}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Filter & Rise rate [mag\,day$^{-1}$] &Decline rate [mag\,day$^{-1}$] &Peak magnitude [AB] &Time of peak [MJD] &Peak - $t_0$ [days]\\
\hline
\textit{B} &$0.252\pm0.013$ &$0.0264\pm0.0008$ &$18.17\pm0.04$ &$58530.4\pm2.1$ &$24.9\pm2.1$\\
\textit{V} &$0.240\pm0.005$ &$0.0184\pm0.0002$ &$17.12\pm0.02$ &$58530.4\pm2.0$ &$24.9\pm2.0$\\
$r'$ &$0.265\pm0.009$ &$0.0136\pm0.0001$ &$16.59\pm0.02$ &$58529.1\pm1.0$ & $23.5\pm1.0$\\
$i'$ &$0.268\pm0.011$ &$0.0102\pm0.0001$ &$16.12\pm0.02$ &$58528.6\pm2.0$ &$23.0\pm1.0$\\
$z'$ &... &$0.0081\pm0.0002$ &$15.93\pm0.02$ &$58526.9\pm1.9$ &$21.4\pm1.9$\\
\textit{H} &... &$0.0060\pm0.0004$ &$15.73\pm0.04$ &$58527.6\pm2.4$ &$22.1\pm2.4$\\
\hline
\end{tabular}
\end{table*}
To derive the date and magnitude of peak brightness for each filter, we fitted a cubic spline to the light curve data. The exact times of peak brightness in each filter, shown in Table~\ref{tab:decl}, have relatively large uncertainties. This is due to the portion of the light curve around peak being approximately flat for a number of days; therefore, even when they are relatively small, the errors on the photometry around this time lead to a much larger uncertainty on the date of maximum. The peak magnitudes themselves are well constrained however. Given the complex field, the systematic errors on the photometry are likely larger than the (statistical) errors derived from the fitting (only the latter of which are shown in Table~\ref{tab:decl}).
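For illustration, the peak epoch and magnitude can be extracted from such a spline fit as follows (a sketch only; the smoothing parameter here is an arbitrary assumption, and the photometry must be sorted in time):

\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_peak(mjd, mag):
    """Cubic-spline fit to a light curve; returns (mjd, mag) at peak."""
    spline = UnivariateSpline(mjd, mag, k=3, s=len(mjd))
    grid = np.linspace(mjd.min(), mjd.max(), 10000)
    model = spline(grid)
    i = np.argmin(model)  # brightest epoch = smallest magnitude
    return grid[i], model[i]
\end{verbatim}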
Taking the distance modulus of M51 to be $29.67\pm0.02$\,mag ($8.58\pm0.10$\,Mpc; \citealp{2016ApJ...826...21M}), and correcting for the small amount of foreground Galactic extinction (see Section~\ref{ext}), we find a peak absolute magnitude of $M_{r'}=-13.08\pm0.04$ (statistical errors only). However, the transient is subject to substantial additional reddening internal to M51, which we estimate to be between $E_{B-V}=0.79$ and 0.9 (taking $R_V=3.1$; see Section~\ref{ext}). This implies that the intrinsic absolute peak is $M_{r'}=-15.2\pm0.2$, making AT~2019abn the most luminous ILRT to date.
In the $u'$-band we see a rapid decline after peak. Between $t=35.6$ and 48.5\,days, AT~2019abn faded by $0.88\pm0.17$\,mag in the $u'$-band, while fading by only $0.28\pm0.04$\,mag in the \textit{B}-band. Given that we see the appearance of many metal absorption lines between the two GTC spectra taken at $t=33.7$\,d and 78.4\,d, we interpret this $u'$-band drop-off as being predominantly caused by increased line blanketing as the temperature falls. This rapid change in $(u'-B)$ colour is inconsistent with what is expected from a blackbody given the temperature evolution at this time (see Figure~\ref{fig:temp}). We note that the ILRT AT~2017be also displayed a rapid decline in the \textit{u}-band \citep{2018MNRAS.480.3424C}.
There appears to be a break in the light curve after around 130\,days, where the transient begins to decline more rapidly. This is not seen in the \textit{H}-band, where the later decline is consistent with the linear fit to the earlier data, and there are too few data in the \textit{B}-band to discern any change. We find later decline rates of $0.0281\pm0.0017$ (\textit{V}), $0.0247\pm0.0009$ ($r'$), $0.0221\pm0.0011$ ($i'$) and $0.0156\pm0.0005$\,mag\,day$^{-1}$ ($z'$).
The overall relative colour evolution of AT~2019abn and AT~2017be (Figure~\ref{fig:lc}, lower panel) is broadly similar, although the timescales appear different. This may be expected given the light curves of ILRTs vary considerably in the evolution timescale (see Section~\ref{sec:com} for further discussion; also see e.g. Figure~4 of \citealp{2018MNRAS.480.3424C}). What is clear from this comparison is that AT~2019abn has much redder apparent colours than AT~2017be.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{2019abnbb}
\includegraphics[width=\columnwidth]{2019abntemp}
\caption{\textit{Top:} Two example SEDs and the best-fit reddened blackbody function. The $K_{\mathrm{s}}$ photometry is shown for the $t=80.4$\,d epoch, although is not included in the fitting. \textit{Bottom:} Temperature evolution of AT~2019abn, as derived from SED fitting of the optical-NIR photometry and assuming $E_{B-V}=0.85$. \textit{B}-band was used in the fitting when available. All $BVr'i'z'H$ epochs shown in the plot were also fitted using $Vr'i'z'H$, which agreed well. For clarity we therefore only show the $Vr'i'z'H$ fits at epochs with no \textit{B}-band. The filters included in the fitting of each point are shown in the legend. The dashed lines illustrate the evolution for $E_{B-V}=0.90$ (upper) and $E_{B-V}=0.79$ (lower).}
\label{fig:temp}
\end{figure}
To estimate the temperature evolution of AT~2019abn, we first correct for the well-constrained Galactic reddening of $E_{B-V}=0.03$ (see Section~\ref{ext}), assuming $R_V=3.1$ and a \citet{1999PASP..111...63F} reddening law. We then use the additional (i.e.\ M51 interstellar + circumstellar) extinction derived in Section~\ref{ext} to fit a reddened blackbody function to our photometry. Given that our peak temperature is tied to the initial assumption used to estimate the extinction at peak ($7000\leq{T_{\mathrm{eff}}}\leq8000$\,K), we can only interpret the relative change in temperature over the evolution of AT~2019abn; the precise value, and uncertainty, of the peak temperature is largely meaningless.
We fit our photometry (\textit{B}-band to \textit{H}-band) using a reddened blackbody function, with $E_{B-V}=0.85$ and $R_V=3.1$, as derived in Section~\ref{ext}. Where possible, we use all filters between these two bands in the fitting; however, we only have two \textit{J}-band observations, so this band is normally not included. The $K_{\mathrm{s}}$-band observations are not included in the fitting as they may be contaminated by re-emission from circumstellar dust. At some epochs there is no \textit{B}-band detection, particularly at late times as AT~2019abn fades; in these cases we fit the SED using the \textit{V}-band to \textit{H}-band. For epochs with \textit{B}-band to \textit{H}-band data, we also fit the SED using just the \textit{V}-band to \textit{H}-band observations and found very little difference in the implied temperature. Examples of the blackbody fits to two epochs of photometry are shown in the upper panel of Figure~\ref{fig:temp}. The lower panel of Figure~\ref{fig:temp} shows the evolution of the SED-fitted temperature of AT~2019abn, assuming that the extinction does not evolve during this part of the event. In Figure~\ref{fig:temp}, we also illustrate the temperature evolution when taking reddening values of either $E_{B-V}=0.79$ or $E_{B-V}=0.90$ (again, see Section~\ref{ext}). Our fitting is done using the specific filter responses and CCD quantum efficiency of the LT (see \citealp{smith_steele_2017}).
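The core of this procedure can be sketched as a fit of a reddened Planck function at the filters' effective wavelengths (illustrative only: it uses the \texttt{extinction} Python package for the \citealt{1999PASP..111...63F} law and ignores the full filter responses, which, as noted above, are used in the actual fits):

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
import extinction  # assumed available; provides fitzpatrick99()

H, C, K = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def reddened_bb(wave_aa, T, scale, ebv=0.85, rv=3.1):
    """Blackbody F_lambda reddened by a Fitzpatrick (1999) law."""
    wl = np.asarray(wave_aa, float) * 1e-8          # Angstrom -> cm
    planck = 2*H*C**2 / wl**5 / (np.exp(H*C / (wl*K*T)) - 1.0)
    a_lam = extinction.fitzpatrick99(np.asarray(wave_aa, float),
                                     rv * ebv, rv)  # A(lambda) in mag
    return scale * planck * 10**(-0.4 * a_lam)

# popt, pcov = curve_fit(reddened_bb, eff_waves_aa, fluxes,
#                        p0=[7500.0, 1e-25])
\end{verbatim}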
After appearing to stay approximately constant while AT~2019abn is around optical peak, the temperature then declines approximately linearly with time until the end of our observations. A linear fit to all of the temperature data at times $t>30$\,days indicates a decline of $25\pm3$\,K\,day$^{-1}$. The uncertainty on this temperature decline rate is dominated by the assumed reddening (i.e.\ between $E_{B-V}=0.79$ and 0.90, which in turn is tied to the assumption made regarding the temperature at peak optical brightness).
\section{Extinction}\label{ext}
Given the implied temperature from the spectra of ILRTs around peak brightness (i.e.\ similar to that of an F-star) combined with the extremely red colour, it is clear that AT~2019abn suffers from significant dust extinction. This can broadly be split into three categories: Galactic-interstellar, M51-interstellar, and circumstellar.
\citet{2011ApJ...737..103S} find Galactic reddening of $E_{B-V}=0.03$\,mag toward M51. The redshift of M51 allows us to separate the Galactic and M51 Na~{\sc i}\,D interstellar lines in the GTC spectra. From the first GTC spectrum, we measure the Galactic absorption from Na~{\sc i}\,D$_2$ 5890.0\,\AA\ to have an equivalent width (EW) of $0.18\pm0.04$\,\AA, corresponding to $E_{B-V}\sim0.03$\,mag \citep{2012MNRAS.426.1465P}, consistent with that derived from the \citet{2011ApJ...737..103S} dust maps. The Galactic Na~{\sc i}\,D$_1$ 5895.9\,\AA\ line is blended with the much stronger M51 Na~{\sc i}\,D$_2$ absorption. The different components of Na~{\sc i}\,D absorption are illustrated in Figure~\ref{nai}. The Galactic reddening is therefore well constrained and we adopt the value of $E_{B-V}=0.03$ for this work.
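The equivalent widths quoted in this section follow the standard definition, which can be evaluated numerically as in this sketch (the conversion to $E_{B-V}$ then uses the relation of \citealt{2012MNRAS.426.1465P}):

\begin{verbatim}
import numpy as np

def equivalent_width(wave, flux, continuum, lo, hi):
    """EW = integral of (1 - F/F_continuum) over [lo, hi] Angstrom."""
    m = (wave >= lo) & (wave <= hi)
    return np.trapz(1.0 - flux[m] / continuum[m], wave[m])
\end{verbatim}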
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{nai.pdf}
\caption{\textit{Top:} Region around Na~{\sc i}\,D from the first GTC spectrum, with the Galactic and M51 components indicated. The Gaussian fits of the Galactic and M51 components are shown by the red dashed line. \textit{Bottom:} Solid black lines show the spectra for each of the second, third, and fourth GTC epochs, divided by the fitted Na~{\sc i}\,D profile shown in the upper panel (corrected for the $+0.4$\,\AA\ shift discussed in the text). This removes the effect of line-of-sight absorption and reveals higher-velocity Na~{\sc i}\,D absorption, associated with the outburst of AT~2019abn itself. There is no evidence for such absorption in the final spectrum, however. In each case, the uncorrected spectrum is shown by the dashed grey lines.\label{nai}}
\end{figure}
We measure EW~=~$1.00\pm0.03$\,\AA\ for the M51 Na~{\sc i}\,D$_1$ absorption. At such high EW values, Na~{\sc i}\,D is saturated and no longer gives a meaningful constraint on the reddening \citep{1997A&A...318..269M}. We can also measure M51 diffuse interstellar bands (DIBs) and find a 5778+5780\,\AA\ DIB EW of 0.32\,\AA. This is at the point where many DIB measurements also poorly constrain the reddening, with this 5778+5780\,\AA\ DIB indicating M51 reddening of $E_{B-V}>0.4$\,mag \citep{2015MNRAS.452.3629L}.
In order to constrain the reddening from the M51 ISM plus CSM, we must make some assumption for the temperature. If we follow \citet{2019arXiv190407857J} and assume a peak temperature of 7500\,K, fitting a blackbody to our $BVr'i'z'H$ epoch closest to optical peak, we derive a reddening of $E_{B-V}=0.85\pm0.03$. The reddening will of course be sensitive to the assumed temperature. If we instead assume temperatures of 7000 and 8000\,K, we derive reddenings of $E_{B-V}=0.79\pm0.03$ and $E_{B-V}=0.90\pm0.03$, respectively, all assuming $R_V=3.1$. These values are also calculated using the specific filter responses and CCD quantum efficiency of the LT.
\section{Spectroscopic Evolution}
The spectra of AT~2019abn are very similar to other ILRTs, with the strongest features being Na~{\sc i}\,D absorption, along with H$\alpha$ and [Ca~{\sc ii}] (7291 and 7324\,\AA) in emission. The spectra evolve to cooler temperatures, with singly ionised and neutral metal absorption lines appearing. This is illustrated in Figure~\ref{fig:metal}. The full series of LT and GTC spectra are shown in Figures~\ref{fig:sprat} and \ref{fig:osiris}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{red}
\caption{Sections of the GTC spectra illustrating the appearance and evolution of absorption lines, along with the evolution of the [Ca~{\sc ii}] emission lines. The calibrated spectra are shown in black, with the telluric-corrected $t=33.7$\,d spectrum shown in red.}
\label{fig:metal}
\end{figure}
\subsection{Narrow, low-velocity absorption lines} \label{sec:narrow}
Our first GTC spectrum, taken 33.7\,days after discovery, as the transient had just passed peak optical luminosity, shows low-velocity, narrow absorption lines, which are displayed in Figure~\ref{fig:low}. The lines are unresolved in our spectra, indicating FWHM~$<$~120\,km\,s$^{-1}$. These features are seen clearly for Cr~{\sc ii}, Sc~{\sc ii}, Si~{\sc ii} and Y~{\sc ii}. Na~{\sc i}\,D could well be present, but would be swamped by the persistent narrow Na~{\sc i}\,D absorption seen throughout all spectra. These other lines cannot be interstellar in origin (which presumably is the case for at least some fraction of the Na~{\sc i}\,D absorption) as they are not resonance lines. Tracing the velocity evolution of the Sc~{\sc ii} absorption lines (which are visible in most of the spectra) shows the velocity dramatically increasing between $t=33.7$ and $t=78.4$\,days (see top panel of Figure~\ref{fig:metal}). It is hard to reconcile these early low-velocity absorption features with the outflow or ejecta, as in this scenario the region where the absorption lines are produced would need to move to dramatically higher velocities as the object initially fades from peak brightness.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{lowvel}
\caption{Sections of the $t=33.7$\,d GTC spectrum of AT~2019abn, clearly showing narrow, low-velocity absorption from species such as Cr~{\sc ii}, Sc~{\sc ii}, Si~{\sc ii} and Y~{\sc ii}. The calibrated observed spectrum is shown in black, with the telluric-corrected spectrum in red (this only significantly affects the bottom plot).}
\label{fig:low}
\end{figure}
We instead interpret these absorption lines as arising from a different component. This seems most likely to be pre-existing circumstellar material, ejected from the star either in a stellar wind or a prior outburst. The implied velocity of this absorbing material is low. The best-fit radial velocity for the Si~{\sc ii} lines, which have the highest signal-to-noise (S/N) and are least likely to be contaminated, with respect to the M51 Na~{\sc i}\,D absorption is $\sim$\,$+55$\,km\,s$^{-1}$. At the same epoch, the best-fit velocity of the Sc~{\sc ii} lines is of similar order at $\sim$\,$+43$\,km\,s$^{-1}$. The uncertainties on these velocities will be dominated by the uncertainty in the wavelength calibration. While in principle such a redshift could imply inflowing gas, it could simply be due to the random motion of the AT~2019abn system with respect to the M51 interstellar dust, which may dominate the Na~{\sc i}\,D absorption at this epoch. It is worth noting that narrow, low-velocity absorption lines were also seen in NGC~300~OT \citep{2009ApJ...699.1850B,2011ApJ...743..118H}.
In addition to the narrow lines, we observe a much broader absorption line at 6294\,\AA, shown in the bottom panel of Figure~\ref{fig:low}. This is near a region of O$_2$ telluric absorption. As this first spectrum has high S/N and a much clearer continuum than the later GTC spectra (it is not affected by metal absorption to any large degree), a good telluric correction can be made. We corrected the spectrum for O$_2$ and H$_2$O absorption by fitting for those molecules in the regions $6800-7000$ and $7215-7285$\,\AA\ in \texttt{Molecfit} \citep{2015A&A...576A..78K,2015A&A...576A..77S}. The best-fit O$_2$ and H$_2$O absorption from those regions was then applied to the entire spectrum. This telluric-corrected spectrum is also shown in Figure~\ref{fig:low}. It can be seen from this that telluric absorption can explain the absorption feature blueward of 6294\,\AA, but the 6294\,\AA\ absorption line itself remains. There is also tentative evidence for such a feature in the highest S/N LT SPRAT spectra (see Figure~\ref{fig:sprat}). This line is probably due to the 6283\,\AA\ DIB in M51.
\subsection{Velocity evolution} \label{sec:vel}
Ignoring the potential CSM origin of the early low-velocity lines, we see a move from higher to lower velocities for the absorption lines between the third and fourth GTC spectra. We simultaneously fit the spectral region $6600-6860$\,\AA\ with multiple Gaussian absorption lines, corresponding to Sc~{\sc ii}, Fe~{\sc i}, Ca~{\sc i}, Li~{\sc i,} and Ni~{\sc i}, with the offsets between the different lines fixed with respect to each other in velocity space and a single FWHM (in velocity space) for all the lines. This yields best-fitting absorption-minimum velocities of $-166$, $-149,$ and $-76$\,km\,s$^{-1}$ (taking $z=0.00154$ as $0$\,km\,s$^{-1}$) for the 78.4, 128.5, and 165.4\,day spectra, respectively. The uncertainties on these velocities will be dominated by the continuum fitting, which is difficult with so many absorption lines present, and by the uncertainty on the wavelength calibration of the spectra. We therefore estimate the errors to be $\sim$\,20\,km\,s$^{-1}$.
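The structure of this tied fit is illustrated below with \texttt{scipy} (a sketch only: the rest wavelengths are placeholders for the actual line list, and the starting values are arbitrary):

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

C_KMS = 299792.458
REST = np.array([6605.0, 6664.0, 6718.0])  # placeholder wavelengths

def model(params, wave):
    """Gaussian absorption lines sharing one velocity and one FWHM."""
    v, fwhm, cont = params[:3]
    depths = params[3:]
    out = np.full_like(wave, cont)
    for lam0, depth in zip(REST, depths):
        centre = lam0 * (1.0 + v / C_KMS)
        sigma = lam0 * (fwhm / 2.3548) / C_KMS  # FWHM -> sigma
        out -= depth * np.exp(-0.5 * ((wave - centre) / sigma) ** 2)
    return out

def fit_lines(wave, flux):
    p0 = np.concatenate([[-150.0, 300.0, 1.0],
                         0.1 * np.ones(len(REST))])
    res = least_squares(lambda p: model(p, wave) - flux, p0)
    return res.x  # res.x[0] is the common velocity in km/s
\end{verbatim}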
The Na~{\sc i}\,D complex fit of the first GTC spectrum (shown in Figure~\ref{nai}) initially came out slightly bluer than expected, at $z=-0.00006\pm0.00004$ and $z=0.00148\pm0.00001$ for the two components. Correcting the subsequent GTC spectra for this fitted Na~{\sc i}\,D absorption left some residual absorption at the red end of Na~{\sc i}\,D 5895.9\,\AA, visible in all three spectra. If we shift the spectra by $+$\,0.4\,\AA, thereby placing the Galactic Na~{\sc i} lines at $z=0$ and those of M51 at $z=0.00155$, essentially identical to the canonical value of $z=0.00154$, the previously discussed residual disappears, so we interpret this as a systematic error in the wavelength calibration. The region around Na~{\sc i}\,D, corrected for the narrow Na~{\sc i}\,D complex (Galactic and M51; fit from the first spectrum), is shown in Figure~\ref{nai}. This reveals that after the first spectrum, we also see higher-velocity Na~{\sc i}\,D absorption associated with the outflowing or ejected material. This is already visible before the correction as a blueward broadening of the lines. This additional Na~{\sc i}\,D absorption is at velocities consistent with other lines, such as Ba~{\sc ii} and Sc~{\sc ii}.
In the second, third, and fourth GTC spectra, the forest of metal absorption lines makes accurately measuring the width of the [Ca~{\sc ii}] lines difficult, including some cases where lines are superimposed, but there is no clear evidence of [Ca~{\sc ii}] velocity evolution during this time. The observation that [Ca~{\sc ii}] does not mirror the absorption-line evolution is unsurprising given that the lines must be produced in a low-density region (to avoid collisional de-excitation).
\subsection{Emission lines}
In the $t=33.7$\,d spectrum, the high-S/N H$\alpha$ line shows a narrow peak with broad wings. The line is poorly fit by either a single Gaussian or a single Lorentzian profile. A two-component Lorentzian profile gives a better fit to the data than a two-component Gaussian profile. However, given that the narrow component of the fit is only just resolved, we opt to fit this component with a Gaussian. The resulting broad-Lorentzian + narrow-Gaussian fit is shown in Figure~\ref{fig:ha-morph}. After correcting for spectral resolution, we measure a FWHM of $\sim$\,900\,km\,s$^{-1}$ for the broad component and a FWHM of $\sim$\,130\,km\,s$^{-1}$ for the narrow component. The narrower component approaches the spectral resolution ($R\sim2500$), which should be kept in mind when considering this $\sim$\,130\,km\,s$^{-1}$ velocity measurement.
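Such a two-component fit can be set up compactly with \texttt{astropy.modeling}, as in this sketch (the initial guesses are illustrative assumptions, not our fitted values):

\begin{verbatim}
from astropy.modeling import models, fitting

def fit_halpha(wave, flux):
    """Broad Lorentzian + narrow Gaussian + flat continuum."""
    broad = models.Lorentz1D(amplitude=1.0, x_0=6565.0, fwhm=20.0)
    narrow = models.Gaussian1D(amplitude=2.0, mean=6565.0, stddev=1.5)
    continuum = models.Const1D(amplitude=0.1)
    compound = broad + narrow + continuum
    fitter = fitting.LevMarLSQFitter()
    return fitter(compound, wave, flux)
\end{verbatim}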
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ha_plot.pdf}
\caption{Evolution of the H$\alpha$ emission line of AT~2019abn in our four GTC spectra. The red dashed line shows a two-component (Lorentzian + Gaussian) fit to the $t=33.7$\,day spectrum.}
\label{fig:ha-morph}
\end{figure}
Fitting the H$\alpha$ and H$\beta$ lines in the first GTC spectrum indicates a redshift of $z\sim0.00178$. This is higher than the typical redshift value for M51 and could potentially be influenced by asymmetric lines. High-resolution spectroscopy of NGC~300~OT showed some of the emission lines to be highly asymmetric, with the blue side of the [Ca~{\sc ii}] lines almost entirely suppressed \citep{2009ApJ...699.1850B}. At a lower resolution, this could still give a reasonable line fit, but with the effect of the lines appearing to be slightly redshifted. It is worth noting, however, that this higher redshift of $z\sim0.00178$ (i.e.\ $\sim$\,70\,km\,s$^{-1}$ from the redshift of M51) is similar to the observed `redshift' of the early narrow absorption lines discussed in Section~\ref{sec:narrow}, possibly pointing towards a peculiar velocity of AT~2019abn with respect to the dust causing the majority of the Na~{\sc i}\,D absorption. If this is the case, then the absorption-line velocities discussed in Section~\ref{sec:vel} would be $\sim$\,60\,km\,s$^{-1}$ higher (i.e.\ more negative).
In the latter three GTC spectra, the H$\alpha$ line shows more structure. Some of this double-peaked structure is real and has been seen in other ILRTs (e.g.\ NGC~300~OT; \citealp{2009ApJ...699.1850B}). However, it is not possible to quantitatively fit these components due to H$\alpha$ emission in the surrounding region of M51, which makes an accurate background subtraction difficult. Given all of our spectra were taken at the parallactic angle, it is also possible that the amount of background contamination varies between spectra. The double peaked structure of the H$\alpha$ line indicates asphericity and could even point to a disk-like configuration (see e.g.\ \citealp{2000ApJ...536..239L,2018MNRAS.477...74A}). The fact that this is seen only at late times could point towards it being embedded in a more spherical ejecta initially and then becoming revealed when the optical depth of the more spherical component becomes sufficiently low.
\subsection{Temperature from spectra}
The absorption-line evolution from the GTC spectra is broadly consistent with that implied by the post-peak SED fitting, where the material gradually cools. Assuming the peak-luminosity low-velocity absorption lines are associated with pre-existing material, as discussed in Section~\ref{sec:narrow}, at peak the photosphere is too hot to produce strong metal absorption lines. Around 45 days later, when the second GTC spectrum was taken, the ejecta had cooled sufficiently for a host of singly ionised metal lines to appear, along with some neutral metal lines. The photometric SED fitting implies that during this time, the photosphere could have cooled by $\sim$1000\,K.
When the next GTC spectrum was taken, another 50\,days later, the SED fitting indicates that the material has cooled by a further $>$1000\,K, which is reflected by more prominent neutral metal absorption, particularly Fe~{\sc i} and Ni~{\sc i}. Our SED fitting assumes no dust formation during the phases observed in this work, which, if present, would have the effect of the fitting giving a cooler temperature than was the case. While we cannot accurately measure the temperature from the spectra, they do at least confirm that the observed photosphere is indeed cooling with time.
\section{Discussion}
After correcting for the implied reddening, AT~2019abn is shown to be the most luminous ILRT observed to date (see Section~\ref{sec:com}). At $M_{r'}=-15.2\pm0.2$, it is in the absolute magnitude range of low-luminosity Type~IIP SNe. However, these low-luminosity Type IIP~SNe still have velocities $>$\,1000\,km\,s$^{-1}$ (see e.g. \citealp{2004MNRAS.347...74P,2018ApJ...859...78N}), much higher than anything we observe from AT~2019abn. While AT~2019abn and other ILRTs show similar spectra to LBV outbursts such as UGC~2773~OT~2009-1 and SN~2009ip \citep{2010AJ....139.1451S,2011ApJ...732...32F}, their light-curve evolution is much more rapid than that of those LBV eruptions.
\citet{2019arXiv190407857J} identified a variable 4.5\,$\mu$m source in archival \textit{Spitzer} data, coincident with the position of AT~2019abn. Dusty progenitor stars were also found for SN~2008S and NGC~300~OT \citep{2008ApJ...681L...9P,2009ApJ...699.1850B,2009ApJ...695L.154B}. However, the data published by \citet{2019arXiv190407857J} for AT~2019abn represent the first time that variability has been detected in the luminosity of an ILRT progenitor. Infrared follow-up of AT~2019abn over the coming decade will be important in helping to understand its nature. Both SN~2008S and NGC~300~OT are now fainter than their progenitor stars \citep{2016MNRAS.460.1645A}.
\subsection{Comparison to other ILRTs} \label{sec:com}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{2019abnlccomp.pdf}
\caption{Comparison between $r'/r/R$-band light curves of AT~2019abn to SN~2008S, NGC~300~OT, PTF~10fqs and AT~2017be. All light curves have been corrected for Galactic and internal reddening (as discussed in the text).\label{fig:comp}}
\end{figure}
We compare the absolute $r'$-band light curve of AT~2019abn with other ILRTs in Figure~\ref{fig:comp}. The light curves are SN~2008S (assuming $E_{B-V,~ {\mathrm{Host+CSM}}}=0.3$, \citealp{2009MNRAS.398.1041B}; $\mu_0=28.78$, \citealp{2006MNRAS.372.1315S}), NGC~300~OT ($E_{B-V,~ {\mathrm{Host+CSM}}}=0.25$, \citealp{2018MNRAS.480.3424C}; $\mu_0=26.29$, \citealp{2016AJ....151...88B}), PTF~10fqs ($E_{B-V,~ {\mathrm{Host+CSM}}}=0.4$, this work; $\mu_0=30.82$, \citealp{2009ApJ...694.1067P}), and AT~2017be ($E_{B-V,~ {\mathrm{Host+CSM}}}=0.04$, $\mu_0=29.47$; \citealp{2018MNRAS.480.3424C}). All light curves are also corrected for foreground Galactic reddening from the \citet{2011ApJ...737..103S} dust maps.
Figure~\ref{fig:comp} shows that AT~2019abn is substantially more luminous than other members of the class.
The ($r-i$) colour of PTF~10fqs near peak (see \citealp{2011ApJ...730..134K}) suggests significant reddening. The assumption of significant dust is consistent with the high Na~{\sc i}\,D EW and low SED-fitted temperature of $\sim3900$\,K (when no extinction correction is made; \citealp{2011ApJ...730..134K}). Given the spectroscopic similarity between PTF~10fqs and other ILRTs, such a low peak temperature seems implausible (indeed \citealp{2011ApJ...730..134K} only derive this temperature as a lower limit). Assuming a similar temperature to other ILRTs and $R_V=3.1$, the $(r-i)$ colour suggests additional reddening in the region of $E_{B-V}\sim0.4$\,mag for PTF~10fqs. We use this value to correct the $r$-band light curve of PTF~10fqs, as discussed above.
Despite AT~2017be and AT~2019abn being the lowest- and highest-luminosity ILRTs, respectively, the shapes of their \textit{r}-band light curves are very similar, with both displaying a fast rise to peak followed by a slow linear (in magnitude) decline and then a faster linear decline. The exception is that AT~2017be showed a faster decline shortly after peak, prior to the slow linear decline described above \citep{2018MNRAS.480.3424C}.
\subsection{H$\alpha$ flux evolution}
The evolution of the H$\alpha$ emission-line flux for AT~2019abn is shown in Figure~\ref{fig:ha}. This shows that the H$\alpha$ emission peaks at around the same time as the optical continuum. However, the H$\alpha$ flux declines much more rapidly than the optical continuum, even when compared to the \textit{B}-band. The \textit{u}$'$-band decline could be similar, but the combination of poor \textit{u}$'$-band coverage (due to the faintness of AT~2019abn in that band) and the lack of spectra between day~34 and day~79 makes it impossible to tell. From our spectra taken from $t=78.4$\,d onward, the H$\alpha$ flux appears to remain approximately constant. Similar behaviour of a rapid H$\alpha$ decline followed by a plateau has been seen in other ILRTs (see Fig.~11 of \citealp{2018MNRAS.480.3424C}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{2019abnha.pdf}
\caption{Evolution of H$\alpha$ emission flux for AT~2019abn. Black circles show SPRAT measurements, with grey triangles showing OSIRIS measurements. The uncertainties on the calibration and measurements themselves will generally be relatively low. In some cases there may be significant uncertainties from the sky subtraction however, due to the complicated M51 background. The \textit{u}$'$-band (purple), \textit{B}-band (blue) and \textit{r}-band (orange) optical light curves are also shown for reference (in relative flux, rather than magnitude, and not to any absolute scale).}
\label{fig:ha}
\end{figure}
\subsection{Temperature and luminosity evolution}
Using our photometry and the fitted temperatures from Figure~\ref{fig:temp}, we compute the integrated luminosity between 3000--23000\,\AA\ (i.e.\ approximately \textit{u}--$K_{\mathrm{s}}$-band), assuming a blackbody. The calculated luminosity evolution is shown in Fig.~\ref{fig:lum}. After turning over at peak, the luminosity evolution follows a monotonic decline, as seen in other ILRTs \citep{2018MNRAS.480.3424C}. There is no evidence for a secondary peak as seen in some LRNe. The rate of decline at the end of our light curve rules out radioactive decay of $^{56}$Co as the primary energy source powering the light curve up to the end of our observations. This monotonically declining luminosity is similar to that seen in other ILRTs, and in sharp contrast to LRNe, which show a long plateau or second peak in their luminosity. This plateau or rise to secondary peak in LRNe has already started by 40\,days after peak brightness (see Fig.~4 of \citealp{2019arXiv190913147C}), yet our data of AT~2019abn over the course of 200\,days ($>$170\,days after peak) show no sign of such a signature.
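The band-limited luminosity follows from integrating the Planck function over the stated wavelength range; a sketch of the computation (cgs units, given a fitted temperature and radius):

\begin{verbatim}
import numpy as np

H, C, K = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def band_luminosity(T, radius_cm, lam_lo=3000e-8, lam_hi=23000e-8):
    """L over [lam_lo, lam_hi] (cm): 4 pi R^2 times the integral
    of the surface flux pi * B_lambda."""
    lam = np.linspace(lam_lo, lam_hi, 20000)
    planck = 2*H*C**2 / lam**5 / (np.exp(H*C / (lam*K*T)) - 1.0)
    return 4.0 * np.pi**2 * radius_cm**2 * np.trapz(planck, lam)
\end{verbatim}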
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{2019abnlum.pdf}
\includegraphics[width=\columnwidth]{2019abnraf.pdf}
\caption{\textit{Top:} Luminosity evolution of AT~2019abn, derived from the blackbody fits to the photometry and integrated between 3000--23000\,\AA. The fading timescale expected if the light curve was powered by the radioactive decay of $^{56}$Co is also indicated for reference. \textit{Bottom:} Radius evolution of AT~2019abn, as derived from the luminosity and temperature fits, assuming luminosity, $L=4\pi R^2\sigma{T_{\mathrm{eff}}}^{4}$. The data points, which are joined by solid lines represent $E_{B-V}=0.85$, with the dashed lines representing $E_{B-V}=0.9$ and~0.79.\label{fig:lum}}
\end{figure}
We also compute the radius evolution from the luminosity and temperature calculations, assuming a luminosity $L=4\pi R^2\sigma{T_{\mathrm{eff}}}^{4}$, where \textit{R} is the radius and $\sigma$ is the Stefan–Boltzmann constant. This evolution is shown in the bottom panel of Figure~\ref{fig:lum}. The radius initially seems to fall before gradually increasing. This appears different from the typical ILRT behaviour, which shows a slowly declining radius, whereas LRNe show a radius increasing with time (see Fig.~4 of \citealp{2019arXiv190913147C}). However, looking in more detail, this also differs from LRN behaviour, which shows an increase in radius of as much as an order of magnitude by $t=200$\,days (again, see Fig.~4 of \citealp{2019arXiv190913147C}), whereas the difference between the maximum and minimum computed radius over the entire span of our observations is only a factor of $\sim1.5$. At the end of our observations (when AT~2019abn became unobservable due to the Sun), AT~2019abn had magnitudes of $V\sim21.5$ and $r'\sim20$, so deep, late-time observations will be needed to track the radius evolution further. The increase in radius from peak optical brightness translates to a velocity of $\sim$100\,km\,s$^{-1}$, which is of the same order as the absorption-line velocities we observe in the spectra.
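For completeness, inverting the Stefan–Boltzmann relation for the radius (cgs units assumed):

\begin{verbatim}
import numpy as np

SIGMA = 5.670e-5  # Stefan-Boltzmann constant, cgs

def radius_from_lt(L, T_eff):
    """Invert L = 4 pi R^2 sigma T^4 for R."""
    return np.sqrt(L / (4.0 * np.pi * SIGMA * T_eff**4))
\end{verbatim}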
\subsection{Rise to peak}
The early discovery of AT~2019abn and our daily-cadence multi-colour follow-up at early times make it possible to probe the early evolution of an ILRT for the first time. We therefore show the early portion of the light curve in more detail in Figure~\ref{early}. We also fit the $BVr'i'$ photometry with a blackbody during this stage of the evolution to derive the changes in temperature, dust, luminosity, and radius implied by different assumptions. These fits are also shown in Figure~\ref{early}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{2019abnlc_early.pdf}
\caption{\textit{First panel (top):} Early light curve of AT~2019abn (key as in Figure~\ref{fig:lc}). \textit{Second panel:} Early temperature evolution of AT~2019abn, assuming no change in dust extinction. \textit{Third panel:} Implied additional dust extinction to that seen at peak, if one assumes a constant temperature of $T=7500$\,K over this portion of the light curve. \textit{Fourth panel:} early luminosity evolution of AT~2019abn. \textit{Fifth panel (bottom):} early radius evolution of AT~2019abn. In the last two panels, the black and grey points represent the different assumptions of constant extinction ($E_{B-V}=0.85$) and constant temperature ($T=7500$\,K), respectively. \label{early}}
\end{figure}
Assuming a constant temperature of 7500\,K, \citet{2019arXiv190407857J} suggest dust destruction as the cause of AT~2019abn becoming increasingly blue during its rise to peak. Applying the same constant-temperature assumption to our data shows that if all of the pre-peak colour change is due to changes in extinction, additional extinction of $A_V\sim0.7$\,mag (assuming a constant $R_V=3.1$) is required to explain the observed colours in the first $\sim$10\,days after discovery. This is consistent with the result derived by \citet{2019arXiv190407857J}. The evolution of this as AT~2019abn rises to peak can be seen in the middle panel of Figure~\ref{early}.
Unfortunately, our early-time data are not sufficient to discriminate between changes in temperature and changes in dust extinction. As an alternative, we therefore assume a fixed extinction (i.e.\ no dust destruction) and that changes in the pre-maximum colour are entirely due to changes in the intrinsic temperature of the photosphere. This is shown in the second panel of Figure~\ref{early}. The implied change in temperature is $\sim$1500\,K between our early data and peak. If similar colour evolution is seen in other ILRTs, high-S/N spectra of an ILRT during this early rise should therefore help reveal the nature of the behaviour and distinguish between temperature and dust evolution. The bottom two panels of Figure~\ref{early} show the relative early luminosity and radius evolution derived under the two alternative assumptions of constant extinction and constant temperature. The approximately achromatic early rise can be seen under both assumptions, with significant changes in the second and third panels of Figure~\ref{early} only really beginning around ten days after discovery.
\section{Summary and conclusions}
We conducted multi-wavelength follow-up observations of AT~2019abn, located in the nearby M51 galaxy. Here we summarise our findings:
\begin{enumerate}
\item Our observations of AT~2019abn yield the most detailed early light curve of any ILRT to date, starting around three weeks before and more than two magnitudes below peak brightness.
\item The observations of the initial rise, when AT~2019abn was $>$\,1~mag below peak, are consistent with an achromatic (\textit{BVr$'$i$'$}) rise in luminosity. As it approaches peak, the colours become bluer.
\item AT~2019abn is subject to significant M51 extinction (interstellar + CSM). From the expected peak temperature of an ILRT, we derive an estimate of $E_{B-V}\sim0.85$, assuming $R_V=3.1$.
\item Low-resolution spectroscopy of AT~2019abn, beginning during the rise to peak, shows that the H$\alpha$ flux peaks at a similar time to the optical continuum but then fades much more rapidly than the optical continuum before plateauing.
\item Fitting a blackbody to our multi-wavelength photometry indicates that after peak brightness, the temperature declines slowly over time. We estimate the rate of this decline to be $\sim25$\,K\,day$^{-1}$ during this phase.
\item From the blackbody fits, we find that the luminosity of the transient shows a monotonic decline after peak, similar to other ILRTs and in contrast to LRNe. The fits indicate that the implied radius slowly increases, with this marginal increase over time not matching well with other transients of either the ILRT or LRN class.
\item The GTC spectra taken as AT~2019abn declines are broadly consistent with this temperature evolution, showing increasingly strong absorption from neutral species such as Fe~{\sc i}, Ni~{\sc i,} and Li~{\sc i}.
\item The first GTC spectrum, taken 33.7\,d after discovery, shows narrow (unresolved) low-velocity absorption from species such as Si~{\sc ii}, Sc~{\sc ii}, Y~{\sc ii,} and Cr~{\sc ii}, which we interpret as most likely arising from pre-existing material from around the system.
\item We conclude that while there may be some differences with other members of the class (such as the radius evolution), AT~2019abn is best described as an ILRT. Observations of the final stages of the transient's evolution will be needed to confirm this evaluation. \end{enumerate}
The early discovery of nearby ILRTs, such as AT~2019abn, will be key in furthering our understanding of these objects, as will building a larger sample with the increased volume in which such objects can be regularly discovered thanks to deeper all-sky surveys such as ZTF, ATLAS, and, in the future, LSST.
\begin{acknowledgements}
We thank the anonymous referee for useful feedback on the submitted manuscript. SCW and IMH acknowledge support from UK Science and Technology Facilities Council (STFC) consolidated grant ST/R000514/1. DJ acknowledges support from the State Research Agency (AEI) of the Spanish Ministry of Science, Innovation and Universities (MCIU) and the European Regional Development Fund (FEDER) under grant AYA2017-83383-P. DJ also acknowledges support under grant P/308614 financed by funds transferred from the Spanish Ministry of Science, Innovation and Universities, charged to the General State Budgets and with funds transferred from the General Budgets of the Autonomous Community of the Canary Islands by the Ministry of Economy, Industry, Trade and Knowledge. This research was also supported by the Erasmus+ programme of the European Union under grant number 2017-1-CZ01-KA203-035562. MJD acknowledges support from STFC consolidated grant ST/R000484/1. The work of OP has been supported by Horizon 2020 ERC Starting Grant ``Cat-In-hAT'' (grant agreement \#803158) and INTER-EXCELLENCE grant LTAUSA18093 from the Czech Ministry of Education, Youth, and Sports.
The work is based on observations with the Liverpool Telescope, which is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the STFC. Some of these observations were obtained through director’s discretionary time (DDT) programme JQ19A01 (PI: Darnley).
This work is also based on observations made with the Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, in the island of La Palma, under DDT (programme ID GTC2019-110, PI: Jones).
Also based on observations made with the IAC80 telescope operated on the island of Tenerife by the Instituto de Astrof\'isica de Canarias in the Spanish Observatorio del Teide, and on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrof\'isica de Canarias.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:introduction}
A completeness proof for a 1-free process theory modulo bisimilarity was recently
presented in \cite{gf20}\footnote{Preprint available at:
\texttt{https://www.cs.vu.nl/{\textasciitilde}wanf/publications.html}}\footnote{Extended
version available at: \texttt{https://arxiv.org/abs/2004.12740}}.
The complexity of this proof provided a powerful incentive
to search for a simpler solution, which is presented in this work. This different and
considerably simpler proof only uses induction and normal forms, and has been additionally
verified by means of the Coq proof assistant.
This paper is an intermediate step in a research line towards the resolution of a question
originally posed in \cite{mil84}: does there exist a finite and complete
axiomatization for the unary Kleene star under bisimilarity equivalence? This problem
is usually considered in the context of process theories including the constants 0
(deadlock), 1 (empty process) and the operators + (non-deterministic choice) and
$\cdot$ (sequential composition). Milner himself suggested that solving this problem
may involve considerable effort \cite{mil84}. The question remains
unanswered to this day.
Earlier attempts in this direction include a completeness proof in absence of both
the constants 0 and 1 in \cite{bbp94}, and a variant where every Kleene star appears
as $p^*\cdot 0$ in \cite{fok97}. Completeness proofs for simpler theories (e.g.
without the Kleene star) can be found in any process algebra handbook (cf. \cite{jb10}).
The new approach in \cite{gf20} deviates from the rather syntactic treatment in earlier
works and instead takes the more semantic avenue of using process charts. This toolset is
applied to prove completeness for a 1-free process theory over the binary Kleene star
modulo bisimilarity. While novel and innovative, this results in quite a complex proof.
This paper presents a two-fold approach. First, it is shown that every process term
has a bisimulant in a normal form and of non-increased star nesting depth. In essence,
our normalization requirement conforms to the following congruence property:
if term $p$ reduces to both $t$ and $u$ in one or more steps then bisimilarity of
$t$ and $u$ is a consequence of bisimilarity of $t\cdot\binstar{p}{q}$ and $u\cdot
\binstar{p}{q}$. The second part of the proof applies induction towards the star
nesting depth and proves equality under the condition that one operand is normalized,
which is sufficient due to symmetry and transitivity.
This paper is set up in self-contained form. No claim is made that the supporting Coq
code is the most neat or elegant reflection of such a proof in formalized mathematics,
it just serves as an additional layer of verification.
The remainder of this paper is organized as follows. Section \ref{sec:definitions}
contains a number of basic definitions and section \ref{sec:soundness} concerns
the soundness of the axiom system. These basic preliminaries are then followed
by three sections in which the completeness proof is built from the ground up.
Section \ref{sec:summation} defines a summation operator and proves some basic
results. Normal forms of terms are defined and shown to be derivable under
bisimilarity in section \ref{sec:normalization}. These results are then integrated
into a completeness proof in section \ref{sec:completeness}. The formalization of
the theory in the Coq proof assistant is detailed in section \ref{sec:formalization},
which mainly serves to make this material more accessible to readers who are less
familiar with such techniques. A short concluding section \ref{sec:conclusions}
refines Milner's completeness problem to a more detailed conjecture.
\section{Definitions}
\label{sec:definitions}
We will mostly follow standard nomenclature and definitions in process algebra and
propose the books \cite{jb10} and \cite{fok13} as general reference texts. Throughout
this work, we will assume $A$ to be a set of \emph{actions}. There is no requirement that
$A$ is finite, as this proof concerns the completeness of closed terms only. Actions form
the elementary operations in the set of process terms $T$ defined inductively as
\begin{center}
\begin{math}
T\mathbin{::=}\,\,0\mid A\mid T+T\mid T\cdot T\mid\binstar{T}{T}.
\end{math}
\end{center}
In the process algebra $T$ the constant $0$ expresses deadlock (i.e.
a process exhibiting no behavior). Every action $a\in A$ induces a step
$a$ to the special termination symbol $\surd$ as its sole behavior (note
that $\surd$ is not part of the algebra $T$). The operation $p+q$
models a (possibly non-deterministic) choice between $p$ and $q$, while the
sequential composition $p\cdot q$ denotes the concatenation of the behaviors
of $p$ and $q$. The binary Kleene star $\binstar{p}{q}$ models zero or more
iterations of $p$, possibly followed by $q$. Sequential composition is assumed
to be right-associative, while the other operators associate to the left. The
Kleene star binds stronger than sequential composition, which in turn binds
stronger than plus.
Process terms exhibit behavior defined as taking actions resulting in
either a new process term or the termination symbol $\surd$. We set
$V=T\cup\{\surd\}$ and define a relation $\longrightarrow
\subseteq V\times A\times V$ to formally capture this behavior. Assume that
$p,q,p',q'\in T$, $v\in V$, and $a\in A$ in the set of derivation rules listed
below, using the notation $p\step{a}q$ for $(p,a,q)\in\longrightarrow$.
\begin{center}
\begin{tabular}{c}
$\displaystyle\frac{}{a\step{a}\surd}$
\qquad
$\displaystyle\frac{p\step{a}v}{p+q\step{a}v}$
\qquad
$\displaystyle\frac{q\step{a}v}{p+q\step{a}v}$
\qquad
$\displaystyle\frac{p\step{a}p'}{p\cdot q\step{a}p'\cdot q}$
\bigskip \\
$\displaystyle\frac{p\step{a}\surd}{p\cdot q\step{a}q}$
\qquad
$\displaystyle\frac{p\step{a}p'}{\binstar{p}{q}\step{a}p'\cdot\binstar{p}{q}}$
\qquad
$\displaystyle\frac{p\step{a}\surd}{\binstar{p}{q}\step{a}\binstar{p}{q}}$
\qquad
$\displaystyle\frac{q\step{a}v}{\binstar{p}{q}\step{a}v}$
\bigskip \\
\end{tabular}
\end{center}
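As a small worked example of these rules: since $a\step{a}\surd$, the rules for the Kleene star yield $\binstar{a}{b}\step{a}\binstar{a}{b}$, and since $b\step{b}\surd$, the rule for the right operand yields $\binstar{a}{b}\step{b}\surd$.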
Bisimilarity is a coinductively defined relationship which relates process
terms in a lock-step fashion. The proof in \cite{gf20} defines bisimilarity
in terms of process charts. As these constructs are not used in this
proof, we will employ a definition in terms of $V$.
Elements $u,v\in V$ are bisimilar (notation $\bisim{u}{v}$) if
there exists a relation $R\subseteq V\times V$ such that
$(u,v)\in R$ and for all $(x,y)\in R$ the following are satisfied
\begin{enumerate}
\item $x = \surd$ if and only if $y = \surd$,
\item for all $x\step{a}x'$ there exists a $y'\in V$ such that
$y\step{a}y'$ and $(x',y')\in R$, and
\item for all $y\step{a}y'$ there exists an $x'\in V$ such that
$x\step{a}x'$ and $(x',y')\in R$.
\end{enumerate}
Examples of pairs of bisimilar terms include $\bisim{\binstar{(\binstar{a}{b})}{c}}{\binstar{(a+b)}{c}}$
and $\bisim{\binstar{(a+aa)}{0}}{\binstar{a}{0}}$, but the terms
$a\cdot b+a\cdot c$ and $a\cdot(b+c)$ are not bisimilar.
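For the second pair, a witnessing relation can be written down explicitly: abbreviating $p\equiv\binstar{(a+a\cdot a)}{0}$ and $q\equiv\binstar{a}{0}$, the relation $R=\{(p,q),(a\cdot p,q)\}$ satisfies the three conditions above, since the only derivable steps are $p\step{a}p$, $p\step{a}a\cdot p$, $a\cdot p\step{a}p$ and $q\step{a}q$.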
The following set of axioms will be shown to be sound and complete
with regard to bisimilarity in this paper. In a very strict context,
these should be interpreted as axiom-schemes, in the sense that for
each closed instance of the variables, the corresponding axiom is
defined.
\begin{center}
\begin{tabular}{lrcl|llrcl}
(B1) & $x+y$ & = & $y+x$ & & (B6) & $x+0$ & = & $x$ \\
(B2) & $(x+y)+z$ & = & $x+(y+z)$ & & (B7) & $0\cdot x$ & = & $0$ \\
(B3) & $x+x$ & = & $x$ & & (BKS1) & $x\cdot\binstar{x}{y}+y$ & = & $\binstar{x}{y}$ \\
(B4) & $(x+y)\cdot z$ & = & $x\cdot z+y\cdot z$ &
& (BKS2) & $(\binstar{x}{y})\cdot z$ & = & $\binstar{x}{(y\cdot z)}$ \\
(B5) & $(x\cdot y)\cdot z$ & = & $x\cdot(y\cdot z)$ &
& (RSP) & $x$ & = & $y\cdot x+z$\,\,\,\,\textrm{implies} \\
& & & & & & $x$ & = & $\binstar{y}{z}$ \\
\end{tabular}
\end{center}
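As a small example of the axiom system at work, the equality $\binstar{a}{0}=\binstar{(a+a)}{0}$ is derivable: BKS1 and B6 give $\binstar{a}{0}=a\cdot\binstar{a}{0}+0=a\cdot\binstar{a}{0}$, then B3 and B4 give $\binstar{a}{0}=a\cdot\binstar{a}{0}+a\cdot\binstar{a}{0}=(a+a)\cdot\binstar{a}{0}$, hence $\binstar{a}{0}=(a+a)\cdot\binstar{a}{0}+0$ by B6, and RSP yields $\binstar{a}{0}=\binstar{(a+a)}{0}$.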
The axiomatization is a straightforward adaptation of the set of
axioms originally proposed in \cite{mil84}, which was in turn
adapted from a language-theoretic setting based on work by Salomaa
\cite{sal66}. It is well-known that at least one higher-order
construct is required as shown in \cite{sew94}. Variants of
the axiom of the recursive specification principle (RSP) have been
studied extensively (cf. \cite{jb10}).
For the remainder of this paper, the notation $p=q$ will be used to
denote axiomatic equality whereas $p\equiv q$ will be used to denote
exact syntactic equality (e.g. $a+(b+c)=(a+b)+c$ but
$a+(b+c)\not\equiv(a+b)+c$).
The definitions listed here are sufficient to formulate the main
result of this work, the proof of which is divided over several succeeding
sections.
\begin{theorem}
For all $p,q\in T$ such that $\bisim{p}{q}$ it holds that $p=q$.
\end{theorem}
\section{Soundness}
\label{sec:soundness}
We briefly consider soundness of the theory to fulfill the objective
of being self-contained and to state a simple lemma which serves as a
very useful building block in the remainder of the proof.
\begin{lemma}
\label{lem:bisim_next}
For all $p,q\in T$ such that
\begin{enumerate}
\item for all $p\step{a}p'$ there exists a $q\step{a}q'$ such that $\bisim{p'}{q'}$ and
\item for all $q\step{a}q'$ there exists a $p\step{a}p'$ such that $\bisim{p'}{q'}$,
\end{enumerate}
it holds that $\bisim{p}{q}$.
\end{lemma}
Soundness of the axiom RSP is not deep, but it is also not entirely straightforward.
\begin{lemma}
\label{lem:rsp_sound}
If $\bisim{p}{q\cdot p+r}$ then $\bisim{p}{\binstar{q}{r}}$, for all $p,q,r\in T$.
\end{lemma}
\begin{proof}
Assume $R$ is the witnessing relation for $\bisim{p}{q\cdot p+r}$. It is
straightforward to show that the transitive closure $\overline{R}$
of $R$ is again a bisimulation. If $R'$ is defined as:
\begin{center}
\begin{math}
R'=\{(p,\binstar{q}{r})\}\cup\overline{R}\cup\{(p',q'\cdot\binstar{q}{r})
\mid(p',q'\cdot p)\in\overline{R}\}\cup\{(p',\binstar{q}{r})\mid(p',p)\in\overline{R}\}
\end{math}
\end{center}
then $R'$ can be chosen as a witnessing relation for $\bisim{p}{\binstar{q}{r}}$.
\end{proof}
Soundness of the theory is required as a lemma in the completeness proof.
\begin{lemma}
\label{lem:soundness}
For all $p,q\in T$ such that $p=q$ it holds that $\bisim{p}{q}$.
\end{lemma}
\begin{proof}
Assume that $p=q$ and apply induction towards the derivation tree of $p=q$.
Most of the cases can be resolved easily via Lemma \ref{lem:bisim_next}.
Soundness for the axioms B5 and BKS2 is only slightly more complicated.
Soundness for the axiom RSP is shown in Lemma \ref{lem:rsp_sound}.
\end{proof}
\section{Summation}
\label{sec:summation}
Expressing process terms as sums forms a crucial step between algebraic
and semantic reasoning in the succeeding proofs. For example, the proof
of Lemma \ref{lem:next} becomes much easier once we are able to obtain
such sums under equality.
For finite sets $N\subseteq A \times V$ we define the summation operator
$\sigma:\{N\subseteq A\times V\}\rightarrow T$ recursively by setting
$\sigma(\emptyset)=0$ and
\begin{center}
\begin{math}
\sigma(\{(a,u)\}\cup N)=\sigma(N)+\begin{cases}
a & \textrm{if}\,\,u\,\,\textrm{equals}\,\,\surd \\
a\cdot u & \textrm{if}\,\,u\in T \\
\end{cases}
\end{math}
\end{center}
where the expression $\{(a,u)\}\cup N$ refers to the disjoint union
(i.e. $(a,u)\not\in N$).
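For instance, evaluating the disjoint union in one particular order gives $\sigma(\{(a,\surd),(b,p)\})\equiv(0+b\cdot p)+a$; the axioms B1, B2, and B6 ensure that the terms obtained from different evaluation orders are provably equal.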
\begin{lemma}
\label{lem:summation}
For all $p\in T$ there exists an $N\subseteq A\times V$
such that $p=\sigma(N)$ and for all $(a,u)\in A\times V$ it holds
that $p\step{a}u$ if and only if $(a,u)\in N$.
\end{lemma}
\begin{proof}
Apply induction towards the structure of $p$. In case $p\equiv 0$, choose $\emptyset$
for $N$. If $p\equiv a$ for some $a\in A$, choose $\{(a,\surd)\}$ for $N$. For the case
$p\equiv p_1+p_2$, observe that $\sigma(N_1\cup N_2)=\sigma(N_1)+\sigma(N_2)$
always holds.
If $p\equiv p_1\cdot p_2$, first use induction to obtain $N_1$ such that
$p_1=\sigma(N_1)$. Then, apply the following projection to each element
$(a,u)\in N_1$: (1) if $u$ equals $\surd$ then project to $(a,p_2)$, (2)
if $u\in T$ then project to $(a,u\cdot p_2)$ and name the result $N'$.
Subsequently, we have $p_1\cdot p_2=\sigma(N')$ using B5.
For the case $p\equiv\binstar{p_1}{p_2}$, use BKS1 to rewrite as $p_1\cdot
\binstar{p_1}{p_2}+p_2$ and treat the part $p_1\cdot\binstar{p_1}{p_2}$ in
the same way as for $p\equiv p_1\cdot p_2$.
\end{proof}
We apply Lemma \ref{lem:summation} to obtain a useful intermediate result
stated in Lemma \ref{lem:next}, which is very similar in form to Lemma
\ref{lem:bisim_next}. We define the predicate $\teq{u}{v}$ for all $u,v\in
V$ as shown below, where $\mathrm{teq}$ stands for termination-or-(axiomatically)equal.
\begin{center}
\begin{math}
\teq{\surd}{\surd}=\mathit{true}\qquad
\teq{p}{q}\iff p=q\,\,\textrm{if}\,\,p,q\in T
\end{math}
\end{center}
and define $\teq{u}{v}$ as $\mathit{false}$ in all other cases.
\begin{lemma}
\label{lem:next}
For all $p,q\in T$ such that
\begin{enumerate}
\item for all $p\step{a}p'$ there exists a $q\step{a}q'$ such that $\teq{p'}{q'}$ and
\item for all $q\step{a}q'$ there exists a $p\step{a}p'$ such that $\teq{p'}{q'}$,
\end{enumerate}
it holds that $p=q$.
\end{lemma}
\begin{proof}
Assume $p=\sigma(M)$ and $q=\sigma(N)$ for $M,N\subseteq A\times V$
as derived by Lemma \ref{lem:summation}. Rewrite as $p=\sigma(M)=\sigma(M)+\sigma(N)=
\sigma(N)=q$ and solve the two equalities $\sigma(M)=\sigma(M)+\sigma(N)$ and
$\sigma(N)=\sigma(M)+\sigma(N)$ separately by induction towards the size of the
set that appears once in the respective equality.
\end{proof}
\section{Normalization}
\label{sec:normalization}
In the remainder, let $p\longrightarrow^*q$ denote the fact that $p$ reduces to
$q$ in zero or more steps for $q\in T$. Similarly, we define $p\longrightarrow^+q$
representing a reduction in one or more steps for $q\in T$.
The core of this completeness proof uses the result that terms in $T$ can
be normalized under bisimilarity such that for every subterm $\binstar{r}{s}$ and
all reductions $r\longrightarrow^+p$ and $r\longrightarrow^+q$ such that
$\bisim{p\cdot\binstar{r}{s}}{q\cdot\binstar{r}{s}}$ it holds that $\bisim{p}{q}$.
The precise meaning of 'subterm' will become more clear in a short while. We require
a predicate $\mathrm{congr}$ to express (a premise for) this congruence property:
\begin{center}
\begin{math}
\congr{p}{q}\iff\textrm{for all}\,\,p\longrightarrow^+t:\,\,\bisim{t\cdot q}{q}
\,\,\textrm{does not hold}.
\end{math}
\end{center}
Informally, if there exists a $p\longrightarrow^+t$ such that
$\bisim{t\cdot q}{q}$ then $\congr{p}{q}$ is false, and true if no such
$p\longrightarrow^+t$ exists. We may now prove a crucial lemma.
\begin{lemma}
\label{lem:congruence}
If $\bisim{p\cdot r}{q\cdot r}$ and $\congr{p}{r}$ and $\congr{q}{r}$
then $\bisim{p}{q}$, for all $p,q,r\in T$.
\end{lemma}
\begin{proof}
Assume $R$ is a witnessing relation for $\bisim{p\cdot r}{q\cdot r}$
and define $R'$ as follows:
\begin{center}
\begin{math}
R'=\{(\surd,\surd)\}\cup\{(p,q)\}\cup\{(p',q')\mid (p'\cdot r,q'\cdot r)\in R,\,
p\longrightarrow^+p',\,q\longrightarrow^+q'\}
\end{math}
\end{center}
Assume there exists $(p'\cdot r,q'\cdot r)\in R$ such that $(p',q')\in R'$ for some
$p',q'\in T$. If there exists a step $p'\cdot r\step{a}p''\cdot r$ for $p''\in T$
and a step $q'\cdot r\step{a}r$ (i.e. when $q'\step{a}\surd$) such that
$\bisim{p''\cdot r}{r}$ then this contradicts $\congr{p}{r}$. The argument is
symmetric.
\end{proof}
The congruence property may be used to recursively define a normal form,
thereby making the notion of subterm with regard to the congruence property
more precise. Two remarks are important now.
First, suppose we have a term $(\binstar{p}{q})\cdot(\binstar{r}{s})$ then
two properties are desired:
\begin{enumerate}
\item for $p\longrightarrow^+t$ and $p\longrightarrow^+u$ we have:
$\bisim{t\cdot\binstar{p}{q}\cdot\binstar{r}{s}}{u\cdot\binstar{p}{q}\cdot\binstar{r}{s}}$
implies $\bisim{t}{u}$ and
\item for $r\longrightarrow^+x$ and $r\longrightarrow^+y$ we have:
$\bisim{x\cdot\binstar{r}{s}}{y\cdot\binstar{r}{s}}$ implies $\bisim{x}{y}$.
\end{enumerate}
Therefore, the cases for $\binstar{p}{q}$ and $\binstar{r}{s}$ cannot be
separated into two conditions relying solely on $\binstar{p}{q}$ and
$\binstar{r}{s}$. This is resolved by using a binary predicate to express
the fact that a term is normalized.
In general, the process algebra $T$ does not have a neutral element under sequential
composition. This necessitates the definition of two slightly different
predicates for normal forms. Although this makes the proof slightly more
difficult, it does not present a fundamental complication.
We define two normal form predicates $\mathrm{nfmult}$ and $\mathrm{nf}$ recursively
as shown below. Note that there is no mutual dependence between $\mathrm{nf}$ and
$\mathrm{nfmult}$.
\begin{center}
\begin{tabular}{rcl}
$\nfmult{0}{q}$ & $\iff$ & $\mathit{true}$ \\
$\nfmult{a}{q}$ & $\iff$ & $\mathit{true}$ \\
$\nfmult{p_1+p_2}{q}$ & $\iff$ & $\nfmult{p_1}{q}\,\,\textrm{and}\,\,\nfmult{p_2}{q}$ \\
$\nfmult{p_1\cdot p_2}{q}$ & $\iff$ & $\nfmult{p_1}{p_2\cdot q}\,\,\textrm{and}\,\,\nfmult{p_2}{q}$ \\
$\nfmult{\binstar{p_1}{p_2}}{q}$ & $\iff$ & $\nfmult{p_1}{\binstar{p_1}{p_2}\cdot q}\,\,\textrm{and}\,\,
\nfmult{p_2}{q}\,\,\textrm{and}$ \\
& & $\congr{p_1}{\binstar{p_1}{p_2}\cdot q}$ \\
\end{tabular}
\end{center}
and
\begin{center}
\begin{tabular}{rcl}
$\nf{0}$ & $\iff$ & $\mathit{true}$ \\
$\nf{a}$ & $\iff$ & $\mathit{true}$ \\
$\nf{p_1+p_2}$ & $\iff$ & $\nf{p_1}\,\,\textrm{and}\,\,\nf{p_2}$ \\
$\nf{p_1\cdot p_2}$ & $\iff$ & $\nfmult{p_1}{p_2}\,\,\textrm{and}\,\,\nf{p_2}$ \\
$\nf{\binstar{p_1}{p_2}}$ & $\iff$ & $\nfmult{p_1}{\binstar{p_1}{p_2}}\,\,\textrm{and}\,\,
\nf{p_2}\,\,\textrm{and}$ \\
& & $\congr{p_1}{\binstar{p_1}{p_2}}$ \\
\end{tabular}
\end{center}
We consider several simple examples. Observe that $\nf{\binstar{(a\cdot b+a)}{0}}$
holds because there does not exist a reduction sequence $a\cdot b+a\longrightarrow^+t$
such that $\bisim{t\cdot\binstar{(a\cdot b+a)}{0}}{\binstar{(a\cdot b+a)}{0}}$. For
the term $\binstar{(a\cdot\binstar{a}{a})}{0}$, such a reduct $a\cdot\binstar{a}{a}
\longrightarrow^+\binstar{a}{a}$ indeed exists. However, the bisimulant
$\binstar{(a+a)}{0}$ of $\binstar{(a\cdot\binstar{a}{a})}{0}$ is normalized
and constructed as such in Lemma \ref{lem:congr_mult} and Lemma \ref{lem:congr_ex}.
The term $\binstar{(a\cdot\binstar{(a\cdot(b\cdot a+a))}{c})}{0}$ is an example originally
proposed in \cite{fok97} that is re-used in the recent result of Fokkink and Grabmayer \cite{gf20}. For
compactness we abbreviate these as $\binstar{q}{0}\equiv\binstar{(a\cdot\binstar{(a\cdot(b\cdot a+a))}{c})}{0}$
and $p\equiv a\cdot(b\cdot a+a)$ such that $\binstar{q}{0}\equiv\binstar{(a\cdot\binstar{p}{c})}{0}$.
Now observe that $p\longrightarrow^+a$ such that
$\bisim{(a\cdot\binstar{p}{c})\cdot\binstar{q}{0}}{\binstar{q}{0}}$. In
this case, we can obtain a normalized bisimulant
$\bisim{\binstar{q}{0}}{a\cdot\binstar{(p+c\cdot a)}{0}}$. The setup of Lemma
\ref{lem:congr_mult} and Lemma \ref{lem:congr_ex} is precisely tailored to
construct normal forms for these types of cases.
We now prove two straightforward results concerning the normal form
predicates.
\begin{lemma}
\label{lem:right_compat}
Both $\mathrm{nfmult}$ and $\mathrm{congr}$ are right-compatible
under bisimilarity.
\end{lemma}
\begin{proof}
For the first result, use induction towards $p$ to prove that
$\nfmult{p}{r}$ is a consequence of $\nfmult{p}{q}$, given $\bisim{q}{r}$.
A similar result for $\mathrm{congr}$ follows directly from the definition.
\end{proof}
\begin{lemma}
\label{lem:preserved}
Both $\mathrm{nf}$ and $\mathrm{nfmult}$ are preserved under $\longrightarrow$.
\end{lemma}
\begin{proof}
Use induction towards $p$ to prove that $\nf{q}$ is a consequence of
$\nf{p}$ and $p\step{a}q$, for some $a\in A$. Similarly, induction towards
$p$ can be applied to derive $\nfmult{q}{r}$ from $\nfmult{p}{r}$ if
$p\step{a}q$ for some $a\in A$ and $r\in T$.
\end{proof}
We are now ready to prove the two key lemmas for deriving a bisimilar term
satisfying the congruence property. Note that Lemma \ref{lem:congr_mult} is
the first point in the proof where we will use induction towards the star-depth
$d$ which is defined straightforwardly as shown below.
\begin{center}
\begin{tabular}{rcl}
$d(0)=d(a)$ & = & 0 \\
$d(p_1+p_2)=d(p_1\cdot p_2)$ & = & $\max\{d(p_1),d(p_2)\}$ \\
$d(\binstar{p_1}{p_2})$ & = & $\max\{1+d(p_1),d(p_2)\}$ \\
\end{tabular}
\end{center}
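For example, $d(\binstar{(\binstar{a}{b})}{c})=\max\{1+\max\{1+0,0\},0\}=2$, whereas $d(\binstar{a}{b}\cdot\binstar{c}{0})=1$.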
\begin{lemma}
\label{lem:congr_mult}
For all $p,q,r\in T$ such that $\nfmult{p\cdot q}{r}$ and
$\congr{q}{r}$ at least one of the following always holds:
\begin{enumerate}
\item There exists an $s\in T$ such that $\bisim{p\cdot q\cdot r}{s\cdot r}$
and $\nfmult{s}{r}$ and $\congr{s}{r}$ and $d(s)\leq d(p\cdot q)$, or
\item There exists an $s\in T$ such that $\bisim{r}{s\cdot 0}$ and
$\nf{s\cdot 0}$ and $d(s)\leq 1+d(p\cdot q)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first apply (strong) induction towards $d(p)$, thereby generalizing over
all variables, and subsequently induction towards the structure of $p$,
thereby generalizing over $q$ and $r$. Note that the case for $p\equiv 0$
can be solved directly by choosing $s\equiv 0$. If $p\equiv a$ for some $a\in A$
then we distinguish between two cases: (1) if $\bisim{q\cdot r}{r}$ then set
$s\equiv a$, (2) otherwise set $s\equiv a\cdot q$. For the situations $p\equiv p_1+p_2$
and $p\equiv p_1\cdot p_2$, we first consider the cases where the $p$-induction
hypothesis for both operands corresponds with the first possibility in this lemma.
\begin{itemize}
\item If $p\equiv p_1+p_2$ then by induction we can derive $s_1,s_2\in T$ such
that $\bisim{p_1\cdot q\cdot r}{s_1\cdot r}$ and $\bisim{p_2\cdot q\cdot r}{s_2\cdot r}$.
Choose $s\equiv s_1+s_2$.
\item If $p\equiv p_1\cdot p_2$ then first use induction to derive $\bisim{p_2\cdot q\cdot r}{s_2\cdot r}$.
Then, apply induction again setting $q\equiv s_2$ to derive $\bisim{p_1\cdot s_2\cdot r}{s\cdot r}$.
\end{itemize}
Note that for both the cases $p\equiv p_1+p_2$ and $p\equiv p_1\cdot p_2$, if the
induction hypothesis yields the second possibility for an operand, then the second result of this lemma is directly satisfied.
If $p\equiv\binstar{p_1}{p_2}$ then first obtain $s_2\in T$ such
that $\bisim{p_2\cdot q\cdot r}{s_2\cdot r}$ via induction. Now first suppose
that $\bisim{\binstar{p_1}{s_2}\cdot r}{r}$ and observe that in this case we
have $\bisim{r}{\binstar{(\binstar{p_1}{s_2})}{0}}$ and hence $\bisim{r}{\binstar{(p_1+s_2)}{0}}$.
Now $s\equiv\binstar{(p_1+s_2)}{0}$ is a witness for the second result. For
the remainder of this case, suppose that $\bisim{\binstar{p_1}{s_2}\cdot r}{r}$
does not hold.
Assume there exists a $p_1\longrightarrow^+t$ such that $\bisim{t\cdot\binstar{p_1}{s_2}\cdot r}{r}$
and observe that now $\bisim{r}{t\cdot\binstar{(p_1+s_2\cdot t)}{0}}$ holds. If there
exists a $s_2\longrightarrow^+u$ such that $\bisim{(u\cdot t)\cdot\binstar{(p_1+s_2\cdot t)}{0}}{\binstar{(p_1+s_2\cdot t)}{0}}$
then we have $\bisim{r}{\binstar{(t\cdot u)}{0}}$. As $d(t)<d(\binstar{p_1}{s_2})$ we
may use induction to obtain an $s\in T$ such that $\bisim{t\cdot u\cdot\binstar{(t\cdot u)}{0}}{s\cdot\binstar{(t\cdot u)}{0}}$,
which leads to a witness for the second result; otherwise, choose $s\equiv\binstar{t\cdot(p_1+s_2\cdot t)}{0}$ for
the second result.
If there does not exist a $p_1\longrightarrow^+t$ such that $\bisim{t\cdot\binstar{p_1}{s_2}\cdot r}{r}$
then $s\equiv\binstar{p_1}{s_2}$ can be chosen as a witness for the first result of the Lemma.
\end{proof}
Lemma \ref{lem:congr_ex} is very similar to Lemma \ref{lem:congr_mult}, but
it is required as a separate result due to the aforementioned absence of a
neutral element under sequential composition.
\begin{lemma}
\label{lem:congr_ex}
For all $p,r\in T$ such that $\nfmult{p}{r}$
at least one of the following always holds:
\begin{enumerate}
\item There exists a $q\in T$ such that $\bisim{p\cdot r}{q\cdot r}$
and $\nfmult{q}{r}$ and $\congr{q}{r}$ and $d(q)\leq d(p)$, or
\item There exists a $q\in T$ such that $\bisim{r}{q\cdot 0}$ and
$\nf{q\cdot 0}$ and $d(q)\leq 1+d(p)$.
\end{enumerate}
\end{lemma}
\begin{proof}
By induction towards the structure of $p$. For the case $p\equiv p_1\cdot p_2$,
one simply invokes Lemma \ref{lem:congr_mult} directly. The cases for
$p\equiv\binstar{p_1}{p_2}$ are almost the same. Note that induction towards
$d(p)$ is not required in this Lemma as one can use Lemma \ref{lem:congr_mult}
to handle the $s_2\cdot t$-situation for the case $p\equiv\binstar{p_1}{p_2}$.
\end{proof}
In order to apply the derivation of a bisimilar term satisfying the congruence
property we first formulate a Lemma corresponding to the derivation of $\mathrm{nfmult}$,
followed by a lemma corresponding to the predicate $\mathrm{nf}$.
\begin{lemma}
\label{lem:nf_mult_ex}
For all $p,r\in T$, there exists a $q\in T$ such that
$\bisim{p\cdot r}{q\cdot r}$ and $\nfmult{q}{r}$ and $d(q)\leq d(p)$.
\end{lemma}
\begin{proof}
Apply induction towards the structure of $p$. For the case $p\equiv\binstar{p_1}{p_2}$,
use Lemma \ref{lem:congr_ex} to derive an $s\in T$ such that
$\bisim{s\cdot\binstar{p_1}{p_2}\cdot r}{p_1\cdot\binstar{p_1}{p_2}\cdot r}$,
which results in $\bisim{\binstar{s}{p_2}\cdot r}{\binstar{p_1}{p_2}\cdot r}$,
due to soundness of RSP, and $\congr{s}{\binstar{s}{p_2}\cdot r}$ if the
first result of Lemma \ref{lem:congr_ex} holds. Otherwise, in case of the
second result of Lemma \ref{lem:congr_ex}, $\nfmult{s\cdot 0}{r}$ is satisfied
directly.
\end{proof}
\begin{lemma}
\label{lem:normalization}
For all $p\in T$ there exists a $q\in T$ such that $\bisim{p}{q}$
and $\nf{q}$ and $d(p)\leq d(q)$.
\end{lemma}
\begin{proof}
Apply induction towards the structure of $p$ and handle the case $p\equiv\binstar{p_1}{p_2}$
as indicated in Lemma \ref{lem:nf_mult_ex}.
\end{proof}
\section{Completeness}
\label{sec:completeness}
We will use the normal form obtained under bisimilarity in section \ref{sec:normalization}
to finish the completeness proof. This requires some administrative work but is not very deep.
Clearly the implication $\bisim{p}{q}\Rightarrow p=q$ is our proof obligation. The
completeness proof uses induction towards $d(p)$. Lemma \ref{lem:normalization} can
be used to derive an $r\in T$ such that $\bisim{p}{r}$ and $\bisim{r}{q}$, and such that
$\nf{r}$ holds. By transitivity, these two equalities may be solved separately. This
mainly comes down to two steps where deriving axiomatic equality is mostly done by
invocations of Lemma \ref{lem:next}.
\begin{enumerate}
\item Using Lemma \ref{lem:next} and induction to reduce to an equality of a form
similar to $t\cdot\binstar{p}{q}=\binstar{r}{s}$ for $p\longrightarrow^+t$ or
$\binstar{p}{q}=\binstar{r}{s}$, which in turn can be reduced to an equality
of the form $x\cdot\binstar{p}{q}=y\cdot\binstar{p}{q}$ for $p\longrightarrow^+x$
and $p\longrightarrow^+y$ via the axiom RSP and Lemma \ref{lem:next}.
\item Using $\nf{\binstar{p}{q}}$ and application of Lemma \ref{lem:congruence}
to solve $x=y$ using the $d$-induction hypothesis for completeness with
regard to $d(\binstar{p}{q})$.
\end{enumerate}
We require the definition of two more predicates to aid in compact formulation of
the succeeding lemmas. We say that a set $M\subseteq A\times V$ is
a tail of $p$ (notation: $\tail{p}{M}$) if there exists a $q\in T$ such
that $p\longrightarrow^*q$ and for all $(a,u)\in M$ it holds that $q\step{a}u$. In
addition, we say that a term is next-provable (notation: $\nextt{p}$) if for all
$v\in V$ and all steps $p\step{a}u$ with $u\in V$ it holds that
$\bisim{u}{v}$ implies $\teq{u}{v}$. Using the $d$-induction hypothesis, we
can prove the following lemma.
\begin{lemma}
\label{lem:core}
For $M,N,K,L\subseteq A\times V$ and $p,q\in T$ such
that $\tail{p}{M}$, $\tail{p}{K}$, $\nextt{\sigma(N)}$, $\nextt{\sigma(L)}$ and
$\nf{\binstar{p}{q}}$ it holds that
\begin{center}
\begin{math}
\bisim{\sigma(M)\cdot\binstar{p}{q}+\sigma(N)}{\sigma(K)\cdot\binstar{p}{q}+\sigma(L)}
\end{math}
\end{center}
implies
\begin{center}
\begin{math}
\sigma(M)\cdot\binstar{p}{q}+\sigma(N)=\sigma(K)\cdot\binstar{p}{q}+\sigma(L)
\end{math}
\end{center}
\end{lemma}
\begin{proof}
Apply Lemma \ref{lem:next} and note that the premises arising from steps
from $\sigma(N)$ or $\sigma(L)$ are directly resolved. Furthermore, apply
Lemma \ref{lem:congruence} and the induction hypothesis for every step arising
from $\sigma(M)$ and $\sigma(K)$.
\end{proof}
Lemma \ref{lem:complete_split} is crucial in the transformation of the
equality towards application of Lemma \ref{lem:core}. We define the
predicate $\mathrm{obl}$ to formalize the proof obligation as follows. Say
that $\obl{p}{q}{r}$ holds true if and only if for all $M,N\subseteq
A\times V$ such that:
\begin{enumerate}
\item $\bisim{\sigma(M)\cdot\binstar{p}{q} + \sigma(N)}{r}$,
\item $\tail{p}{M}$,
\item $\nextt{\sigma(N)}$,
\item $\nextt{q}$,
\item $\nf{\binstar{p}{q}}$ and
\item for all $x,y\in T$ such that $d(x)<d(\binstar{p}{q})$
we have $\bisim{x}{y}$ implies $x=y$,
\end{enumerate}
the conclusion $\sigma(M)\cdot\binstar{p}{q}+\sigma(N)=r$ follows.
\begin{lemma}
\label{lem:complete_split}
For all $p,q,r,s\in T$ it holds that $\obl{p}{q}{s}$ implies
$\obl{p}{q}{r\cdot s}$.
\end{lemma}
\begin{proof}
Apply induction towards the structure of $r$, thereby generalizing over
all other variables. For $r\equiv 0$, use Lemma \ref{lem:next}. For
$r\equiv a$ for some $a\in A$, use Lemma \ref{lem:next} and
the premise $\obl{p}{q}{s}$. For $r\equiv r_1+r_2$, one must first derive
sets $M_1,M_2,N_1,N_2\subseteq A\times V$ such that:
$M=M_1\cup M_2$ and $N=N_1\cup N_2$ and
\begin{center}
\begin{math}
\bisim{\sigma(M_1)\cdot\binstar{p}{q}+\sigma(N_1)}{r_1\cdot s}\quad\textrm{and}\quad
\bisim{\sigma(M_2)\cdot\binstar{p}{q}+\sigma(N_2)}{r_2\cdot s}.
\end{math}
\end{center}
The result then follows from the respective induction hypotheses for
$r_1$ and $r_2$. Note that the case $r\equiv r_1\cdot r_2$ is immediate
due to the setup of this lemma.
The remaining case is $r\equiv\binstar{r_1}{r_2}$. Apply the axiom RSP and note
that the reversal of this axiom is sound; this leads to the following proof obligation:
\begin{center}
\begin{math}
\sigma(M)\cdot\binstar{p}{q}+\sigma(N)=r_1\cdot(\sigma(M)\cdot\binstar{p}{q}+\sigma(N))+r_2\cdot s
\end{math}
\end{center}
One first derives four sets $M_1,M_2,N_1,N_2\subseteq
A\times V$ such that
\begin{center}
\begin{math}
\bisim{\sigma(M_1)\cdot\binstar{p}{q}+\sigma(N_1)}{r_1\cdot(\sigma(M)\cdot\binstar{p}{q}+\sigma(N))}
\end{math}
\end{center}
and
\begin{center}
\begin{math}
\bisim{\sigma(M_2)\cdot\binstar{p}{q}+\sigma(N_2)}{r_2\cdot s}.
\end{math}
\end{center}
These two equalities can be resolved via induction. The premise
$\obl{p}{q}{\sigma(M)\cdot\binstar{p}{q}+\sigma(N)}$ is a result
of Lemma \ref{lem:core}.
\end{proof}
The following lemma is the analog of Lemma \ref{lem:complete_split}
and only required due to the aforementioned absence of a neutral
element under multiplication.
\begin{lemma}
\label{lem:complete}
For $p,q,r\in T$ and $M,N\subseteq A\times V$
such that $\bisim{\sigma(M)\cdot\binstar{p}{q}+\sigma(N)}{r}$,
$\tail{p}{M}$, $\nextt{\sigma(N)}$, $\nextt{q}$ and $\nf{\binstar{p}{q}}$
we have $\sigma(M)\cdot\binstar{p}{q}+\sigma(N)=r$, given the $d$-induction
hypothesis with regard to $d(\binstar{p}{q})$.
\end{lemma}
\begin{proof}
By induction towards the structure of $r$, thereby generalizing over
all other variables. One must use Lemma \ref{lem:complete_split} here
for the case $r\equiv r_1\cdot r_2$.
\end{proof}
Before the completeness proof can be finished, we require two lemmas to
work towards a star-reduct on one side of the equality. Once we have an
equality in this form, Lemma \ref{lem:summation} can be applied to invoke
Lemma \ref{lem:complete}.
\begin{lemma}
\label{lem:towards_split}
For all $p,q,r\in T$ such that $\bisim{p\cdot r}{q}$ and
$\nf{p\cdot r}$ and $\nextt{r}$ we have $p\cdot r=q$, under the
$d$-induction hypothesis with regard to $d(p\cdot r)$.
\end{lemma}
\begin{proof}
The most straightforward way to prove this lemma is via a setup close
to that of Lemma \ref{lem:next}. For this purpose, define the helper
function $\mathrm{mult}:V\times T\rightarrow T$ as follows: $\mult{\surd}{q}=q$ and $\mult{p}{q}=
p\cdot q$ for $p\in T$. One may then prove: for all $p\step{a}u$ and $v\in V$:
$\bisim{\mult{u}{r}}{v}$ implies $\teq{\mult{u}{r}}{v}$ by induction
towards the structure of $p$. For the case $p\equiv\binstar{p_1}{p_2}$,
use Lemma \ref{lem:summation} to prepare the term in a suitable form for
application of Lemma \ref{lem:complete}.
\end{proof}
\begin{lemma}
\label{lem:towards}
For all $p,q\in T$ such that $\bisim{p}{q}$ and $\nf{p}$
it holds that $p=q$ given the induction hypothesis towards $d(p)$.
\end{lemma}
\begin{proof}
Analogous to Lemma \ref{lem:towards_split}.
\end{proof}
We may now prove the completeness theorem.
\begin{theorem}
\label{thm:completeness}
For all $p,q\in T$ such that $\bisim{p}{q}$ it holds that $p=q$.
\end{theorem}
\begin{proof}
Apply strong induction towards $d(p)$, thereby generalizing over $p$
and $q$. By Lemma \ref{lem:normalization} there exists an $r\in T$
such that $\bisim{p}{r}$ and $\nf{r}$ and $d(r)\leq d(p)$.
Lemma \ref{lem:towards} can now be applied to solve $r=p$ and $r=q$
separately.
\end{proof}
\section{Formalization}
\label{sec:formalization}
All the results in this paper have been formalized and verified using
version 8.4pl4 of the Coq proof-assistant (\cite{coq97}). The Coq code
is available at
\begin{center}
\texttt{https://github.com/allanvanhulst/mscs/}
\end{center}
We import Coq libraries for lists, for maxima of natural numbers, and for Presburger arithmetic.
\begin{verbatim}
Require Import List.
Require Import Max.
Require Import Omega.
\end{verbatim}
We then encode the law of the excluded middle (LEM) as an additional
axiom. The addition of this axiom is consistent with Coq (cf. \cite{coq97}).
\begin{verbatim}
Axiom LEM : forall (P : Prop), P \/ ~ P.
\end{verbatim}
The set of actions $A$ is a parameter for the theory $T$.
\begin{verbatim}
Parameter A : Set.
Inductive T :=
| zero : T
| act : A -> T
| plus : T -> T -> T
| mult : T -> T -> T
| star : T -> T -> T.
\end{verbatim}
Notations for 0, plus, multiplication and the binary Kleene-star are
subsequently introduced. The Unicode character $\cdot$ is used for
sequential composition. Note that non-ASCII characters are valid
within the Coq proof assistant but may render in different ways on
different platforms. Moreover, the operators for plus and Kleene-star
are overloaded and assigned a new meaning with regard to $T$ here. This
is perfectly valid in Coq but due to parsing limitations, one cannot
re-assign associativity or precedence to an already existing operator.
However, this is not a problem here.
\begin{verbatim}
Notation "0" := zero.
Notation "p + q" := (plus p q).
Notation "p · q" := (mult p q) (at level 45, right associativity).
Notation "p * q" := (star p q).
\end{verbatim}
We then define the set $V$ as an inductive construction
for either $\surd$ (i.e. \texttt{term}) or the constructor
$\texttt{emb}:T\rightarrow V$.
\begin{verbatim}
Inductive V :=
| term : V
| emb : T -> V.
\end{verbatim}
The transition relation $\longrightarrow\subseteq V\times
A\times V$ is then defined as a ternary predicate
having a corresponding notation $\texttt{p -(a)-> q}$.
\begin{verbatim}
Inductive step : V -> A -> V -> Prop :=
| step_act : forall (a : A), step (emb (act a)) a term
| step_plus_left : forall (p q : T) (v : V) (a : A),
step (emb p) a v -> step (emb (p + q)) a v
| step_plus_right : forall (p q : T) (v : V) (a : A),
step (emb q) a v -> step (emb (p + q)) a v
| step_mult_left : forall (p q p' : T) (a : A),
step (emb p) a (emb p') -> step (emb (p · q)) a (emb (p' · q))
| step_mult_right : forall (p q : T) (a : A),
step (emb p) a term -> step (emb (p · q)) a (emb q)
| step_star_left : forall (p q p' : T) (a : A),
step (emb p) a (emb p') -> step (emb (p * q)) a (emb (p' · (p * q)))
| step_star_term : forall (p q : T) (a : A),
step (emb p) a term -> step (emb (p * q)) a (emb (p * q))
| step_star_right : forall (p q : T) (a : A) (v : V),
step (emb q) a v -> step (emb (p * q)) a v.
Notation "p '-(' a ')->' q" := (step p a q) (at level 30).
\end{verbatim}
Bisimilarity then directly follows the definition in this paper.
\begin{verbatim}
Definition bisim (u v : V) : Prop := exists (R : V -> V -> Prop),
R u v /\ forall (x y : V), R x y ->
(x = term <-> y = term) /\
(forall (a : A) (x' : V), x -(a)-> x' ->
exists (y' : V), y -(a)-> y' /\ R x' y') /\
(forall (a : A) (y' : V), y -(a)-> y' ->
exists (x' : V), x -(a)-> x' /\ R x' y').
\end{verbatim}
The simplest way to encode the axioms in Coq while having the greatest
possible certainty that no interference with the internal Coq-axioms
takes place is to inductively define axiomatic equality as a binary
predicate. This makes the proof somewhat cumbersome. For instance,
every instance where transitivity or symmetry of the axiom system is
applied needs to be specified explicitly. Essentially, we are encoding
the derivation of axiomatic equality as a proof-tree here.
\begin{verbatim}
Inductive ax : T -> T -> Prop :=
| refl : forall (x : T), ax x x
| symm : forall (x y : T), ax x y -> ax y x
| trans : forall (x y z : T), ax x y -> ax y z -> ax x z
| comp_plus : forall (w x y z : T), ax w y -> ax x z -> ax (w + x) (y + z)
| comp_mult : forall (w x y z : T), ax w y -> ax x z -> ax (w · x) (y · z)
| comp_star : forall (w x y z : T), ax w y -> ax x z -> ax (w * x) (y * z)
| B1 : forall (x y : T), ax (x + y) (y + x)
| B2 : forall (x y z : T), ax ((x + y) + z) (x + (y + z))
| B3 : forall (x : T), ax (x + x) x
| B4 : forall (x y z : T), ax ((x + y) · z) (x · z + y · z)
| B5 : forall (x y z : T), ax ((x · y) · z) (x · (y · z))
| B6 : forall (x : T), ax (x + 0) x
| B7 : forall (x : T), ax (0 · x) 0
| BKS1 : forall (x y : T), ax (x · (x * y) + y) (x * y)
| BKS2 : forall (x y z : T), ax ((x * y) · z) (x * (y · z))
| RSP : forall (x y z : T), ax x (y · x + z) -> ax x (y * z).
\end{verbatim}
The notation \texttt{<=>} is used for bisimilarity, and the
notation \texttt{==} is used for axiomatic equality.
\begin{verbatim}
Notation "u '<=>' v" := (bisim u v) (at level 25).
Notation "p '==' q" := (ax p q) (at level 25).
\end{verbatim}
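As a small sanity check of this encoding, one may derive simple equalities directly; the following lemma is purely illustrative and not part of the repository.
\begin{verbatim}
Lemma ax_example : forall (a : A), (0 + act a) == act a.
Proof.
  intro a. apply trans with (act a + 0).
  apply B1. apply B6.
Qed.
\end{verbatim}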
This is all the Coq-code required to express the completeness theorem
at the very end of the Coq-proof.
\begin{verbatim}
Theorem completeness : forall (p q : T), emb p <=> emb q -> p == q.
\end{verbatim}
The Coq-proof in its present form consists of 3959 lines of code,
and is divided into 73 lemmas and one theorem. Note that most of the
lemmas are very simple. An example is shown below.
\begin{verbatim}
Lemma step_plus_fmt : forall (p q : T) (a : A) (u : V),
emb (p + q) -(a)-> u -> emb p -(a)-> u \/ emb q -(a)-> u.
Proof.
intros p q a u H ; inversion H ; auto.
Qed.
\end{verbatim}
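As another illustration, the star nesting depth $d$ from section \ref{sec:normalization} admits a direct encoding as a structural fixpoint. The sketch below is indicative only (the name \texttt{depth} is ours and need not match the repository); note the use of the constructor \texttt{O} rather than the literal \texttt{0}, since the latter now denotes the process \texttt{zero}.
\begin{verbatim}
Fixpoint depth (p : T) : nat :=
  match p with
  | zero => O
  | act _ => O
  | plus p1 p2 => max (depth p1) (depth p2)
  | mult p1 p2 => max (depth p1) (depth p2)
  | star p1 p2 => max (S (depth p1)) (depth p2)
  end.
\end{verbatim}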
\section{Conclusions}
\label{sec:conclusions}
Compared to the proof method proposed in \cite{gf20}, the
completeness result presented in this work is simpler, although a rather
technical treatment was essentially unavoidable. The formalization in the
Coq proof assistant ensures the correctness of the result. The method
is not far removed from an algorithmic rewriting procedure, and an actual
implementation may be constructed with reasonable effort.
The theory and axiom system originally considered by Milner \cite{mil84} allows for a
congruence property to be formulated in a very similar way as was done here.
Again, we require that for all $p\longrightarrow^+t$ and $p\longrightarrow^+u$
we have $\bisim{t\cdot p^*\cdot q}{u\cdot p^*\cdot q}$ implies $\bisim{t}{u}$.
This follows if for all $p\longrightarrow^+t$ the following two conditions are
satisfied
\begin{enumerate}
\item if $\bisim{t\cdot p^*\cdot q}{(t+1)\cdot p^*\cdot q}$ then $t\downarrow$ and
\item if $t\downarrow$ and $t\step{a}t'$ then there does not exist a step
$p^*\cdot q\step{a}r$ such that $\bisim{t'\cdot p^*\cdot q}{r}$.
\end{enumerate}
It is not unlikely that the difficulty in obtaining such normal forms lies
almost entirely in ensuring that the first of these two conditions is met.
Assume that the predicate $\mathrm{nfmult}$ now expresses this new congruence
property recursively and further assume that the theory $T$ now has the unary
Kleene star and 1. Resolving the following conjecture may be the key to finding
a solution to Milner's question.
\begin{conjecture}
\label{con:milner}
For $p\in T$ there exists a $q\in T$ such that
$\bisim{p}{q}$ and $\nfmult{q}{1}$ and $d(q)\leq d(p)$.
\end{conjecture}
At this point, it cannot be excluded that Conjecture \ref{con:milner} can
be resolved using a proof method similar to the one applied in this paper.
\section{Introduction}\label{sec:int}
This work encompasses the study of controlled predator-prey systems.
The control process is devoted to harvesting activities, which
is one of the central issues in bio-economics.
It has been widely recognized that focusing purely on maximizing short-term harvesting benefits may not be a good idea. Although over-harvesting
in a short period may maximize the short-term economic benefits,
it breaks
the balance between harvesting and its ecological implications.
Thus simple-minded policies may lead to detrimental aftereffects.
As a result, it is crucially important to
avoid decisions that are
exceedingly harmful to the environment.
This situation has been observed
in some optimal harvesting models
with finite-time yield or discounted yield; see, e.g., \cite{LA, ALO, LO, SSZ, TY}, among others.
In contrast, ecologists and bio-economists
emphasize the importance of sustainable harvest
in both biological conservation
and long-term economic benefits; see \cite{AS, CC, MM}.
They introduce the concept of {\it maximum sustainable yield}, which is the largest yield (or catch) that can be taken from a species' stock over an infinite horizon.
Their findings
indicate
that
it is more reasonable to maximize the yield in such a way that the species is sustainable and not in danger of extinction.
Inspired by the idea of using maximum sustainable yield,
we pay special attentions to sustainability, biodiversity, biological conservation,
and long-term economic benefits,
and consider long-term horizon optimal strategies in this paper. In lieu of discounted profit, we examine objective functions that are long-run average per unit time type.
As was alluded to, the papers \cite{LA, ALO, LO, SSZ, TY} concentrated on finite time horizon problems as well as long-term objective functions under discounting. However, to the best of our knowledge, little effort has been devoted to long-run average criteria
for the harvesting problem.
A discounted objective pays more attention to current performance, whereas it is necessary to examine performance when the future is just as important.
This is particularly the case when we take sustainability and long-term economic benefits into consideration.
We consider a long-run-average optimal harvesting problem
for a predator-prey model subject to random perturbations,
in which only the predator is harvested.
This type of optimal harvesting problems
have been studied by some authors; see for example, \cite{LB16, LB14}.
However, harvesting efforts in these
papers are confined to constant-harvesting strategies only,
which are usually far from optimal
for a larger and more realistic class of harvesting strategies.
In contrast to the discounted criteria, the long-run average criteria are much more difficult to handle.
One of the main difficulties is due to the long-run average cost
criteria. To treat long-run average objective, one has to handle a number of delicate issues that are related to ergodicity.
To date, ecological systems under environmental noise
are usually modeled by
stochastic differential equations driven by a Brownian motion.
An important aspect of our work is concerned with what if the
noise is not of Brownian motion type. An innovation of the current paper is the use of wideband noise.
It has been widely recognized that
Brownian motion is
only an idealized formulation or suitable limits of systems in
the real world.
To be more realistic, we would better assume that the environment is subject to
disturbances characterized by a jump process with
rapid jump rates.
This jump process can be modeled by the so-called wideband noise.
Motivated by the approach in \cite{KR},
we consider a Lotka-Volterra predator-prey model
with wideband noise and harvesting in this paper. Denote by
$X^\varepsilon(t)$ and $Y^\varepsilon(t)$
the sizes of the prey and the predator, respectively.
The system of interest is of the form
\begin{equation}\label{model-2}
\begin{cases}
d X^{\varepsilon}(t)=&X^{\varepsilon}(t)\big[a_1-b_1X^{\varepsilon}(t)-c_1Y^{\varepsilon}(t)\big]dt + \dfrac1\varepsilon X^{\varepsilon}(t)r_1(\xi^\varepsilon(t))dt\\
d Y^{\varepsilon}(t)=&Y^{\varepsilon}(t)\big[a_2-h(Y^{\varepsilon}(t))u(t)-b_2Y^{\varepsilon}(t)+c_2X^{\varepsilon}(t)\big]dt +\dfrac1\varepsilon Y^{\varepsilon}(t)r_2(\xi^\varepsilon(t))dt,
\end{cases}
\end{equation}
where $\varepsilon$ is a small parameter, $\xi(t)$ is an ergodic,
time-homogeneous, Markov-Feller process,
and $\xi^\varepsilon(t)=\xi\left(\frac{t}{\varepsilon^2}\right)$,
$a_i, b_i, c_i, i=1,2$ are positive constants, and
$u(t)$ represents the harvesting effort at time $t$
while $h(\cdot):\mathbb{R}_+\mapsto[0,1]$
indicates the effectiveness of harvesting,
which is assumed to depend on the population of the predator.
Thus, the amount of harvested biomass
in a short period of time $\Delta t$
is $Y^{\varepsilon}(t)h(Y^{\varepsilon}(t))u(t)\Delta t$.
Let $\Phi(\cdot):\mathbb{R}_+\mapsto\mathbb{R}_+$
be the revenue function that provides
the economic value as a function
of harvested biomass.
The time-average harvested value
over an interval $[0,T]$
is $\dfrac1T\int_0^T\Phi\Big(h(Y^{\varepsilon}(t))Y^{\varepsilon}(t)u(t)\Big)dt$.
Our goal is to
\begin{equation}\label{opt-har}
\text{ maximize } \liminf_{T\to\infty}
\dfrac1T\int_0^T\Phi\Big(h(Y^{\varepsilon}(t))Y^{\varepsilon}(t)u(t)\Big)dt \ \hbox{a.s.}
\end{equation}
In our set up,
the harvesting strategy (the control) is only for the predator, $Y^\varepsilon(\cdot)$, which is also assumed in many papers (e.g., \cite{AHL, BC, MC, XLH}).
The rationale is
that the predator has main impacts on the system,
whereas the economic influence of the prey is
not as significant.
In addition,
the prey may be too small or too passive to catch.
Thus we focus on the situation when the control is
in $Y^\varepsilon$ equation only.
Because of the complexity of the model,
developing optimal strategies for the
controlled system \eqref{model-2} and \eqref{opt-har}
is usually difficult. Nevertheless,
one may wish to construct policies based on the limit system.
A natural question arises:
Can optimal or near-optimal harvesting strategies
for the diffusion model
be near optimal harvesting strategies
for the wideband-width model
when $\varepsilon$ is sufficiently small?
In a finite horizon,
nearly optimal controls for systems under wideband noise perturbations
were developed in
the work of Kushner and Ruggaldier \cite{KR}.
As was noted in their paper,
the original systems subject to wideband noise perturbations are rather difficult to handle;
there may be additional difficulties if
the systems are non-Markovian. For infinite horizon problems, it was assumed in \cite{KR} that the slow and fast components are jointly Markovian.
By working with the associated probability measures, under suitable conditions, the authors established that there is a limit system being a controlled diffusion process.
Using the optimal or near-optimal controls of the limit systems, one constructs controls for the original systems and show the controls are nearly optimal.
Inspired by their work, we aim to develop near-optimal policies in this paper in an infinite horizon.
We focus on objective functions of long-run average per unit time type.
By assuming the perturbing noise being Markovian,
we develop near-optimal harvesting strategies (near-optimal controls).
In contrast to optimal controls in a finite horizon, to show that the approximation
works over an infinite time interval
as in our setting, the ergodicity and the existence of the invariant measure have to be established.
In this paper,
we first show that there exists
an optimal harvesting strategy
for the limit controlled diffusion.
Then, we show that using a near-optimal control of the limit diffusion system in the original system leads to near-optimal controls of
the original system.
We note that in \cite{KR}, nonlinear systems were treated, so
a number of assumptions were posed for such wideband noise driven systems in a general setting. In contrast, we have specific systems to work with, and thus we can no longer pose general conditions as in the aforementioned paper. Instead, we need to start from scratch.
In fact,
conditions (C1)-(C4) posed in \cite[Section 7, p. 310]{KR}
include the existence of a $\delta$-optimal control, the existence of the associated invariant measure, tightness of the state process, and the equality of the value functions under a certain admissible class and under stationary admissible relaxed controls.
Because the problems were formulated in a general setting, these conditions are abstract and
are used as
sufficient conditions
to obtain near-optimal controls for wide-band noise
systems. However,
for the system that we are dealing with,
it is rather difficult to verify these conditions.
Some sufficient conditions
were also proposed in \cite[Conditions (D1)-(D4)]{KR}
by means of a perturbed Lyapunov function method. These conditions
were given to verify conditions (C1)-(C4).
Nevertheless, verifying
conditions (D1)-(D4) in \cite{KR} is still a difficult task for
our model. To begin with, it is difficult to find appropriate Lyapunov functions verifying conditions (D1)-(D4).
To overcome the difficulty,
we propose a new approach rather than finding a function $V$
satisfying the conditions (D1)-(D4) in \cite{KR}.
More precisely,
by analyzing the dynamics of the limit controlled diffusion
when the population of the species is low,
we obtain the tightness of probability measures of the controlled diffusion process.
Then,
using the above as a bridge,
probabilistic arguments
enable us to prove the tightness
of probability measures of the controlled process perturbed by wideband noise.
Moreover,
we use stochastic analysis to carry out the desired estimates. The analysis itself is new and of interest in its own right.
Therefore, the problem arises in control and optimization, but our solution methods are mainly probabilistic.
The rest of the paper is organized as follows.
In Section \ref{sec:form}, we formulate the problem
and identify the limit diffusion system.
The main results are given in Section \ref{sec:main}
while their proofs are provided in Section \ref{sec:pf}.
Section \ref{sec:rem} is devoted to some remarks and possible generalizations.
Finally,
we prove some auxiliary results in an appendix.
\section{Formulation}\label{sec:form}
We work with a complete filtered probability space
$(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ satisfying the usual conditions.
Denote $\mathbb{R}^2_+=\{(x,y)\in\mathbb{R}^2: x\geq0, y\geq 0\}$
and $\mathbb{R}^{2,\circ}_+=\{(x,y)\in\mathbb{R}^2: x>0, y> 0\}$.
To simplify notations,
we
denote $z=(x,y), \tilde z=(\tilde x,\tilde y),
Z(t)=(X(t),Y(t)), Z^\varepsilon(t)=(X^\varepsilon(t),Y^\varepsilon(t))$.
We assume that harvest efforts
can be represented by a number in a finite interval $\mathcal{M}:=[0,M]$.
Suppose $\xi(t)$
is a pure jump Markov-Feller process
taking values in a compact metric space $\mathcal{S}$.
Suppose its generator is given by
$$Q\phi(w)=q(w)\int_{\mathcal{S}}\Lambda(w,d\tilde w)\phi(\tilde w)-q(w)\phi(w)$$
where $q(\cdot)$ is continuous on $\mathcal{S}$
and $\Lambda(w, \cdot)$
is a probability measure on $\mathcal{S}$ for each $w$.
Suppose that
$\xi(t)$ is uniformly geometrically ergodic,
that is
\begin{equation}\label{e1.1a}
\|P(t,w,\cdot)-\overline P(\cdot)\|_{TV}\leq C_0\exp(-\gamma_0 t),\,\text{ for any }\, t\geq 0, w\in\mathcal{S},
\end{equation}
where $\overline P(\cdot)$ is a probability measure in $\mathcal{S}$ and $C_0, \gamma_0$ are some positive constants.
Clearly $\overline P(\cdot)$ is an invariant probability measure of $\{\xi(t)\}$.
Let $\chi(w,\cdot)=\int_0^\infty \big[P(t,w,\cdot)-\overline P(\cdot)\big]dt$.
It is well known that
if $\phi(w)$ is a continuous function on $\mathcal{S}$ satisfying $\int_{\mathcal{S}}\phi(w)\overline P(dw)=0$
then
\begin{equation}\label{e1.1}
\psi(w):=\int_{\mathcal{S}}\chi(w,d\tilde w)\phi(\tilde w)\,\text{ satisfies }\,Q\psi(w)=-\phi(w).
\end{equation}
Note that $\psi(\cdot)$ is well defined thanks to
the exponential decay in \eqref{e1.1a}.
Suppose that
\begin{equation}\label{e1.2}
r_i(\cdot)\,\text{ is bounded in }\, \mathcal{S},\text{ and }\, \int_{\mathcal{S}}r_i(w)\overline P(dw)=0, \ i=1,2.
\end{equation}
Let
$A=(a_{ij})_{2\times2}$ with
$$a_{ij}=\int_{\mathcal{S}}\int_{\mathcal{S}}\chi(w,d\tilde w)\overline P(dw)\Big[r_i(w)r_j(\tilde w)+r_j(w)r_i(\tilde w)\Big].$$
We suppose that $A$ is positive definite with square root $(\sigma_{ij})_{2\times2}.$
Consider the diffusion
\begin{equation}\label{model-3}
\begin{cases}
d X(t)=X(t)\big[\overline a_1-b_1X(t)-c_1Y(t)\big]dt + X(t)(\sigma_{11}dW_1(t)+\sigma_{12}dW_2(t))\\
d Y(t)=Y(t)\big[\overline a_2-h(Y(t))u(t)-b_2Y(t)+c_2X(t)\big]dt + Y(t)(\sigma_{12}dW_1(t)+\sigma_{22}dW_2(t)),
\end{cases}
\end{equation}
where
$\overline a_1=a_1+\dfrac{a_{11}}2=a_1+\dfrac{\sigma_{11}^2+\sigma_{12}^2}2$,
$\overline a_2=a_2+\dfrac{a_{22}}2=a_2+\dfrac{\sigma_{22}^2+\sigma_{12}^2}2$,
$W_1, W_2$ are two independent Brownian motions.
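Note that $\overline a_1$ and $\overline a_2$ are precisely the It\^o corrections: by It\^o's formula applied to \eqref{model-3},
$$d \ln X(t)=\big[a_1-b_1X(t)-c_1Y(t)\big]dt+\sigma_{11}dW_1(t)+\sigma_{12}dW_2(t),$$
so the logarithmic growth rate of the limit diffusion matches the averaged drift of \eqref{model-2}.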
We suppose that the function $\Phi(\cdot):\mathbb{R}_+\mapsto\mathbb{R}_+$, representing the yield, is
Lipschitz in its argument and satisfies $\Phi(0)=0$.
That is, the yield is zero if
we harvest nothing.
If we want to maximize the average amount of the species harvested,
then $\Phi(y)=y$.
If we want to maximize the average money earned,
$\Phi(y)$ should have a ``saturated'' form, such as
$\Phi(y)=\dfrac{y}{c+y}$.
We assume the effectiveness $h(\cdot):\mathbb{R}_+\mapsto[0,1]$
is an increasing function and $h(0)=0$.
This stems from the fact that the effectiveness of harvesting increases as the density of the species increases.
Let $\text{PM}^\eps$ be the class of functions
$v:\mathbb{R}^2_+\times\mathcal{S}\mapsto\mathcal{M}$ such that
under the feedback control $u(t)=v(Z^\varepsilon(t),\xi^\varepsilon(t))$
there exists a solution process to \eqref{model-2},
which is a Markov-Feller process.
For $v\in\text{PM}^\eps$, define
$$J^{\varepsilon}
(v):=\liminf_{T\to\infty}\dfrac1T\int_0^T\Phi
\Big(h(Y^{\varepsilon}(t))Y^{\varepsilon}(t)\,v(Z^\varepsilon(t),\xi^\varepsilon(t))\Big)dt \ \hbox{ a.s.}$$
For the wideband noise system,
it is difficult to find an optimal control, that is, a control $v^*\in\text{PM}^\eps$ satisfying
$${\mathfrak J}^\varepsilon
=\sup_{v\in\text{PM}^\eps}\{J^{\varepsilon}(v)
\}.$$
Thus, our goal is to find a near-optimal control $v\in\text{PM}^\eps$
using the limit diffusion system.
To do that, we broaden the class of controls
by use of the ``relaxed controls".
We present here some concepts and notation introduced in \cite{KR}.
Let $M(\infty)$
denote the family of measures $\{m(\cdot)\}$ on the Borel subsets of
$[0,\infty)\times \mathcal{M}$
satisfying $m([0,t]\times \mathcal{M})=t$
for all $t\geq0$.
By
the weak convergence $m_n(\cdot)\rightarrow m(\cdot)$
in $M(\infty)$
we mean $\lim_{n\to\infty}\int f(s,\alpha)m_n(ds\times d\alpha)
=\int f(s,\alpha)m(ds\times d\alpha)
$
for any continuous function $f(\cdot):[0,\infty)\times U\mapsto\mathbb{R}$ with compact support.
A random measure $m(\cdot)$ with values in $M(\infty)$ is said to be an admissible relaxed control for \eqref{model-2}
if $\int_U\int_0^tf(s,\alpha)m(ds\times d\alpha)$
is progressively measurable with respect to $\mathcal{F}^\varepsilon_t:=\mathcal{F}_{\frac{t}\varepsilon}$ for each bounded continuous function $f(\cdot)$.
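For instance, every ordinary admissible control $u(\cdot)$ with values in $U$ induces the relaxed control
$$
m(ds\times d\alpha)=ds\times \delta_{u(s)}(d\alpha),
$$
where $\delta_{u(s)}$ denotes the Dirac measure at $u(s)$; in this sense, relaxed controls enlarge the class of ordinary controls.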
With a relaxed control $m(\cdot)$,
let $\overline m_t=\lim_{s\downarrow t} \dfrac1{s-t}\int_\mathcal{M}\int_t^s\alpha\, m(d\tau\times d\alpha)$,
the model \eqref{model-2} becomes
\begin{equation}\label{model-4}
\begin{cases}
d X^{\varepsilon}(t)=&X^{\varepsilon}(t)\big[a_1-b_1X^{\varepsilon}(t)-c_1Y^{\varepsilon}(t)\big]dt + \dfrac1\varepsilon X^{\varepsilon}(t)r_1(\xi^\varepsilon(t))dt\\
d Y^{\varepsilon}(t)=&Y^{\varepsilon}(t)\big[a_2-h(Y^{\varepsilon}(t))\overline m_t-b_2Y^{\varepsilon}(t)+c_2X^{\varepsilon}(t)\big]dt + \dfrac1\varepsilon Y^{\varepsilon}(t)r_2(\xi^\varepsilon(t))dt
\end{cases}
\end{equation}
Let $\mathcal P(\mathcal{M})$ be the space of probability measures on $\mathcal{M}$
endowed with the Prohorov topology.
A relaxed control is said to be Markov
if there exists a measurable function $v:\mathbb{R}^2_+\mapsto\mathcal P(\mathcal{M})$
such that $m_t=v(Z^\varepsilon(t)), t\geq0.$
For $z\in\mathbb{R}^2_+$, $w\in\mathcal{S}$ and $u\in\mathcal{M}$, define
$$
F(z,w)=\Big(xr_1(w), yr_2(w)\Big)^\top
$$
and
$$
G(z, u)=
\Big(x[a_1-b_1x-c_1y], y[a_2-h(y)u-b_2y+c_2x]\Big)^\top
.$$
By an ergodicity argument
(see, for example,
\cite{DN,DNY,HNY,RR}),
it can be shown that
if $-a_2+c_2\dfrac{a_1}{b_1}<0$ then
for any admissible control $u(t)$,
$Y^\varepsilon(t)$ tends to $0$ with probability 1,
which implies
$$\lim_{T\to\infty}\dfrac1T\int_0^T\Phi\Big(h(Y^{\varepsilon}(t))Y^{\varepsilon}(t)u(t)\Big)dt=0 \text{ a.s.}$$
Thus, to avoid the trivial limit,
we assume throughout this paper that
\begin{equation}\label{positive}
-a_2+c_2\dfrac{a_1}{b_1}>0.
\end{equation}
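Heuristically, the quantity $-a_2+c_2\dfrac{a_1}{b_1}$ may be read as the growth rate of a small predator population when the prey sits at its deterministic carrying capacity $a_1/b_1$; assumption \eqref{positive} thus guarantees that the predator can invade, so that a nontrivial long-run yield is achievable.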
Define the operator
$${\cal L}^\varepsilon_u \phi(z, w)
=\dfrac1{\varepsilon^2}Q\phi(z,w)+\dfrac1\varepsilon \dfrac{\partial \phi(z,w)}{\partial z}F(z,w)+\dfrac{\partial \phi(z,w)}{\partial z}G(z, u),
$$
where
$\phi:\mathbb{R}^2_+\times\mathcal{S}\mapsto \mathbb{R}$ is continuous and
has a continuous derivative with respect to the first variable,
$\dfrac{\partial \phi(z,w)}{\partial z}$.
Denote by $\mathbb{P}_{z,w}$ and $\mathbb{E}_{z,w}$
the probability measure and the corresponding expectation
of the process $(Z^\varepsilon(\cdot),\xi^\varepsilon(\cdot))$
with initial condition $(z,w)$.
Note that $\mathbb{P}_{z,w}$ and $\mathbb{E}_{z,w}$
depend implicitly on the control $m(\cdot)$.
For any bounded stopping times $\tau_1\leq\tau_2$,
we have
$$\mathbb{E}_{z,w}\phi(Z^\varepsilon(\tau_2),\xi^\varepsilon(\tau_2))
=\mathbb{E}_{z,w}\phi(Z^\varepsilon(\tau_1),\xi^\varepsilon(\tau_1))
+\mathbb{E}_{z,w}\int_{\tau_1}^{\tau_2}{\cal L}^\varepsilon_{\overline m_s}\phi(Z^\varepsilon(s),\xi^\varepsilon(s))ds
$$
given that the expectations involved exist.
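(This identity is the workhorse of Section \ref{sec:pf}: for instance, the Lyapunov-type bounds \eqref{e3.11} and \eqref{e3.12} are essentially obtained by applying it with $\phi=V^\varepsilon$, in the latter case combined with the time-weighted function $e^{s}V^\varepsilon$.)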
A random measure $m(\cdot)$ with values in $M(\infty)$ is said to be an admissible relaxed control for \eqref{model-3}
if $\int_U\int_0^tf(s,\alpha)m(ds\times d\alpha)$
is independent of $\{W_i(t+s)-W_i(t), s>0, i=1,2\}$ for each bounded continuous function $f(\cdot)$.
Under a relaxed control $m(\cdot)$, the controlled diffusion \eqref{model-3} becomes
\begin{equation}\label{model-5}
\begin{cases}
d X(t)=X(t)\big[\overline a_1-b_1X(t)-c_1Y(t)\big]dt + X(t)(\sigma_{11}dW_1(t)+\sigma_{12}dW_2(t))\\
d Y(t)=Y(t)\big[\overline a_2-h(Y(t))\overline m_t-b_2Y(t)+c_2X(t)\big]dt + Y(t)(\sigma_{12}dW_1(t)+\sigma_{22}dW_2(t)).
\end{cases}
\end{equation}
The generator for the controlled diffusion process \eqref{model-5} is
$$
\begin{aligned}
{\cal L}_u\phi(z)=&\dfrac{\partial\phi(z)}{\partial x}x[\overline a_1-b_1x-c_1y]+\dfrac{\partial\phi(z)}{\partial y} y[\overline a_2-h(y)u-b_2y+c_2x]\\
&+\dfrac12\left(a_{11}\dfrac{\partial^2\phi(z)}{\partial x^2}x^2+2a_{12}\dfrac{\partial^2\phi(z)}{\partial x\partial y}xy+a_{22}\dfrac{\partial^2\phi(z)}{\partial y^2}y^2\right).
\end{aligned}
$$
\begin{defn}{\rm
A relaxed control $m(\cdot)$ for \eqref{model-5} is said to be Markov
if there exists a measurable function $v:\mathbb{R}^2_+\mapsto\mathcal P(\mathcal{M})$
such that $m_t=v(Z(t)), t\geq0.$
A Markov control is a relaxed Markov control $v$ such that
$v(z)$ is a Dirac measure on $\mathcal{M}$ for each $z\in\mathbb{R}^2_+$.
Denote the sets of Markov controls
and relaxed Markov controls by $\Pi_{M}$ and $\Pi_{RM}$, respectively.
With a relaxed Markov control, $Z(t)$
is a Markov process that has the strong Feller property in $\mathbb{R}^{2,\circ}_+$;
see \cite[Theorem 2.2.12]{ABG}.
Since the diffusion is nondegenerate in $\mathbb{R}^{2,\circ}_+$,
if the process $Z(t)$ has an invariant probability measure in $\mathbb{R}^{2,\circ}_+$,
the invariant measure is unique, denoted by $\eta_v$.
In this case, the control $v$ is said to be stable.}\end{defn}
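For example, if $\mathcal{M}=[0,M]$, then $v(z)\equiv\frac12\delta_{0}+\frac12\delta_{M}$ (randomizing at every state between no harvesting and maximal harvesting) is a relaxed Markov control but not a Markov control, whereas $v(z)=\delta_{u(z)}$ for a measurable $u:\mathbb{R}^2_+\mapsto\mathcal{M}$ is a Markov control.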
\section{Main Results}\label{sec:main}
First, we need the existence and uniqueness of positive solutions
to \eqref{model-5} for any admissible relaxed control.
\begin{lm}\label{lm3.1}
If $m(\cdot)$ is an admissible relaxed control for \eqref{model-3} $($or \eqref{model-5}$)$,
then there exists a unique nonanticipative solution to
\eqref{model-5} with initial value $z=(x,y)\in\mathbb{R}^2_+$ satisfying
\begin{enumerate}
\item $\mathbb{P}_{z}\{X(t)>0, \ t\geq 0\}=1$
$($resp. $\mathbb{P}_{z}\{Y(t)>0, \ t\geq 0\}=1)$
if $x>0$ $($resp. $y>0)$,
and
$\mathbb{P}_{z}\{X(t)=0, \ t\geq 0\}=1$
$($resp. $\mathbb{P}_{z}\{Y(t)=0, \ t\geq 0\}=1)$
if $x=0$ $($resp. $y=0)$.
\item
$$
\mathbb{E}_{z} \sup_{t\leq T}(|Z(t)|^2)\leq K(1+|z|^2)
$$
where $K$ depends only on $T$.
\end{enumerate}
\end{lm}
\begin{proof}
This lemma can be proved by arguments in \cite[Theorem 1]{KR}
or \cite[Theorem 2.2.2]{ABG}.
Note that the coefficients in \eqref{model-5} do not satisfy the
linear growth condition.
However, using a truncation argument
and a Khasminskii-type method in \cite{LM},
we can easily prove the existence of a unique solution to \eqref{model-5}
satisfying claim 1.
Moreover, we can estimate
$$
\begin{aligned}
d[c_2X(t)+c_1Y(t)]=&c_2X(t)\big[\overline a_1-b_1X(t)\big]dt +c_1Y(t)\big[\overline a_2-h(Y(t))\overline m_t-b_2Y(t)\big]dt \\
&+c_2X(t)(\sigma_{11}dW_1(t)+\sigma_{12}dW_2(t))
+ c_1Y(t)(\sigma_{12}dW_1(t)+\sigma_{22}dW_2(t))\\
\leq& [c_2\overline a_1X(t)+c_1\overline a_2Y(t)]dt
\\&+c_2X(t)(\sigma_{11}dW_1(t)+\sigma_{12}dW_2(t))+ c_1Y(t)(\sigma_{12}dW_1(t)+\sigma_{22}dW_2(t)).
\end{aligned}
$$
In this estimate,
the right-hand side is linear in $X(t)$ and $Y(t)$.
Using standard arguments (e.g., \cite[Theorem 3.5]{RK} or \cite[Proposition 3.5]{ZY}),
we can obtain the moment estimate in the second claim of this lemma.
\end{proof}
With this lemma, in each finite interval,
we can approximate $Z^\varepsilon(t)$
by $Z(t)$, which is proved in \cite[Theorem 5]{KR}.
\begin{lm}\label{lm2.2}
For any compact set $\mathcal K\subset\mathbb{R}^2_+$,
the family $\{(Z^\varepsilon(\cdot),m(\cdot))\}$ with $Z^\varepsilon(0)\in\mathcal K$
is tight in $D([0,\infty);\mathbb{R}^2_+)\times M(\infty)$.
If $(Z^{\varepsilon_k}(\cdot),m^{(k)}(\cdot))$
converges weakly to $(\widehat Z(\cdot),\widehat m(\cdot))$ as $k\to\infty$
with $\varepsilon_k\to0$ as $k\to\infty$,
then there exist independent Brownian motions $W_1(t)$ and $W_2(t)$
such that $\widehat m(\cdot)$ is progressively measurable with respect to the filtration generated by $W_1(t), W_2(t)$,
and
$\widehat Z$ satisfies \eqref{model-5}
with $(Z(\cdot),m(\cdot))$ replaced by $(\widehat Z(\cdot),\widehat m(\cdot))$.
\end{lm}
We need the following lemma, whose proof is postponed to
the appendix.
\begin{lm}\label{lm2.3}
The following claims hold.
\begin{itemize}
\item For any admissible relaxed control $m(\cdot)$,
we have that
\begin{equation}\label{e1-lm2.3}
\limsup_{T\to\infty} \dfrac1T\int_0^T \Phi\Big(Y(t)h(Y(t))\overline m_t\Big)dt\leq \widetilde C \ \hbox{ a.s. }
\end{equation}
for some constant $\widetilde C$.
\item Every relaxed Markov control is stable and
there exists $\widehat C>0$ such that
\begin{equation}\label{e2-lm2.3}
\int_{\mathbb{R}^{2,\circ}_+\times \mathcal{M}} [1+\Phi(yh(y)u)]^2\pi_v(dz\times du)\leq \widehat C
\end{equation}
for any relaxed Markov control $v$,
where
$\pi_v$ is the measure on $\mathbb{R}^{2,\circ}_+\times\mathcal{M}$
defined by
$$\pi_v(dz\times du)=[v(z)(du)]\times \eta_v(dz).$$
\item The family $\{\eta_v:\,v\in\Pi_{RM}\}$
is tight in $\mathbb{R}^{2,\circ}_+$.
$[$Recall that $\eta_v$ is the invariant measure.$]$
\end{itemize}
\end{lm}
With this lemma, letting
$$\rho_v:=\int_{\mathbb{R}^{2,\circ}_+\times \mathcal{M}} \Phi(yh(y)u)\pi_v(dz\times du)\,\text{ for }\, v\in\Pi_{RM},
\quad\text{and}\quad
\rho^*=\sup_{v\in\Pi_{RM}}\rho_v,$$
we have the following result from \cite[Theorems 3.7.11 and 3.7.14]{ABG}.
\begin{thm}\label{thm2.1}
The Hamilton-Jacobi-Bellman $($HJB$)$ equation
$$\max_{u\in \mathcal{M}}\Big[{\cal L}_u V(z)+\Phi(yh(y)u)\Big]=\rho$$
admits a solution $V^*\in C^2(\mathbb{R}^{2,\circ}_+)$ satisfying $V^*(0)=0$
and $\rho=\rho^*$.
A relaxed Markov control is optimal if and only if
it satisfies
$$
\begin{aligned}&\!\!\!
\dfrac{\partial V^*}{\partial y}\,y\big[-a_2-h(y)\overline v(z) -b_2y+c_2x\big]+\Phi\big(yh(y)\overline v(z)\big)\\
&\qquad =\max_{u\in \mathcal{M}}\Big\{\dfrac{\partial V^*}{\partial y}\,y\big[-a_2-h(y)u-b_2y+c_2x\big]+\Phi\big(yh(y)u\big)\Big\},
\end{aligned}
$$
where $\overline v(z)=\int_\mathcal{M} u[v(z)(du)].$
\end{thm}
The existence of an optimal Markov control can be
derived from a well-known selection theorem; see e.g., \cite[pp. 199-200]{FR75}.
Let $v^*$ be an optimal Markov control.
There exists a sequence of $v_n:\mathbb{R}^2_+\mapsto \mathcal{M}$
such that each $v_n(z)$ is locally Lipschitz in $z$
and $\lim_{n\to\infty} v_n=v^*$ almost everywhere in $\mathbb{R}^{2,\circ}_+$.
Since every Markov control is stable,
and the family $\{\nu_v, v\in \Pi_{RM}\}$
is tight on $\mathbb{R}^{2,\circ}_+$,
we have from \cite[Lemma 3.2.6]{ABG}
that
\begin{equation}\label{e-delta.opt}
\lim_{n\to\infty} \rho_{v_n}=\rho_{v^*}=\rho^*.
\end{equation}
This indicates that
we can always find a $\delta$-optimal Markov control
that is locally Lipschitz.
We state here the main result of this paper.
\begin{thm}\label{thm2.2}
For any $\delta>0$, there exists a locally Lipschitz Markov control $u^\delta$
such that
$$J^\varepsilon(u^\delta):=\liminf_{T\to\infty}\dfrac1T\int_0^T\Phi\Big(h(Y^{\varepsilon}(t))Y^{\varepsilon}(t)u^\delta(Z^\varepsilon(t))\Big)dt\leq \rho^*+\delta$$
and that for sufficiently small $\varepsilon>0$,
we have
$$J^\varepsilon(u^\delta)\geq \mathfrak{J}^\varepsilon-3\delta.$$
\end{thm}
The result above is known as a chattering-type theorem. It connects relaxed controls with ordinary controls and shows that any relaxed control can be approximated by a locally Lipschitz ordinary control. This is important because, although relaxed controls facilitate the establishment of the desired asymptotic results, the class of relaxed controls is much larger than that of ordinary controls and cannot be used directly in real applications. A viable approximation is therefore much needed.
In view of \cite[Theorem 8]{KR},
we proceed to verify the following conditions to prove the desired result.
\begin{enumerate}[label=(\rm C{\arabic*})]
\item \label{C1}
There is an $\varepsilon_0>0$ such that $\{Z^\varepsilon(u,t), u\in {PM}^\varepsilon, 0\leq t<\infty, \varepsilon\leq \varepsilon_0\}$ is $\mathbb{P}_{z,w}$-tight in $\mathbb{R}^{2,\circ}_+$
for each $(z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S}$.
\item \label{C2}
There is a $\delta$-optimal Markov control $u(z)$
that is locally Lipschitz in $z$ for any $\delta>0$.
\end{enumerate}
Condition \ref{C2} has already been verified; see \eqref{e-delta.opt}.
Since the dynamics of $Z^\varepsilon(t)$ is
dominated
by negative quadratic terms
when $Z^\varepsilon(t)$ is large,
it is easy to prove the tightness of $\{Z^\varepsilon(u,t), u\in {PM}^\varepsilon, 0\leq t
<\infty, \varepsilon\leq \varepsilon_0\}$ in $\mathbb{R}^{2}_+$.
However, we need the tightness in $\mathbb{R}^{2,\circ}_+$ to achieve the near optimality.
To do that we need to analyze the behavior of $Z^\varepsilon(u,t)$
near the boundary.
Inspired by \cite{BL},
we utilize the ergodicity of the system on the boundary and
a property of the Laplace transform
to construct a function
$V^\varepsilon(z,w)$
satisfying the inf-compact condition in $\mathbb{R}^{2,\circ}_+$, i.e.,
$$\lim_{R\to\infty}\inf\left\{V^\varepsilon(z,w): |z|+\frac1x+\frac1y>R\right\}=\infty$$
and that
$$\mathbb{E}_{z,w} V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\leq C(1+V^\varepsilon(z,w))$$
for any control $u\in {PM}^\varepsilon$ and $t\geq0$.
Clearly, \ref{C1} is proved if such a function is constructed.
In contrast to
the technique used in \cite{BL},
which is applied to a process in a compact space,
the verification in our case is more difficult
because the space $\mathbb{R}^2_+$ is not compact
and we have to treat a family of singularly perturbed processes
rather than a single process.
\section{Proofs of Results}\label{sec:pf}
First, when $p_0$, $p_1$, $p_2>0$
are sufficiently small, we have
\begin{equation}\label{e3.1}
2p_0+p_1b_1+p_2c_2<b_1,\,\text{ and }\, 2p_0+ p_1c_1+p_2b_2<c_1.
\end{equation}
We can also choose $p_1$ and $p_2$ such that
\begin{equation}\label{e3.2}
p_1a_1-p_2a_2>0.
\end{equation}
By \eqref{positive} and \eqref{e3.2},
we have
\begin{equation}\label{e-lambda}\lambda=\dfrac{1}{11}\min\left\{p_1a_1-p_2a_2, p_2\left(-a_2+\dfrac{a_1c_2}{b_1}\right)\right\}>0.
\end{equation}
Without loss of generality, we also assume that $\lambda\leq1$ (otherwise replace $\lambda$ by $\lambda\wedge1$ in what follows).
In view of \eqref{e1.1} and \eqref{e1.2}, there exist
bounded functions
$r_3(w)$
and $r_4(w)$
such that
$Qr_3(w)=-r_1(w)$
and $Qr_4(w)=-r_2(w)$.
Let $V(x,y)=\dfrac{1+c_2x+c_1y}{x^{p_1}y^{p_2}}.$
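Note that, since $p_1, p_2$ are small positive constants (in particular $p_1+p_2<1$), we have $V(z)\to\infty$ as $x\to0$, as $y\to0$, and as $|z|\to\infty$. This inf-compactness on $\mathbb{R}^{2,\circ}_+$ is precisely what is needed for the tightness condition \ref{C1}.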
Define
$$
V_1(z,w):=xr_3(w)\dfrac{\partial V(z)}{\partial x}+yr_4(w)\dfrac{\partial V(z)}{\partial y}.
$$
Since $Q$ acts only on the variable $w$, we have
\begin{equation}\label{e3.8}
QV_1(z,w)=-xr_1(w)\dfrac{\partial V(z)}{\partial x}-yr_2(w)\dfrac{\partial V(z)}{\partial y}=-\dfrac{\partial V(z)}{\partial z}\cdot F(z,w).
\end{equation}
By direct calculation and the boundedness of $r_i(w)$, for $i=3,4$, there is a $K_2>0$ such that
\begin{equation}\label{e3.3}
\left|V_1(z,w)\right|\leq K_2V(z),\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S},
\end{equation}
\begin{equation}\label{e3.4}\left|\dfrac{\partial V_1(z,w)}{\partial z}\cdot F(z,w)\right|\leq K_2V(z),\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S},
\end{equation}
and
\begin{equation}\label{e3.5}\left|\dfrac{\partial V_1(z,w)}{\partial z}\cdot G(z,u)\right|\leq K_2(1+|z|)V(z),\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S}, u\in\mathcal{M}.
\end{equation}
In view of \eqref{e3.1}, there exists an $H>0$ such that
$$
\begin{aligned}
\inf_{z\in\mathbb{R}^2_+,|z|>H,u\in\mathcal{M}}\bigg\{p_1&\big|a_1-b_1x-c_1y\big|+p_2\big|-a_2-h(y)u-b_2y+c_2x\big|\\
&+3+K_2+p_0(1+|z|)+\dfrac{c_2x(a_1-b_1x)+c_1y(-a_2-h(y)u-b_2y)}{1+c_2x+c_1y}\bigg\}<0.
\end{aligned}
$$
Let
$$
\begin{aligned}
H_1:=\sup_{z\in\mathbb{R}^2_+,|z|\leq H,u\in\mathcal{M}}\bigg\{&p_1\big|a_1-b_1x-c_1y\big|+p_2\big|-a_2-h(y)u-b_2y+c_2x\big|+3+K_2\\
&+p_0(1+|z|)+\dfrac{c_2x(a_1-b_1x)
+c_1y(-a_2-h(y)u-b_2y)}{1+c_2x+c_1y}\bigg\}<\infty.
\end{aligned}
$$
By the definitions of $H$ and $H_1$, we have
\begin{equation}\label{e3.9}
\begin{aligned}
\dfrac{\partial V(z)}{\partial z}\cdot G(z, u)
=&V(z)\bigg[-p_1\big(a_1-b_1x-c_1y\big)-p_2\big(-a_2-h(y)u-b_2y+c_2x\big)\\
&\qquad\qquad+\dfrac{c_2x(a_1-b_1x)+c_1y(-a_2-h(y)u-b_2y)}{1+c_2x+c_1y}\bigg]\\
\leq&\big(H_1\boldsymbol{1}_{\{|z|\leq H\}}-3-K_2-p_0(1+|z|)\big) V(z).
\end{aligned}
\end{equation}
Let $V^\varepsilon(z,w)=V(z)+\varepsilon V_1(z,w)$.
We have from \eqref{e3.3} that
\begin{equation}\label{e3.6}
(1-\varepsilon K_2)V(z)\leq V^\varepsilon(z,w)\leq (1+\varepsilon K_2)V(z),\,\,\, z\in\mathbb{R}^{2,\circ}_+, w\in\mathcal{S}.
\end{equation}
If $\varepsilon>0$ is sufficiently small such that
\begin{equation}\label{e3.10}
\varepsilon K_2\leq p_0\quad\text{and}\quad (H_1+3)\varepsilon K_2<1,
\end{equation}
using \eqref{e3.4}, \eqref{e3.5}, \eqref{e3.8}, and \eqref{e3.9},
we can estimate
\begin{equation}\label{e3.7}
\begin{aligned}
{\cal L}^\varepsilon_u V^\varepsilon(z,w)=&\dfrac{\partial V(z)}{\partial z}\left[\dfrac1\varepsilon F(z,w)+G(z, u)\right]\\
&+\varepsilon\dfrac{\partial V_1(z, w)}{\partial z}\left[\dfrac1\varepsilon F(z,w)+G(z, u)\right]+\dfrac1{\varepsilon}Q V_1(z,w)
\\
\leq& \big(H_1\boldsymbol{1}_{\{|z|\leq H\}}-3-K_2-p_0(1+|z|)\big) V(z)+K_2V(z)+\varepsilon K_2(1+|z|)V(z)\\
\leq& \big((H_1+1)\boldsymbol{1}_{\{|z|\leq H\}}-2\big)V(z)\\
\leq&\big((H_1+2)\boldsymbol{1}_{\{|z|\leq H\}}-1\big)V^\varepsilon(z,w),
\end{aligned}
\end{equation}
where the last two lines follow from \eqref{e3.6} and \eqref{e3.10}.
By virtue of \eqref{e3.7},
standard arguments show that
\begin{equation}\label{e3.11}
\mathbb{E}_{z,w} V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\leq e^{(H_1+2)t}V^\varepsilon(z,w),\, t\geq0, z\in\mathbb{R}^{2,\circ}_+, w\in\mathcal{S}.
\end{equation}
Let $\tau^\varepsilon=\inf\{s\geq 0: |Z^\varepsilon(s)|\leq H\}$. Since ${\cal L}^\varepsilon_u V^\varepsilon(z,w)\leq -V^\varepsilon(z,w)$ if $|z|> H$,
we have that
\begin{equation}\label{e3.12}
\begin{aligned}
\mathbb{E}_{z,w} e^{t\wedge\tau^\varepsilon}V^\varepsilon\big(Z^\varepsilon(t\wedge\tau^\varepsilon),\xi^\varepsilon(t\wedge\tau^\varepsilon)\big)
&=V^\varepsilon(z,w)+\mathbb{E}_{z,w}\int_0^{t\wedge\tau^\varepsilon}e^{s}\Big[V^\varepsilon(Z^\varepsilon(s),\xi^\varepsilon(s))+{\cal L}^\varepsilon_{\overline m_s}V^\varepsilon(Z^\varepsilon(s),\xi^\varepsilon(s))\Big]ds\\
&\leq V^\varepsilon(z,w),\,\text{ for }\, t\geq0, z\in\mathbb{R}^{2,\circ}_+, w\in\mathcal{S}.
\end{aligned}
\end{equation}
\begin{lm}\label{lm3.2}
There exist $L>0$ and $\varepsilon_1>0$
such that for all $\varepsilon<\varepsilon_1$,
\begin{equation}\label{e3.18}
\mathbb{E}_{z,w} \dfrac{1}{V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))}\leq Le^{(H_1+2)t}\dfrac{1+|z|^2}{V^\varepsilon(z,w)},\,\text{ for }\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S}, \ t\geq0.
\end{equation}
\end{lm}
\begin{proof}
Let $\tilde V(z)=(1+c_2x+c_1y)x^{p_1}y^{p_2}$.
Construct a perturbed Lyapunov function
$$\tilde V^\varepsilon(z,w)=\tilde V(z)+\varepsilon\left(xr_3(w)\dfrac{\partial\tilde V(z)}{\partial x}+yr_4(w)\dfrac{\partial\tilde V(z)}{\partial y}\right).
$$
Similarly to the estimates leading to \eqref{e3.6} and \eqref{e3.11},
we can find $K_3>0$ such that
\begin{equation}\label{e3.13}
(1-\varepsilon K_3)\tilde V(z)\leq\tilde V^\varepsilon(z,w)\leq (1+\varepsilon K_3)\tilde V(z)
\end{equation}
and
\begin{equation}\label{e3.14}
\mathbb{E}_{z,w} \tilde V^\varepsilon(Z^\varepsilon(t), \xi^\varepsilon(t))\leq e^{(H_1+2)t}\tilde V^\varepsilon(z,w)
\end{equation}
when $\varepsilon$ is sufficiently small.
On the other hand, for any $(z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S}$, we have
\begin{equation}\label{e3.15}
\dfrac1{V(z)}\leq \tilde V(z)\leq (1+c_2x+c_1y)^2 \dfrac1{V(z)},
\end{equation}
which combined with \eqref{e3.6} and \eqref{e3.13} implies that
\begin{equation}\label{e3.16}
\dfrac{1}{V^\varepsilon(z,w)}\leq \dfrac{1}{(1-\varepsilon K_2)V(z)}\leq \dfrac{1}{(1-\varepsilon K_2)(1-\varepsilon K_3)}\tilde V^\varepsilon(z,w)
\end{equation}
and
\begin{equation}\label{e3.17}
\tilde V^\varepsilon(z,w)
\leq (1+\varepsilon K_3)\tilde V(z)\leq (1+\varepsilon K_3) \dfrac{(1+c_2x+c_1y)^2}{V(z)}\leq (1+\varepsilon K_3)(1+\varepsilon K_2) \dfrac{(1+c_2x+c_1y)^2}{V^\varepsilon(z,w)}
.\end{equation}
Applying \eqref{e3.16} and \eqref{e3.17} to \eqref{e3.14},
we can
easily obtain \eqref{e3.18}
for suitable $L>0$
when $\varepsilon$ is sufficiently small.
\end{proof}
\begin{lm}\label{lm3.3}
There are $\widehat K>0$ and $\varepsilon_2>0$ such that
for any $\varepsilon<\varepsilon_2$, and any admissible control
$m(\cdot)$ for \eqref{model-2}, we have
$$\mathbb{E}_{z,w}\int_0^t |Z^\varepsilon(s)|^2ds\leq \widehat K(1+|z|+t),$$
and
$$\mathbb{E}_{z}\int_0^t |Z(s)|^2ds \leq \widehat K(1+|z|+t).$$
\end{lm}
\begin{proof}
Let $V_2(z)=1+c_2x+c_1y$
and
$$
V_3(z,w):=xr_3(w)\dfrac{\partial V_2(z)}{\partial x}+yr_4(w)\dfrac{\partial V_2(z)}{\partial y}.
$$
We can find a $K_4>0$ satisfying
\begin{equation}\label{e5-lm3.3}
\left|V_3(z,w)\right|\leq K_4V_2(z),\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S},
\end{equation}
and
\begin{equation}\label{e6-lm3.3}
\left|\dfrac{\partial V_3(z,w)}{\partial z}\cdot F(z,w)\right|\leq K_4V_2(z),\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S},
\end{equation}
and
\begin{equation}\label{e7-lm3.3}
\left|\dfrac{\partial V_3(z,w)}{\partial z}\cdot G(z,u)\right|\leq K_4(1+|z|)V_2(z),\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S}, u\in\mathcal{M}.
\end{equation}
We have
$$\dfrac{\partial V_2(z)}{\partial z}\cdot G(z,u)=
c_2x\big[a_1-b_1x\big] +c_1y\big[a_2-h(y)u-b_2y\big],$$
where the cross terms $-c_1c_2xy$ and $c_1c_2xy$ have canceled.
Let $\beta\in\big(0, (c_2b_1)\wedge(c_1b_2)\big)$.
Clearly, since the quadratic terms $-c_2b_1x^2$ and $-c_1b_2y^2$ dominate for large $|z|$, we can choose a $K_5>0$ such that
\begin{equation}\label{e4-lm3.3}
\dfrac{\partial V_2(z)}{\partial z}\cdot G(z,u)\leq K_5-V_2(z)-\beta(x^2+y^2)\,\text{ for all }\, (x,y)\in\mathbb{R}^2_+,\ u\in[0,M].
\end{equation}
Let
$$V_2^\varepsilon(z,w)=V_2(z)+\varepsilon V_3(z,w).$$
Similar to \eqref{e3.7},
from \eqref{e5-lm3.3}, \eqref{e6-lm3.3}, and \eqref{e7-lm3.3}, we have
$${\cal L}^\varepsilon_u V^\varepsilon_2(z,w)\leq 2K_5-\dfrac{\beta
|z|^2}2$$
for sufficiently small $\varepsilon$.
As a result,
\begin{equation}\label{e3-lm3.3}
\begin{aligned}
\mathbb{E}_{z,w} V^\varepsilon_2(Z^\varepsilon(t),\xi^\varepsilon(t))=&V^\varepsilon_2(z,w)+\mathbb{E}_{z,w}\int_0^t{\cal L}^\varepsilon_{\overline m_s} V^\varepsilon_2(Z^\varepsilon(s),\xi^\varepsilon(s))ds\\
\leq&V^\varepsilon_2(z,w)+2K_5t-\dfrac\beta2\int_0^t\mathbb{E}_{z,w} |Z^\varepsilon(s)|^2ds,
\end{aligned}
\end{equation}
which leads to
$$
\dfrac\beta2\int_0^t\mathbb{E}_{z,w} |Z^\varepsilon(s)|^2ds
\leq V^\varepsilon_2(z,w)+2K_5t.
$$
The first claim of the lemma follows directly from the above estimate.
The second claim can be derived by
applying It\^o's formula for $V_2(z)$ to \eqref{model-5}
and then proceeding like \eqref{e3-lm3.3}.
\end{proof}
\begin{lm}\label{lm3.4}
There is a $\tilde K>0$ such that
$$\bigg|\mathbb{E}_{z,w}\big[\ln V(Z^\varepsilon(T))\big]-\ln V(z)-
\mathbb{E}_{z,w}\int_0^T{\cal L}_{\overline m_t}\ln V(Z^\varepsilon(t))dt\bigg|\leq\tilde K(1+T)\varepsilon$$
for any admissible relaxed control $m(\cdot)$.
\end{lm}
\begin{proof}
Let $$g_1(z,w)=\int_\mathcal{S}\chi(w, d\tilde w)\dfrac{\partial(\ln V(z))}{\partial z}\cdot F(z,\tilde w),$$
and
$$g_2(z,w)=\int_\mathcal{S}\chi(w, d\tilde w)\left[\dfrac{\partial g_1(z,w)}{\partial z}\cdot F(z, \tilde w)+\dfrac{\partial(\ln V(z))}{\partial z}\cdot G(z, u)-{\cal L}_u \ln V(z)\right].$$
Note that
$g_2$ does not depend on $u$ since there is no $u$ dependence in
$$\dfrac{\partial(\ln V(z))}{\partial z}\cdot G(z, u)-{\cal L}_u \ln V(z)=\dfrac12\dfrac{a_{11}c_2^2x^2+a_{22}c_1^2y^2+2a_{12}c_1c_2xy}{(1+c_2x+c_1y)^2}-\dfrac{c_2xa_{11}+c_1y a_{22}}{1+c_2x+c_1y}.
$$
Moreover, direct calculations show that
$\dfrac{\partial(\ln V(z))}{\partial z}\cdot F(z,w)
$
and
$\dfrac{\partial g_1(z,w)}{\partial z}\cdot F(z, w)$ are bounded, along with $\dfrac{\partial(\ln V(z))}{\partial z}\cdot G(z, u)-{\cal L}_u \ln V(z)$.
Consequently,
$g_i(z,w), i=1,2$ are also bounded in $\mathbb{R}^{2,\circ}_+\times\mathcal{S}$.
As a result,
we have from \cite[Formula (4.21)]{BP} that
$$
\Big|{\cal L}^\varepsilon_u[\ln V(z)+\varepsilon g_1(z,w)+\varepsilon^2g_2(z,w)]
-{\cal L}_u \ln V(z)\Big|\leq K_6\varepsilon\,\text{ for all }\, (z,w)\in\mathbb{R}^{2,\circ}_+\times\mathcal{S}
$$
for some constant $K_6>0$ independent of $m$.
Combining this and the equality
$$
\begin{aligned}
\mathbb{E}_{z,w}&\big[\ln V(Z^\varepsilon(T))+\varepsilon g_1(Z^\varepsilon(T),\xi^\varepsilon(T))+\varepsilon^2g_2(Z^\varepsilon(T),\xi^\varepsilon(T))\big]\\
=&\ln V(z)+\varepsilon g_1(z,w)+\varepsilon^2g_2(z,w)\\
&+\mathbb{E}_{z,w}\int_0^T {\cal L}^\varepsilon_{\overline m_t}[\ln V(Z^\varepsilon(t))+\varepsilon g_1(Z^\varepsilon(t),\xi^\varepsilon(t))+\varepsilon^2g_2(Z^\varepsilon(t),\xi^\varepsilon(t))]dt,
\end{aligned}
$$
we obtain
$$
\begin{aligned}
\bigg|&\mathbb{E}_{z,w}\big[\ln V(Z^\varepsilon(T))+\varepsilon g_1(Z^\varepsilon(T),\xi^\varepsilon(T))+\varepsilon^2g_2(Z^\varepsilon(T),\xi^\varepsilon(T))\big]\\
&-\ln V(z)-\varepsilon g_1(z,w)-\varepsilon^2g_2(z,w)-\mathbb{E}_{z,w}\int_0^T{\cal L}_{\overline m_t}\ln V(Z^\varepsilon(t))dt\bigg|\leq K_6T\varepsilon.
\end{aligned}
$$
By the boundedness of $g_i(z,w), i=1,2$, we deduce that
$$\bigg|\mathbb{E}_{z,w}\big[\ln V(Z^\varepsilon(T))\big]-\ln V(z)-\mathbb{E}_{z,w}\int_0^T{\cal L}_{\overline m_t}\ln V(Z^\varepsilon(t))dt\bigg|\leq (K_6T+K_7)\varepsilon
$$
for some $K_7>0$.
The lemma is therefore proved.
\end{proof}
Define $f,g:\mathbb{R}^2_+\mapsto\mathbb{R}$ by
\begin{equation}\label{e-f}
f(x,y)=p_1\big(a_1-b_1x-c_1y\big)+p_2\big(-a_2-b_2y+c_2x\big)
\end{equation}
and
\begin{equation}\label{e-g}
\begin{aligned}
g(x,y)
=&\dfrac{c_2x(\overline a_1-b_1x)+c_1y(\overline a_2-b_2y)}{1+c_2x+c_1y}-\dfrac12\dfrac{a_{11}c_2^2x^2+a_{22}c_1^2y^2+2a_{12}c_1c_2xy}{(1+c_2x+c_1y)^2}.
\end{aligned}
\end{equation}
\begin{lm}\label{lm3.5}
For any $H>0$ and $k_0>1$, there exist $T_1=T_1(H,\varepsilon_0,k_0)>0$ and $\delta=\delta(H,\varepsilon_0,k_0)>0$
such that for any admissible control $m(\cdot)$,
and $z\in D_{\delta,H}:=([0,H]\times[0,\delta])\cup([0,\delta]\times[0,H])$,
we have
$$
\dfrac1t \int_0^t\mathbb{E}_{z}f(Z(s))ds>9\lambda,\,\text{ and }\,
\dfrac1t \int_0^t\mathbb{E}_{z} g(Z(s))ds\leq\lambda,\,\forall\,t\in[T_1,T_2],$$
and
$$\dfrac1t \int_0^t\mathbb{E}_{z} h(Y(s))ds\leq\dfrac{\lambda}{p_2M},\,\forall\,t\in[T_1,T_2]$$
where $T_2=(k_0+1)T_1$ and $\lambda$ is defined in \eqref{e-lambda}.
\end{lm}
The results in this lemma are obtained by analyzing the behavior
of $Z(t)$ near the boundary.
The proof is postponed to the appendix.
\begin{lm}\label{lm3.6}
Let $H, k_0, T_1, T_2,\delta$ be as given in Lemma {\rm\ref{lm3.5}},
and let $D^\circ_{\delta,H}=((0,H]\times(0,\delta])\cup((0,\delta]\times(0,H])$.
There are $\varepsilon_3>0$ and $\theta\in(0,1)$ such that
for any $\varepsilon\in(0,\varepsilon_3)$,
any admissible control $m(\cdot)$, and any $(z,w)\in D^\circ_{\delta,H}\times\mathcal{S}$,
we have
$$\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\right]^\theta
\leq e^{-\lambda\theta t}[V^\varepsilon(z,w)]^\theta, t\in[T_1,T_2].$$
\end{lm}
\begin{proof}
Since $D_{\delta,H}$ is a compact set, by virtue of Lemma \ref{lm3.5}
and \cite[Theorem 5]{KR}
(which tells us that we can approximate solutions to \eqref{model-4}
by the corresponding solutions to \eqref{model-5}),
there is an $\varepsilon_2>0$ such that
for any $\varepsilon\in(0,\varepsilon_2)$,
and
for any admissible control $m(\cdot)$, $(z,w)\in D_{\delta,H}\times\mathcal{S}$,
we have
\begin{equation}\label{e1-lm3.5}
\dfrac1t \int_0^t\mathbb{E}_{z,w}f(Z^\varepsilon(s))ds>8\lambda,\,\, t\in[T_1,T_2],
\end{equation}
\begin{equation}\label{e2-lm3.5}
\dfrac1t \int_0^t\mathbb{E}_{z,w}g(Z^\varepsilon(s))ds<2\lambda,\,\, t\in[T_1,T_2],
\end{equation}
and
\begin{equation}\label{e3-lm3.5}
\dfrac1t \int_0^t\mathbb{E}_{z,w} h(Y^\varepsilon(s))ds\leq2\dfrac{\lambda}{p_2M},\,\forall\,t\in[T_1,T_2].
\end{equation}
Note that $f$ and $g$ are not bounded. Thus
\eqref{e1-lm3.5} and \eqref{e2-lm3.5} do not follow
from the weak convergence of $Z^\varepsilon(\cdot)$ to $Z(\cdot)$.
However, $f$ and $g$ have linear growth rates.
Thus, \eqref{e1-lm3.5} and \eqref{e2-lm3.5}
can still be obtained
from the uniform integrability in Lemma \ref{lm3.3}
combined with the weak convergence.
On the other hand,
\begin{equation}\label{e4-lm3.5}
\begin{aligned}
{\cal L}_u \ln V(z)=&-f(z)+g(z)-\dfrac{c_1yh(y)u}{1+c_2x+c_1y}+p_2h(y)u\\
\leq&-f(z)+g(z)+Mp_2h(y).
\end{aligned}
\end{equation}
It follows
from \eqref{e1-lm3.5}, \eqref{e2-lm3.5}, \eqref{e3-lm3.5},
and \eqref{e4-lm3.5} that
\begin{equation}\label{e5-lm3.5}
\begin{aligned}
\dfrac1t \int_0^t\mathbb{E}_{z,w} {\cal L}_{\overline m_s} \ln V(Z^\varepsilon(s))ds
\leq -4\lambda,\,\, t\in[T_1,T_2], z\in D^\circ_{\delta,H},\varepsilon<\varepsilon_2
\end{aligned}
\end{equation}
for any admissible control.
In view of \eqref{e5-lm3.5} and Lemma \ref{lm3.4},
when $\varepsilon$ is sufficiently small, we have
\begin{equation}\label{e3-lm3.6}
\mathbb{E}_{z,w}\big[\ln V(Z^\varepsilon(t))\big]-\ln V(z)\leq -3\lambda t,\,\, t\in [T_1,T_2],z\in D^\circ_{\delta,H}.
\end{equation}
Combining \eqref{e3-lm3.6} and \eqref{e3.6},
we have that
$$\mathbb{E}_{z,w}\big[\ln V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\big]-\ln V^\varepsilon(z,w)\leq -2\lambda t,\,\, t\in [T_1,T_2],z\in D^\circ_{\delta,H}$$
if $\varepsilon$ is sufficiently small.
Let
$$\Upsilon^\varepsilon(t)=\ln V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))-\ln V^\varepsilon(Z^\varepsilon(0),\xi^\varepsilon(0)).$$
By \eqref{e3.11} and Lemma \ref{lm3.2},
there is a $\widehat K$ depending only on $T_1,T_2$ and $H$ such that
$$\max\Big\{\mathbb{E}_{z,w} \exp(-\Upsilon^\varepsilon(t)), \mathbb{E}_{z,w} \exp(\Upsilon^\varepsilon(t))\Big\}<\widehat K,\, z\in D^\circ_{\delta,H},w\in\mathcal{S}, t\in[T_1,T_2]$$
for any admissible control.
By Lemma \ref{lm-a0}, there is a $\widehat K_2>0$ such that
$$
\begin{aligned}
\ln\left(\mathbb{E}_{z,w} \left[\dfrac{V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))}{V^\varepsilon(z,w)}\right]^\theta\right)=
&\ln\left(\mathbb{E}_{z,w} \exp(\theta\Upsilon^\varepsilon(t))\right)\\
\leq& \theta\mathbb{E}_{z,w} \Upsilon^\varepsilon(t)+\theta^2 \widehat K_2\\
\leq& -2\lambda\theta t+\theta^2\widehat K_2,\,\,\,\,(z,w)\in D^\circ_{\delta,H}\times \mathcal{S}, t\in[T_1,T_2], \theta\in[0,0.5].
\end{aligned}
$$
Letting $\theta=\lambda T_1[\widehat K_2]^{-1}\wedge 0.5$,
we have
$$\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\right]^\theta
\leq e^{-\lambda\theta t}[V^\varepsilon(z,w)]^\theta, (z,w)\in D^\circ_{\delta,H}\times \mathcal{S}, t\in[T_1,T_2].$$
\end{proof}
\begin{lm}\label{lm3.7}
Let $\theta$ satisfy the conclusion of Lemma {\rm\ref{lm3.6}}.
There are $q\in(0,1)$ and $C>0$ such that
$$
\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))\right]^\theta
\leq q [V^\varepsilon(z,w)]^\theta +C,
$$
for any relaxed Markov control $u^\varepsilon\in {PM}^\varepsilon$ when $\varepsilon$ is sufficiently small.
\end{lm}
\begin{proof}
Applying Jensen's inequality to \eqref{e3.11} and \eqref{e3.12}
(and recalling that $\lambda\leq1$, so that $e^{\theta\lambda(t\wedge\tau^\varepsilon)}\leq e^{\theta(t\wedge\tau^\varepsilon)}$),
we have that for $t\geq0$,
\begin{equation}\label{e1-lm3.7}
\mathbb{E}_{z,w} e^{\theta\lambda(t\wedge\tau^\varepsilon)}\left[V^\varepsilon(Z^\varepsilon(t\wedge\tau^\varepsilon),\xi^\varepsilon(t\wedge\tau^\varepsilon))\right]^\theta
\leq [V^\varepsilon(z,w)]^\theta
\end{equation}
and
\begin{equation}\label{e2-lm3.7}
\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\right]^\theta
\leq e^{(H_1+2)\theta t}[V^\varepsilon(z,w)]^\theta.
\end{equation}
Since $\tilde D_{\delta,H}:=(0,H]^2\setminus D_{\delta,H}$
is a compact subset of $\mathbb{R}^{2,\circ}_+$,
$$C:=e^{(H_1+2)\theta T_2}\sup_{z\in\tilde D_{\delta,H}, w\in\mathcal{S}} [V^\varepsilon(z,w)]^\theta <\infty.$$
By virtue of \eqref{e2-lm3.7} and Lemma \ref{lm3.6}, we have
\begin{equation}\label{e3-lm3.7}
\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\right]^\theta
\leq C+e^{-\theta\lambda t} [V^\varepsilon(z,w)]^\theta,\,\forall\, (z,w)\in(0,H]^2\times \mathcal{S}, t\in[T_1,T_2].
\end{equation}
We have the following estimate.
\begin{equation}\label{e4-lm3.7}
\begin{aligned}
\mathbb{E}_{z,w}& e^{\theta\lambda(T_2\wedge\tau^\varepsilon)}\left[V^\varepsilon(Z^\varepsilon(T_2\wedge\tau^\varepsilon),\xi^\varepsilon(T_2\wedge\tau^\varepsilon))\right]^\theta\\
=&\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon<k_0T_1\}}e^{\theta\lambda\tau^\varepsilon}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta\\
&+\mathbb{E}_{z,w} \boldsymbol{1}_{\{k_0T_1\leq \tau^\varepsilon<T_2\}}e^{\theta\lambda\tau^\varepsilon}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta\\
&+\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon\geq T_2\}}e^{\theta\lambda T_2}\left[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))\right]^\theta\\
\geq&\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon< k_0T_1\}}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta\\
&+e^{\theta\lambda k_0T_1}\mathbb{E}_{z,w} \boldsymbol{1}_{\{k_0T_1\leq \tau^\varepsilon<T_2\}}[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))]^\theta\\
&+e^{\theta\lambda T_2}\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon\geq T_2\}}[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))]^\theta.
\end{aligned}
\end{equation}
With a relaxed Markov control $u^\varepsilon\in{PM}^\varepsilon$,
the process $(Z^\varepsilon(t),\xi^\varepsilon(t))$ is a Markov-Feller process.
Thus, applying the Markov property at $\tau^\varepsilon$ together with \eqref{e3-lm3.7} (note that $T_2-\tau^\varepsilon\in[T_1,T_2]$ on $\{\tau^\varepsilon< k_0T_1\}$), we have
\begin{equation}\label{e5-lm3.7}
\begin{aligned}
\mathbb{E}_{z,w} &\boldsymbol{1}_{\{\tau^\varepsilon< k_0T_1\}}\left[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))\right]^\theta\\
\leq& \mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon< k_0T_1\}}\Big[C+e^{-\theta\lambda(T_2-\tau^\varepsilon)}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta\Big]\\
\leq& C+ e^{-\theta\lambda T_1}\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon< k_0T_1\}}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta.
\end{aligned}
\end{equation}
Similarly, it follows from \eqref{e2-lm3.7} and the inequality $(H_1+2)T_1\leq \lambda(k_0-1)T_1$
(which holds provided $k_0$ is chosen sufficiently large) that
\begin{equation}\label{e6-lm3.7}
\begin{aligned}
\mathbb{E}_{z,w} &\boldsymbol{1}_{\{ k_0T_1\leq \tau^\varepsilon< T_2\}}\left[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))\right]^\theta\\
\leq& \mathbb{E}_{z,w} \boldsymbol{1}_{\{ k_0T_1\leq \tau^\varepsilon< T_2\}}e^{\theta(H_1+2)(T_2-\tau^\varepsilon)}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta\\
\leq& e^{(H_1+2)\theta T_1}\mathbb{E}_{z,w} \boldsymbol{1}_{\{ k_0T_1\leq \tau^\varepsilon< T_2\}}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta\\
\leq& e^{-\theta\lambda T_1}e^{\theta\lambda k_0T_1}\mathbb{E}_{z,w} \boldsymbol{1}_{\{ k_0T_1\leq \tau^\varepsilon< T_2\}}\left[V^\varepsilon(Z^\varepsilon(\tau^\varepsilon),\xi^\varepsilon(\tau^\varepsilon))\right]^\theta.
\end{aligned}
\end{equation}
Moreover,
\begin{equation}\label{e7-lm3.7}
\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon\geq T_2\}}[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))]^\theta
\leq e^{-\theta\lambda T_1}e^{\theta\lambda T_2}\mathbb{E}_{z,w} \boldsymbol{1}_{\{\tau^\varepsilon\geq T_2\}}[V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))]^\theta.
\end{equation}
Owing to \eqref{e4-lm3.7}, \eqref{e5-lm3.7}, \eqref{e6-lm3.7}, and \eqref{e7-lm3.7}, we have
$$\mathbb{E}_{z,w} [V^\varepsilon(Z^\varepsilon(T_2),\xi^\varepsilon(T_2))]^\theta\leq C+e^{-\theta\lambda T_1}\mathbb{E}_{z,w} e^{\theta\lambda(T_2\wedge\tau^\varepsilon)}\left[V^\varepsilon(Z^\varepsilon(T_2\wedge\tau^\varepsilon),\xi^\varepsilon(T_2\wedge\tau^\varepsilon))\right]^\theta.$$
This together with \eqref{e1-lm3.7}
concludes the proof with $q=e^{-\theta\lambda T_1}$.
\end{proof}
\begin{thm}\label{thm3.1}
With $q$ and $C$ given in Lemma \ref{lm3.7},
for sufficiently small $\varepsilon$, we have
\begin{equation}\label{e0-thm3.1}
\mathbb{E}_{z,w}\left[V^\varepsilon(Z^\varepsilon(t),\xi^\varepsilon(t))\right]^\theta
\leq e^{(H_1+2)\theta T_2}q^{t/(2T_2)} [V^\varepsilon(z,w)]^\theta +\dfrac{C}{1-q},
\end{equation}
for any relaxed Markov control $u\in{PM}^\varepsilon$.
\end{thm}
\begin{proof}
By the Markov property,
we have
$$
\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon((k+1)T_2),\xi^\varepsilon((k+1)T_2))\right]^\theta
\leq q \mathbb{E}_{z,w}\left[V^\varepsilon(Z^\varepsilon(kT_2),\xi^\varepsilon(kT_2))\right]^\theta +C, k\in\mathbb{N}.
$$
Using this inequality recursively,
we obtain
\begin{equation}\label{e1-thm3.1}
\mathbb{E}_{z,w} \left[V^\varepsilon(Z^\varepsilon(kT_2),\xi^\varepsilon(kT_2))\right]^\theta
\leq q^k [V^\varepsilon(z,w)]^\theta +\dfrac{C(1-q^k)}{1-q}.
\end{equation}
The assertion of this theorem follows from
\eqref{e1-thm3.1} and \eqref{e2-lm3.7}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm2.2}]
Since
$$\lim_{r\to\infty} \left(\inf_{\{|z|\vee x^{-1}\vee y^{-1}>r, w\in\mathcal{S}\}}[V^\varepsilon(z,w)]^\theta\right)=\infty, \
\hbox{ and } \ q<1,$$
the conclusion of Theorem \ref{thm3.1} clearly implies Condition \ref{C1}.
Theorem \ref{thm2.2} is therefore proved.
\end{proof}
\section{Concluding Remarks}\label{sec:rem}
Our main effort in this
paper is to demonstrate that
we can obtain near-optimal policies for average-cost per unit time yield
for a predator-prey model under fast-varying jump noise
by using a near optimal strategy of
a controlled diffusion model.
Due to the technical complexity of the proofs,
we made some simplifications in the model
in order to facilitate the presentation
but still preserve
important properties of the model.
The main result, Theorem \ref{thm2.2}, still holds true if
the following generalizations are made.
\begin{itemize}
\item[(a)] The coefficients $a_i, b_i, c_i, i=1,2$ depend on the state of $\xi^\varepsilon(t)$.
\item[(b)] The wideband noise in \eqref{model-2}, which is linear in the current setup, can be replaced by nonlinear terms.
\item[(c)] The assumption on $\xi(t)$ in Section 2 can be reduced to the condition that $\xi(t)$ is a stationary zero-mean process
which is either (i) strongly mixing, right continuous and bounded, with the mixing
rate function $\phi(\cdot)$ satisfying $\int_0^\infty\phi^{1/2}(s)ds<\infty$, or (ii) stationary Gauss-Markov with
an integrable correlation function as in \cite{KR}.
\end{itemize}
With
the generalization specified in (a) above, the proofs carry over, although the notation becomes more complicated.
With (b),
we need some additional conditions imposed on the wideband noise parts to obtain
certain boundedness of the solutions to the limit diffusion equation.
Throughout the paper, we assume that $\xi(t)$ is an ergodic Markov process,
under which we can utilize the Fredholm alternative to construct
Lyapunov functions for the wideband noise model \eqref{model-2} based on those for the controlled diffusion \eqref{model-3}.
If that assumption is replaced by (c),
it is slightly more complicated to
construct Lyapunov functions for the wideband noise model \eqref{model-2}.
However, it is doable using the perturbed Lyapunov method in \cite{KR}.
In such a setup, however, we need to work mainly with convergence of probability measures.
In this paper,
we consider the situation that only the predator is harvested.
It is also interesting to deal with the optimization problem of
harvesting both species under the constraint that
the extinction of each species is avoided.
Moreover, time-average optimal harvesting problems
for different ecological models also deserve careful study.
Our methods can be generalized to treat harvested ecological models
of higher dimensions.